Why do curl or file_get_contents fail to fetch a URL when the page size (in KB) is too large?
I learned the fetching method from http://bbs.csdn.net/topics/390572750. Some links, such as:
http://www.autohome.com.cn/77/options.html
http://www.autohome.com.cn/59/options.html
can be fetched normally,
but the following links:
http://www.autohome.com.cn/146/options.html
http://www.autohome.com.cn/317/options.html
come back empty, and I don't know why. So far my observation is that the pages that can be fetched are smaller (in KB) than the ones that cannot.
Could someone help me figure out what the problem is? I'm running LNMP.
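Before blaming the page size, it helps to separate fetch failures from later parsing failures. Below is a minimal cURL sketch (not the exact code from the linked thread) that reports the transfer error or the byte count, using one of the URLs from the question; the user-agent string and timeout are illustrative assumptions:

```php
<?php
// Minimal cURL fetch with diagnostics, so an empty result can be told
// apart from a fetch that succeeded but failed later at the regex stage.
function fetch(string $url)
{
    $ch = curl_init($url);
    curl_setopt_array($ch, [
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_FOLLOWLOCATION => true,
        CURLOPT_ENCODING       => '',            // accept gzip/deflate transparently
        CURLOPT_USERAGENT      => 'Mozilla/5.0', // some sites block blank user agents
        CURLOPT_TIMEOUT        => 15,
    ]);
    $html = curl_exec($ch);
    if ($html === false) {
        echo 'curl error: ' . curl_error($ch) . "\n";
    } else {
        echo 'fetched ' . strlen($html) . " bytes\n";
    }
    curl_close($ch);
    return $html;
}

// Example call (network required):
// fetch('http://www.autohome.com.cn/146/options.html');
```

If this prints a plausible byte count for the "failing" pages, the download is fine and the problem is downstream, in the extraction step.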
Replies (proposed solutions)
Confirmed: the HTML is fetched, but the regex fails to extract the target part (when the page's HTML is large).
They're all 30–35 KB, so it can't be a fetching problem; your regex must be the issue.
Check your regex; there are probably cases it fails to match. You said yourself the HTML is already fetched, so the problem is easy to pin down.
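One common way a regex "fails only on larger pages" is PCRE's backtrack limit: a backtracking-heavy pattern on a bigger subject can exceed pcre.backtrack_limit (default 1,000,000), and preg_match then returns false rather than 0, which looks like an empty result. preg_last_error() distinguishes this from a genuine non-match. A minimal sketch with a made-up subject and pattern, the limit lowered so the failure reproduces on a small string:

```php
<?php
// Lower the limit so the failure is reproducible on a small subject;
// a real 30 KB page can hit the default limit the same way.
ini_set('pcre.backtrack_limit', '100');

$html = 'c' . str_repeat('a', 10000);   // stand-in for a big HTML document
$n = preg_match('/a.*c/s', $html, $m);  // greedy .* backtracks ~10,000 times

if ($n === false && preg_last_error() === PREG_BACKTRACK_LIMIT_ERROR) {
    echo "regex aborted: backtrack limit exceeded\n";
} elseif ($n === 1) {
    echo "matched\n";
} else {
    echo "no match\n";
}
```

If this is the cause, rewriting the pattern with a lazy quantifier (`.*?`) or a negated character class, or raising pcre.backtrack_limit, fixes it without touching the fetch code.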
