
Looking for an efficient, workable way to scrape a large number of web pages with PHP


This post was last edited by oasisxp on 2014-08-25 13:45:08.

I want to use PHP's cURL to scrape music information from xiami.com.
But it is very slow: after about 50 pages the script stalls and the page hangs, and on a second run nothing can be scraped at all. Presumably the site recognizes my IP and blocks further requests, so collection is extremely slow overall.
How should large-scale scraping like this be done?
It could also be a problem with my code.
Part of the code is below.
$j = 0;
// starting ID
$id = 200000;
// scrape 1000 records
// holds the scraped data
$data = array();
while ($j < 1000) {
    $url = 'http://www.xiami.com/song/' . ($id++);
    $ch = curl_init();
    $status = curl_getinfo($ch); // note: called before curl_exec(), so this has nothing useful yet
    ///$status['redirect_url']; // the URL redirected to
    // note: $header is never reset inside the loop, so these entries accumulate across iterations
    $header[] = 'Accept:text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8';
    $header[] = 'Accept-Encoding:gzip,deflate,sdch';
    $header[] = 'Accept-Language:zh-CN,zh;q=0.8';
    $header[] = 'Cache-Control:max-age=0';
    $header[] = 'Connection:keep-alive';
    $header[] = 'Cookie:_unsign_token=a35437bd35c221c09a0e6f564e17c225; __gads=ID=7fcc242f6fd63d77:T=1408774454:S=ALNI_Mae8MH6vL5z6q4NlGYzyqgD4jHeEg; bdshare_firstime=1408774454639; _xiamitoken=3541aab48832ba3ceb089de7f39b9b0f; pnm_cku822=211n%2BqZ9mgNqgJnCG0Zu8%2BzyLTPuc%2B7wbrff98%3D%7CnOiH84T3jPCG%2FIr%2BiPOG8lI%3D%7CneiHGXz6UeRW5k4rRCFXIkcoTdd7ym3fZdO2FrY%3D%7Cmu6b9JHlkuGa5pDqnOie5ZDkmeqb4ZTule6V7ZjjlOib7JrmkvdX%7Cm%2B%2BT%2FGIUew96DXsUYBd4HawbrTOXOVI4iyOLIYUqT%2B9P%7CmO6BH2wDcB9rHGsYdwRrH2gfbAN%2FDH8QZBNkF3gDeQqqCg%3D%3D%7Cme6d7oHyneiH84Twn%2BmR64TzUw%3D%3D; CNZZDATA921634=cnzz_eid%3D1437506062-1408774274-%26ntime%3D1408937320; CNZZDATA2629111=cnzz_eid%3D2021816723-1408774274-%26ntime%3D1408937320; isg=075E6FBDF77039CEB63A1BA239420244';
    $header[] = 'Host:www.xiami.com';
    $header[] = 'User-Agent:Mozilla/5.0 (Windows NT 5.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/32.0.1653.0 Safari/537.36';

    curl_setopt($ch, CURLOPT_URL, $url);            // address to fetch
    curl_setopt($ch, CURLOPT_HTTPHEADER, $header);  // set the HTTP headers
    curl_setopt($ch, CURLOPT_HEADER, 0);            // do not include response headers in the output
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);    // return the response as a string
    curl_setopt($ch, CURLOPT_TIMEOUT, 20);          // timeout so a dead request cannot hang forever
    $content = curl_exec($ch);                      // perform the request
    $curl_errno = curl_errno($ch);
    $curl_error = curl_error($ch);
    curl_close($ch);                                // close the cURL session
    preg_match('/name="description"\s+content="《(.+)》演唱者(.+),所属专辑《(.+)》/', $content, $matches);
    // skip if the song title is empty
    if (empty($matches[1]) || trim($matches[1]) == '') {
        continue;
    }

    // extracted fields
    $data[$id]['song']   = empty($matches[1]) ? ' ' : $matches[1];
    $data[$id]['songer'] = empty($matches[2]) ? ' ' : $matches[2];
    $data[$id]['album']  = empty($matches[3]) ? ' ' : $matches[3];

    preg_match('/album\/(\d+)/', $content, $matches);
    $data[$id]['albumId'] = empty($matches[1]) ? 0 : $matches[1];

    preg_match('/\/artist\/(\d+)/', $content, $matches);
    $data[$id]['songerId'] = empty($matches[1]) ? 0 : $matches[1];

    // lyrics: <div class="lrc_main">
    preg_match('/<div class="lrc_main">(.*)<\/div>/Us', $content, $matches);
    $data[$id]['lrc'] = empty($matches[1]) ? ' ' : addslashes($matches[1]);
    // share count, e.g. 分享<em>(3269)</em>
    preg_match('/分享<em>\((\d+)\)<\/em>/Us', $content, $matches);
    $data[$id]['share'] = empty($matches[1]) ? 0 : $matches[1];
    // comment count, e.g. <p class="wall_list_count"><span>920
    preg_match('/<p class="wall_list_count"><span>(\d+)<\/span>/Us', $content, $matches);
    $data[$id]['comment_count'] = empty($matches[1]) ? 0 : $matches[1];

    // database insert goes here
    //print_r($data);
    //_____________________________
    $j++;
    usleep(3000); // note: usleep() takes microseconds, so this pauses only 3 ms
}
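One thing worth flagging in the code above, independent of any blocking by the server: usleep() takes microseconds, so usleep(3000) pauses for only 3 milliseconds between requests. A minimal throttling sketch, assuming a delay of a few seconds per request is acceptable (the 2-5 second range is my own choice, not from the original post):

// sleep a randomized 2-5 seconds between requests so the traffic
// looks less like a fixed-rate bot; usleep() wants microseconds
usleep(mt_rand(2000000, 5000000));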





------Solution--------------------
Try the Snoopy class.
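
For reference, a minimal sketch of the fetch step rewritten with Snoopy instead of raw cURL. It assumes Snoopy.class.php is available in the include path; the members used below ($agent, $referer, $read_timeout, $results, $error) are part of Snoopy's documented API:

include 'Snoopy.class.php';

$snoopy = new Snoopy;
$snoopy->agent = 'Mozilla/5.0 (Windows NT 5.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/32.0.1653.0 Safari/537.36';
$snoopy->referer = 'http://www.xiami.com/';
$snoopy->read_timeout = 20;        // give up on slow responses instead of hanging

if ($snoopy->fetch('http://www.xiami.com/song/200000')) {
    $content = $snoopy->results;   // page HTML, same role as curl_exec()'s return value
    // ...run the same preg_match() extraction as in the question...
} else {
    echo $snoopy->error;
}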
------Solution--------------------
This post was last edited by PhpNewnew on 2014-08-27 22:09:20.

Try Ruby or Go instead.

Just kidding. But seriously, if you are going to run a long job like this, at least run it in CLI mode instead of through a web page....
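
To illustrate the CLI suggestion, a sketch of a wrapper (scrape.php is a hypothetical filename), started from a shell with "php scrape.php" so it is not subject to web-server request timeouts:

// scrape.php: a hypothetical CLI wrapper, run with "php scrape.php"
if (php_sapi_name() !== 'cli') {
    die("Run this from the command line, not a browser.\n");
}
set_time_limit(0);                 // remove any execution-time cap (CLI defaults to none, but be explicit)
ini_set('memory_limit', '256M');   // headroom for 1000 pages of extracted data; the value is arbitrary

// ...the while() scraping loop from the question goes here...
echo "Done.\n";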
------Solution--------------------
The xiami.com server most likely has anti-scraping limits in place.

1. Fetch only 10-20 URLs per request, then redirect to continue scraping. This also keeps the page from timing out; on shared hosting, a process that occupies the CPU for too long may get killed.

2. Vary the User-Agent and Cookie request headers between requests if you can (see the sketch after this list).

3. If that still fails, give 火车头 (LocoySpider) a try!

4. And if even LocoySpider fails, just give up on this site!
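
A minimal sketch of point 2, rotating the User-Agent on every request; the agent strings are arbitrary examples, not taken from the thread:

$agents = array(
    'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/36.0.1985.143 Safari/537.36',
    'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_4) AppleWebKit/537.77.4 (KHTML, like Gecko) Version/7.0.5 Safari/537.77.4',
    'Mozilla/5.0 (Windows NT 6.1; rv:31.0) Gecko/20100101 Firefox/31.0',
);

// inside the while() loop, rebuild the header list each iteration with a random agent
$header = array();
$header[] = 'User-Agent: ' . $agents[array_rand($agents)];
// ...append the remaining headers and issue the cURL request exactly as before...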
------Solution--------------------
Split the loop so that each request processes one batch of the same script and then chains to the next run.
Trigger the first run from a browser or a crontab schedule: http://localhost/caiji.php?num=1. At the end of each run, increment $_GET['num'] by 1 and use cURL to re-invoke the same script; once $_GET['num'] reaches 1000, exit without making another cURL call.

if ($_GET['num']) {
    $url = 'http://www.xiami.com/song/' . $_GET['num'];
    // your scraping code goes here
    $_GET['num']++;
}
if ($_GET['num'] < 1001) {
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, "http://localhost/caiji.php?num=" . $_GET['num']);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 2);
    curl_setopt($ch, CURLOPT_TIMEOUT, 2);
    curl_exec($ch);
    curl_close($ch);
} else {
    exit;
}
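One caveat with this relay pattern (my addition, not part of the original answer): the calling request gives up after the 2-second CURLOPT_TIMEOUT and disconnects, and PHP may abort a script once its client has gone away. Two lines at the top of caiji.php guard against that:

ignore_user_abort(true);  // keep running even after the parent's 2-second curl call disconnects
set_time_limit(0);        // do not let max_execution_time kill a hop mid-scrape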

