My take on the cause of the problem:
Some of the matched image URLs are not valid. The earlier article only checks whether a URL is a relative path, but some of the URLs are simply invalid.
The solution: add a check that verifies the image URL is real and valid.
The main idea is to use the get_headers() function to fetch the HTTP response headers and check for a 200 status from the server, which tells us whether the URL is real and reachable.
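A minimal sketch of that check (the helper name isValidImageUrl is mine, not from the original code):

function isValidImageUrl($url)
{
    $headers = @get_headers($url);               // fetch the raw response headers
    if ($headers === false) {
        return false;                            // the request failed outright
    }
    return strpos($headers[0], '200') !== false; // first line looks like "HTTP/1.1 200 OK"
}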
Testing the collection again:
The results are worse than before, and the run is even slower.
The reason, found through testing:
Although get_headers() can tell whether a URL is real and valid, it puts no time limit on the request. When it hits a very slow URL resource, the thread is tied up and every subsequent request is blocked behind it.
file_get_contents() suffers from the same problem: a slow URL resource holds the process for a long time, blocking everything queued behind it, and prolonged blocking also drives up CPU usage.
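For reference, a per-request timeout can be attached to file_get_contents() through a stream context; this is only a sketch of a workaround ($url stands for one image URL), not the approach adopted below:

$context = stream_context_create(array(
    'http' => array('timeout' => 5), // give up on the request after 5 seconds
));
$body = @file_get_contents($url, false, $context);
if ($body === false) {
    // the resource was too slow or unreachable; skip it
}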
The solution:
Use curl with multiple handles (curl_multi). curl also lets you set a per-request timeout, so when a very slow URL resource turns up you can give up on it decisively and nothing blocks; on top of that, issuing the requests concurrently should be more efficient. Reference: "The Learning and Application of CURL (with Multi-Threading)". Let's test again.
Core code:
$mh = curl_multi_init(); // create the multi handle (required below; omitted from the original excerpt)
foreach ($array as $k => $url) {
    $conn[$k] = curl_init($url); // initialize an easy handle for this URL
    curl_setopt($conn[$k], CURLOPT_TIMEOUT, $timeout); // per-request timeout
    curl_setopt($conn[$k], CURLOPT_USERAGENT, 'Mozilla/5.0 (compatible; MSIE 5.01; Windows NT 5.0)');
    curl_setopt($conn[$k], CURLOPT_MAXREDIRS, 7); // follow at most 7 redirects
    curl_setopt($conn[$k], CURLOPT_HEADER, false); // no headers in the output, for speed
    curl_setopt($conn[$k], CURLOPT_FOLLOWLOCATION, 1); // follow 302 redirects
    curl_setopt($conn[$k], CURLOPT_RETURNTRANSFER, 1); // return the body as a string instead of printing it
    curl_setopt($conn[$k], CURLOPT_HTTPGET, true);
    curl_multi_add_handle($mh, $conn[$k]);
}
// Prevent a busy loop from burning CPU; this pattern follows the commonly published usage
do {
    $mrc = curl_multi_exec($mh, $active); // $active stays true while transfers are still running
} while ($mrc == CURLM_CALL_MULTI_PERFORM); // keep calling while curl has work to do right now
while ($active && $mrc == CURLM_OK) { // loop until every transfer has finished
    if (curl_multi_select($mh) != -1) { // wait for activity on any of the handles
        do {
            $mrc = curl_multi_exec($mh, $active);
        } while ($mrc == CURLM_CALL_MULTI_PERFORM);
    }
}
foreach ($array as $k => $url) {
    if (!curl_errno($conn[$k])) {
        $data[$k]   = curl_multi_getcontent($conn[$k]); // the response body, as a string
        $header[$k] = curl_getinfo($conn[$k]); // HTTP transfer information
    }
    curl_multi_remove_handle($mh, $conn[$k]); // detach from the multi handle first...
    curl_close($conn[$k]); // ...then close the easy handle (failed handles included, so nothing leaks)
}
curl_multi_close($mh);
return $data;
}
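The getMicrotime() method used below is not shown in this excerpt; it is presumably the classic PHP timer idiom, something like:

// Presumed shape of HttpImg::getMicrotime(); not shown in the original excerpt.
// Returns the current Unix time with microsecond precision as a float.
public function getMicrotime()
{
    list($usec, $sec) = explode(' ', microtime());
    return (float)$usec + (float)$sec;
}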
// Receive the parameters
$callback = $_GET['callback'];
$hrefs = $_GET['hrefs'];
$urlarray = explode(',', trim($hrefs, ','));
$date = date('Ymd', time());
// Instantiate
$img = new HttpImg();
$stime = $img->getMicrotime(); // start time
$data = $img->Curl_http($urlarray, '20'); // list-page data
if (!is_dir('./img/'.$date)) {
    mkdir('./img/'.$date, 0777); // create today's image directory once
}
foreach ((array)$data as $k => $v) {
    // Match href/src attributes that point at image files; \2 requires the closing
    // quote to match the opening one (the backslashes were lost in the original post)
    preg_match_all('/(href|src)=(["\']?)([^ "\'>]+\.(jpg|png|gif))\2/i', $v, $matches[$k]);
    if (count($matches[$k][3]) > 0) {
        $dataimg = $img->Curl_http($matches[$k][3], '20'); // binary data of every image
        $j = 0;
        foreach ((array)$dataimg as $kk => $vv) {
            if ($vv != '') {
                $rand = rand(1000, 9999);
                $basename = time()."_".$rand.".jpg"; // save the file under a .jpg name
                $fname = './img/'.$date."/".$basename;
                file_put_contents($fname, $vv);
                $j++;
                echo "Created picture ".$j.": ".$fname."<br />";
            } else {
                unset($kk, $vv);
            }
        }
    } else {
        unset($matches);
    }
}
$etime = $img->getMicrotime(); // end time
echo "time: ".($etime - $stime)." seconds";
exit;
Testing the result:
Collecting 337 pictures takes about 260 seconds, roughly one picture per second, and the speed advantage becomes more obvious as the number of pictures grows.
A look at the file names confirms the concurrency: pictures sharing the same timestamp were written in the same second, about 10 of them at a time.
Because of the 20-second request timeout, some pictures come out visibly incomplete: their data could not be fully downloaded within 20 seconds. You can tune this timeout yourself.
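If truncated files are a problem, one option (a sketch of my own, not part of the original code; it assumes the GD extension is available) is to try decoding the bytes before saving them:

// Sketch: reject obviously truncated image data before writing it to disk.
// imagecreatefromstring() returns false when the bytes cannot be decoded,
// though GD can sometimes still recover a partially downloaded JPEG.
$im = @imagecreatefromstring($vv);
if ($im !== false) {
    imagedestroy($im); // we only wanted the validity check
    file_put_contents($fname, $vv);
} else {
    // likely cut off by the timeout; skip it, or retry with a longer limit
}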