To fetch remote content, I had always used the file_get_contents function. I did know that something as good as curl existed, but after a quick look it seemed rather complicated to use, not as simple as file_get_contents, and since my needs were modest I never bothered to learn it.
That changed recently, when I was writing a small web scraping program and found that file_get_contents could no longer meet my needs. My conclusion is that for reading remote content, file_get_contents is only more convenient to use than curl; in every other respect it falls short.
Some comparisons between curl and file_get_contents in PHP
Main differences:
After some study I found that curl supports many protocols, including FTP, FTPS, HTTP, HTTPS, GOPHER, TELNET, DICT, FILE and LDAP. In other words, it can do many things that file_get_contents cannot. In PHP, curl can fetch and collect remote content, implement web-based FTP upload and download, simulate logins, connect to APIs and transfer data, simulate cookies, resume interrupted file downloads, and more. It is a very powerful tool.
After learning some basic uses of curl, I found it is not difficult. The only slightly hard part is remembering the various setting parameters, but it is enough to memorize the few commonly used ones.
Enable curl:
PHP does not enable the curl functions by default, so to use curl you first need to enable the extension in php.ini: remove the semicolon in front of ;extension=php_curl.dll, then save and restart Apache/IIS.
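If you are not sure whether the extension is active, a quick runtime check can save some head-scratching. Below is a minimal sketch using only standard PHP functions; nothing project-specific is assumed:
<?php
// Verify that the curl extension is loaded before relying on it.
if (!extension_loaded('curl') || !function_exists('curl_init')) {
    die('curl is not enabled - edit php.ini and restart the web server.');
}
$info = curl_version();
echo 'curl is available, version ' . $info['version'];
?>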
Basic syntax:
Recently I needed to fetch music data from other people's websites. I used the file_get_contents function, but I kept running into fetch failures. Although I set a timeout as shown in the manual's examples, most of the time it did not work:
$config['context'] = stream_context_create(array(
    'http' => array(
        'method'  => 'GET',
        'timeout' => 5, // this timeout is unstable and often does not take effect
    )
));
At that point, looking at the server's connection pool, I would find a pile of similar errors, which gave me a headache:
file_get_contents(http://***): failed to open stream…
Now I use the curl library instead and wrote a replacement function:
function curl_file_get_contents($durl){
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, $durl);             // target URL
    curl_setopt($ch, CURLOPT_TIMEOUT, 5);             // give up after 5 seconds
    curl_setopt($ch, CURLOPT_USERAGENT, _USERAGENT_); // pretend to be a browser
    curl_setopt($ch, CURLOPT_REFERER, _REFERER_);     // send a fake referer
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);      // return the body instead of printing it
    $r = curl_exec($ch);
    curl_close($ch);
    return $r;
}
Since then there have been no problems apart from genuine network issues.
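For example, the function can be used like this. Note that the _USERAGENT_ and _REFERER_ constants it relies on are assumed to be defined in your own code; the values and URL below are only placeholders:
// Define the constants the function expects (example values only).
define('_USERAGENT_', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)');
define('_REFERER_', 'http://www.example.com/');

$html = curl_file_get_contents('http://www.example.com/page.html');
if ($html === false) {
    echo 'Request failed';            // curl_exec() returned false
} else {
    echo strlen($html) . ' bytes received';
}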
Here is a test of curl versus file_get_contents done by someone else:
Seconds taken by file_get_contents to fetch google.com:
2.31319094
2.30374217
2.21512604
3.30553889
2.30124092
Seconds taken by curl for the same request:
0.68719101
0.64675593
0.64326
0.81983113
0.63956594
Quite a gap, isn't it? In my experience the two differ not only in speed but also in stability.
For anyone with high requirements on the stability of network data fetching, I recommend the curl_file_get_contents function above: it is stable and fast, and it can also impersonate a browser by spoofing the User-Agent and Referer sent to the target site.
Method 1: Use file_get_contents to fetch the content with a GET request
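A minimal sketch of method 1; the URL is only a placeholder, and allow_url_fopen must be enabled in php.ini:
<?php
// Method 1: plain GET with file_get_contents
$url  = 'http://www.example.com/';     // placeholder URL
$html = file_get_contents($url);
if ($html !== false) {
    echo $html;
}
?>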
Method 2: Use fopen to open the URL and fetch the content with a GET request
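A minimal sketch of method 2, again with a placeholder URL; the stream is read with fgets() until end of file:
<?php
// Method 2: GET via fopen() and reading the stream until EOF
$url  = 'http://www.example.com/';     // placeholder URL
$html = '';
$fp = fopen($url, 'r');
if ($fp) {
    while (!feof($fp)) {
        $html .= fgets($fp, 1024);
    }
    fclose($fp);
}
echo $html;
?>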
Method 3: Use the file_get_contents function to fetch the URL with a POST request
$data = http_build_query(array('name' => 'value'));   // example POST body; the real $data is assumed to be built elsewhere
$opts = array(
    'http' => array(
        'method'  => 'POST',
        'header'  => "Content-type: application/x-www-form-urlencoded\r\n" .
                     "Content-Length: " . strlen($data) . "\r\n",
        'content' => $data
    )
);
$context = stream_context_create($opts);
$html = file_get_contents('http://localhost/e/admin/test.html', false, $context);
echo $html;
?>
Method 4: Use the fsockopen function to open the URL and obtain the complete data with a GET request, including headers and body
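A minimal sketch of method 4, assuming a plain HTTP site on port 80; the host and path are placeholders. Note that the raw response includes the status line and headers, which you have to strip yourself:
<?php
// Method 4: GET via fsockopen(); the result contains headers + body
$host = 'www.example.com';                         // placeholder host
$path = '/';
$fp = fsockopen($host, 80, $errno, $errstr, 5);    // 5-second connect timeout
if ($fp) {
    $request  = "GET $path HTTP/1.1\r\n";
    $request .= "Host: $host\r\n";
    $request .= "Connection: close\r\n\r\n";
    fputs($fp, $request);
    $result = '';
    while (!feof($fp)) {
        $result .= fgets($fp, 1024);               // status line + headers + body
    }
    fclose($fp);
    echo $result;
}
?>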
Method 5: Use the fsockopen function to open the URL and obtain the complete data with a POST request, including headers and body
// The function header was missing from the original snippet; the name and
// parameters below are a plausible reconstruction, not the original code.
function post_request($URL, $data, $referrer = "", $cookie = "") {
    $URL_Info = parse_url($URL);

    // Building referrer
    if ($referrer == "")                 // if not given, use this placeholder as referrer
        $referrer = "111";

    // Making a query string from $data
    $values = array();
    foreach ($data as $key => $value)
        $values[] = "$key=" . urlencode($value);
    $data_string = implode("&", $values);

    // Find out which port is needed - if not given, use the standard port (80)
    if (!isset($URL_Info["port"]))
        $URL_Info["port"] = 80;

    // Building the POST request:
    $request  = "POST " . $URL_Info["path"] . " HTTP/1.1\r\n";
    $request .= "Host: " . $URL_Info["host"] . "\r\n";
    $request .= "Referer: $referrer\r\n";
    $request .= "Content-type: application/x-www-form-urlencoded\r\n";
    $request .= "Content-length: " . strlen($data_string) . "\r\n";
    $request .= "Connection: close\r\n";
    $request .= "Cookie: $cookie\r\n";
    $request .= "\r\n";
    $request .= $data_string . "\r\n";

    $fp = fsockopen($URL_Info["host"], $URL_Info["port"]);
    fputs($fp, $request);

    $result = "";
    while (!feof($fp)) {
        $result .= fgets($fp, 1024);
    }
    fclose($fp);
    return $result;
}
?>
Method 6: Use the curl library. Before using it, you may need to check that the curl extension has been enabled in php.ini.
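A minimal sketch of method 6, a simple GET; the URL is a placeholder, and CURLOPT_RETURNTRANSFER makes curl_exec() return the body as a string instead of printing it:
<?php
// Method 6: GET via the curl extension
$ch = curl_init('http://www.example.com/');       // placeholder URL
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);      // return the body as a string
curl_setopt($ch, CURLOPT_TIMEOUT, 5);             // overall timeout in seconds
$html = curl_exec($ch);
if ($html === false) {
    echo 'curl error: ' . curl_error($ch);
} else {
    echo $html;
}
curl_close($ch);
?>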
In PHP, the three functions curl, fsockopen and file_get_contents can all be used to simulate requests and collect remote data. What are the differences between the three, and is there anything to pay attention to?
Zhao Yongbin:
Sometimes when file_get_contents() is used to call an external file, it easily fails with a timeout error; simply switching to curl solves it. The exact reason is unclear.
curl is more efficient than file_get_contents() and fsockopen(), because curl automatically caches DNS information (the highlight here is that I tested this personally).
Fan Jiapeng:
file_get_contents vs curl vs fsockopen:
Choose according to the environment and requirements of the current request; there is no one-size-fits-all answer.
Based on the KBI application developed by our company:
Just started using: file_get_contents
Later adopted: fsockopen
Finally adopted, and still in use today: curl
(For remote requests) my personal understanding is as follows (please point out anything wrong, and add anything missing):
file_get_contents requires allow_url_fopen to be enabled in php.ini. For HTTP requests it uses the http_fopen_wrapper, which does not keep the connection alive; curl can.
file_get_contents() is efficient for a single request and returns the content without headers.
That is fine when reading ordinary files, but problems appear when reading remote files.
If you need a persistent connection and request multiple pages repeatedly, file_get_contents and fopen will run into trouble, and the content you get back may even be wrong. So for collection work of this kind there will definitely be problems.
Sockets are lower-level: more troublesome to configure and harder to operate, but they return the complete response.
Pan Shaoning-Tencent:
Although file_get_contents can fetch the content of a URL, it does not give you control over the POST and GET details.
curl can do both POST and GET, and can also fetch the header information.
Sockets are even lower-level: you can build interaction directly on top of UDP or TCP.
Whatever file_get_contents and curl can do, sockets can do too;
what sockets can do, curl may not be able to.
Most of the time file_get_contents just pulls data; for that it is more efficient and simpler.
I have also run into Zhao's situation. I set the Host through curl and it worked; it has something to do with the network environment.
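For reference, a hedged sketch of what "setting the Host through curl" can look like: send the request to the server's address while supplying the site's Host header yourself via CURLOPT_HTTPHEADER. The IP, path and host name below are placeholders, not values from the discussion:
// Hypothetical example: request by IP but send an explicit Host header
$ch = curl_init('http://192.0.2.10/music/list.php');                  // placeholder IP and path
curl_setopt($ch, CURLOPT_HTTPHEADER, array('Host: www.example.com')); // placeholder host name
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_TIMEOUT, 5);
$data = curl_exec($ch);
curl_close($ch);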