
Capturing images with PHP's Snoopy scraping class: an example

WBOY
Release: 2016-07-13 10:24:36

I have been using PHP's Snoopy class for two days and found it very useful. To get all the links in a requested page, just call fetchlinks; to get all the text, call fetchtext (which also uses regular expressions internally). There are many other features as well, such as simulating form submission.


How to use:

First download the Snoopy class from http://sourceforge.net/projects/snoopy/.
Then instantiate an object and call the appropriate method to retrieve the crawled page's information.

The code is as follows:

include 'snoopy/Snoopy.class.php';
 
$snoopy = new Snoopy();
 
$sourceURL = "http://www.jb51.net";
$snoopy->fetchlinks($sourceURL);
 
$a = $snoopy->results;

Snoopy does not provide a method for obtaining the addresses of all images in a page, and I needed the image addresses from every article on a page, so I wrote one myself. The key part is the regular expression used for matching.

The code is as follows:

//Regular expression to match images (reconstructed; the original pattern was lost when this page was archived)
$reTag = "/<img .*?src=\"(http:\/\/.+?)\.(gif|jpg)\"/i";
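Because the archived page stripped the original pattern, the regex above is a reconstruction inferred from how its capture groups are used later (base URL in group 1, suffix in group 2). A quick check against a hypothetical `<img>` tag:

```php
<?php
// Reconstructed image regex: capture the base URL (group 1) and the
// file suffix (group 2) separately, for URLs starting with http://.
$reTag = "/<img .*?src=\"(http:\/\/.+?)\.(gif|jpg)\"/i";

// Hypothetical test HTML
$html = '<img class="pic" src="http://example.com/images/123.jpg">';

preg_match_all($reTag, $html, $m);
echo $m[1][0]; // base URL without suffix: http://example.com/images/123
echo "\n";
echo $m[2][0]; // suffix: jpg
```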


Because my needs are quite specific, I only capture images whose URL begins with http:// (images hosted on external sites may block hotlinking, so I want to fetch them to the local machine first):

1. Crawl the specified page and filter out all the expected article addresses;

2. Loop over the article addresses from step 1 and use the image regex to collect every matching image address on each page;

3. Save each image according to its suffix and ID (only gif and jpg here); if the image file already exists, delete it first, then save.

The code is as follows:

<?php
include 'snoopy/Snoopy.class.php';

$snoopy = new Snoopy();

$sourceURL = "http://xxxxx";
$snoopy->fetchlinks($sourceURL);

$a = $snoopy->results;
$re = "/\d+\.html$/";

//Filter the fetched links down to the expected article addresses
foreach ($a as $tmp) {
    if (preg_match($re, $tmp)) {
        getImgURL($tmp);
    }
}

function getImgURL($siteName) {
    $snoopy = new Snoopy();
    $snoopy->fetch($siteName);

    $fileContent = $snoopy->results;

    //Regular expression to match images
    $reTag = "/<img .*?src=\"(http:\/\/.+?)\.(gif|jpg)\"/i";

    if (preg_match($reTag, $fileContent)) {
        $ret = preg_match_all($reTag, $fileContent, $matchResult);
        for ($i = 0, $len = count($matchResult[1]); $i < $len; ++$i) {
            saveImgURL($matchResult[1][$i], $matchResult[2][$i]);
        }
    }
}

function saveImgURL($name, $suffix) {
    $url = $name . "." . $suffix;

    echo "Requested image address: " . $url . "<br />";

    $imgSavePath = "E:/xxx/style/images/";
    $imgId = preg_replace("/^.+\/(\d+)$/", "\\1", $name);
    if ($suffix == "gif") {
        $imgSavePath .= "emotion";
    } else {
        $imgSavePath .= "topic";
    }
    $imgSavePath .= ("/" . $imgId . "." . $suffix);

    if (is_file($imgSavePath)) {
        unlink($imgSavePath);
        echo "<p>The file " . $imgSavePath . " already exists and has been deleted</p>";
    }

    $imgFile = file_get_contents($url);
    $flag = file_put_contents($imgSavePath, $imgFile);

    if ($flag) {
        echo "<p>The file " . $imgSavePath . " was saved successfully</p>";
    }
}
?>
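The ID-extraction step in saveImgURL() can be checked in isolation. Using a hypothetical base URL (the suffix has already been split off at this point), the preg_replace keeps only the digits after the last slash:

```php
<?php
// Isolated check of the ID-extraction regex used in saveImgURL():
// strip everything up to the last "/" and keep the trailing digits.
$name = "http://example.com/article/456"; // hypothetical base URL (no suffix)
$imgId = preg_replace("/^.+\/(\d+)$/", "\\1", $name);
echo $imgId; // 456
```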

When using PHP to crawl web pages for content, images, and links, I think the most important piece is the regular expression (extracting the desired data from the fetched content according to specified rules). The idea itself is fairly simple, and only a few methods are needed; for fetching the content, you can simply call the methods of a class someone else has already written.

One thing PHP does not seem to offer directly: given a file of N lines (N very large), replace the content of the lines that match a rule, e.g. convert line 3 from aaa to bbbbb. The common approaches when a file must be modified are:

1. Read the entire file at once (or line by line), write the converted result to a temporary file, then replace the original file with it;

2. Read line by line, use fseek to position the file pointer, then write with fwrite.

For option 1, reading everything at once is inadvisable when the file is large (and reading line by line, writing to a temporary file, then replacing the original is not very efficient). Option 2 is fine when the replacement string is no longer than the target, but it breaks when it is longer: the write "crosses the boundary" and corrupts the data of the next line (there is no notion of replacing a "selection" with new content, as there is in JavaScript).
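Option 1 can be sketched like this (a minimal illustration: a throwaway temp file stands in for the real file, and line 3 is hard-coded as the target):

```php
<?php
// Option 1 sketch: replace line 3 via a temporary file, then swap it in.
// A throwaway temp file makes the example self-contained.
$filename = tempnam(sys_get_temp_dir(), "demo");
file_put_contents($filename, "line1\nline2\naaa\nline4\n");

$tmpname = $filename . ".tmp";
$in  = fopen($filename, "r");
$out = fopen($tmpname, "w");

$lineNo = 0;
while (($line = fgets($in)) !== false) {
    $lineNo++;
    if ($lineNo == 3) {
        // Replace the whole line; the new content may be any length,
        // which the in-place fseek/fwrite approach cannot guarantee.
        $line = "bbbbb\n";
    }
    fwrite($out, $line);
}
fclose($in);
fclose($out);
rename($tmpname, $filename); // swap the converted file into place

echo file_get_contents($filename);
```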

The following is the code for testing using option 2:

The code is as follows:

<?php
$mode = "r+";
$filename = "d:/file.txt";
$fp = fopen($filename, $mode);
if ($fp) {
    $i = 1;
    while (!feof($fp)) {
        $str = fgets($fp);
        echo $str;
        if ($i == 1) {
            $len = strlen($str);
            fseek($fp, -$len, SEEK_CUR); //Move the pointer back to the start of the line
            fwrite($fp, "123");
        }
        $i++;
    }
    fclose($fp);
}
?>

Read a line first; the file pointer then points to the beginning of the next line. Use fseek to move it back to the start of the line just read, then use fwrite to overwrite it. But because this is an overwrite of a fixed number of bytes, anything in the original line beyond that length survives, and a longer write would spill into the next line's data. What I want is to operate only on this line, e.g. delete it entirely or replace the whole line with just a single 1, and the example above does not achieve that. Maybe I just haven't found the right method yet...
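The leftover-bytes behavior is easy to reproduce with a self-contained variant (a temp file stands in for d:/file.txt): writing "123" over a longer line leaves the tail of that line intact.

```php
<?php
// Demonstrates the overwrite problem with in-place writes:
// fwrite only overwrites as many bytes as it writes, so the rest
// of the original line survives.
$filename = tempnam(sys_get_temp_dir(), "demo");
file_put_contents($filename, "aaaaaa\nnext\n");

$fp = fopen($filename, "r+");
$str = fgets($fp);                   // read line 1: "aaaaaa\n"
fseek($fp, -strlen($str), SEEK_CUR); // move back to the start of line 1
fwrite($fp, "123");                  // write fewer bytes than the line holds
fclose($fp);

echo file_get_contents($filename);   // "123aaa" remains on line 1
```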
