Most data collection (scraping) is done with regular expressions. Below I will briefly outline the idea of how to implement a collector, specifically in PHP. It is usually run on a local machine; putting it on a shared hosting space is unwise, because it consumes a lot of resources and the host has to support remote-fetch functions such as file_get_contents($url), file($url), and so on.
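As a quick illustration, here is a minimal, hedged sketch of checking that remote fetching is allowed before collecting; allow_url_fopen is the standard php.ini switch that file() and file_get_contents() rely on for http:// URLs, and the URL below is only a placeholder.
PHP:
------------------------------------------------------------------------------------
// Abort early if the host forbids remote fetching.
if (!ini_get("allow_url_fopen")) {
    exit("This host does not allow remote fetching; run the collector locally.");
}
$html = @file_get_contents("http://www.example.com/list.php?page=1"); // placeholder URL
if ($html === false) {
    exit("Failed to fetch the list page.");
}
The overall steps are: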
1. Paging through the article list pages and obtaining the article paths
2. Obtaining the title and content
3. Storage
4. Questions
1. Paging through the article list pages and obtaining the article paths
a. Paging through list pages generally relies on dynamic URLs, for example:
http://www.phpfirst.com/foru... d=1&page=$i
Paging can then be implemented by incrementing $i or stepping through a range, e.g. $i++;
it can also be done the way penzi demonstrated, going from a start page to an end page and controlling the range of $i in code (a sketch follows).
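A minimal sketch of that kind of loop; the forumdisplay-style URL pattern and the $startpage/$endpage names are just illustrative assumptions.
PHP:
------------------------------------------------------------------------------------
// Step $i through a page range and build each list-page URL.
// $startpage and $endpage could just as well come from a submitted form.
$startpage = 1;
$endpage   = 5;
for ($i = $startpage; $i <= $endpage; $i++) {
    $listurl = "http://www.phpfirst.com/forumdisplay.php?fid=1&page=" . $i; // assumed URL pattern
    echo $listurl . "\n";
    // ...fetch $listurl here and extract the article links from it...
}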
b. Obtaining the article paths splits into two cases: one where you fill in regular-expression rules and one where you do not:
1) Without filling in any rules, grab all of the links on the article list page obtained above.
It is best to filter and process those links first: detect duplicate links and keep only one copy, and turn relative paths such as ../ and ./ into absolute paths.
Here is the messy implementation I wrote:
PHP:
------------------------------------------------------------------------------------
<?php
// Example call:
// $e = clinchgeturl("http://phpfirst.com/forumdisplay.php?fid=1");
// var_dump($e);

// Collect every article-like link on a list page, turn relative paths into
// absolute URLs, strip duplicates and return them in $out[0].
function clinchgeturl($url)
{
    // Work out the base path of the list page so relative links can be resolved.
    if (preg_match('#^http://#i', $url)) {
        $parts = explode("/", $url);
        $rootpath = "http://" . $parts[2] . "/";
        $last = count($parts) - 1;
        for ($yu = 3; $yu < $last; $yu++) {
            $rootpath .= $parts[$yu] . "/";
        }
    } else {
        $rootpath = $url;
    }

    echo "$url has the following links:\n";

    $out = array();
    $out[0] = array();
    $fcontents = file($url);
    foreach ($fcontents as $line) {
        // Pull every href="..." value out of the current line.
        if (!preg_match_all('/href\s*=\s*["\']?([^"\'\s>]+)/i', $line, $matches)) {
            continue;
        }
        foreach ($matches[1] as $link) {
            // Turn relative paths into absolute URLs based on $rootpath.
            if (!preg_match('#^http://#i', $link)) {
                if (substr($link, 0, 2) == "..") {
                    // ../dir/page.htm -> strip the leading ../ parts
                    $link = $rootpath . preg_replace('#^(\.\./)+#', '', $link);
                } elseif (substr($link, 0, 2) == "./") {
                    $link = $rootpath . substr($link, 2);
                } else {
                    $link = $rootpath . ltrim($link, "/");
                }
            }
            // Keep only links that look like article pages.
            if (preg_match('/\.(htm|shtm|html|asp|aspx|php|jsp|cgi)/i', $link)) {
                $out[0][] = $link;
            }
        }
    }

    // Determine duplicate links and keep only one copy of each.
    $out[0] = array_values(array_unique($out[0]));
    return $out;
}
?>
The code above really ought to be Zend-encoded before anyone sees it, otherwise it is an eyesore :(
After getting all the unique links, put them into an array.
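For example, walking the array that clinchgeturl() returns might look like this (just a usage sketch):
PHP:
------------------------------------------------------------------------------------
// Collect the de-duplicated links and loop over them.
$links = clinchgeturl("http://phpfirst.com/forumdisplay.php?fid=1");
foreach ($links[0] as $articleurl) {
    echo $articleurl . "\n";
    // ...fetch and parse each article page here...
}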
2) The approach that requires filling in regular rules
If you want to extract exactly the article links you need, use this method.
Following Ketle's idea, use:
PHP:
------------------------------------------------------------------------------------
// Cut out whatever sits between the $from marker and the $end marker in $file.
function cut($file, $from, $end) {
    $message = explode($from, $file);
    $message = explode($end, $message[1]);
    return $message[0];
}
$from is the HTML code just before the list;
$end is the HTML code just after the list.
Both parameters can be submitted through a form.
This strips away the parts of the list page that are not the list, and what remains are the links you need.
Then a regular expression like the following pulls them out:
PHP:
------------------------------------------------------------------------------------
preg_match("/^(http://)?(.*)/i",
$url, $matches);
return $matches[2];
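Putting the pieces together, a rough sketch; the list markers and the forum URL here are only placeholder assumptions.
PHP:
------------------------------------------------------------------------------------
// Cut out the list region first, then collect the hrefs inside it.
// $from and $end would normally come from the form described above.
$page = file_get_contents("http://phpfirst.com/forumdisplay.php?fid=1"); // example list URL
$list = cut($page, '<!-- list start -->', '<!-- list end -->');          // assumed markers
preg_match_all('/href\s*=\s*["\']?([^"\'\s>]+)/i', $list, $matches);
$articleurls = array_unique($matches[1]);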
2. Obtain the title and content
a. First, read the target page from the article path obtained above.
You can use the following function:
PHP:
------------------------------------------------------------------------------------
// Read the whole target page from $url in 2 KB chunks and return it as a string.
function getcontent($url) {
    $contents = "";
    if ($handle = fopen($url, "rb")) {
        do {
            $data = fread($handle, 2048);
            if (strlen($data) == 0) {
                break;
            }
            $contents .= $data;
        } while (true);
        fclose($handle);
    } else {
        exit(".....");
    }
    return $contents;
}
Or simply:
PHP:
------------------------------------------------------------------------------------
file_get_contents($url);
The latter is more convenient, but comparing it with the function above shows its shortcomings.
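One possible compromise, sketched here under the assumption that the getcontent() function above is available: try the one-liner first and fall back to the chunked read when it fails.
PHP:
------------------------------------------------------------------------------------
// Prefer file_get_contents(), but fall back to getcontent() when it fails.
$allcontent = @file_get_contents($url);
if ($allcontent === false) {
    $allcontent = getcontent($url);   // the fread() loop defined above
}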
b. Then get the title.
This is generally done with something like:
PHP:
------------------------------------------------------------------------------------
preg_match("||",$allcontent,$title);
The part inside the delimiters can be supplied by submitting a form.
You can also use a series of cut functions,
for example the cut($file, $from, $end) function mentioned above: the title can be cut out with ordinary string-handling functions (a sketch follows), and "getting the content" is discussed in more detail below.
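A minimal sketch of the cut() variant, assuming the title sits between the usual <title> tags; the markers could equally come from the form, and $articleurl is one of the links collected in step 1.
PHP:
------------------------------------------------------------------------------------
// Fetch the article page once, then cut the title out between its markers.
$allcontent = file_get_contents($articleurl);
$title = trim(cut($allcontent, "<title>", "</title>"));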
c. Get the content
The idea is the same as for the title, but the situation is more complicated because the content is rarely that simple.
1) Characteristic strings near the content, such as double quotes, spaces and newlines, are the big obstacles.
Double quotes need to be escaped, which addslashes() can handle.
Newline characters can be removed with, for example:
PHP:
------------------------------------------------------------------------------------
$a = ereg_replace("\r", "", $a);
$a = ereg_replace("\n", "", $a);
2) The second idea is to extract the content with a lot of cutting-related functions. That takes plenty of practice and debugging; I am working on it but have not made a breakthrough yet (a rough sketch of the idea follows).
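Here is a hedged sketch of that cutting approach; the $content_from and $content_end markers are invented placeholders standing in for whatever HTML actually surrounds the body text (normally submitted through the form).
PHP:
------------------------------------------------------------------------------------
// $content_from / $content_end: assumed form-submitted markers around the body text.
$content_from = '<div class="content">';   // placeholder marker
$content_end  = '</div>';                  // placeholder marker
$article = cut($allcontent, $content_from, $content_end);
// Clean up the characteristic strings mentioned above.
$article = str_replace(array("\r", "\n"), "", $article);  // drop newlines
$article = addslashes($article);                          // escape quotes before the INSERT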
3. Storage
a. Make sure the record can be inserted into your database.
For example, I can insert directly like this:
PHP:
------------------------------------------------------------------------------------
$sql="INSERT INTO $articles VALUES (, $title, , $article,, , clinch, from, keywords, 1, $column id, $time, 1); ";
The empty value at the start of VALUES is the ID column; the database fills it in automatically in ascending order (auto_increment).
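Running the statement is then just a query call; a minimal sketch using the old mysql_* functions, with placeholder connection details.
PHP:
------------------------------------------------------------------------------------
// Placeholders only: substitute your own host, user, password and database.
mysql_connect("localhost", "user", "password") or exit(mysql_error());
mysql_select_db("collect") or exit(mysql_error());
mysql_query($sql) or exit(mysql_error());   // $sql is the INSERT built above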