
Blog crawling system



Introduction

I had nothing to do over the weekend and was bored, so I built a blog crawling system in PHP. I often read cnblogs, so of course I started with the blog park (you can tell I still like it). My crawler is fairly simple: fetch the page content, use regular expressions to match what I want, and save it to the database. Of course, a few problems come up in practice. I also thought about this before starting: I want the system to be extensible, so that if I later want to add CSDN, 51CTO, Sina blog, and other sites, they can be added easily.
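The example code later in the post instantiates a `C\Spidercnblogs` class. One way to get the extensibility described above is a common base class that each site-specific spider extends. Below is a minimal sketch of that idea; apart from the method names `spiderUrls`, `grap`, and `save`, which appear later in the post, the class and helper names are illustrative and not the author's actual code.

```php
<?php
namespace C;

// Illustrative base class: each site-specific spider (cnblogs, CSDN, 51CTO, ...)
// only has to supply its own link rules and article-matching rules.
abstract class Spider
{
    protected $startUrl;

    public function __construct($startUrl)
    {
        $this->startUrl = $startUrl;
    }

    // Download raw HTML for a URL; cURL makes it easy to add timeouts and headers.
    protected function fetch($url)
    {
        $ch = curl_init($url);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
        curl_setopt($ch, CURLOPT_TIMEOUT, 10);
        $html = curl_exec($ch);
        curl_close($ch);
        return $html === false ? '' : $html;
    }

    // Site-specific parts: collect links, parse one article, persist it.
    abstract public function spiderUrls();
    abstract public function grap($url);
    abstract public function save();
}
```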

What can be crawled?

First of all, a caveat: this is a simple crawler, and not everything you see on a web page can be grabbed; some content cannot be fetched this way.

How does the crawler "crawl"? Start crawling from link A. If the depth is 1, only the content of link A itself is fetched. If the depth is 2, links are matched from the content of link A according to the specified rules, and each matched link is in turn processed with depth 1, and so on. Depth is the level of a link, and only with depth can the crawler actually crawl.

Of course, if you crawl specific content starting from a single link, what you can grab is very limited, or the crawl may die out early (the later levels no longer match any content), so you can set multiple starting links. You are also very likely to run into many duplicate links while crawling, so crawled links have to be marked to avoid fetching the same content repeatedly and producing redundant data. Several variables cache this information; their formats are shown below.

First, a hash array whose keys are the MD5 hashes of the URLs and whose values are the status 0. It maintains a de-duplicated set of URLs and looks like this:

```
Array
(
    [bc790cda87745fa78a2ebeffd8b48145] => 0
    [9868e03f81179419d5b74b5ee709cdc2] => 0
    [4a9506d20915a511a561be80986544be] => 0
    [818bcdd76aaa0d41ca88491812559585] => 0
    [9433c3f38fca129e46372282f1569757] => 0
    [f005698a0706284d4308f7b9cf2a9d35] => 0
    [e463afcf13948f0a36bf68b30d2e9091] => 0
    [23ce4775bd2ce9c75379890e84fadd8e] => 0
    ......
)
```
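A minimal sketch of how such a map might be maintained; the helper name `markUrl` is illustrative and not part of the original code:

```php
<?php
// Illustrative only: keep a map of md5(url) => status so each URL is queued once.
// Status 0 = discovered but not yet fetched; the post later suggests setting it to 1 once fetched.
$seen = array();

function markUrl($url, array &$seen)
{
    $key = md5($url);
    if (isset($seen[$key])) {
        return false;      // already known, skip to avoid duplicate work
    }
    $seen[$key] = 0;       // newly discovered
    return true;
}

markUrl('http://zzk.cnblogs.com/s?t=b&w=php&p=1', $seen);
markUrl('http://zzk.cnblogs.com/s?t=b&w=php&p=1', $seen); // second call returns false
print_r($seen);
```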

Second, the array of URLs to fetch. This part can still be optimized: I collect all links into the array first and then loop over it to fetch their content, which means everything up to the maximum depth minus 1 is effectively fetched twice. It would be better to grab the content at the same time as collecting the next level's links, and then set the corresponding status in the hash array above to 1 (already fetched), which would improve efficiency. Here is what the array that stores the links looks like:

```
Array
(
    [0] => Array
        (
            [0] => http://zzk.cnblogs.com/s?t=b&w=php&p=1
        )

    [1] => Array
        (
            [0] => http://www.cnblogs.com/baochuan/archive/2012/03/12/2391135.html
            [1] => http://www.cnblogs.com/ohmygirl/p/internal-variable-1.html
            [2] => http://www.cnblogs.com/zuoxiaolong/p/java1.html
                ......
        )

    [2] => Array
        (
            [0] => http://www.cnblogs.com/ohmygirl/category/623392.html
            [1] => http://www.cnblogs.com/ohmygirl/category/619019.html
            [2] => http://www.cnblogs.com/ohmygirl/category/619020.html
                ......
        )
)
```

Finally, all the links are merged into one array and returned, and the program loops over it to fetch the content. With a depth of 2 as above, the content of the level-0 links has already been fetched, but only to extract the level-1 links; the content of the level-1 links has also been fetched, but only to collect the level-2 links. When the content is actually grabbed, everything above is fetched again, and the status in the hash array is never used... (to be optimized).
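A rough sketch of the level-by-level collection described above, including the suggested optimization of marking a URL as fetched (status 1) once its content has been grabbed. `fetchHtml()` and `extractLinks()` are placeholders for the real page download and the site-specific regex matching; they are not from the original code.

```php
<?php
// Illustrative breadth-first collection of links, level by level, up to $maxDepth.
function collectLinks(array $startUrls, $maxDepth, array &$seen)
{
    $levels = array(0 => $startUrls);
    foreach ($startUrls as $url) {
        $seen[md5($url)] = 0;
    }

    for ($depth = 1; $depth <= $maxDepth; $depth++) {
        $levels[$depth] = array();
        foreach ($levels[$depth - 1] as $url) {
            $html = fetchHtml($url);       // download the page
            $seen[md5($url)] = 1;          // mark as fetched so it is not requested again later
            foreach (extractLinks($html) as $link) {
                $key = md5($link);
                if (!isset($seen[$key])) { // only queue links we have not seen before
                    $seen[$key] = 0;
                    $levels[$depth][] = $link;
                }
            }
        }
    }
    return $levels;                        // [0 => start urls, 1 => ..., 2 => ...]
}
```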

There is also the question of the regular expressions used to extract articles. Analyzing the article pages on cnblogs shows that the title and body can be matched quite reliably.

The title: the HTML for the title always follows the same format, so it can easily be matched with the regex below:

```
#<a\s*?id=\"cb_post_title_url\"[^>]*?>(.*?)<\/a>#is
```
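Applied to a downloaded page, the pattern can be used roughly like this (a sketch; `$html` is assumed to hold the fetched article source):

```php
<?php
// $html is assumed to hold the downloaded article page.
$title = '';
if (preg_match('#<a\s*?id=\"cb_post_title_url\"[^>]*?>(.*?)<\/a>#is', $html, $m)) {
    // Capture group 1 holds the title markup; strip any tags inside it.
    $title = trim(strip_tags($m[1]));
}
```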
The body: the body could be extracted easily with balancing groups, an advanced regex feature, but after fiddling with it for quite a while it seems PHP's support for balancing groups is not very good, so I gave them up. Looking at the HTML source, the article body can also be matched easily with the regex below, since almost every article contains the same surrounding markup:

```
#(<div\s*?id=\"cnblogs_post_body\"[^>]*?>.*)<div\s*id=\"blog_post_info_block\">#is
```
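Used the same way as the title pattern (again a sketch, with `$html` assumed to hold the fetched page):

```php
<?php
// $html is assumed to hold the downloaded article page.
$body = '';
if (preg_match('#(<div\s*?id=\"cnblogs_post_body\"[^>]*?>.*)<div\s*id=\"blog_post_info_block\">#is', $html, $m)) {
    $body = $m[1];   // everything from the post-body div up to the info block
}
```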

Getting started:
```php
for ($i = 1; $i <= 100; $i++) {
    echo "PAGE{$i}*************************[begin]***************************\r";
    $spidercnblogs = new C\Spidercnblogs("http://zzk.cnblogs.com/s?t=b&w=php&p={$i}");
    // Collect the article URLs from page $i of the search results.
    $urls = $spidercnblogs->spiderUrls();
    foreach ($urls as $key => $value) {
        // Grab each article and save it to the database.
        $spidercnblogs->grap($value);
        $spidercnblogs->save();
    }
}
```

At this point you can grab whatever you like. The crawl speed is not very fast: with 10 processes running on an ordinary PC, it took several hours to grab more than 400,000 records. Let's look at how the captured content displays after a little polishing; the basic cnblogs CSS is added here so you can compare the result with the original.
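The post does not show how the 10 processes were started. One simple way, assuming the crawl loop above is wrapped in a function that accepts a page range (here called `crawlPages`, an illustrative name), is to fork workers with the pcntl extension:

```php
<?php
// Illustrative only: split 100 search-result pages across 10 worker processes.
// Requires the pcntl extension (CLI); crawlPages() stands in for the loop shown above.
$workers = 10;
$pagesPerWorker = 10;

for ($w = 0; $w < $workers; $w++) {
    $pid = pcntl_fork();
    if ($pid === -1) {
        die("fork failed\n");
    }
    if ($pid === 0) {                 // child process: crawl its own page range
        $start = $w * $pagesPerWorker + 1;
        crawlPages($start, $start + $pagesPerWorker - 1);
        exit(0);
    }
}
while (pcntl_wait($status) > 0) {     // parent waits for all children to finish
}
```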

Captured content, slightly modified (screenshot in the original post).

Original content (screenshot in the original post).

GitHub: myBlogs


The copyright of this article belongs to the author iforever (luluyrt@163.com). Reprinting in any form without the author's consent is prohibited. If the article is reprinted, the author and a link to the original must be given in a prominent position on the page; otherwise we reserve the right to pursue legal liability.
