
A detailed walkthrough of scraping the Encyclopedia of Embarrassing Things (Qiushibaike) with Python

Mar 20, 2017 am 09:25 AM
python crawler

This was my first attempt at web scraping. After reading a post on Zhihu about scraping jokes from the Encyclopedia of Embarrassing Things (Qiushibaike), I decided to build a scraper of my own.

Goals:

1. Scrape the jokes from the Encyclopedia of Embarrassing Things.

2. Fetch one page of jokes at a time, loading the next page each time the user presses Enter.

Technical implementation: written in Python, using the requests library, the re library, and BeautifulSoup from the bs4 library.

Main idea: first we lay out the overall structure. Step one, write a function that fetches a web page with the requests library. Step two, parse the fetched page with BeautifulSoup (from the bs4 library) and use regular expressions to match the relevant joke information. Step three, print the extracted information. A main function ties all of these steps together.
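The three steps can be sketched as a minimal skeleton. Everything below is illustrative: the fetch step is stubbed with a canned HTML string so the sketch runs without network access, and the function names are placeholders, not the tutorial's.

```python
import re

def fetch_page(url):
    # Step 1 stand-in: the real script fetches the page with requests.get(url)
    return "<p class='content'><span>a short joke</span></p>"

def parse_page(html):
    # Step 2 stand-in: the real script parses with BeautifulSoup, then
    # matches the joke text with a regular expression like this one
    return re.findall(r'<span>(.*?)</span>', html, re.S)

def show(jokes):
    # Step 3: print every extracted joke
    for joke in jokes:
        print(joke)

show(parse_page(fetch_page('http://www.qiushibaike.com/8hr/page/1')))
```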

First, import the required libraries:


import requests
from bs4 import BeautifulSoup
import bs4
import re


Second, write a function that fetches the web page:


def getHTMLText(url):
    try:
        user_agent = 'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)'
        headers = {'User-Agent': user_agent}
        r = requests.get(url, headers=headers)
        r.raise_for_status()
        r.encoding = r.apparent_encoding
        return r.text
    except:
        return ""


Third, pass the fetched HTML to BeautifulSoup for parsing:

soup = BeautifulSoup(html, "html.parser")

We need each joke's text and its publisher. Inspecting the page source shows that the joke text sits inside tags matching

'p', attrs={'class': 'content'}

and the publisher's name inside tags matching

'p', attrs={'class': 'author clearfix'}

so we use the bs4 library to extract the contents of these two kinds of tags:


def fillUnivlist(lis, li, html, count):
    soup = BeautifulSoup(html, "html.parser")
    try:
        a = soup.find_all('p', attrs={'class': 'content'})
        ll = soup.find_all('p', attrs={'class': 'author clearfix'})

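On a small static snippet, find_all with an attrs filter works like this (the HTML below is made up for illustration; note that a class value containing a space, such as 'author clearfix', is matched as the exact attribute string):

```python
from bs4 import BeautifulSoup

html = ("<p class='author clearfix'><h2>someone</h2></p>"
        "<p class='content'><span>a short joke</span></p>")
soup = BeautifulSoup(html, "html.parser")

# Each find_all call returns a (possibly empty) list of matching tags
content_tags = soup.find_all('p', attrs={'class': 'content'})
author_tags = soup.find_all('p', attrs={'class': 'author clearfix'})
print(len(content_tags), len(author_tags))
```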

Then we pull the information out of those tags with regular expressions:


for sp in a:
    patten = re.compile(r'<span>(.*?)</span>', re.S)
    Info = re.findall(patten, str(sp))
    lis.append(Info)
    count = count + 1
for mc in ll:
    namePatten = re.compile(r'<h2>(.*?)</h2>', re.S)
    d = re.findall(namePatten, str(mc))
    li.append(d)


Note that find_all and re's findall both return lists. The regular expressions here only roughly extract the text; they do not remove the line breaks inside the tags.
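Cleaning up those line breaks (an optional extra step, not part of the original script) could look like this, on an illustrative string:

```python
import re

# A joke as it typically appears inside the matched tag: wrapped in
# newlines, possibly with <br/> marking line breaks inside the text
raw = "<span>\nfirst line<br/>second line\n</span>"

info = re.findall(r'<span>(.*?)</span>', raw, re.S)
# Replace <br/> variants with a space, then trim surrounding whitespace
cleaned = [re.sub(r'<br\s*/?>', ' ', s).strip() for s in info]
print(cleaned[0])  # first line second line
```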

Next, we just need to combine the contents of the two lists and print them:


def printUnivlist(lis, li, count):
    for i in range(count):
        a = li[i][0]
        b = lis[i][0]
        print("%s:%s" % (a, b))


Then I add a small input-control function: entering Q returns False and exits, while pressing Enter returns True and loads the next page of jokes:


def input_enter():
    input1 = input()
    if input1 == 'Q':
        return False
    else:
        return True


The main function drives the input control: if the control function returns False, nothing more is printed; if it returns True, output continues and a for loop loads the next page.
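The page URLs are built by appending the page number to a fixed prefix. A small sketch of the pattern (the query string is carried over from the tutorial's URL):

```python
# Build the URLs for the first few pages of the joke listing
base = 'http://www.qiushibaike.com/8hr/page/'
urls = [base + str(n) + '/?s=4966318' for n in range(1, 4)]
print(urls[0])  # http://www.qiushibaike.com/8hr/page/1/?s=4966318
```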


def main():
    passage = 0
    enable = True
    for i in range(20):
        mc = input_enter()
        if mc == True:
            lit = []
            li = []
            count = 0
            passage = passage + 1
            qbpassage = passage
            print(qbpassage)
            url = 'http://www.qiushibaike.com/8hr/page/' + str(qbpassage) + '/?s=4966318'
            a = getHTMLText(url)
            number = fillUnivlist(lit, li, a, count)
            printUnivlist(lit, li, number)
        else:
            break


Note that each pass of the for loop re-creates the lit and li lists (and resets count), so every page's jokes are printed correctly on their own.
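A toy illustration of why this matters (plain lists only, unrelated to the site): if the same lists were reused across iterations, each page's output would also repeat all earlier pages.

```python
def fill(dest, page_items):
    # Mimics fillUnivlist appending into caller-owned lists
    for item in page_items:
        dest.append(item)

# Reusing one list across "pages" accumulates everything
shared = []
for page in (['a1', 'a2'], ['b1']):
    fill(shared, page)
print(len(shared))  # 3

# Re-creating the list each pass, as main() does with lit and li,
# keeps each page's results separate
per_page_lengths = []
for page in (['a1', 'a2'], ['b1']):
    fresh = []
    fill(fresh, page)
    per_page_lengths.append(len(fresh))
print(per_page_lengths)  # [2, 1]
```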

Here is the full source code:


import requests
from bs4 import BeautifulSoup
import bs4
import re

def getHTMLText(url):
    try:
        user_agent = 'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)'
        headers = {'User-Agent': user_agent}
        r = requests.get(url, headers=headers)
        r.raise_for_status()
        r.encoding = r.apparent_encoding
        return r.text
    except:
        return ""

def fillUnivlist(lis, li, html, count):
    soup = BeautifulSoup(html, "html.parser")
    try:
        a = soup.find_all('p', attrs={'class': 'content'})
        ll = soup.find_all('p', attrs={'class': 'author clearfix'})
        for sp in a:
            patten = re.compile(r'<span>(.*?)</span>', re.S)
            Info = re.findall(patten, str(sp))
            lis.append(Info)
            count = count + 1
        for mc in ll:
            namePatten = re.compile(r'<h2>(.*?)</h2>', re.S)
            d = re.findall(namePatten, str(mc))
            li.append(d)
    except:
        return ""
    return count

def printUnivlist(lis, li, count):
    for i in range(count):
        a = li[i][0]
        b = lis[i][0]
        print("%s:%s" % (a, b))

def input_enter():
    input1 = input()
    if input1 == 'Q':
        return False
    else:
        return True

def main():
    passage = 0
    enable = True
    for i in range(20):
        mc = input_enter()
        if mc == True:
            lit = []
            li = []
            count = 0
            passage = passage + 1
            qbpassage = passage
            print(qbpassage)
            url = 'http://www.qiushibaike.com/8hr/page/' + str(qbpassage) + '/?s=4966318'
            a = getHTMLText(url)
            number = fillUnivlist(lit, li, a, count)
            printUnivlist(lit, li, number)
        else:
            break

main()


This was my first scraper, and there are still many areas that could be optimized. I hope everyone will point them out.
