Implementing a Python crawler to convert a tutorial into a PDF e-book

高洛峰
Release: 2017-02-21 17:06:15

This article shares the method and code for using a Python crawler to convert "Liao Xuefeng's Python Tutorial" into a PDF. Friends in need can refer to it.

There seems to be no language better suited to writing crawlers than Python. The Python community provides so many crawler tools that you will be dazzled, and with the various libraries that can be used directly, you can write a crawler in minutes. Today I want to write a crawler that pulls down Liao Xuefeng's Python tutorial and turns it into a PDF e-book that everyone can read offline.

Before we start writing the crawler, let's first analyze the page structure of the website. The left side of the page is the directory outline of the tutorial, and each URL corresponds to an article on the right. The top right is the article's title, and the middle is the text body of the article. The text content is what we care about: the data we want to crawl is the text part of every page. Below the text is the user comment area, which is of no use to us, so we can ignore it.

[Figure: page structure of the tutorial website]

Tool preparation

After you have figured out the basic structure of the website, you can start preparing the tool kits the crawler depends on. requests and BeautifulSoup are two major crawler artifacts: requests is used for network requests, and BeautifulSoup is used to operate on HTML data. With these two tools we can work quickly. We don't need a crawler framework like scrapy; using it in a small program like this would be killing a chicken with a sledgehammer. In addition, since we are converting HTML files to PDF, we also need corresponding library support. wkhtmltopdf is a very good tool that can convert HTML to PDF and is available on multiple platforms, and pdfkit is the Python wrapper for wkhtmltopdf. First install the following dependency packages, then install wkhtmltopdf.

pip install requests
pip install beautifulsoup4
pip install pdfkit

Install wkhtmltopdf

For the Windows platform, download the stable version directly from the wkhtmltopdf official website and install it. After the installation is complete, add the program's executable path to the system $PATH environment variable; otherwise pdfkit cannot find wkhtmltopdf and you will get the error "No wkhtmltopdf executable found". Ubuntu and CentOS can install it directly from the command line:

$ sudo apt-get install wkhtmltopdf  # ubuntu
$ sudo yum install wkhtmltopdf      # centos
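Alternatively, if modifying $PATH is inconvenient, pdfkit can be pointed at the binary directly via its configuration object. A minimal sketch, assuming a typical Windows install location (the path and the sample filenames below are only placeholders; adjust them to your machine):

import pdfkit

# Hypothetical install location; replace it with the actual path on your machine
path_to_wkhtmltopdf = r"C:\Program Files\wkhtmltopdf\bin\wkhtmltopdf.exe"

# configuration() tells pdfkit where the wkhtmltopdf executable lives,
# so it no longer has to be discoverable through $PATH
config = pdfkit.configuration(wkhtmltopdf=path_to_wkhtmltopdf)
pdfkit.from_file("sample.html", "sample.pdf", configuration=config)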

Crawler implementation

After everything is ready, you can start coding, but before writing any code I still need to sort out my thoughts. The purpose of the program is to save the HTML text parts corresponding to all the URLs locally, and then use pdfkit to convert these files into a single PDF file. Let's split the task: first, save the HTML text corresponding to one URL locally; then find all the URLs and perform the same operation.

Use the Chrome browser and press F12 to find the tag that wraps the body of the page: the element with class x-wiki-content, which contains the text content of the article. After using requests to load the entire page locally, you can use BeautifulSoup to operate on the HTML DOM elements and extract the text content.

[Figure: locating the body element in Chrome developer tools]
The specific implementation code is as follows: use the soup.find_all function to find the body tag, and then save the content of the body part to the a.html file.

import requests
from bs4 import BeautifulSoup

def parse_url_to_html(url, name="a.html"):
  # Fetch the page and pull out the article body
  response = requests.get(url)
  soup = BeautifulSoup(response.content, "html5lib")
  body = soup.find_all(class_="x-wiki-content")[0]
  html = str(body)
  # The file is opened in binary mode, so encode the string first
  with open(name, "wb") as f:
    f.write(html.encode("utf-8"))

The second step is to parse out all the URLs on the left side of the page. Use the same method to find the left-hand menu tag <ul class="uk-nav uk-nav-side">.

[Figure: locating the left-hand menu element in Chrome developer tools]

The specific implementation logic of the code: there are two elements on the page whose class attribute is uk-nav uk-nav-side, and the actual directory listing is the second one. Once all the URLs are obtained, the function that converts a URL to HTML was already written in the first step.

def get_url_list():
  """
  Get the list of all URLs in the directory
  """
  response = requests.get("http://www.liaoxuefeng.com/wiki/0014316089557264a6b348958f449949df42a6d3a2e542c000")
  soup = BeautifulSoup(response.content, "html5lib")
  menu_tag = soup.find_all(class_="uk-nav uk-nav-side")[1]
  urls = []
  for li in menu_tag.find_all("li"):
    url = "http://www.liaoxuefeng.com" + li.a.get('href')
    urls.append(url)
  return urls

The last step is to convert the HTML into a PDF file. This step is very simple, because pdfkit has encapsulated all the logic; you only need to call the function pdfkit.from_file.

import pdfkit

def save_pdf(htmls, file_name):
  """
  Convert all the html files into a single pdf file
  """
  options = {
    'page-size': 'Letter',
    'encoding': "UTF-8",
    'custom-header': [
      ('Accept-Encoding', 'gzip')
    ]
  }
  pdfkit.from_file(htmls, file_name, options=options)

Execute the save_pdf function and the PDF file of the e-book is generated. Here is the rendering:

[Figure: rendering of the generated PDF e-book]
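For completeness, here is a minimal driver sketch that wires the three functions above together. The per-page filenames, the output name liaoxuefeng.pdf, and the clean-up step are my own additions rather than code from the original article:

import os

def main():
  urls = get_url_list()
  htmls = []
  # Save every article to its own temporary html file
  for index, url in enumerate(urls):
    name = "{}.html".format(index)
    parse_url_to_html(url, name)
    htmls.append(name)
  # Merge the html files into a single pdf e-book
  save_pdf(htmls, "liaoxuefeng.pdf")
  # Remove the temporary html files afterwards
  for name in htmls:
    os.remove(name)

if __name__ == "__main__":
  main()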

Summary

The total amount of code adds up to less than 50 lines. However, wait a minute: the code given above actually omits some details, such as how to get the title of the article. The img tags in the body content use relative paths, so if you want the images to display normally in the PDF, you need to change the relative paths to absolute paths; and the saved temporary HTML files must be deleted.
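As an illustration of the image problem, here is a minimal sketch; the helper name fix_img_src is hypothetical and not part of the article's code. It rewrites relative img paths to absolute URLs with urllib.parse.urljoin, and would be called inside parse_url_to_html on the extracted body, before the tag is converted to a string:

from urllib.parse import urljoin

def fix_img_src(body, page_url):
  # Rewrite relative img paths to absolute URLs so that
  # wkhtmltopdf can download the images when rendering the PDF
  for img in body.find_all("img"):
    src = img.get("src")
    if src and not src.startswith("http"):
      img["src"] = urljoin(page_url, src)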

