
How to implement information crawler using Node.js (detailed tutorial)

亚连
Release: 2018-06-13 14:15:08

This article walks through the process of developing an information crawler with Node.js; the detailed steps are described below.

A recent project needed to gather some news information. Since the project is written in Node.js, it was natural to write the crawler in Node.js as well.

Project address: github.com/mrtanweijie… The project crawls content from Readhub, Open Source China (开源中国), Developer Headlines (开发者头条), and 36Kr. Pagination is not handled for now: the crawler runs once a day, and fetching only the latest items each run meets the current need. This may be improved later.

The crawler's workflow can be summarized as: download the target site's HTML locally, then extract the desired data from it.

1. Page download

Node.js has many HTTP request libraries; request is used here. The main code is as follows:

// request needs to be required at the top of the module: const request = require('request')
requestDownloadHTML () {
 const options = {
  url: this.url,
  headers: {
   // Rotate the User-Agent so requests look less uniform
   'User-Agent': this.randomUserAgent()
  }
 }
 return new Promise((resolve, reject) => {
  request(options, (err, response, body) => {
   if (!err && response.statusCode === 200) {
    return resolve(body)
   } else {
    return reject(err)
   }
  })
 })
}
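The randomUserAgent() method called above is not shown in the article. A minimal sketch, assuming it simply picks one entry at random from a hypothetical list of common User-Agent strings, could look like this:

randomUserAgent () {
 // Hypothetical helper: return a random User-Agent so requests look less uniform.
 // The list below is only illustrative; the real project may maintain its own.
 const userAgents = [
  'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/66.0 Safari/537.36',
  'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/66.0 Safari/537.36',
  'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/66.0 Safari/537.36'
 ]
 return userAgents[Math.floor(Math.random() * userAgents.length)]
}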

The request is wrapped in a Promise so that async/await can be used later. Because many websites are rendered on the client side, the downloaded page may not contain the desired HTML content; for such sites Google's puppeteer can be used to download the fully rendered page. As is well known, npm i puppeteer may fail because it has to download a Chromium binary; just try a few more times :)

puppeteerDownloadHTML () {
 return new Promise(async (resolve, reject) => {
  try {
   // Launch a headless browser, render the page, and grab the <body> HTML
   const browser = await puppeteer.launch({ headless: true })
   const page = await browser.newPage()
   await page.goto(this.url)
   const bodyHandle = await page.$('body')
   const bodyHTML = await page.evaluate(body => body.innerHTML, bodyHandle)
   // Close the browser so the headless Chromium process does not leak
   await browser.close()
   return resolve(bodyHTML)
  } catch (err) {
   console.log(err)
   return reject(err)
  }
 })
}

Of course, for client-rendered pages it is best to call the site's data API directly when one is available, which makes the subsequent HTML parsing unnecessary. With a simple wrapper around the two download methods, the downloader can then be used like this:

await new Downloader('http://36kr.com/newsflashes', DOWNLOADER.puppeteer).downloadHTML()
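The Downloader wrapper itself is not shown in the article. A minimal sketch, assuming a DOWNLOADER constant is used to pick between the two download methods above, might look like this:

// Hypothetical sketch of the Downloader wrapper; DOWNLOADER names the two strategies.
const DOWNLOADER = { request: 'request', puppeteer: 'puppeteer' }

class Downloader {
 constructor (url, type = DOWNLOADER.request) {
  this.url = url
  this.type = type
 }

 downloadHTML () {
  // Dispatch to the matching download method
  return this.type === DOWNLOADER.puppeteer
   ? this.puppeteerDownloadHTML()
   : this.requestDownloadHTML()
 }

 // requestDownloadHTML () { ... }   see section 1
 // puppeteerDownloadHTML () { ... } see section 1
}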

2. HTML content extraction

For HTML content extraction, cheerio is the tool of choice. cheerio exposes the same interface as jQuery and is very simple to use. Open the page in the browser, press F12 to inspect the element nodes you need, and then extract the content accordingly:

readHubExtract () {
 // Each .enableVisited node under #itemList is one news item
 let nodeList = this.$('#itemList').find('.enableVisited')
 nodeList.each((i, e) => {
  let a = this.$(e).find('a')
  this.extractData.push(
   this.extractDataFactory(
    a.attr('href'),   // link
    a.text(),         // title
    '',               // summary (empty for Readhub)
    SOURCECODE.Readhub
   )
  )
 })
 return this.extractData
}
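The this.$ used above is presumably a cheerio instance created from the downloaded HTML. A minimal sketch of that step, using a hypothetical Extractor class, could be:

// Hypothetical: load the downloaded HTML into cheerio so it can be queried like jQuery
const cheerio = require('cheerio')

class Extractor {
 constructor (html) {
  this.$ = cheerio.load(html)   // this.$('#itemList') etc. then work as in the snippet above
  this.extractData = []
 }
 // readHubExtract () { ... }  see above
}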

3. Scheduled tasks

Use cron to run the crawler once a day:
// cron needs to be required: const cron = require('cron')
function job () {
 let cronJob = new cron.CronJob({
  cronTime: cronConfig.cronTime,   // when to fire, e.g. once every morning
  onTick: () => {
   spider()                        // run the crawler
  },
  start: false                     // do not start automatically; started explicitly below
 })
 cronJob.start()
}
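cronConfig is not shown in the article; cronTime is just a standard cron expression. For example, assuming the crawler should run at 08:30 every morning, the config could look like this:

// Hypothetical config: the cron package accepts a six-field expression (seconds first)
const cronConfig = {
 cronTime: '0 30 8 * * *'   // second 0, minute 30, hour 8, every day
}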

4. Data persistence

Strictly speaking, data persistence should not be the crawler's concern, but the project needs it, so mongoose is used to create the model:

import mongoose from 'mongoose'
const Schema = mongoose.Schema
const NewsSchema = new Schema(
 {
  title: { type: 'String', required: true },
  url: { type: 'String', required: true },
  summary: String,
  recommend: { type: Boolean, default: false },
  source: { type: Number, required: true, default: 0 },
  status: { type: Number, required: true, default: 0 },
  createdTime: { type: Date, default: Date.now }
 },
 {
  collection: 'news'
 }
)
export default mongoose.model('news', NewsSchema)
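The article does not show the MongoDB connection itself. A minimal connection sketch, assuming a local MongoDB instance and a hypothetical database name, might be:

// Hypothetical: connect to a local MongoDB instance before the model is used
import mongoose from 'mongoose'
mongoose.connect('mongodb://localhost:27017/news_spider')
mongoose.connection.on('error', err => console.error('MongoDB connection error:', err))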

Basic operations

import { OBJ_STATUS } from '../../Constants'

class BaseService {
 constructor (ObjModel) {
  this.ObjModel = ObjModel
 }

 saveObject (objData) {
  // Wrap the mongoose save callback in a Promise
  return new Promise((resolve, reject) => {
   new this.ObjModel(objData).save((err, result) => {
    if (err) {
     return reject(err)
    }
    return resolve(result)
   })
  })
 }
}
export default BaseService

The news service:

import BaseService from './BaseService'
import News from '../models/News'
class NewsService extends BaseService {}
export default new NewsService(News)

Now the data can be saved:

await newsService.batchSave(newsListTem)
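batchSave is not defined in the snippets above. A minimal sketch, assuming it simply calls saveObject for every item and resolves once all saves finish, could be added to BaseService like this:

// Hypothetical BaseService method: save a list of items in parallel
batchSave (objDataList) {
 return Promise.all(objDataList.map(objData => this.saveObject(objData)))
}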

For more details, clone the project from GitHub and have a look.

Summary

The above is what I have compiled for everyone; I hope it will be helpful.

Related articles:

How to build a d3 force-directed graph using react (detailed tutorial)

How to implement instant messaging using nodejs

About axios issues related to Vue.use
