
How to implement Baidu index crawler function

php中世界最好的语言
Release: 2018-04-12 16:45:48

This time I will show you how to implement a Baidu Index crawler and what pitfalls to watch out for along the way. The following is a practical case; let's take a look.

I once read an interesting article introducing the front-end anti-crawling techniques used by various major sites. As that article itself admitted, though, there is no 100% effective anti-crawler method. This article introduces a simple way to bypass all of these front-end anti-crawler measures.

The code below takes Baidu Index as the example, and it has been packaged into a Baidu Index crawler Node library:

https://github.com/Coffcer/baidu-index-spider

Note: please do not abuse crawlers or cause trouble for others.

Baidu Index’s anti-crawler strategy

Looking at the Baidu Index interface, the index data is rendered as a trend chart. When the mouse hovers over a particular day, two requests are fired and the results are shown in a floating box.

It turns out that Baidu Index does apply some anti-crawler measures on the front end. Of the two requests triggered on hover, one returns a fragment of HTML and the other returns a generated image. The HTML contains no actual values; instead, it positions slices of the image with width and margin-left so that the right digits appear in the tooltip. Moreover, the requests carry parameters such as res and res1 that we do not know how to reproduce, so it is hard to obtain the data through conventional simulated requests or HTML scraping.

Crawler Idea

Breaking through Baidu's anti-crawler measures is actually very simple: just ignore how they work. All we need to do is simulate user actions, screenshot the values we want, and run image recognition on the screenshots. Roughly, the steps are:

  1. Simulate login

  2. Open the index page

  3. Move the mouse to the specified date

  4. Wait for the request to end and intercept the numerical part of the picture

  5. Image recognition gets the value

  6. Loop through steps 3 to 5 to get the value corresponding to each date
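Step 6 loops over every date in the range. As a small illustration (this helper is hypothetical and not part of the linked library), the list of dates to iterate over could be generated like this:

```javascript
// Enumerate every date between start and end (inclusive) as 'YYYY-MM-DD'
// strings, so the crawl loop (steps 3 to 5) can run once per day.
// UTC methods are used so the output does not depend on the local timezone.
function datesBetween(start, end) {
  const result = [];
  const cur = new Date(start);
  const last = new Date(end);
  while (cur <= last) {
    result.push(cur.toISOString().slice(0, 10));
    cur.setUTCDate(cur.getUTCDate() + 1); // advance one day
  }
  return result;
}
```

Each returned date string would then drive one hover-screenshot-recognize pass over the chart.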

In theory this method can crawl the content of any website. Next we will implement the crawler step by step, using the following libraries:

  1. puppeteer: simulates browser operations

  2. node-tesseract: a wrapper around Tesseract, used for image recognition

  3. jimp: image cropping

Install Puppeteer and simulate user operations

Puppeteer is a Chrome automation tool from the Google Chrome team, used to drive Chrome programmatically. It can simulate user actions for automated testing, crawlers, and so on. The API is simple, and there are plenty of introductory tutorials online; this article should give you a working idea of how to use it.

API documentation: https://github.com/GoogleChrome/puppeteer/blob/master/docs/api.md

Installation:

npm install --save puppeteer
Puppeteer downloads Chromium automatically during installation to guarantee it can run. Networks in mainland China may fail to download Chromium, however; if the download fails, you can install with cnpm, or point the download at the Taobao mirror before installing:

npm config set PUPPETEER_DOWNLOAD_HOST=https://npm.taobao.org/mirrors
npm install --save puppeteer
You can also skip the Chromium download during installation and instead point Puppeteer at a locally installed Chrome at launch:

// npm
npm install --save puppeteer --ignore-scripts
// node
puppeteer.launch({ executablePath: '/path/to/Chrome' });

Implementation

To keep things tidy, only the main parts are listed below; code involving selectors has been replaced with `...`. For the complete code, see the GitHub repository linked at the top of the article.

Open the Baidu Index page and simulate login

All we do here is simulate user actions, clicking and typing step by step. Login captchas are not handled; that is a topic of its own, and if you are already logged into Baidu locally you generally will not be shown one.

// Launch the browser.
// If headless is true, Puppeteer drives Chromium in the background and you
// will not see the browser; set it to false to watch every action on screen.
const browser = await puppeteer.launch({ headless: false });
const page = await browser.newPage();
// Open Baidu Index
await page.goto(BAIDU_INDEX_URL);
// Simulate login
await page.click('...');
await page.waitForSelector('...');
// Type the Baidu account name and password, then log in
await page.type('...', 'username');
await page.type('...', 'password');
await page.click('...');
await page.waitForNavigation();
console.log('✅ Login successful');

Simulate moving the mouse and obtain the required data

Scroll the page to the trend-chart area, move the mouse over a date, wait for the request to finish and the tooltip to display the value, then take a screenshot.

// Get the coordinates of the first day in the chart
const position = await page.evaluate(() => {
 const $image = document.querySelector('...');
 const $area = document.querySelector('...');
 const areaRect = $area.getBoundingClientRect();
 const imageRect = $image.getBoundingClientRect();
 // Scroll the chart into view
 window.scrollBy(0, areaRect.top);
 return { x: imageRect.x, y: 200 };
});
// Move the mouse to trigger the tooltip
await page.mouse.move(position.x, position.y);
await page.waitForSelector('...');
// Read the tooltip geometry
const tooltipInfo = await page.evaluate(() => {
 const $tooltip = document.querySelector('...');
 const $title = $tooltip.querySelector('...');
 const $value = $tooltip.querySelector('...');
 const valueRect = $value.getBoundingClientRect();
 const padding = 5;
 return {
  title: $title.textContent.split(' ')[0],
  x: valueRect.x - padding,
  y: valueRect.y,
  width: valueRect.width + padding * 2,
  height: valueRect.height
 };
});

Screenshot

Compute the coordinates of the value, take the screenshot, then crop the image with jimp.

await page.screenshot({ path: imgPath });
// Crop the image, keeping only the numeric part
const img = await jimp.read(imgPath);
await img.crop(tooltipInfo.x, tooltipInfo.y, tooltipInfo.width, tooltipInfo.height);
// Scaling the image up improves recognition accuracy
await img.scale(5);
await img.write(imgPath);

Image recognition

Here we use Tesseract for image recognition. Tesseract is an open-source OCR tool from Google that recognizes text in images, and its accuracy can be improved through training. A simple Node wrapper already exists on GitHub: node-tesseract. You need to install Tesseract yourself and add it to your PATH first.

Tesseract.process(imgPath, (err, val) => {
 if (err || val == null) {
  console.error('❌ Recognition failed: ' + imgPath);
  return;
 }
 console.log(val);
});

In practice, untrained Tesseract makes a handful of recognition errors, such as reading a number starting with 9 as starting with 3, or emitting a stray backtick. Training Tesseract improves accuracy, but if the errors are always the same, they can also simply be patched with a regular expression.
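As an illustration of the regex approach, a small cleanup function for the raw OCR string might look like the following. The substitution rules here are assumptions for the sketch; you would calibrate them against the consistent errors your own Tesseract output actually shows.

```javascript
// Clean up a raw OCR reading of a numeric tooltip value: strip stray
// quote/backtick artifacts, fix common letter-for-digit confusions, then
// drop everything that is not a digit (including thousands separators).
function fixOcrValue(raw) {
  return raw
    .replace(/[`"']/g, '')   // stray quote/backtick artifacts
    .replace(/[oO]/g, '0')   // letter O misread for zero
    .replace(/[lI]/g, '1')   // l or I misread for one
    .replace(/[^\d]/g, '');  // drop anything else, e.g. commas
}
```

The same pattern extends to any other consistent misread you observe, one `replace` rule per confusion.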

Packaging

With the pieces above in place, combining them into a Baidu Index crawler Node library is straightforward. There is still plenty of room for optimization, such as batch crawling or crawling a specified number of days, but none of it is hard to build on this foundation.

const fs = require('fs');
const path = require('path');
const recognition = require('./src/recognition');
const Spider = require('./src/spider');

// imgDir (the screenshot directory) is defined in the full source; see the repository.
module.exports = {
 async run (word, options, puppeteerOptions = { headless: true }) {
  const spider = new Spider({
   imgDir,
   ...options
  }, puppeteerOptions);
  // Crawl the data
  await spider.run(word);
  // Read the captured screenshots and run image recognition on each
  const wordDir = path.resolve(imgDir, word);
  const imgNames = fs.readdirSync(wordDir)
   .filter(item => path.extname(item) === '.png');
  const result = [];
  for (let i = 0; i < imgNames.length; i++) {
   const imgPath = path.resolve(wordDir, imgNames[i]);
   const val = await recognition.run(imgPath);
   result.push(val);
  }
  return result;
 }
};

Anti-crawling

Finally, how can this kind of crawler be stopped? Personally, I think checking the mouse movement trajectory may be one option. Of course, there is no 100% effective front-end anti-crawler technique; all we can do is make crawling a little harder.
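To sketch the trajectory idea: real users move the mouse along noisy curves, while a naive script jumps the pointer straight to the target. A heuristic check, assuming sampled {x, y} pointer points and a made-up tolerance threshold, could flag paths whose samples all lie on one straight line:

```javascript
// Heuristic: a pointer path is "bot-like" if every sampled point lies
// (almost) exactly on the straight line between the first and last point.
// Human movement virtually always shows some perpendicular deviation.
function looksBotLike(points, tolerance = 2) {
  if (points.length < 3) return true; // too few samples: effectively a jump
  const first = points[0];
  const last = points[points.length - 1];
  const dx = last.x - first.x;
  const dy = last.y - first.y;
  const len = Math.hypot(dx, dy) || 1;
  // Maximum perpendicular distance of any sample from the first-last line
  const maxDeviation = Math.max(...points.map(p =>
    Math.abs(dy * (p.x - first.x) - dx * (p.y - first.y)) / len
  ));
  return maxDeviation < tolerance;
}
```

A determined crawler can of course add jitter to its mouse movements, which is why this only raises the bar rather than blocking crawling outright.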

Having worked through this case, you should now have the technique down. For more, see the other related articles on the PHP Chinese website!
