
How to use node to simulate login and crawl the page based on puppeteer


This time I will show you how to use node to simulate login and capture pages based on puppeteer, and what the precautions are. The following is a practical case; let's take a look.

About heat maps

In the website analytics industry, a website heat map reflects users' behavior on a website very well, making it possible to analyze user preferences and optimize the site in a targeted way. Below is an example of a heat map (from ptengine).

In the picture above you can clearly see where users' attention falls. We are not concerned here with the heat map as a product feature; this article gives a brief analysis and summary of how heat maps are implemented.

Mainstream implementation methods of heat maps

Generally, displaying a heat map requires the following stages:

1. Obtain the website page
2. Obtain the processed user data
3. Draw the heat map

This article focuses on stage 1, introducing in detail the mainstream ways of obtaining the website page for a heat map:

1. Use an iframe to embed the user's website directly
2. Crawl the user's page, save it locally, and embed the local resources through an iframe (the "local" resources here live on the analysis tool's side)

Advantages and disadvantages of the two approaches

The first approach, embedding the user's website directly, has certain restrictions. For example, the user's website may disallow iframe nesting to prevent iframe hijacking, either by setting the X-Frame-Options response header to SAMEORIGIN (sometimes attempted via a meta tag, though browsers only honor the header), or by frame-busting js such as if(window.top !== window.self){ window.top.location = window.location;}. In that case the client has to make changes to their own website before it can be loaded in the analysis tool's iframe, which may not be convenient, because not everyone who needs to analyze a website is also able to administer it.
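For illustration, here is a minimal sketch of how a user site might set that header, assuming a Node.js/Express server (Express and the route are assumptions for the sketch, not part of the original article):

const express = require('express');
const app = express();

// every response carries X-Frame-Options, so the analysis tool's
// iframe cannot embed this site's pages cross-origin
app.use((req, res, next) => {
  res.setHeader('X-Frame-Options', 'SAMEORIGIN');
  next();
});

app.get('/', (req, res) => res.send('<h1>user page</h1>'));
app.listen(3000);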

The second approach captures the website page to a local server and then serves the captured copy from there. In this case the page is already in our hands, so we can do whatever we want with it. First, the X-Frame-Options SAMEORIGIN restriction is bypassed entirely, and we only need to deal with the js-based frame-busting control; captured pages can be given targeted processing (such as removing the offending js control, or adding our own js). However, this method also has many shortcomings: it cannot crawl SPA pages properly, cannot crawl pages that require user login authorization, cannot crawl pages that the user has explicitly restricted, and so on.
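For the step of removing the offending js control or adding our own js, a minimal sketch of post-processing a captured page might look like this (cheerio and the injected script path are assumptions for illustration):

const cheerio = require('cheerio');

const sanitizeCapturedPage = (html) => {
  const $ = cheerio.load(html);
  // drop scripts so frame-busting code such as
  // if (window.top !== window.self) { ... } never runs
  $('script').remove();
  // inject our own script (hypothetical path for the analysis tool)
  $('body').append('<script src="/heatmap-tool.js"></script>');
  return $.html();
};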

Both approaches also face https/http resource issues: because of browsers' mixed-content restrictions, an https page cannot load http resources, so for the best compatibility the heat map analysis tool needs to be served over the http protocol. Of course, per-site optimization can be carried out depending on the customer websites being visited.

How to optimize the crawling of website pages

Here we make some optimizations based on puppeteer to raise the success rate when crawling website pages, mainly for the following two kinds of pages:

1. SPA pages

SPA pages are considered mainstream today, but they are notoriously unfriendly to search engines. The usual page crawler is really just a simple crawler, and the process amounts to issuing an http get request to the user's website (more precisely, to the user's website server). This way of crawling has inherent problems. First, the request goes directly to the user's server, which may place many restrictions on non-browser agents that have to be bypassed. Second, the request returns the raw content: whatever is rendered by js in the browser cannot be obtained (although once the page is embedded in an iframe, js execution can make up for this to some extent). Finally, if the page is an SPA, only the template is fetched, which makes the heat map display very unfriendly. If we build on puppeteer instead, the process becomes:

puppeteer launches the browser and opens the user's website --> the page renders --> the rendered result is returned. In pseudocode:

const puppeteer = require('puppeteer');

const getHtml = async (url) => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto(url);
  const html = await page.content(); // content after in-browser rendering
  await browser.close();
  return html;
};

In this way, the content we get is the fully rendered content, regardless of how the page is rendered (client-side or server-side).
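A quick usage sketch (example.com is just a placeholder):

// usage sketch: log the rendered html of a page
getHtml('https://example.com').then((html) => {
  console.log(html);
});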

2. Pages that require login

Pages that require login actually fall into several cases:

The page can only be viewed after logging in; if you are not logged in, you are redirected to the login page (typical of all kinds of admin systems).

For this type of page, what we need to do is simulate login. Simulating login means letting the browser itself log in; the user has to provide the username and password for the site in question, and then we follow this flow:

Visit the user's website --> the site detects that we are not logged in and redirects to login --> puppeteer drives the browser to log in automatically and then redirects to the page we actually need to capture. This can be illustrated with the following pseudocode:

const puppeteer = require('puppeteer');

const autoLogin = async (url) => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto(url);
  // the site notices we are not logged in and redirects to the login page
  await page.waitForNavigation();
  // log in with the credentials provided by the user
  await page.type('#username', 'username provided by the user');
  await page.type('#password', 'password provided by the user');
  // after login succeeds, make sure the redirect lands on the requested page
  await Promise.all([
    page.waitForNavigation(),
    page.click('#btn_login'),
  ]);
  const html = await page.content();
  await browser.close();
  return html;
};

The page can be viewed whether or not you are logged in, but the content shown differs after login (typical of e-commerce or portal pages).

This case is simpler to handle; it can roughly be treated as the following steps:

Launch the browser with puppeteer and open the requested page --> click the login button --> enter the username and password and log in --> reload the page

The basic code is as follows:

const puppeteer = require('puppeteer');

const autoLoginV2 = async (url) => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto(url);
  await page.click('#btn_show_login');
  // log in with the credentials provided by the user
  await page.type('#username', 'username provided by the user');
  await page.type('#password', 'password provided by the user');
  await page.click('#btn_login');
  // whether a reload is needed after login depends on the actual site
  await page.reload();
  const html = await page.content();
  await browser.close();
  return html;
};

Summary

I'll finish the summary tomorrow; it's the end of the workday for today.

Addendum (paying off yesterday's debt): although puppeteer can capture page content quite gracefully, it still has many limitations:

1. The captured content is the raw html after rendering, so resource paths (css, images, javascript) are relative; the page will not display properly once saved locally and needs special handling (a sketch follows after this list). js needs no special handling and can even be removed, since the rendered structure is already complete.

2. Capturing a page through puppeteer performs worse than a plain http get, because of the added rendering step.

3. Likewise, the completeness of the page cannot be guaranteed; the probability of a complete capture is just greatly increased. The various wait methods provided by the page object can address this, but each site behaves differently and needs different handling, so the logic cannot be reused (a second sketch below illustrates one way to cope).
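As a sketch of the special handling mentioned in limitation 1, relative resource paths could be rewritten to absolute URLs before saving; cheerio and the attribute choices below are assumptions for illustration:

const cheerio = require('cheerio');

// rewrite relative css/image/script paths to absolute URLs so the
// captured page still displays once it is saved locally
const absolutizeResources = (html, baseUrl) => {
  const $ = cheerio.load(html);
  $('link[href]').each((_, el) => {
    $(el).attr('href', new URL($(el).attr('href'), baseUrl).href);
  });
  $('img[src], script[src]').each((_, el) => {
    $(el).attr('src', new URL($(el).attr('src'), baseUrl).href);
  });
  return $.html();
};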
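For limitation 3, one way to cope, sketched below under the assumption that each site supplies its own readiness signal, is to make the wait condition a parameter; networkidle0 and the optional selector are illustrative choices, not a universal fix:

const puppeteer = require('puppeteer');

// sketch: per-site wait configuration, since sites signal
// "page is ready" differently and no single rule can be reused
const captureWhenReady = async (url, readySelector) => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  // wait for the network to go quiet before reading content
  await page.goto(url, { waitUntil: 'networkidle0' });
  if (readySelector) {
    await page.waitForSelector(readySelector); // site-specific signal
  }
  const html = await page.content();
  await browser.close();
  return html;
};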


The above is the detailed content of How to use node to simulate login and crawl the page based on puppeteer. For more information, please follow other related articles on the PHP Chinese website!
