
How to use Node.js crawler to implement web page requests

亚连
Release: 2018-06-12 14:54:49


This article introduces the web request module of a Node.js crawler for your reference. The details are as follows:

Note: if you have downloaded the latest version of nodegrass, some methods have since been updated, so the examples in this article no longer apply. Please see the examples at the open source address instead.

1. Why should I write such a module?

The author wanted to write a crawler in Node.js. The methods the official Node.js API provides for requesting remote resources are already very simple; see

http://nodejs.org/api/http.html. Two methods are provided for HTTP requests: http.get(options, callback) and http.request(options, callback).

As the names suggest, get is for GET requests, while request exposes more parameters, such as other request methods, the port of the target host, and so on. HTTPS requests work much like HTTP. The simplest example:

var https = require('https');

// Request the page and stream the response body to stdout
https.get('https://encrypted.google.com/', function(res) {
 console.log("statusCode: ", res.statusCode);
 console.log("headers: ", res.headers);

 // The body arrives in chunks via 'data' events
 res.on('data', function(d) {
  process.stdout.write(d);
 });

}).on('error', function(e) {
 console.error(e);
});
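For comparison, http.request lets you set the method, host, port, and path explicitly. A minimal sketch (the host and path below are placeholders, not from the original article):

var http = require('http');

var options = {
 hostname: 'www.example.com', // placeholder host
 port: 80,
 path: '/',
 method: 'GET'
};

var req = http.request(options, function(res) {
 console.log("statusCode: ", res.statusCode);
 res.on('data', function(d) {
  process.stdout.write(d);
 });
});

req.on('error', function(e) {
 console.error(e);
});

// Unlike http.get, http.request does not call end() for you
req.end();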

For the above code, all we want is to request the remote host and obtain the response: the status, the headers, and the body. The second parameter of get is a callback, through which we receive the response asynchronously. Inside that callback, the res object must listen for 'data' events; the second parameter of the on method is yet another callback, and once you have d (the chunk of response data you requested), processing it will most likely introduce callbacks again, layer upon layer, until you are dizzy... As for asynchronous programming, students used to writing code synchronously find it very confusing. Of course, some excellent synchronization libraries have appeared at home and abroad, such as Lao Zhao's Wind.js... but that is drifting off topic. In the end, what we want from calling get is simply the response; we do not care about the listening process such as res.on. Being too lazy to write res.on('data', func) every time is exactly why nodegrass, introduced today, was born.
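Conceptually, a wrapper like nodegrass just buffers the 'data' chunks and fires a single callback when the response ends. A minimal sketch of the idea (simplified, not nodegrass's actual source):

var http = require('http');

// Collect the whole body, then call back once with (data, status, headers)
function simpleGet(targetUrl, callback) {
 return http.get(targetUrl, function(res) {
  var chunks = [];
  res.on('data', function(chunk) {
   chunks.push(chunk);
  });
  res.on('end', function() {
   callback(Buffer.concat(chunks).toString(), res.statusCode, res.headers);
  });
 });
}

simpleGet('http://www.baidu.com/', function(data, status, headers) {
 console.log(status);
});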

2. Nodegrass requests resources, like jQuery's $.get(url, func)

The simplest example:

var nodegrass = require('nodegrass');
nodegrass.get("http://www.baidu.com",function(data,status,headers){
  console.log(status);
  console.log(headers);
  console.log(data);
},'gbk').on('error', function(e) {
  console.log("Got error: " + e.message);
});

At first glance it looks no different from the official get, and indeed it is almost the same =. =! It merely removes the layer of res.on('data', func) event-listening callbacks, which, believe it or not, feels much more comfortable to me. The second parameter is again a callback; within it, data is the response body, status is the response status, and headers are the response headers. Once we have the response, we can extract whatever information interests us from the resource; in this example it is simply printed to the console. The third parameter is the character encoding. Node.js does not currently support gbk, so nodegrass uses iconv-lite internally to handle it. If the page you request is encoded in gbk, like Baidu, just add this parameter.
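For reference, decoding a gbk page with iconv-lite looks roughly like this (a sketch assuming iconv-lite is installed; this is not nodegrass's internal code):

var http = require('http');
var iconv = require('iconv-lite');

http.get('http://www.baidu.com/', function(res) {
 var chunks = [];
 res.on('data', function(chunk) {
  chunks.push(chunk);
 });
 res.on('end', function() {
  // Decode the raw bytes as gbk instead of the default utf8
  var body = iconv.decode(Buffer.concat(chunks), 'gbk');
  console.log(body);
 });
});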

So what about HTTPS requests? With the official API you have to require the https module, but its get method works just like http's, so nodegrass integrates them as well. See the example:

var nodegrass = require('nodegrass');
nodegrass.get("https://github.com",function(data,status,headers){
  console.log(status);
  console.log(headers);
  console.log(data);
},'utf8').on('error', function(e) {
  console.log("Got error: " + e.message);
});

nodegrass automatically determines whether a request is http or https from the URL, so your URL must include the protocol: you cannot write just www.baidu.com/; it must be http://www.baidu.com/.
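Presumably the detection amounts to picking the module from the URL's protocol, along these lines (a guess at the idea, not nodegrass's actual source):

var http = require('http'),
  https = require('https'),
  urlModule = require('url');

// Choose http or https based on the parsed protocol
function pickModule(targetUrl) {
 var protocol = urlModule.parse(targetUrl).protocol; // 'http:' or 'https:'
 return protocol === 'https:' ? https : http;
}

console.log(pickModule('https://github.com') === https); // true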

For POST requests, nodegrass provides the post method. See the example:

var ng = require('nodegrass');

// headers and options are defined below; callback is supplied by the caller
ng.post("https://api.weibo.com/oauth2/access_token", function(data, status, headers) {
  var accessToken = JSON.parse(data);
  var err = null;
  if (accessToken.error) {
    err = accessToken;
  }
  callback(err, accessToken);
}, headers, options, 'utf8');

The above is part of Sina Weibo's OAuth2.0 access_token flow, using nodegrass's post method to request the access_token API.

Compared with the get method, post takes two more parameters: headers, the request headers, and options, the POST data. Both are object literals:

var headers = {
  'Content-Type': 'application/x-www-form-urlencoded',
  'Content-Length': data.length // data is the urlencoded request body
};

var options = {
  client_id: 'id',
  client_secret: 'cs',
  grant_type: 'authorization_code',
  redirect_uri: 'your callback url',
  code: acode // acode is the authorization code from the OAuth redirect
};
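Presumably options is serialized into the urlencoded body whose size Content-Length describes. With the core querystring module that would look like this (a sketch; the names match the snippet above):

var querystring = require('querystring');

var options = {
 client_id: 'id',
 client_secret: 'cs',
 grant_type: 'authorization_code',
 redirect_uri: 'your callback url',
 code: 'acode'
};

// Serializes to client_id=id&client_secret=cs&...
var data = querystring.stringify(options);

var headers = {
 'Content-Type': 'application/x-www-form-urlencoded',
 'Content-Length': Buffer.byteLength(data) // byte length is safer than data.length
};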

3. Using nodegrass as a proxy server?

Look at the example:

var ng = require('nodegrass'),
  http = require('http'),
  url = require('url');

http.createServer(function(req, res) {
 var pathname = url.parse(req.url).pathname;

 if (pathname === '/') {
  // Fetch the remote page and relay it to the local client
  ng.get('http://www.cnblogs.com/', function(data) {
   res.writeHead(200, {'Content-Type': 'text/html;charset=utf-8'});
   res.write(data + "\n");
   res.end();
  }, 'utf8');
 }
}).listen(8088);

console.log('server listening 8088...');

It's that simple. Of course, a real proxy server is much more complicated, and this hardly counts as one, but at least if you visit port 8088 locally, you will see the cnblogs (Blog Park) homepage, won't you?
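To verify, you could hit the local port from another Node process (or just open http://localhost:8088/ in a browser):

var http = require('http');

// Request the local relay started above and print its status
http.get('http://localhost:8088/', function(res) {
 console.log('statusCode:', res.statusCode);
 res.resume(); // drain the body so the socket is released
});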

The open source address of nodegrass: https://github.com/scottkiss/nodegrass

The above is what I have compiled for everyone. I hope it will be helpful to you.
