
Detailed explanation of the web request module of Node.js crawler

小云云
Release: 2018-01-13 09:07:03
Original

This article introduces the web request module of a Node.js crawler, nodegrass. The details are as follows:

Note: If you download the latest version of nodegrass, some methods have been updated and the examples in this article no longer apply. Please check the examples at the open-source address for details.

1. Why should I write such a module?

The author wanted to use Node.js to write a crawler. The official Node.js API for requesting remote resources is simple enough; see http://nodejs.org/api/http.html. Two methods are provided for HTTP requests: http.get(options, callback) and http.request(options, callback).

As the names suggest, get is for GET requests, while request accepts more parameters, such as other request methods, the port of the target host, and so on. HTTPS requests work much the same as HTTP. The simplest example:


var https = require('https');
https.get('https://encrypted.google.com/', function(res) {
 console.log("statusCode: ", res.statusCode);
 console.log("headers: ", res.headers);

 res.on('data', function(d) {
  process.stdout.write(d);
 });

}).on('error', function(e) {
 console.error(e);
});

In the code above, all we really want is the response information: the status, the headers, and the body. The second parameter of get is a callback, through which we obtain the response asynchronously. Inside that callback, the res object must listen for 'data' events, and the handler passed to on is yet another callback; once you have a chunk d, processing it is likely to introduce still more callbacks, layer upon layer, until you lose track. Students used to writing code synchronously find this style of asynchronous programming very confusing. There are of course some excellent flow-control libraries at home and abroad, such as Lao Zhao's Wind.js, but that is getting off topic. Ultimately, all we want from get is the response itself; we do not care about the listening process, and it is tiresome to write res.on('data', func) every single time. That is why nodegrass, the module introduced today, was born.

2. nodegrass requests resources, like jQuery's $.get(url, func)

The simplest example:


var nodegrass = require('nodegrass');
nodegrass.get("http://www.baidu.com",function(data,status,headers){
  console.log(status);
  console.log(headers);
  console.log(data);
},'gbk').on('error', function(e) {
  console.log("Got error: " + e.message);
});

At first glance this looks no different from the official get, and indeed it is almost the same; it simply drops the extra layer of res.on('data', func) event-listening callbacks. Believe it or not, it feels much more comfortable to me. The second parameter is again a callback, whose parameter data is the response body, status is the response status, and headers are the response headers. Once we have the response, we can extract whatever information interests us; in this example it is simply printed to the console. The third parameter is the character encoding. Node.js does not currently support GBK, so nodegrass uses iconv-lite internally to handle it. If the page you request is GBK-encoded, as Baidu's is, just add this parameter.

So what about HTTPS requests? With the official API you have to require the https module separately, but its get method works much like http's, so nodegrass integrates the two. Look at the example:


var nodegrass = require('nodegrass');
nodegrass.get("https://github.com",function(data,status,headers){
  console.log(status);
  console.log(headers);
  console.log(data);
},'utf8').on('error', function(e) {
  console.log("Got error: " + e.message);
});

nodegrass automatically identifies whether the request is HTTP or HTTPS based on the URL, so the URL must include the scheme: you cannot write just www.baidu.com/; it needs to be http://www.baidu.com/.

For POST requests, nodegrass provides the post method. See the example:


var ng = require('nodegrass');
// headers, options, acode and callback are defined by the surrounding
// OAuth flow; headers and options are shown below.
ng.post("https://api.weibo.com/oauth2/access_token", function(data, status, headers) {
  var accessToken = JSON.parse(data);
  var err = null;
  if (accessToken.error) {
    err = accessToken;
  }
  callback(err, accessToken);
}, headers, options, 'utf8');

The above is part of a Sina Weibo OAuth 2.0 flow that requests an accessToken, using nodegrass's post to call the access_token API.

Compared with the get method, the post method takes two extra parameters: headers, the request headers, and options, the POST data. Both are object literals:


var headers = {
  'Content-Type': 'application/x-www-form-urlencoded',
  'Content-Length': data.length
};

var options = {
  client_id: 'id',
  client_secret: 'cs',
  grant_type: 'authorization_code',
  redirect_uri: 'your callback url',
  code: acode
};

3. Use nodegrass as a proxy server?

Look at the example:


var ng = require('nodegrass'),
    http = require('http'),
    url = require('url');

http.createServer(function(req, res) {
  var pathname = url.parse(req.url).pathname;

  if (pathname === '/') {
    ng.get('http://www.cnblogs.com/', function(data) {
      res.writeHead(200, {'Content-Type': 'text/html;charset=utf-8'});
      res.write(data + "\n");
      res.end();
    }, 'utf8');
  }
}).listen(8088);
console.log('server listening 8088...');

It's that simple. A real proxy server is of course much more complicated than this, but at least when you visit local port 8088, you should see the cnblogs page.

The open source address of nodegrass: https://github.com/scottkiss/nodegrass



