
Efficient Java crawler practice: sharing of web data crawling techniques


Introduction:

With the rapid growth of the Internet, a great deal of valuable data is stored across web pages. Collecting this data by visiting each page manually and extracting the information by hand is tedious and time-consuming. Web crawlers were developed to automate this work, and Java is one of the most common languages for writing them. This article explains how to write an efficient web crawler in Java and demonstrates the process with concrete code examples.

1. Basic principles of crawlers

A web crawler works by simulating a browser: it sends HTTP requests, parses the returned pages, and extracts the required data. The process roughly breaks down into the following steps:

  1. Send an HTTP request: Use a Java networking library, such as HttpURLConnection or Apache HttpClient, to construct an HTTP request and send it to the target page.
  2. Parse the web page: Depending on the page's structure, use a suitable parsing tool (for example, Jsoup for HTML, or XPath queries for XML) to parse the HTML, XML, or JSON response and extract the required data.
  3. Process and store the data: Clean and filter the extracted data, then store it in a database, a file, or memory for later use.

2. Setting up the crawler development environment

To start developing Java crawlers, first make sure the Java Development Kit (JDK) and a Java IDE, such as Eclipse or IntelliJ IDEA, are installed. Then add the required libraries, such as Apache HttpClient and Jsoup, to the project as dependencies.
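If the project uses Maven, the dependencies can be declared as follows. This is a minimal sketch; the version numbers are illustrative assumptions and should be replaced with whatever versions you actually use.

<dependencies>
    <!-- Apache HttpClient for sending HTTP requests -->
    <dependency>
        <groupId>org.apache.httpcomponents</groupId>
        <artifactId>httpclient</artifactId>
        <version>4.5.14</version>
    </dependency>
    <!-- Jsoup for parsing HTML -->
    <dependency>
        <groupId>org.jsoup</groupId>
        <artifactId>jsoup</artifactId>
        <version>1.15.4</version>
    </dependency>
    <!-- MySQL JDBC driver for the storage example -->
    <dependency>
        <groupId>mysql</groupId>
        <artifactId>mysql-connector-java</artifactId>
        <version>8.0.33</version>
    </dependency>
</dependencies>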

3. Practical exercise: crawling Douban movie ranking data

To practice the crawler development process, we will crawl the Douban Top 250 movie ranking as an example. Our goal is to extract each movie's title, rating, and number of raters.
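To keep the extracted fields together, it helps to define a simple data holder. The following Movie class is a hypothetical addition for illustration, not part of the original tutorial's code:

public class Movie {
    private final String title;   // movie title
    private final String rating;  // rating, e.g. "9.7"
    private final String votes;   // number of raters

    public Movie(String title, String rating, String votes) {
        this.title = title;
        this.rating = rating;
        this.votes = votes;
    }

    public String getTitle() { return title; }
    public String getRating() { return rating; }
    public String getVotes() { return votes; }
}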

  1. Send an HTTP request

First, we use a Java networking library to send an HTTP request and fetch the page content. The following sample code uses the HttpClient library to send a GET request:

import org.apache.http.HttpEntity;
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.util.EntityUtils;

public class HttpClientExample {
    public static void main(String[] args) {
        HttpGet httpGet = new HttpGet("https://movie.douban.com/top250");

        // Close both the client and the response automatically
        try (CloseableHttpClient httpClient = HttpClients.createDefault();
             CloseableHttpResponse response = httpClient.execute(httpGet)) {
            HttpEntity entity = response.getEntity();
            String result = EntityUtils.toString(entity);
            System.out.println(result);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
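In practice, many sites (Douban included) may reject requests that carry no browser-like User-Agent header, and a request without timeouts can hang indefinitely. The following sketch shows one way to set both with HttpClient 4.x; the header value and timeout figures are illustrative assumptions:

import org.apache.http.client.config.RequestConfig;
import org.apache.http.client.methods.HttpGet;

public class RequestSetupExample {
    public static HttpGet buildRequest(String url) {
        HttpGet httpGet = new HttpGet(url);

        // Pretend to be a regular browser; some sites block default client UAs
        httpGet.setHeader("User-Agent",
                "Mozilla/5.0 (Windows NT 10.0; Win64; x64)");

        // Fail fast instead of hanging: connect/read timeouts in milliseconds
        RequestConfig config = RequestConfig.custom()
                .setConnectTimeout(5000)
                .setSocketTimeout(5000)
                .build();
        httpGet.setConfig(config);

        return httpGet;
    }
}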
  2. Parse the web page

By sending the HTTP request, we obtained the HTML content of the Douban movie ranking page. Next, we use a parsing library to extract the required data. The following sample code uses the Jsoup library to parse the HTML page:

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;
import org.jsoup.select.Elements;

public class JsoupExample {
    public static void main(String[] args) {
        try {
            // Fetch and parse the ranking page
            Document document = Jsoup.connect("https://movie.douban.com/top250").get();
            // Each <li> under ol.grid_view is one movie entry
            Elements elements = document.select("ol.grid_view li");

            for (Element element : elements) {
                String title = element.select(".title").text();
                String rating = element.select(".rating_num").text();
                String votes = element.select(".star span:nth-child(4)").text();

                System.out.println("Title: " + title);
                System.out.println("Rating: " + rating);
                System.out.println("Raters: " + votes);
                System.out.println("-------------------------");
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
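Note that the Douban Top 250 list spans multiple pages, addressed by a start query parameter in steps of 25. The sketch below iterates over the pages; the userAgent value and the polite one-second delay between requests are illustrative assumptions:

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;

public class PaginationExample {
    public static void main(String[] args) throws Exception {
        // The Top 250 list spans 10 pages of 25 entries each
        for (int start = 0; start < 250; start += 25) {
            Document document = Jsoup
                    .connect("https://movie.douban.com/top250?start=" + start)
                    .userAgent("Mozilla/5.0")  // some sites reject the default UA
                    .timeout(5000)             // milliseconds
                    .get();
            System.out.println("Fetched page starting at " + start
                    + ", title: " + document.title());
            Thread.sleep(1000);  // be polite: pause between requests
        }
    }
}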
  3. Data processing and storage

In real applications, we may need to further process and store the extracted data. For example, we can store the data in a database for later use. The following sample code stores data in a MySQL database:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class DataProcessingExample {
    public static void main(String[] args) {
        String jdbcUrl = "jdbc:mysql://localhost:3306/spider";
        String username = "root";
        String password = "password";

        String sql = "INSERT INTO movie (title, rating, votes) VALUES (?, ?, ?)";

        try (Connection conn = DriverManager.getConnection(jdbcUrl, username, password);
             PreparedStatement statement = conn.prepareStatement(sql)) {
            // Suppose the following data was extracted from the page
            String title = "肖申克的救赎";
            String rating = "9.7";
            String votes = "2404447";

            statement.setString(1, title);
            statement.setString(2, rating);
            statement.setString(3, votes);

            int rowsAffected = statement.executeUpdate();
            System.out.println("Inserted " + rowsAffected + " row(s)");
        } catch (SQLException e) {
            e.printStackTrace();
        }
    }
}
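The example above assumes the movie table already exists and inserts one row at a time. The sketch below creates the table if needed and uses JDBC batching to insert many rows in one round trip; the table schema is an illustrative assumption, and the Movie class is the hypothetical holder defined earlier:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.Statement;
import java.util.List;

public class BatchInsertExample {
    public static void saveAll(List<Movie> movies) throws Exception {
        String jdbcUrl = "jdbc:mysql://localhost:3306/spider";

        try (Connection conn = DriverManager.getConnection(jdbcUrl, "root", "password")) {
            // Create the table on first run (an assumed, illustrative schema)
            try (Statement ddl = conn.createStatement()) {
                ddl.execute("CREATE TABLE IF NOT EXISTS movie ("
                        + "id INT AUTO_INCREMENT PRIMARY KEY, "
                        + "title VARCHAR(255), rating VARCHAR(10), votes VARCHAR(20))");
            }

            String sql = "INSERT INTO movie (title, rating, votes) VALUES (?, ?, ?)";
            try (PreparedStatement statement = conn.prepareStatement(sql)) {
                for (Movie movie : movies) {
                    statement.setString(1, movie.getTitle());
                    statement.setString(2, movie.getRating());
                    statement.setString(3, movie.getVotes());
                    statement.addBatch();  // queue the row instead of executing immediately
                }
                // Send all queued rows in one batch
                statement.executeBatch();
            }
        }
    }
}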

4. Summary

This article introduced the basic principles of Java crawlers and showed, through concrete code examples, how to write an efficient web crawler in Java. With this foundation, readers can develop more complex and flexible crawler programs as their needs require. In practice, also pay attention to the lawful use of crawlers: respect each website's privacy policy and terms of service to avoid legal disputes. I hope this article serves as a useful guide for learning and applying Java crawlers.
