Search engine implementation based on Linux
The search engine gives users a tool for quickly obtaining web page information. Its main function is to retrieve a back-end database of web pages according to keywords entered by the user and to return links and summaries of the relevant pages. By scope, search is generally divided into site search and global web search. With the rapid growth in the number of web pages, search engines have become an essential means of finding information on the Internet. All large websites now provide web page search services, and many companies have emerged to offer professional search engine services to large websites, such as Google, which provides search for Yahoo, and Baidu, which provides search for domestic sites such as Sina and 263. Professional search services are expensive, however, and free search engine software is mostly oriented toward English-language search, so neither is well suited to intranet environments such as campus networks.
A search engine generally consists of three parts: a web page collection program, back-end organization and storage of the page data, and retrieval of that data. The key factor determining the quality of a search engine is query response time, that is, how a large volume of web page data is organized to support full-text retrieval.
GNU/Linux is an excellent network operating system. Its distributions integrate a large number of network applications, such as a web server (Apache with PHP), a directory server (OpenLDAP), a scripting language (Perl), and a web page collection program (Wget). By combining these components, a simple and efficient search engine server can be built.
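As a rough sketch of how these pieces might be combined, the commands below outline one possible processing flow; the directory paths, the example site URL, and the filter.pl script name are hypothetical placeholders rather than anything prescribed here.

    # One possible flow, with hypothetical paths and script names:

    # 1. Mirror the target site into a local spool directory with Wget.
    wget -r -P /var/spool/search/mirror http://www.example.edu/

    # 2. Strip the HTML markup from the mirrored pages with a Perl filter
    #    script, writing plain-text copies for later indexing.
    perl /usr/local/search/bin/filter.pl \
        /var/spool/search/mirror /var/spool/search/text

    # 3. Publish the PHP query page under Apache's document root so that
    #    users can search the indexed data from a browser.
    cp search.php /var/www/html/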
1. Basic composition and usage
1. Web page data collection
Wget is an excellent web page collection program that can easily mirror website content to a local directory and allows flexible control over the types of pages collected, the recursion depth, directory limits, collection time, and so on. Delegating page collection to a dedicated program both reduces design effort and improves system performance. To keep the volume of local data small, only files that can be queried need to be collected: HTML files, text files, and asp and php script pages (fetching only their default output), while graphics files and other data files are skipped.
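As an illustration, the following invocation is a sketch only; the target URL http://www.example.edu/ and the local directory /var/spool/search/mirror are assumptions, not values given here. It restricts collection to the queryable page types mentioned above and limits the recursion depth.

    # Option summary:
    #   -r    recursive retrieval
    #   -l 5  limit recursion to 5 levels
    #   -np   do not ascend above the starting directory
    #   -A    accept only the listed suffixes; graphics and other binary
    #         files are skipped because they are not listed
    #   -w 1  wait one second between requests
    #   -P    local directory in which to store the mirror
    wget -r -l 5 -np -A html,htm,txt,asp,php -w 1 \
        -P /var/spool/search/mirror http://www.example.edu/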
2. Web page data filtering
Since HTML files contain a large number of tags, such as