Web search engines are essential for indexing vast amounts of online information, making it accessible in milliseconds. In this project, I built a search engine in Go (Golang) named RelaxSearch. It combines web scraping, periodic data indexing, and search functionality by integrating with Elasticsearch—a powerful search and analytics engine. In this blog, I’ll walk you through the main components of RelaxSearch, the architecture, and how it efficiently scrapes and indexes data for fast, keyword-based search.
Creating a search engine project from scratch is a great way to understand web scraping, data indexing, and efficient search techniques. I wanted a simple but functional search engine with fast data retrieval and easy extensibility, combining Go's efficiency with Elasticsearch's powerful indexing.
RelaxSearch is built around two primary modules:
RelaxEngine is a web scraper written in Go that navigates web pages, extracting and storing their content. It runs as a cron job, operating at regular intervals (e.g., every 30 minutes) to keep the index updated with fresh web data.
RelaxWeb provides a RESTful API endpoint, making it easy to query and retrieve data stored in Elasticsearch. The API accepts parameters for keywords, pagination, and date filtering, and returns matching content as JSON.
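For example, a request could look like the one below. Only the keyword parameter appears in the handler shown later; the host and port and the pagination and date parameter names (from, size, dateRangeStart) are illustrative, so check the repository for the exact ones RelaxWeb accepts:

curl "http://localhost:8080/search?keyword=golang&from=0&size=10&dateRangeStart=2024-01-01"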
Below are some important components and code excerpts from RelaxSearch to illustrate how it works.
The core functionality is in the main.go file, where RelaxEngine initializes a scheduler using gocron to manage cron jobs, sets up the Elasticsearch client, and begins crawling from the seed URL.
package main

import (
    "time"

    "github.com/go-co-op/gocron"

    // Module-local packages; the import paths depend on the project's module name.
    "relaxsearch/config"
    "relaxsearch/crawler"
)

func main() {
    // Load settings (Elasticsearch URL, crawl depth, etc.) from the environment.
    cfg := config.LoadConfig()
    esClient := crawler.NewElasticsearchClient(cfg.ElasticsearchURL)

    // Create the crawler with the configured depth limit.
    c := crawler.NewCrawler(cfg.DepthLimit, 5)
    seedURL := "https://example.com/" // Replace with starting URL

    // Kick off a crawl from the seed URL every 30 minutes.
    s := gocron.NewScheduler(time.UTC)
    s.Every(30).Minutes().Do(func() {
        go c.StartCrawling(seedURL, 0, esClient)
    })
    s.StartBlocking()
}
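The config package isn't included in the excerpts. A minimal LoadConfig that reads from environment variables might look like the sketch below; the variable names (ELASTICSEARCH_URL, DEPTH_LIMIT) and the defaults are my assumptions, not necessarily the ones RelaxSearch uses:

package config

import (
    "os"
    "strconv"
)

type Config struct {
    ElasticsearchURL string
    DepthLimit       int
}

// LoadConfig reads settings from the environment, falling back to defaults.
func LoadConfig() Config {
    url := os.Getenv("ELASTICSEARCH_URL")
    if url == "" {
        url = "http://localhost:9200"
    }
    depth := 3
    if v := os.Getenv("DEPTH_LIMIT"); v != "" {
        if n, err := strconv.Atoi(v); err == nil {
            depth = n
        }
    }
    return Config{ElasticsearchURL: url, DepthLimit: depth}
}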
The crawler.go file handles web page requests, extracts content, and indexes it; each scraped page is stored in Elasticsearch through the elastic client.
func (c *Crawler) StartCrawling(pageURL string, depth int, esClient *elastic.Client) {
    // Stop once the depth limit is exceeded or the page has already been crawled.
    if depth > c.DepthLimit || c.isVisited(pageURL) {
        return
    }
    c.markVisited(pageURL)

    // Fetch the page, then extract its links and textual content.
    links, title, content, description, err := c.fetchAndParsePage(pageURL)
    if err == nil {
        pageData := PageData{URL: pageURL, Title: title, Content: content, Description: description}
        IndexPageData(esClient, pageData)
    }

    // Recurse into every discovered link, one level deeper.
    for _, link := range links {
        c.StartCrawling(link, depth+1, esClient)
    }
}
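IndexPageData itself isn't shown above. Assuming the elastic package is github.com/olivere/elastic/v7 (consistent with the *elastic.Client in the signature), a minimal sketch could look like this; the "pages" index name and the URL-as-document-ID choice are assumptions, and the function also needs context and log from the standard library:

// IndexPageData stores one crawled page in Elasticsearch.
func IndexPageData(esClient *elastic.Client, page PageData) {
    _, err := esClient.Index().
        Index("pages").
        Id(page.URL). // re-crawling the same URL overwrites the old document
        BodyJson(page).
        Do(context.Background())
    if err != nil {
        log.Printf("failed to index %s: %v", page.URL, err)
    }
}

Keying documents by URL keeps the index free of duplicates when the cron job re-crawls the same pages.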
In the RelaxWeb service, an API endpoint provides full-text search. The /search endpoint receives requests, queries Elasticsearch, and returns content that matches the given keywords.
func searchHandler(w http.ResponseWriter, r *http.Request) {
    // Read the search term from the query string, e.g. /search?keyword=golang.
    keyword := r.URL.Query().Get("keyword")
    results := queryElasticsearch(keyword)

    // Return the matching documents as JSON.
    w.Header().Set("Content-Type", "application/json")
    json.NewEncoder(w).Encode(results)
}
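The queryElasticsearch helper isn't shown in the excerpt. A minimal sketch with the olivere/elastic client could look like the following; the package-level esClient, the "pages" index name, the queried fields, and the fixed page size are all assumptions (it also needs context, encoding/json, and log from the standard library):

// queryElasticsearch runs a full-text match query and decodes the hits.
func queryElasticsearch(keyword string) []PageData {
    // Match the keyword against several text fields at once.
    query := elastic.NewMultiMatchQuery(keyword, "title", "content", "description")
    res, err := esClient.Search().
        Index("pages").
        Query(query).
        From(0).Size(10). // fixed page size for the sketch; RelaxWeb accepts pagination parameters
        Do(context.Background())
    if err != nil {
        log.Printf("search failed: %v", err)
        return nil
    }

    // Decode each hit back into a PageData value.
    var results []PageData
    for _, hit := range res.Hits.Hits {
        var page PageData
        if err := json.Unmarshal(hit.Source, &page); err == nil {
            results = append(results, page)
        }
    }
    return results
}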
To try RelaxSearch yourself, clone the repository:
git clone https://github.com/Ravikisha/RelaxSearch.git
cd RelaxSearch
Configuration
Update .env files for both RelaxEngine and RelaxWeb with Elasticsearch credentials.
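A typical .env might look like the sketch below; the variable names are assumptions, so use the keys the repository's sample configuration actually defines:

ELASTICSEARCH_URL=http://elasticsearch:9200
ELASTICSEARCH_USERNAME=elastic
ELASTICSEARCH_PASSWORD=changeme
DEPTH_LIMIT=3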
Run with Docker
RelaxSearch uses Docker for easy setup. Simply run:
docker-compose up --build
RelaxSearch is an educational and practical demonstration of a basic search engine. While still a prototype, the project has been instrumental in understanding the fundamentals of web scraping, full-text search, and efficient data indexing with Go and Elasticsearch, and it leaves plenty of room for improvements and real-world use in scalable environments.
Explore the GitHub repository to try out RelaxSearch for yourself!