
A Detailed Explanation of Logstash, Elasticsearch's Close Companion

WBOY
Release: 2024-07-18 06:56:47


Logstash is a powerful data-processing tool: it ingests data, transforms and formats it, and outputs it in a structured form, all through an extensive plugin system. It is most commonly used for log processing.

1. Principle

Input

Data can be ingested from files, storage systems, databases, and many other sources. From the input stage an event can go one of two ways: it is either handed to a Filter for parsing and pruning, or passed straight to an Output.

Filter

Filters dynamically parse and transform data, so event fields can be filtered and pruned in whatever way you define.

Output

Logstash offers a wide range of output plugins, so data can be sent wherever it is needed, which opens up many downstream use cases.
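
To make the three stages concrete, here is a minimal, purely illustrative pipeline sketch (the stdin input and the added field are assumptions for demonstration, not part of this tutorial's setup). Every pipeline follows the same input → filter → output shape:

input {
    stdin { }                                               # read events from standard input
}
filter {
    mutate {
        add_field => { "environment" => "test" }            # example transformation: add a field to every event
    }
}
output {
    stdout {
        codec => rubydebug                                  # print the structured event to the screen
    }
}

The real configurations later in this article follow exactly this structure, only with different plugins in each stage.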


2. Installation and use
1. Installation
wget https://artifacts.elastic.co/downloads/logstash/logstash-6.0.1.rpm
yum install -y ./logstash-6.0.1.rpm
2. Logstash configuration file
vim /etc/logstash/logstash.yml
path.data: /var/lib/logstash                                    # data directory
path.config: /etc/logstash/conf.d/*.conf                        # configuration files for the pipeline plugins (input, filter, output, etc.)
path.logs: /var/log/logstash                                    # log directory
3. JVM configuration file in Logstash

Logstash is developed in Java and runs inside the JVM. The JVM can be tuned through jvm.options, for example the minimum and maximum heap size, the garbage collector, and so on. Only the two most commonly used settings are shown here.

The JVM heap should be neither too large nor too small: too large and it starves the rest of the operating system; too small and Logstash may fail to start.

vim /etc/logstash/jvm.options                               # JVM settings for Logstash
-Xms256m                                                    # minimum heap size
-Xmx1g                                                      # maximum heap size
4. The simplest log collection configuration

Install httpd for testing and configure Logstash to collect Apache's access_log file.

yum install httpd
echo "Hello world" > /var/www/html/index.html               # install httpd and create a home page for testing
vim /etc/logstash/conf.d/test.conf
input {
    file {                                                  # use the file plugin as the data input
        path => ['/var/log/httpd/access_log']               # path of the file to read
        start_position => beginning                         # read from the start of the file; "end" would read from the end
    }
}
output {                                                    # where to send the data
    stdout {
        codec => rubydebug                                  # print to the screen
    }
}
5. Test the configuration file

The logstash command is installed with the package but is not added to the PATH, so it has to be invoked by its absolute path.

/usr/share/logstash/bin/logstash -t -f /etc/logstash/conf.d/test.conf     # test the configuration file; -t must come before -f
Configuration OK                                                          # the test passed
6. Start Logstash

Run Logstash in the current session and leave that session open; call it session 1. Then open a new window as session 2.

/usr/share/logstash/bin/logstash  -f  /etc/logstash/conf.d/test.conf

Once Logstash has started, send a test request with curl from session 2:

curl 172.18.68.14

Switch back to session 1 and the output appears:

{
      "@version" => "1",
          "host" => "logstash.shuaiguoxia.com",
          "path" => "/var/log/httpd/access_log",
    "@timestamp" => 2017-12-10T14:07:07.682Z,
       "message" => "172.18.68.14 - - [10/Dec/2017:22:04:44 +0800] \"GET / HTTP/1.1\" 200 12 \"-\" \"curl/7.29.0\""
}

At this point the simplest possible Logstash configuration is complete: the collected data is output as-is, without any filtering or pruning.

3. Elasticsearch and Logstash

The configuration above has Logstash extract data from a log file and print it to the screen. In production, however, the extracted data is usually filtered and then sent to Elasticsearch. The following shows how to combine Elasticsearch with Logstash.

Logstash extracts httpd's access_log file, filters (structures) it, and outputs it to the Elasticsearch cluster, where the ingested data can be inspected with the Head plugin. (See the previous two articles for setting up the Elasticsearch cluster and the Head plugin.)


Configure Logstash

vim /etc/logstash/conf.d/test.conf
input {
    file {
        path => ['/var/log/httpd/access_log']
        start_position => "beginning"
    }
}
filter {
    grok {
        match => {
            "message" => "%{COMBINEDAPACHELOG}"
        }
        remove_field => "message"
    }
}
output {
    elasticsearch {
        hosts => ["http://172.18.68.11:9200","http://172.18.68.12:9200","http://172.18.68.13:9200"]
        index => "logstash-%{+YYYY.MM.dd}"
        action => "index"
        document_type => "apache_logs"
    }
}

Start Logstash

/usr/share/logstash/bin/logstash -t -f /etc/logstash/conf.d/test.conf       # test the configuration file
Configuration OK
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/test.conf          # start Logstash

Test

Run each of the following commands ten times; 172.18.68.14 is the address of the Logstash host.

curl 127.0.0.1
curl 172.18.68.14

Verify the data

Open 172.18.68.11:9100 in a browser (the node where the Head plugin for Elasticsearch is installed, as covered in an earlier article).

Select today's date and you can see all of the data collected during the day.
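
If the Head plugin is not handy, the same check can be done from the command line against the cluster's REST API (using the same node address as above; the index name pattern matches the output configuration):

curl 'http://172.18.68.11:9200/_cat/indices?v'                       # list indices; a logstash-YYYY.MM.dd index should appear
curl 'http://172.18.68.11:9200/logstash-*/_search?size=1&pretty'     # fetch one sample document from the logstash indices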


4. Monitoring other services

Monitoring Nginx logs

Only the filter block is shown; for the input and output blocks, refer to the previous configuration.

filter {
    grok {
        match => {
            "message" => "%{HTTPD_COMBINEDLOG} \"%{DATA:realclient}\""
        }
        remove_field => "message"
    }
    date {
        match => ["timestamp","dd/MMM/YYYY:H:m:s Z"]
        remove_field => "timestamp"
    }
}
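
This grok pattern assumes the Nginx access log is written in the combined format with one extra quoted field appended at the end (typically the real client address passed along by a proxy), which %{DATA:realclient} captures. A hypothetical log_format directive that would produce such lines, with illustrative names and paths, looks like this:

# nginx.conf, http context (illustrative): combined format plus a trailing quoted field for the real client IP
log_format proxied '$remote_addr - $remote_user [$time_local] "$request" '
                   '$status $body_bytes_sent "$http_referer" '
                   '"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log proxied;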

Monitoring Tomcat logs

Only the filter block is shown; for the input and output blocks, refer to the previous configuration.

filter {
    grok {
        match => {
            "message" => "%{HTTPD_COMMONLOG}"
        }
        remove_field => "message"
    }
    date {
        match => ["timestamp","dd/MMM/YYYY:H:m:s Z"]
        remove_field => "timestamp"
    }
}
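
The %{HTTPD_COMMONLOG} pattern assumes Tomcat's access log is written in the common log format, which Tomcat's AccessLogValve produces when its pattern is set to "common". A sketch of that valve configuration, with illustrative directory and file names, might look like this:

<!-- conf/server.xml (illustrative): write Tomcat access logs in common log format -->
<Valve className="org.apache.catalina.valves.AccessLogValve"
       directory="logs" prefix="localhost_access_log" suffix=".txt"
       pattern="common" />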
5. Filebeat

So far Logstash has been installed on each node to collect logs and send them to Elasticsearch. However, Logstash is developed in Java and runs in the JVM, which makes it a heavyweight collector; for a node whose only job is collecting logs, Logstash is too heavy. A lightweight log collector, Filebeat, can gather the log data instead and hand it to Logstash, which filters it and then forwards it to Elasticsearch. This will be covered in an upcoming article; for now, here is the architecture diagram.

(Architecture diagram: Filebeat -> Logstash -> Elasticsearch)
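
As a rough sketch of that architecture (the listening port 5044, the beats.conf file name, and the paths are assumptions for illustration, not part of this tutorial's setup), Filebeat on the log node would ship Apache's access_log to Logstash, and Logstash would receive it with the beats input plugin:

# filebeat.yml on the log-collecting node (Filebeat 6.x syntax; "prospectors" was later renamed "inputs")
filebeat.prospectors:
- type: log
  paths:
    - /var/log/httpd/access_log                             # the same Apache log collected above
output.logstash:
  hosts: ["172.18.68.14:5044"]                              # ship events to Logstash instead of writing them locally

# /etc/logstash/conf.d/beats.conf on the Logstash node
input {
    beats {
        port => 5044                                        # listen for events shipped by Filebeat
    }
}

The filter and elasticsearch output blocks on the Logstash side would stay exactly the same as in the configuration above.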

Source: linuxprobe.com