There are more than 600 entries in my log file, but only about 300 of them are written to Elasticsearch.
Does anyone know what could cause this?
This is my configuration:
input {
  file {
    path => ["/usr/local/20170730.log"]
    type => "log_test_events"
    tags => ["log_tes_events"]
    start_position => "beginning"
    sincedb_path => "/data/logstash/sincedb/test.sincedb"
    codec => "json"
    close_older => "86400"   # 1 day
    ignore_older => "86400"
  }
  beats {
    port => 5044
  }
}

filter {
  urldecode {
    all_fields => true
  }
}

output {
  elasticsearch {
    hosts => "localhost:9200"
    index => "logstash_%{event_date}"
  }
  stdout { codec => json }
}
This happens because, without an explicit template, Elasticsearch builds the mapping dynamically from the first data it reads. If a field holds values of mixed types (for example, field a is sometimes an int and sometimes a string), the first document indexed fixes the type: if it arrives with a number, the field is mapped as an integer, and every later document where that field is a string is rejected with a mapping error, so those events never reach the index. That would explain why only part of your 600 entries show up.
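As an illustration (the field name a here is just an example, not taken from your data), two log lines like these cannot share one dynamically created mapping:

```json
{"a": 123,   "event_date": "20170730"}
{"a": "abc", "event_date": "20170730"}
```

Whichever line is indexed first determines the type of a; the other is then rejected during the bulk request with a mapper_parsing_exception, which you can see in the Logstash or Elasticsearch logs.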
Modify the configuration so the elasticsearch output loads an explicit mapping template (the template path below is a placeholder; point it at wherever you keep the file):

output {
  elasticsearch {
    hosts => "localhost:9200"
    index => "logstash_%{event_date}"
    template => "/path/to/stat_day.json"
    template_name => "test1"
  }
}

stat_day.json template format:

{
  "order" : 1,
  "template" : "test1",
  "mappings" : {
  }
}

Note that the "template" pattern must match the index names you write to, or the template will never be applied.
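A minimal sketch of what the mappings section could contain. The document type log_test_events is taken from the input config above; the field a and the keyword type are assumptions for illustration (on Elasticsearch versions before 5.x you would use "string" with "index": "not_analyzed" instead of "keyword"):

```json
{
  "order": 1,
  "template": "test1",
  "mappings": {
    "log_test_events": {
      "properties": {
        "a": { "type": "keyword" }
      }
    }
  }
}
```

With the type fixed in the template, documents whose field value happens to be numeric are still accepted (coerced to a string), so mixed-type events no longer get dropped.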