The methods I can think of so far:
Use awk to analyze the logs, aggregate by the conditions you care about, and update the database.
But once the logs get large, efficiency will suffer.
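For example, a minimal awk sketch along those lines, assuming the default "combined" nginx log format (the field positions and the 2xx status filter are my assumptions, not from the original post):

# Count hits per URL for 2xx responses; in the combined format $7 is the
# request path and $9 the status code -- adjust the field numbers to your format.
awk '$9 ~ /^2/ { hits[$7]++ } END { for (u in hits) print u, hits[u] }' access.log

The per-URL totals can then be written to the database in one batch instead of one update per log line.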
As long as the logs are rotated regularly, the file processed in each pass won't be very large.
Then I'd write a small program to do the statistics, which is more efficient.
If you need more flexible queries, you can also load the log records into a database, build indexes on time and the other fields you need, and query directly with SQL.
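A rough sketch of that approach (the table, column names, and example query are made up for illustration):

-- Hypothetical table holding imported access-log rows, indexed by time.
CREATE TABLE access_log (
    id        BIGINT AUTO_INCREMENT PRIMARY KEY,
    logged_at DATETIME     NOT NULL,
    path      VARCHAR(255) NOT NULL,
    status    SMALLINT     NOT NULL,
    INDEX idx_time_path (logged_at, path)
);

-- Per-minute hit counts for one page over the last day.
SELECT DATE_FORMAT(logged_at, '%Y-%m-%d %H:%i') AS minute, COUNT(*) AS hits
FROM access_log
WHERE path = '/some/page' AND logged_at >= NOW() - INTERVAL 1 DAY
GROUP BY minute;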
You're welcome to try our http_accounting module; it's in the third-party module list on the official nginx website~
Let me describe our setup; our traffic is about 1.9kw (roughly 19 million). The flow is:
1. The front end records a hit by requesting <img src="/tj.html"/>
2. nginx writes accesses to tj.html into a separate access log (a config sketch follows below)
3. syslog splits that log into one-minute chunks on a schedule
4. A cron job runs every minute to process and analyze the split logs
Right now we update the MySQL database once a minute; we are planning to keep the current day's data in Redis and move the historical records to MongoDB.
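For step 2, a rough sketch of the nginx side (file paths, the log format name, and the script name are placeholders, not our actual config):

# Inside the http { } block of nginx.conf.
log_format tj '$remote_addr [$time_local] "$request" $status "$http_referer"';

server {
    listen 80;
    root   /var/www/html;

    # The tracking request made by <img src="/tj.html"/> gets its own log file.
    location = /tj.html {
        access_log /var/log/nginx/tj.access.log tj;
    }

    # Everything else goes to the normal access log and stays out of the tj log.
    location / {
        access_log /var/log/nginx/access.log;
    }
}

The one-minute cron job of step 4 is then just a crontab entry along the lines of "* * * * * /path/to/analyze_tj_log.sh", where the script parses the latest rotated chunk and writes the aggregates to MySQL.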