
How to configure log management on Linux

Jul 06, 2023 pm 04:25 PM

In Linux systems, logs record important information such as system status, application activity, errors, and warnings. Properly configuring and managing logs is crucial for system monitoring and troubleshooting. This article introduces how to configure log management on Linux and provides code examples to help you understand and practice.

1. Understand the types and locations of log files

First, we need to understand the common log file types in the system and where they are stored. Here are several common types and their locations:

  1. System log: records the system's running status, startup and shutdown information, kernel messages, and so on. Typically managed by rsyslog and stored in /var/log/syslog (Debian/Ubuntu) or /var/log/messages (RHEL/CentOS).
  2. Application log: generated by individual applications, recording their activity and errors. These logs are usually stored in an application-specific directory, such as /var/log/nginx/access.log.
  3. Security log: records security-related events such as login attempts and authorization requests. On most Linux systems these are written to /var/log/auth.log (Debian/Ubuntu) or /var/log/secure (RHEL/CentOS).
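The log types above can be explored directly from the shell. Here is a minimal, self-contained sketch; it uses a sample file under /tmp as a stand-in for a real log, since reading files under /var/log usually requires root privileges:

```shell
# Create a small sample log as a stand-in for a real system log
cat > /tmp/sample.log <<'EOF'
Jul  6 16:25:01 host sshd[1234]: Accepted password for alice
Jul  6 16:26:12 host kernel: disk space warning on /dev/sda1
Jul  6 16:27:45 host sshd[1300]: Failed password for bob
EOF

# Typical security-log check: count failed login attempts
grep -c 'Failed password' /tmp/sample.log

# Show the most recent entries, as you would with: tail -n 50 /var/log/syslog
tail -n 2 /tmp/sample.log
```

On a real system, substitute /var/log/syslog, /var/log/auth.log, and so on for the sample path.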

2. Configure log rotation

Log rotation refers to periodically archiving and compressing log files so that they do not grow too large or consume too much storage space. On Linux systems, logrotate is the standard log rotation tool.

  1. Install logrotate:
$ sudo apt-get install logrotate
  2. Configure logrotate:

Create a new configuration file so that we can customize logrotate's behavior:

$ sudo nano /etc/logrotate.d/myapp

In the configuration file, you can specify parameters such as the log files to be rotated, the rotation interval, and the number of rotated files to retain. For example:

/var/log/myapp/*.log {
    weekly
    rotate 4
    compress
    delaycompress
    missingok
    notifempty
    sharedscripts
}

In the above example, /var/log/myapp/*.log specifies the log files to rotate. The directives mean: weekly rotates once a week; rotate 4 keeps the last four rotated files; compress compresses rotated files; delaycompress postpones compression until the next rotation cycle; missingok ignores missing log files instead of reporting an error; notifempty skips rotation when the log file is empty; and sharedscripts runs any prerotate/postrotate scripts once for all matched files rather than once per file.

  3. Perform rotation manually:

You can trigger rotation manually to verify that the configuration is correct (-v prints verbose output, -f forces rotation even if it is not yet due):

$ sudo logrotate -vf /etc/logrotate.d/myapp
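The whole verification loop can be sketched end-to-end with a throwaway log directory and rule; the paths below are illustrative, and logrotate's -d flag performs a dry run that prints the planned actions without touching any files:

```shell
# Set up a throwaway log file and rotation rule (illustrative paths)
mkdir -p /tmp/myapp-logs
echo "test entry" > /tmp/myapp-logs/app.log
cat > /tmp/myapp.conf <<'EOF'
/tmp/myapp-logs/*.log {
    rotate 2
    compress
    missingok
    notifempty
}
EOF

# Dry run: -d prints what would happen without rotating anything
if command -v logrotate >/dev/null 2>&1; then
    logrotate -d /tmp/myapp.conf
fi
```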

3. Configure rotation triggers and cleanup policies

Beyond the basic rotation schedule, we can fine-tune behavior in the configuration file. logrotate supports the following directives:

  1. postrotate: This option specifies commands to be executed after rotation, such as log analysis or a backup job.
/var/log/myapp/*.log {
    ...
    postrotate
        /usr/bin/analyze_logs /var/log/myapp/*.log > /dev/null
    endscript
}
  2. prerotate: This option specifies commands to be executed before rotation, which is useful for preprocessing steps.
/var/log/myapp/*.log {
    ...
    prerotate
        /usr/bin/sync_logs /var/log/myapp/*.log
    endscript
}
  3. size: This option triggers rotation once the log file exceeds the specified size. The unit can be k (kilobytes), M (megabytes), or G (gigabytes). Note that when size is present, it takes precedence over time-based directives such as weekly.
/var/log/myapp/*.log {
    ...
    size 10M
}
  4. maxage: This option removes rotated log files that are older than the specified number of days.
/var/log/myapp/*.log {
    ...
    maxage 30
}
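Putting the directives above together, a fuller rule might look like the following. The paths, PID file, and the postrotate command are illustrative placeholders, not a definitive setup; the sketch only writes the config to /tmp and prints it back:

```shell
# Write a combined logrotate rule using the directives discussed above
cat > /tmp/myapp-full.conf <<'EOF'
/tmp/myapp-logs/*.log {
    size 10M
    rotate 4
    maxage 30
    compress
    delaycompress
    missingok
    notifempty
    sharedscripts
    postrotate
        # Signal a (hypothetical) app to reopen its log file after rotation
        kill -HUP "$(cat /tmp/myapp.pid 2>/dev/null)" 2>/dev/null || true
    endscript
}
EOF
cat /tmp/myapp-full.conf
```

sharedscripts matters here: without it, the postrotate script would run once per matched file instead of once per rotation pass.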

4. Configure remote log collection

Sometimes we need to send log messages to a remote server for centralized collection and analysis. On Linux systems, rsyslog is a commonly used tool for collecting and processing logs.

  1. Install rsyslog:
$ sudo apt-get install rsyslog
  2. Configure rsyslog:

Open rsyslog's main configuration file:

$ sudo nano /etc/rsyslog.conf

On the remote server that will receive logs, uncomment the following lines (remove the # at the beginning of each line) so that rsyslog loads the UDP input module and listens on port 514:

#$ModLoad imudp
#$UDPServerRun 514
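On recent rsyslog versions, the same receiver can be enabled with the newer RainerScript syntax in a drop-in file instead of editing the main config. The file name below is a common convention, and the sketch writes to /tmp so it is harmless to run as-is:

```shell
# Receiver-side drop-in (would normally go in /etc/rsyslog.d/10-udp.conf)
cat > /tmp/10-udp.conf <<'EOF'
module(load="imudp")           # load the UDP input module
input(type="imudp" port="514") # listen for syslog datagrams on 514/udp
EOF
cat /tmp/10-udp.conf
```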

On the client machine that forwards logs, add the following at the end of the file:

*.* @192.168.0.100:514

Here, 192.168.0.100 is the IP address of the remote server and 514 is the port it listens on. A single @ forwards over UDP; @@ forwards over TCP.

  3. Restart rsyslog:
$ sudo systemctl restart rsyslog

With the above configuration, log messages will be forwarded over UDP to port 514 on the remote server. Note that UDP delivery is not guaranteed; use TCP (@@) when reliable transport matters.
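On the client side too, common practice is a drop-in file under /etc/rsyslog.d/ rather than editing rsyslog.conf directly. This sketch writes to /tmp so it is safe to run; the file name and target address are the illustrative values used above:

```shell
# Client-side forwarding rule (would normally go in /etc/rsyslog.d/90-forward.conf)
cat > /tmp/90-forward.conf <<'EOF'
# A single @ forwards over UDP; @@ would use TCP instead
*.* @192.168.0.100:514
EOF
cat /tmp/90-forward.conf
```

After copying such a file into /etc/rsyslog.d/, restart rsyslog for it to take effect.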

Summary:

This article introduced how to configure log management on a Linux system: understanding log file types and locations, configuring log rotation, setting rotation triggers and cleanup policies, and configuring remote log collection, with code examples along the way. Properly configuring and managing logs is crucial for system monitoring and troubleshooting. I hope this article is helpful to you.

The above is the detailed content of How to configure log management on Linux.
