Table of Contents
Using Prometheus for application monitoring
Using Elasticsearch and Logstash for log management

How to use Docker for application monitoring and log management

Nov 07, 2023

Docker has become an essential technology for modern applications, but monitoring containerized applications and managing their logs remains a challenge. As Docker's networking features, such as service discovery and load balancing, continue to mature, the need for a complete, stable, and efficient monitoring and logging setup only grows.

In this article, we will briefly introduce the use of Docker for application monitoring and log management and give specific code examples.

Using Prometheus for application monitoring

Prometheus is an open-source, pull-based monitoring and alerting toolkit originally developed at SoundCloud. It is written in Go and is widely used in microservice and cloud environments. As a monitoring tool, it can track a container's CPU, memory, network, and disk usage, and it offers a multi-dimensional data model, a flexible query language (PromQL), alerting, and visualization, allowing you to spot problems and make decisions quickly.

Note that Prometheus collects data in pull mode: it periodically scrapes a /metrics endpoint exposed by the monitored application. Therefore, when starting the application container, you need to make sure its /metrics endpoint is reachable by Prometheus at a known host and port. Below is a simple Node.js application.

// A minimal Express app that exposes a /metrics endpoint in the
// Prometheus text exposition format.
const express = require('express')
const app = express()

app.get('/', (req, res) => {
  res.send('Hello World!')
})

// Prometheus scrapes this endpoint; the metric lines are joined without
// extra indentation to keep the text exposition format clean.
app.get('/metrics', (req, res) => {
  res.set('Content-Type', 'text/plain')
  res.send([
    '# HELP api_calls_total Total API calls',
    '# TYPE api_calls_total counter',
    'api_calls_total 100',
  ].join('\n'))
})

app.listen(3000, () => {
  console.log('Example app listening on port 3000!')
})

In this code, we expose an api_calls_total counter metric through the /metrics endpoint.
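
The hard-coded value of 100 above is only for illustration. In a real application you would normally have a client library keep the counter up to date; the following is a minimal sketch of the same endpoint using the prom-client npm package (an assumption for illustration, not something required by the rest of this article).

// Sketch: exposing a real counter with the prom-client library
// (assumes `npm install express prom-client` has been run).
const express = require('express')
const client = require('prom-client')

const app = express()

// A counter that is incremented on every handled request.
const apiCalls = new client.Counter({
  name: 'api_calls_total',
  help: 'Total API calls',
})

app.get('/', (req, res) => {
  apiCalls.inc()
  res.send('Hello World!')
})

// register.metrics() renders all registered metrics in the Prometheus
// text format (it returns a Promise in prom-client v13 and later).
app.get('/metrics', async (req, res) => {
  res.set('Content-Type', client.register.contentType)
  res.send(await client.register.metrics())
})

app.listen(3000)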

Next, pull the official Prometheus Docker image and create a docker-compose.yml file that runs both the Node.js application and Prometheus, which will scrape the application's metrics.

version: '3'
services:
  node:
    image: node:lts
    # Mount the project directory (index.js plus its installed node_modules)
    # so that `node index.js` can actually run inside the container.
    working_dir: /app
    volumes:
      - ./:/app
    command: node index.js
    ports:
      - 3000:3000

  prometheus:
    image: prom/prometheus:v2.25.2
    volumes:
      - ./prometheus:/etc/prometheus
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.retention.time=15d'
    ports:
      - 9090:9090

In the docker-compose.yml file, we define two services: a node service that runs the Node.js application and a prometheus service that monitors it. The node service listens on port 3000, so inside the Compose network Prometheus can scrape the application's /metrics endpoint at node:3000 (the port mapping also exposes it on the host). The Prometheus web UI and API are published on port 9090.

Finally, in the prometheus.yml file (placed in the ./prometheus directory that is mounted into the container above), we define the targets Prometheus should scrape.

global:
  scrape_interval:     15s
  evaluation_interval: 15s

scrape_configs:
  # Host-level metrics; this job only returns data if a node-exporter
  # container is also added to docker-compose.yml and listens on port 9100.
  - job_name: 'node-exporter'
    static_configs:
      - targets: ['node:9100']

  # The Node.js application's /metrics endpoint, reached through its
  # Compose service name.
  - job_name: 'node-js-app'
    static_configs:
      - targets: ['node:3000']

In this file, we define the scrape jobs for the Node.js application, where each targets entry is the host and port of a scrape target. Because both containers share the Compose network, we can use the service name node together with port 3000 instead of a fixed IP address.

Finally, run docker-compose up to start the application and its monitoring service, then open the Prometheus UI at http://localhost:9090 to view the collected metrics.
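
Besides browsing the UI, you can check programmatically that the target is being scraped. The sketch below queries the Prometheus HTTP API for our counter; it assumes Node 18 or later (for the built-in fetch) and that Prometheus is published on localhost:9090 as in the docker-compose.yml above.

// Sketch: query the Prometheus HTTP API for the api_calls_total metric.
async function queryPrometheus(expr) {
  const url = 'http://localhost:9090/api/v1/query?query=' + encodeURIComponent(expr)
  const res = await fetch(url)
  const body = await res.json()
  if (body.status !== 'success') {
    throw new Error('Prometheus query failed: ' + JSON.stringify(body))
  }
  // Each result carries a metric label set and a [timestamp, value] pair.
  return body.data.result
}

queryPrometheus('api_calls_total')
  .then(result => console.log(result))
  .catch(err => console.error(err))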

Using Elasticsearch and Logstash for log management

In Docker, application log data is scattered across different containers. To manage these logs in one place, you can use Elasticsearch and Logstash from the ELK stack to collect them centrally, which makes it much easier to monitor and analyze the applications.

Before starting, pull the Logstash and Elasticsearch Docker images and create a docker-compose.yml file.

In this file, we define three services. The bls service is a simple nginx server that stands in for a business application: every request it serves is written both to stdout and to its access log file. The logstash service uses the official Logstash image and collects, filters, and forwards the logs. The elasticsearch service stores the logs and makes them searchable.

version: '3'
services:
  bls:
    image: nginx:alpine
    volumes:
      # Write the nginx access logs to ./log on the host.
      - ./log:/var/log/nginx
      - ./public:/usr/share/nginx/html:ro
    ports:
      - "8000:80"
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "10"

  logstash:
    image: logstash:7.10.1
    volumes:
      - ./logstash/pipeline:/usr/share/logstash/pipeline
      # Share the nginx log directory so the pipeline can read access.log.
      - ./log:/var/log/nginx:ro
    environment:
      - "ES_HOST=elasticsearch"
    depends_on:
      - elasticsearch

  elasticsearch:
    image: elasticsearch:7.10.1
    environment:
      - "http.host=0.0.0.0"
      - "discovery.type=single-node"
    volumes:
      - ./elasticsearch:/usr/share/elasticsearch/data

In this configuration file, we map the nginx log directory in the container to ./log on the host (and share it read-only with the Logstash container). At the same time, the logging option configures Docker's json-file log driver, limiting the size and number of the container's own stdout log files so they do not consume unbounded storage.

For the logstash service, we place a pipeline file named nginx_pipeline.conf in the ./logstash/pipeline directory mounted above. This file handles the collection, filtering, and forwarding of the nginx logs: Logstash processes each received log line according to the configured rules and sends the result to the Elasticsearch service. The pipeline defines the following processing logic:

input {
  # Tail the nginx access log mounted into the Logstash container.
  file {
    path => "/var/log/nginx/access.log"
  }
}

filter {
  # Parse each line with the standard combined (Apache/nginx) log pattern.
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
}

output {
  # Ship parsed events to Elasticsearch; ES_HOST is injected via docker-compose.
  elasticsearch {
    hosts => [ "${ES_HOST}:9200" ]
    index => "nginx_log_index"
  }
}

In this pipeline, we define a file input, which reads data from the local log file. Next, a grok filter parses each line against the combined Apache/nginx log pattern, extracting fields such as the client IP, request, and response code. Finally, the elasticsearch output sends the parsed events to the Elasticsearch cluster; its host name is injected into the container through the ES_HOST environment variable defined in docker-compose.yml.

After completing the configuration above, we have an efficient log management system: every log line is shipped to a central store where it can easily be searched, filtered, and visualized.
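
As a quick sanity check, you can query the nginx_log_index index through the Elasticsearch search API. The following is a minimal sketch; it assumes Node 18 or later (built-in fetch) and that port 9200 is reachable, either because you additionally publish it in docker-compose.yml or because you run the script inside the Compose network and replace localhost with elasticsearch.

// Sketch: search the nginx_log_index index for recent log entries.
async function searchNginxLogs(term) {
  const res = await fetch('http://localhost:9200/nginx_log_index/_search', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      size: 10,
      query: { match: { message: term } },
    }),
  })
  const body = await res.json()
  // Each hit's _source contains the original line plus the fields
  // extracted by the grok filter (clientip, verb, request, response, ...).
  return body.hits.hits.map(hit => hit._source)
}

searchNginxLogs('GET').then(logs => console.log(logs))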

The above is the detailed content of How to use Docker for application monitoring and log management. For more information, please follow other related articles on the PHP Chinese website!
