The right way to manage patches
One reason I decided to resume the "System Administration 101" article series so quickly is that I realized some Linux system administrators are no better than Windows system administrators when it comes to patch management. To be honest, in some areas it's even worse (in particular, the pride some take in long uptimes). This article therefore covers the basic concepts of patch management under Linux: what good patch management looks like, some tools you may use for it, and how the overall patch installation process works.
By patch management, I mean the systems you deploy to upgrade the software on your servers, not merely to chase the latest and greatest cutting-edge versions. Even conservative distributions like Debian, which pin software at a specific version for the sake of "stability", release patches from time to time to fix bugs and security holes.
Of course, if your organization maintains its own version of a particular piece of software, whether because your developers need the latest and greatest release and have to fork the source code and modify it, or simply because you like giving yourself extra work, then you will run into problems. Ideally, you have already configured your systems to automatically build and package your custom version using the same continuous integration pipeline as the rest of your software. However, many sysadmins still package software the old way, by hand on a local host, following (hopefully up-to-date) documentation on a wiki. Whichever method you use, you need to find out whether the version you are running has security flaws and, if so, make sure the fix lands in your customized version of the software.
The first step in patch management is knowing when software upgrades are available. For core software, subscribe to your Linux distribution's security mailing list so that you learn about security updates as soon as they are announced. If you run software that does not come from the distribution's repositories, you must find a way to track its security updates as well. When a new security notification arrives, review its details to determine the severity of the vulnerability, whether your systems are affected, and how urgently the patch must be applied.
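As a concrete sketch of that first step, on a Debian/Ubuntu host you can dry-run an upgrade and filter for packages coming from a security suite. The helper function below is an illustration of the idea, not a standard tool:

```shell
#!/bin/sh
# Sketch, assuming a Debian/Ubuntu host: dry-run the upgrade and keep only
# packages that would be pulled from a -security suite.
list_security_upgrades() {
    # Reads `apt-get -s dist-upgrade` output on stdin; "Inst" lines name
    # each package that would be installed, along with its source suite.
    awk '/^Inst/ && /-security/ { print $2 }'
}

# Only run the real query if apt-get exists on this machine.
if command -v apt-get >/dev/null 2>&1; then
    apt-get -s dist-upgrade 2>/dev/null | list_security_upgrades
fi
```

The `-s` flag asks apt-get to simulate the upgrade, so this needs no root privileges and changes nothing.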
Some organizations still manage patches manually. Under this approach, when a security patch appears, the sysadmin has to rely on memory, logging in to each server in turn to check it. After determining which servers need the upgrade, the sysadmin uses the server's built-in package manager to upgrade the affected packages from the distribution repository, then repeats the process on every remaining server.
There are many problems with managing patches by hand. First, it turns patching into a chore: the more patches there are to install, the more labor it takes, and the more likely sysadmins are to put it off or skip it entirely. Second, manual management relies on the sysadmin's memory to track which servers have been upgraded, which makes it easy for some servers to be missed and left unpatched.
The faster and easier patch management is, the more likely you are to do it well. You should build a system that can quickly tell you which servers run a given piece of software, and at which versions, and that ideally can also push out upgrades. Personally, I tend to use an orchestration tool like MCollective for this, but Red Hat's Satellite and Canonical's Landscape also let you view servers' package versions and install patches from a unified management interface.
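Even without an orchestration tool, a simple SSH loop can give you a fleet-wide version report. This is a rough sketch; the key-based SSH setup, the dpkg-based hosts, and the inventory file in the usage comment are all assumptions:

```shell
#!/bin/sh
# Sketch: report the installed version of one package across a fleet.
# Assumes passwordless (key-based) SSH and Debian-style hosts; reads one
# hostname per line on stdin.
fleet_version() {
    pkg=$1
    while read -r host; do
        ver=$(ssh -o BatchMode=yes -o ConnectTimeout=5 "$host" \
            "dpkg-query -W -f='\${Version}' $pkg" 2>/dev/null) \
            || ver='unreachable'
        printf '%s: %s\n' "$host" "$ver"
    done
}

# Example usage (hosts.txt is a hypothetical inventory file):
#   fleet_version openssl < hosts.txt
```

An orchestration tool does the same job in parallel and with authentication handled for you, which matters once the fleet grows past a few dozen hosts.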
Patch installation should also be fault-tolerant. You should be able to patch a service without taking it offline, and the same goes for kernel patches that require a reboot. My approach is to divide my servers into high-availability groups, with lb1, app1, rabbitmq1 and db1 in one group, and lb2, app2, rabbitmq2 and db2 in another. That way I can upgrade one group at a time without ever taking the service down.
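The group-at-a-time approach can be sketched as a loop over per-group host lists. The group files and the patch_host helper below are illustrations of the idea, not a real tool:

```shell
#!/bin/sh
# Sketch: patch one high-availability group at a time, stopping on the
# first failure so a bad patch never reaches the second group.
patch_host() {
    ssh -o BatchMode=yes "$1" 'sudo apt-get update -q && sudo apt-get -y upgrade'
}

patch_group() {
    # Reads one hostname per line on stdin.
    while read -r host; do
        echo "patching $host"
        patch_host "$host" || return 1
    done
}

for group in ha-group-1.txt ha-group-2.txt; do
    [ -f "$group" ] || continue     # hypothetical inventory files
    patch_group < "$group" || { echo "stopping: failure in $group" >&2; exit 1; }
    # In a real setup, run service health checks here before moving on
    # to the next group.
done
```

The important property is the health check between groups: you only ever risk half of each redundant pair at a time.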
So how fast is fast? For the handful of packages that don't run a service, your system should be able to install a patch within a few minutes to an hour at most (think of bash's ShellShock vulnerability). For software like OpenSSL that requires a service restart, patching and restarting in a fault-tolerant way may take a little longer, but this is exactly where orchestration tools shine. In my recent articles on MCollective (see the December 2016 and January 2017 issues) I gave several examples of using MCollective for patch management. Ideally, you deploy a system that makes patch installation and service restarts simple, automated, and fault-tolerant.
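For ShellShock specifically, the widely circulated one-liner below exercises the vulnerability directly: an unpatched bash evaluates the injected function body and prints "vulnerable" before the echo, while a patched bash prints only the test line.

```shell
# Classic ShellShock (CVE-2014-6271) check: on an unpatched bash, the
# crafted environment variable causes "vulnerable" to be printed first.
env x='() { :;}; echo vulnerable' bash -c 'echo this is a test'
```

Running this across the fleet (via SSH loop or orchestration) tells you immediately which hosts still need the patch.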
If a patch requires a full system reboot, as kernel patches do, it takes more time. Again, automation and orchestration tools can make this faster than you might think. I can fault-tolerantly patch and reboot production servers within an hour or two, and the process goes even faster when I don't have to wait for clusters to resync between reboots.
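A rolling reboot is mostly waiting. The "reboot, then wait until the host answers again" step might be sketched like this; the port probe, timeouts, and helper names are all arbitrary choices for illustration:

```shell
#!/bin/sh
# Sketch: reboot a host, then poll until its SSH port answers again,
# giving up after a fixed number of attempts.
wait_for_ssh() {
    host=$1
    max=${2:-60}    # polling attempts before giving up
    delay=${3:-5}   # seconds between attempts
    tries=0
    until nc -z -w 2 "$host" 22 2>/dev/null; do
        tries=$((tries + 1))
        [ "$tries" -ge "$max" ] && return 1
        sleep "$delay"
    done
}

reboot_and_wait() {
    ssh -o BatchMode=yes "$1" 'sudo reboot' 2>/dev/null
    sleep 10        # give the host time to actually go down first
    wait_for_ssh "$1"
}
```

In a real rolling-reboot script you would call reboot_and_wait for each host in a group, and only proceed to the next group once every host is back and passing health checks.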
Unfortunately, many sysadmins still cling to the outdated notion of uptime as a badge of pride. Given that kernel patches requiring a reboot arrive roughly once a year, to me a long uptime just means you don't take your system's security seriously!
Many organizations also still run servers that are single points of failure and cannot be taken offline even briefly, which means they can never be patched or rebooted. If you want your systems to be more secure, you need to shed this legacy baggage and build an architecture in which, at the very least, every server can be rebooted during a late-night maintenance window.
Ultimately, fast and convenient patch management is the mark of a mature, professional sysadmin team. Upgrading software is a task every sysadmin must perform, and investing the time to make the process simple and fast pays dividends far beyond security. For one, it helps expose single points of failure in your architecture. It also flags outdated systems in your environment and creates an incentive to replace them. And when patch management works well enough, it frees up sysadmins' time to focus on the areas where their expertise is really needed.
Kyle Rankin is a senior security and infrastructure architect whose books include: Linux Hardening in Hostile Networks, DevOps Troubleshooting, and The Official Ubuntu Server Book. He is also a columnist for Linux Journal.