
How to configure a highly available cluster file system on Linux


Introduction:
In computing, high availability (HA) refers to techniques that improve the reliability and availability of a system. In a cluster environment, a highly available file system is one of the key components for keeping the system running continuously. This article explains how to configure a highly available cluster file system on Linux and gives corresponding code examples.

  1. Installing software packages
    First, make sure that the necessary software packages are installed on the system. On most Linux distributions they can be installed with the package manager. The commonly required packages are:
  • Pacemaker: cluster resource manager, used to manage cluster state and resources (including the file system).
  • Corosync: cluster communication and membership layer used by Pacemaker.
  • DRBD: Distributed replicated block device, used to implement disk mirroring.
  • GFS2 or OCFS2: used to provide a highly available cluster file system.

On Ubuntu, the required packages can be installed with the following command (the pcs command-line tool used later is included; on newer Ubuntu releases the DRBD userland package is named drbd-utils instead of drbd8-utils):

sudo apt-get install pacemaker corosync pcs drbd8-utils gfs2-utils
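On RHEL/CentOS-family systems, a roughly equivalent installation could look like the sketch below; the package names are assumptions that vary by release, and the DRBD packages typically come from the third-party ELRepo repository rather than the base repositories:

sudo yum install pacemaker corosync pcs gfs2-utils
# DRBD userland tools and kernel module, assuming the ELRepo 8.4-series package names
sudo yum install drbd84-utils kmod-drbd84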
  2. Configure the cluster environment
    Next, configure the cluster environment, including communication between the nodes and resource management. The following is a simple configuration example with two nodes (node1 and node2):
  • Modify the /etc/hosts file on both nodes and add each node's IP address and host name so that the nodes can resolve each other (a quick connectivity check follows below).
sudo nano /etc/hosts

Add the following content:

192.168.1.100    node1
192.168.1.101    node2
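To confirm that name resolution works, each node can ping the other by host name, for example from node1:

ping -c 2 node2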
  • Configure Corosync communication.

Create the Corosync configuration file.

sudo nano /etc/corosync/corosync.conf

Add the following:

totem {
    version: 2
    secauth: off
    cluster_name: mycluster
    transport: udpu
}

nodelist {
    node {
        ring0_addr: node1
        nodeid: 1
    }
    node {
        ring0_addr: node2
        nodeid: 2
    }
}

quorum {
    provider: corosync_votequorum
}

logging {
    to_syslog: yes
    to_logfile: yes
    logfile: /var/log/corosync.log
    debug: off
    timestamp: on
}
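The same corosync.conf must be present on both nodes. One way to distribute it, assuming root SSH access from node1 to node2:

scp /etc/corosync/corosync.conf root@node2:/etc/corosync/corosync.conf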
  • Enable Corosync and Pacemaker services.
sudo systemctl enable corosync
sudo systemctl enable pacemaker

Start the services.

sudo systemctl start corosync
sudo systemctl start pacemaker
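Once both services are running on each node, it is worth verifying that the nodes see each other before continuing. A minimal check, assuming the two-node setup above:

# Show the local Corosync ring status
sudo corosync-cfgtool -s
# Show cluster membership and resource status as seen by Pacemaker
sudo pcs status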
  3. Configuring DRBD
    DRBD is a distributed replicated block device used to implement disk mirroring between multiple nodes. The following is an example DRBD configuration with two nodes (node1 and node2), using /dev/sdb on each node as the backing block device:
  • Configure DRBD.

Create the DRBD resource configuration file (it must be identical on both nodes).

sudo nano /etc/drbd.d/myresource.res

Add the following:

resource myresource {
    protocol C;

    on node1 {
        device /dev/drbd0;
        disk   /dev/sdb;
        address 192.168.1.100:7789;
        meta-disk internal;
    }

    on node2 {
        device /dev/drbd0;
        disk   /dev/sdb;
        address 192.168.1.101:7789;
        meta-disk internal;
    }

    net {
        allow-two-primaries;
    }

    startup {
        wfc-timeout     15;
        degr-wfc-timeout 60;
    }

    syncer {
        rate    100M;
        al-extents 257;
    }

    # Promotion of a node back to Primary after an upgrade or reboot is handled
    # by the cluster manager (Pacemaker) rather than by a custom DRBD handler.
}
  • Initialize the DRBD metadata (run this on both nodes).
sudo drbdadm create-md myresource

Start the DRBD service (on both nodes).

sudo systemctl start drbd
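Before a file system can be created on /dev/drbd0, the DRBD resource has to be brought up and at least one node promoted to the Primary role. A minimal sketch, assuming the resource name myresource from above and DRBD 8.4-style commands (DRBD 8.3 uses drbdadm -- --overwrite-data-of-peer primary instead of --force):

# On both nodes: attach the backing disk and connect to the peer
sudo drbdadm up myresource

# On node1 only: start the initial full synchronization and take the Primary role
sudo drbdadm primary --force myresource

# For a dual-primary GFS2 setup, promote node2 as well once synchronization is complete
sudo drbdadm primary myresource

# Check replication and role status
cat /proc/drbd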
  4. Configuring the cluster file system
    There are several cluster file systems to choose from, such as GFS2 and OCFS2. The following configuration uses GFS2 as an example.
  • Create the file system (run this on one node only). The -p lock_dlm option selects the cluster-aware DLM locking protocol, -j 2 creates one journal per node, and the cluster name given to -t must match cluster_name in corosync.conf.
sudo mkfs.gfs2 -p lock_dlm -j 2 -t mycluster:myresource /dev/drbd0
  • Mount the file system.
sudo mkdir /mnt/mycluster
sudo mount -t gfs2 /dev/drbd0 /mnt/mycluster
  • Add the file system as a cluster resource.
sudo pcs resource create myresource ocf:heartbeat:Filesystem device="/dev/drbd0" directory="/mnt/mycluster" fstype="gfs2" op start timeout="60s" op stop timeout="60s" op monitor interval="10s" timeout="20s"
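For the constraints in the next step to work, the DRBD device itself also needs to be managed by Pacemaker as a promotable (master/slave) resource. A minimal sketch follows; the names drbd_myresource and drbd_myresource_ms are illustrative assumptions, and on newer pcs releases pcs resource promotable replaces pcs resource master:

# Let Pacemaker manage the DRBD resource defined in /etc/drbd.d/myresource.res
sudo pcs resource create drbd_myresource ocf:linbit:drbd drbd_resource=myresource op monitor interval="30s"

# Wrap it in a master/slave clone; master-max=2 allows the dual-primary mode GFS2 needs
sudo pcs resource master drbd_myresource_ms drbd_myresource master-max=2 master-node-max=1 clone-max=2 clone-node-max=1 notify=true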
  • Add ordering and colocation constraints so that the file system is only started where the DRBD device has been promoted. The names refer to the promotable DRBD resource sketched above.
sudo pcs constraint order promote drbd_myresource_ms then start myresource
sudo pcs constraint colocation add myresource with master drbd_myresource_ms
  5. Testing high availability
    After completing the above configuration, you can test high availability. The following are the steps for testing:
  • Stop the cluster services on the primary node (node1).
sudo pcs cluster stop node1
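After node1 has been stopped, the overall cluster and resource state can be inspected from node2, for example:

sudo pcs status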
  • Check whether the file system is now mounted on the standby node (node2).
sudo mount | grep "/mnt/mycluster"

If failover succeeded, the command run on node2 should show /dev/drbd0 mounted on /mnt/mycluster.

  • Restore the primary node (node1).
sudo pcs cluster start node1
  • Check whether the file system is restored to the primary node.
sudo mount | grep "/mnt/mycluster"

Once the resource has moved back, the same check on node1 should again show /dev/drbd0 mounted on /mnt/mycluster (whether it fails back automatically depends on the resource-stickiness settings).

Conclusion:
Configuring a highly available cluster file system improves the reliability and availability of a system. This article has described how to configure one on Linux and provided corresponding code examples; readers can adjust the configuration to their own environment as needed to achieve higher availability.

