Pacemaker is high-availability cluster software for Linux-based operating systems. Pacemaker, known as the cluster resource manager, maximizes the availability of cluster resources by failing them over between cluster nodes. Pacemaker uses Corosync for heartbeats and internal communication between cluster components; Corosync is also responsible for quorum voting in the cluster.
Before we begin, make sure you have the following:
A shared disk /dev/sdb (2 GB) attached to both nodes
Without further ado, let's walk through these steps.
Add the following entries to the /etc/hosts file on both nodes:
192.168.1.6    node1.example.com
192.168.1.7    node2.example.com
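If you like, you can confirm that both hostnames resolve before moving on. This quick sanity check is not part of the original steps:
$ getent hosts node1.example.com node2.example.com
$ ping -c 2 node2.example.com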
Pacemaker and the other necessary packages are not available in the default RHEL 9/8 package repositories, so we must enable the High Availability repository. Run the following subscription-manager commands on both nodes.
For RHEL 9 server:
$ sudo subscription-manager repos --enable=rhel-9-for-x86_64-highavailability-rpms
For RHEL 8 server:
$ sudo subscription-manager repos --enable=rhel-8-for-x86_64-highavailability-rpms
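To confirm that the High Availability repository is now enabled (an optional check), list the configured repositories:
$ sudo dnf repolist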
After enabling the repository, run the following command on both nodes to install the pacemaker packages:
$ sudo dnf install pcs pacemaker fence-agents-all -y
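You can verify that the packages were installed (an optional check) by querying their versions:
$ rpm -q pcs pacemaker fence-agents-all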
To allow the high-availability service through the firewall, run the following commands on each node:
$ sudo firewall-cmd --permanent --add-service=high-availability
$ sudo firewall-cmd --reload
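To confirm the rule is active (an optional check), list the allowed services:
$ sudo firewall-cmd --list-services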
On both servers, set a password for the hacluster user with the following echo command:
$ echo "<Enter-Password>" | sudo passwd --stdin hacluster
Run the following commands on both servers to start and enable the pcsd service:
$ sudo systemctl start pcsd.service
$ sudo systemctl enable pcsd.service
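A quick way to confirm the daemon is running and enabled on each node (an optional check):
$ sudo systemctl is-active pcsd.service
$ sudo systemctl is-enabled pcsd.service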
Use the pcs command to authenticate both nodes. Run the following command from any node; in my case, I'm running it on node1:
$ sudo pcs host auth node1.example.com node2.example.com
When prompted, authenticate with the hacluster user and the password you set earlier; both nodes will then be authenticated.
Use the following pcs cluster setup command to add both nodes to the cluster. The cluster name used here is http_cluster. Run the commands only on node1:
$ sudo pcs cluster setup http_cluster --start node1.example.com node2.example.com
$ sudo pcs cluster enable --all
The output of these two commands confirms that the cluster has been created, started, and enabled on both nodes.
Verify the initial cluster status from any node:
$ sudo pcs cluster status
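Since Corosync handles quorum voting, you can also inspect the quorum state directly (an optional check):
$ sudo corosync-quorumtool -s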
Note: In our lab we do not have any fencing devices, so we disable fencing (STONITH). In a production environment, however, configuring fencing is strongly recommended.
$ sudo pcs property set stonith-enabled=false
$ sudo pcs property set no-quorum-policy=ignore
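To confirm the property changes took effect (an optional check; on older pcs releases the subcommand is pcs property list instead of pcs property config):
$ sudo pcs property config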
A 2 GB shared disk (/dev/sdb) is attached to the servers. We will configure it as an LVM volume and format it with the XFS filesystem.
Before starting to create the LVM volume, edit the /etc/lvm/lvm.conf file on both nodes.
Change the parameter # system_id_source = "none" to system_id_source = "uname":
$ sudo sed -i 's/# system_id_source = "none"/ system_id_source = "uname"/g' /etc/lvm/lvm.conf
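You can verify the change on both nodes (an optional check):
$ grep system_id_source /etc/lvm/lvm.conf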
On node1, run the following set of commands in order to create the LVM volume:
$ sudo pvcreate /dev/sdb
$ sudo vgcreate --setautoactivation n vg01 /dev/sdb
$ sudo lvcreate -L1.99G -n lv01 vg01
$ sudo lvs /dev/vg01/lv01
$ sudo mkfs.xfs /dev/vg01/lv01
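Because the volume group is managed through an LVM system ID, you can confirm that the ID was recorded (an optional check):
$ sudo vgs -o+systemid vg01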
Add the shared device to the LVM devices file on the second cluster node (node2.example.com). Run the following command on node2 only:
[sysops@node2 ~]$ sudo lvmdevices --adddev /dev/sdb
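To confirm the entry was added to the devices file (an optional check), list it without arguments:
[sysops@node2 ~]$ sudo lvmdevices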
Install the Apache web server (httpd) on both servers by running the following dnf command:
$ sudo dnf install -y httpd wget
Allow the Apache ports through the firewall by running the following firewall-cmd commands on both servers:
$ sudo firewall-cmd --permanent --zone=public --add-service=http
$ sudo firewall-cmd --permanent --zone=public --add-service=https
$ sudo firewall-cmd --reload
Create a status.conf file on both nodes so that the Apache resource agent can retrieve Apache's status:
$ sudo bash -c 'cat <<-END > /etc/httpd/conf.d/status.conf
<Location /server-status>
    SetHandler server-status
    Require local
</Location>
END'
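After adding the drop-in file, you can run Apache's built-in syntax check on both nodes (an optional check):
$ sudo httpd -t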
Modify /etc/logrotate.d/httpd on both nodes.
Replace the line below:
/bin/systemctl reload httpd.service > /dev/null 2>/dev/null || true
with:
/usr/bin/test -f /run/httpd.pid >/dev/null 2>/dev/null &&
/usr/bin/ps -q $(/usr/bin/cat /run/httpd.pid) >/dev/null 2>/dev/null &&
/usr/sbin/httpd -f /etc/httpd/conf/httpd.conf \
-c "PidFile /run/httpd.pid" -k graceful > /dev/null 2>/dev/null || true
Save and close the file.
Run the following commands on node1 only:
$ sudo lvchange -ay vg01/lv01
$ sudo mount /dev/vg01/lv01 /var/www/
$ sudo mkdir /var/www/html
$ sudo mkdir /var/www/cgi-bin
$ sudo mkdir /var/www/error
$ sudo bash -c 'cat <<-END >/var/www/html/index.html
<html>
<body>High Availability Apache Cluster - Test Page </body>
</html>
END'
$ sudo umount /var/www
Note: If SELinux is enabled, run the following command on both servers:
$ sudo restorecon -R /var/www
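If you are not sure whether SELinux is enabled, you can check its current mode first (an optional check):
$ getenforce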
Define a resource group and the cluster resources for the cluster. In this example, we use webgroup as the resource group.
web_lvm is the resource name for the shared LVM volume (/dev/vg01/lv01).
web_fs is the name of the filesystem resource that will be mounted on /var/www.
VirtualIP is the VIP (IPaddr2) resource on the NIC enp0s3.
Website is the resource for the Apache configuration file.
Run the following set of commands from any node.
$ sudo pcs resource create web_lvm ocf:heartbeat:LVM-activate vgname=vg01 vg_access_mode=system_id --group webgroup
$ sudo pcs resource create web_fs Filesystem device="/dev/vg01/lv01" directory="/var/www" fstype="xfs" --group webgroup
$ sudo pcs resource create VirtualIP IPaddr2 ip=192.168.1.81 cidr_netmask=24 nic=enp0s3 --group webgroup
$ sudo pcs resource create Website apache configfile="/etc/httpd/conf/httpd.conf" statusurl="http://127.0.0.1/server-status" --group webgroup
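To review the resource definitions and options you just created (an optional check; older pcs versions use pcs resource show --full instead):
$ sudo pcs resource config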
Now verify the cluster resource status by running:
$ sudo pcs status
Great, the output above shows that all resources are started on node1.
Try to access the web page using the VIP (192.168.1.81).
Use the curl command or a web browser to access the web page:
$ curl http://192.168.1.81
Or open http://192.168.1.81 in a web browser.
Perfect! The output above confirms that we can reach the web page of our highly available Apache cluster.
Let's try moving the cluster resources from node1 to node2 by running:
$ sudo pcs node standby node1.example.com
$ sudo pcs status
Perfect, the output above confirms that the cluster resources have migrated from node1 to node2.
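Another way to see where each resource is currently running (an optional check):
$ sudo pcs status resources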
To take the node (node1.example.com) out of standby mode, run the following command:
$ sudo pcs node unstandby node1.example.com
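To confirm that node1 is back online in the cluster (an optional check):
$ sudo pcs status nodes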