Preparing the environment
node1:192.168.139.2
node2:192.168.139.4
node4:192.168.139.8
node5:192.168.139.9
node1 as the iSCSI target end
node2, node4, and node5 as the initiator end
node2, node4, and node5 are configured into a three-node RHCS high-availability cluster after installing cman+rgmanager. Because gfs2 is a cluster file system, failed nodes must be fenced with the help of the HA cluster, and node state must be exchanged through the Message Layer.
Because the discovered and logged-in target will be formatted as a cluster file system, gfs2-utils must be installed on node2, node4, and node5.
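A minimal sketch of that install step, assuming the standard RHEL repositories (run it on all three nodes):
[root@node2 mnt]# yum -y install gfs2-utils \\repeat on node4 and node5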
First, stop the cluster service previously created with luci/ricci (a service left over from an earlier experiment, unrelated to this one).
[root@node2 mnt]# clusvcadm -d Web_Service
Local machine disabling service:Web_Service...
[root@node2 mnt]# clustat
Cluster Status for zxl @ Wed Dec 21 17:55:46 2016
Member Status: Quorate
service:Web_Service (node2.zxl.com) disabled
[root@node2 mnt]# yum -y install cman rgmanager
Use the ccs_tool command to create a cluster named mycluster
[root@node2 mnt]# ccs_tool create mycluster
[root@node2 mnt]# cat /etc/cluster/cluster.conf
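ccs_tool create writes only a skeleton configuration; reconstructed from memory, it looks roughly like this (a sketch, not the byte-exact file):
<?xml version="1.0"?>
<cluster name="mycluster" config_version="1">
  <clusternodes>
  </clusternodes>
  <fencedevices>
  </fencedevices>
  <rm>
    <failoverdomains/>
    <resources/>
  </rm>
</cluster>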
##Add a fence device (required for an RHCS cluster)
[root@node2 mnt]# ccs_tool addfence meatware fence_manual
[root@node2 mnt]# ccs_tool lsfence
Name             Agent
meatware         fence_manual
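Because meatware/fence_manual is a manual fence agent, an operator must confirm that a failed node has really been reset before the cluster unblocks; the acknowledgement tool shipped with these RHCS releases is fence_ack_manual. A sketch from memory (the exact invocation varies by version, check your man page):
[root@node2 mnt]# fence_ack_manual -n node4.zxl.com \\tell the cluster that node4 has been manually fenced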
-v specifies the number of votes the node has
-n specifies the node ID
-f specifies the fence device name
Add the three nodes; an RHCS cluster normally needs at least three nodes for quorum (a two-node cluster requires a special configuration)
[root@node2 mnt]# ccs_tool addnode -v 1 -n 1 -f meatware node2.zxl.com
[root@node2 mnt]# ccs_tool addnode -v 1 -n 2 -f meatware node4.zxl.com
[root@node2 mnt]# ccs_tool addnode -v 1 -n 3 -f meatware node5.zxl.com
[root@node2 mnt]# ccs_tool lsnode
Cluster name: mycluster, config_version: 5
Nodename                        Votes Nodeid Fencetype
node2.zxl.com                      1    1    meatware
node4.zxl.com                      1    2    meatware
node5.zxl.com                      1    3    meatware
[root@node2 mnt]# scp /etc/cluster/cluster.conf node5:/etc/cluster/
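node4 needs the same configuration file, so a step like the following was presumably run as well (not shown in the original transcript):
[root@node2 mnt]# scp /etc/cluster/cluster.conf node4:/etc/cluster/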
Start cman and rgmanager on each node
[root@node2 mnt]# service cman start
[root@node2 mnt]# service rgmanager start
[root@node4 mnt]# service cman start
[root@node4 mnt]# service rgmanager start
[root@node5 mnt]# service cman start
[root@node5 mnt]# service rgmanager start
[root@node2 mnt]# clustat
Cluster Status for mycluster @ Wed Dec 21 18:40:26 2016
Member Status: Quorate
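Besides clustat, cman's own tooling can confirm that all three members joined; an optional sanity check (not in the original transcript) would be:
[root@node2 mnt]# cman_tool nodes \\lists each member with its node ID and join status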
gfs2-utils provides three main commands:
/sbin/mkfs.gfs2 \\Format a device to create the gfs2 file system
/sbin/mount.gfs2 \\Mount the gfs2 file system
/usr/sbin/gfs2_convert \\Convert an existing GFS file system to gfs2
Common mkfs.gfs2 options:
-j specifies the number of journals to create; this determines how many nodes can mount the file system at once, because each node that mounts a cluster file system needs its own journal
-J specifies the journal size; the default is 128MB
-p {lock_dlm|lock_nolock} distributed lock manager | no locking required
-t specifies the lock table name
Note: a cluster can hold multiple file systems. For example, a cluster may share two disks, one formatted as gfs2 and the other as ocfs2. Locks on different file systems must be kept in separate lock tables so they can be uniquely identified, which is why each lock table has a name.
Lock table name format: cluster_name:lock_table_name
For example: mycluster:lock_sda
-D displays detailed debug information
Log in to the target and format it as a gfs2 file system
[root@node2 mnt]# iscsiadm -m node -T iqn.2016-12.com.zxl:store1.disk1 -p 192.168.139.2 -l
[root@node2 mnt]# mkfs.gfs2 -j 2 -p lock_dlm -t mycluster:lock_sde1 /dev/sde1
Are you sure you want to proceed? [y/n] y
##Device: /dev/sde1
Blocksize: 4096
Device Size 3.00 GB (787330 blocks)
Filesystem Size: 3.00 GB (787328 blocks)
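The original transcript skips the mount-and-copy step here. Judging from the directory listing below (a 47-byte issue file), the steps were presumably along these lines; the device name on node4 is an assumption and may differ from node2's:
[root@node2 mnt]# mount -t gfs2 /dev/sde1 /mnt
[root@node2 mnt]# cp /etc/issue /mnt \\copy a test file onto the shared file system
[root@node4 ~]# iscsiadm -m node -T iqn.2016-12.com.zxl:store1.disk1 -p 192.168.139.2 -l
[root@node4 ~]# mount -t gfs2 /dev/sde1 /mnt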
[root@node4 ~]# cd /mnt
[root@node4 mnt]# ll \\You can see the file copied by node2
total 8
-rw-r--r--. 1 root root 47 Dec 21 19:06 issue
When node4 creates a file a.txt, the other nodes are notified immediately and can see it at once; this is the benefit of the gfs2 cluster file system.
[root@node4 mnt]# touch a.txt
[root@node2 mnt]# ll
total 16
-rw-r--r--. 1 root root 0 Dec 21 19:10 a.txt
-rw-r--r--. 1 root root 47 Dec 21 19:06 issue
Add a new node, node5
[root@node5 ~]# iscsiadm -m node -T iqn.2016-12.com.zxl:store1.disk1 -p 192.168.139.2 -l
node5 cannot mount the file system, because only two journals were created when formatting; the number of journals limits how many nodes can mount
[root@node5 ~]# mount -t gfs2 /dev/sdc1 /mnt
Too many nodes mounting filesystem, no free journals
Add a journal
[root@node2 mnt]# gfs2_jadd -j 1 /dev/sde1 \\-j 1 adds one journal (gfs2_jadd runs against the mounted file system, hence "Filesystem: /mnt" below)
Filesystem: /mnt
Old Journals 2
New Journals 3
[root@node2 mnt]# gfs2_tool journals /dev/sde1 \\This command shows how many journals the file system has; each defaults to 128MB
journal2 - 128MB
journal1 - 128MB
journal0 - 128MB
3 journal(s) found.
[root@node5 ~]# mount -t gfs2 /dev/sdc1 /mnt \\node5 mounted successfully
[root@node5 ~]# cd /mnt
[root@node5 mnt]# touch b.txt
[root@node4 mnt]# ll
total 24
-rw-r--r--. 1 root root 0 Dec 21 19:10 a.txt
-rw-r--r--. 1 root root 0 Dec 21 19:18 b.txt
-rw-r--r--. 1 root root 47 Dec 21 19:06 issue
The gfs2 cluster file system generally supports no more than 16 nodes; beyond that, performance plummets.