
MySQL - Detailed introduction to MySQL high availability implementation

黄舟
Release: 2017-03-14 16:43:10

1: Basic environment and configuration

Node 1: node1.hulala.com 192.168.1.35 CentOS 6.5 x86_64, with an additional 8 GB disk
Node 2: node2.hulala.com 192.168.1.36 CentOS 6.5 x86_64, with an additional 8 GB disk
VIP: 192.168.1.39

The following configuration is required on both node1 and node2.
Set the hostname (use node2.hulala.com on node2):

vim /etc/sysconfig/network
HOSTNAME=node1.hulala.com

Configure hosts resolution:

vim /etc/hosts
192.168.1.35    node1.hulala.com node1
192.168.1.36    node2.hulala.com node2

Synchronize the system time:

ntpdate cn.pool.ntp.org

Disable the firewall and SELinux:

service iptables stop
chkconfig iptables off
vim /etc/sysconfig/selinux
SELINUX=disabled

The above configuration is required on both nodes. After it is complete, reboot both nodes.

2: Configure SSH mutual trust

[root@node1~]#ssh-keygen -t rsa -b 1024
[root@node1~]#ssh-copy-id root@192.168.1.36
[root@node2~]#ssh-keygen -t rsa -b 1024
[root@node2~]#ssh-copy-id root@192.168.1.35

3: DRBD installation and configuration (perform the same operations on node1 and node2)

[root@node1~]#wget -c http://www.php.cn/
[root@node1~]#wget -c http://www.php.cn/
[root@node1~]#rpm -ivh *.rpm

Generate a SHA1 value to use as the shared secret:

[root@node1~]#sha1sum /etc/drbd.conf
8a6c5f3c21b84c66049456d34b4c4980468bcfb3  /etc/drbd.conf

Create and edit the resource configuration file /etc/drbd.d/dbcluster.res:

[root@node1~]# vim /etc/drbd.d/dbcluster.res
resource dbcluster {
    protocol C;
    net {
        cram-hmac-alg sha1;
        shared-secret "8a6c5f3c21b84c66049456d34b4c4980468bcfb3";
        after-sb-0pri discard-zero-changes;
        after-sb-1pri discard-secondary;
        after-sb-2pri disconnect;
        rr-conflict disconnect;
    }
    device    /dev/drbd0;
    disk      /dev/sdb1;
    meta-disk internal;
    on node1.hulala.com {
        address   192.168.1.35:7789;
    }
    on node2.hulala.com {
        address   192.168.1.36:7789;
    }
}

Description of the parameters used in the above configuration:
RESOURCE: the resource name.
PROTOCOL: protocol "C" means "synchronous", i.e. a write is only considered complete after the remote write confirmation has been received.
NET: the SHA1 key must be identical on both nodes.
after-sb-0pri: when split brain occurs and no data has changed, the two nodes reconnect normally.
after-sb-1pri: if data has changed, discard the data on the secondary and resynchronize from the primary.
rr-conflict: if the previous settings cannot be applied and there is a role conflict in the DRBD system, the connection between the nodes is dropped automatically.
META-DISK: the metadata is kept on the same disk (sdb1).
ON: the nodes that form the cluster.
Copy the DRBD configuration to the other node:

[root@node1~]#scp /etc/drbd.d/dbcluster.res root@192.168.1.36:/etc/drbd.d/

Create the resource and file system. First create a partition on the new disk (do not format it) on both node1 and node2; a non-interactive sketch is shown after the command below:

[root@node1~]# fdisk /dev/sdb
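fdisk is interactive; a minimal non-interactive sketch, assuming the intended layout is a single primary partition /dev/sdb1 spanning the whole 8 GB disk and left unformatted, could look like this:

echo -e "n\np\n1\n\n\nw" | fdisk /dev/sdb   # new primary partition 1, default start/end, write and quit
partprobe /dev/sdb                          # re-read the partition table without rebooting

Run the same on node2.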

Create the metadata for the resource (dbcluster) on node1 and node2:

[root@node1~drbd]#drbdadm create-md dbcluster

Activate the resource (check this on both node1 and node2). First make sure the drbd kernel module is loaded:

# lsmod | grep drbd

If it is not loaded, load it:

# modprobe drbd
# lsmod | grep drbd
drbd                  317261  0
libcrc32c               1246  1 drbd

Bring up the DRBD resource:

[root@node1 drbd]# drbdadm up dbcluster
[root@node2 drbd]# drbdadm up dbcluster

Check the DRBD status (on node1 and node2):

[root@node2 drbd]# /etc/init.d/drbd status
GIT-hash: 7ad5f850d711223713d6dcadc3dd48860321070c build by dag@Build64R6, 2016-10-23 08:16:10
m:res        cs         ro                   ds                         p  mounted  fstype
0:dbcluster  Connected  Secondary/Secondary  Inconsistent/Inconsistent  C

As the output above shows, the DRBD service is running on both machines, but neither of them is the primary ("Primary") host, so the resource (block device) cannot be accessed yet. Start the synchronization:

Run this only on the primary node (node1 here):

[root@node1 drbd]# drbdadm -- --overwrite-data-of-peer primary dbcluster

View the synchronization status:

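The usual way to watch the synchronization progress, assuming a DRBD 8.x build (which exposes /proc/drbd), is:

cat /proc/drbd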

Some notes on the fields in this output:
cs (connection state): the network connection status.
ro (roles): the role of the node (this node's role is shown first).
ds (disk states): the state of the disks.
Replication protocol: A, B or C (this configuration uses C).
When the DRBD status shows "cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate", the synchronization is finished.
You can also check the DRBD status like this:

[root@centos193 drbd]# drbd-overview
  0:dbcluster/0  Connected Secondary/Primary UpToDate/UpToDate C r-----

Create the file system on the primary node (node1):

[root@node1 drbd]# mkfs -t ext4 /dev/drbd0
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
…….
180 days, whichever comes first.  Use tune2fs -c or -i to override.

Note: there is no need to repeat this on the secondary node (node2), because DRBD handles synchronizing the raw disk data. Also, the DRBD file system does not need to be mounted permanently on any machine (apart from a temporary mount to install and initialize MySQL), because the cluster management software will handle that. Make sure the file system is only ever mounted on the active primary server.

4: MySQL installation

For the installation of MySQL, you can also refer to the blog post "MySQL - CentOS6.5 Compile and Install MySQL5.6.16".

1. Install MySQL on node1 and node2:

yum install mysql* -y

2. Stop the MySQL service on both node1 and node2:

[root@node1~]# service mysql stop
Shutting down MySQL.        [  OK  ]

3. On both node1 and node2, create the database directory and change its owner to mysql:

[root@host1 /]# mkdir -p /mysql/data
[root@host1 /]# chown -R mysql:mysql /mysql

4. With MySQL stopped, temporarily mount the DRBD file system on the primary node (node1):

[root@node1 ~]# mount /dev/drbd0  /mysql/

5. On both node1 and node2, edit my.cnf and add the new data directory under the [mysqld] section (a minimal sketch of the section follows the line below):

datadir=/mysql/data
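A minimal [mysqld] section with this change might look like the following; the socket and user lines are assumed CentOS 6 defaults, not values given in the steps above:

[mysqld]
datadir=/mysql/data
socket=/var/lib/mysql/mysql.sock   # assumed default socket location
user=mysql                         # assumed; mysqld runs as the mysql user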

6. Copy all files and directories from the default data path to the new directory (this is not needed on node2):

[root@host1 mysql]#cd /var/lib/mysql
[root@host1 mysql]#cp -R * /mysql/data/

On both node1 and node2, make sure the owner of the copied files is changed to mysql; it is simplest to chown the whole /mysql directory again:

[root@host1 mysql]# chown -R mysql:mysql /mysql

7. Start MySQL on node1 and do a login test:

[root@host1 mysql]# mysql
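As an optional sanity check, you could create a marker database here; once the DRBD device is moved to node2 in the steps below, it should be visible there as well. This check is an addition and assumes the fresh install still allows passwordless root login locally:

mysql -e "CREATE DATABASE failover_test;"   # failover_test is an arbitrary name used only for this check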

8. Unmount the DRBD file system on node1 and demote the resource:

[root@node1 ~]# umount /mysql/
[root@node1 ~]# drbdadm secondary dbcluster

9. Promote the resource and mount the DRBD file system on node2:

[root@node2 ~]# drbdadm primary dbcluster
[root@node2 ~]# mount /dev/drbd0 /mysql/

10. Configure MySQL on node2 and test:

[root@node1 ~]# scp /etc/my.cnf node2:/etc/my.cnf
[root@node2 ~]# chown mysql /etc/my.cnf
[root@node2 ~]# chmod 644 /etc/my.cnf

11. Do a MySQL login test on node2:

[root@node2 ~]# mysql
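If you created the marker database on node1 earlier, it should show up here too, confirming that the data really followed the DRBD device:

mysql -e "SHOW DATABASES;"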

12. Unmount the DRBD file system on node2 and let the cluster manager Pacemaker manage it from now on:

[root@node2~]# umount /mysql/
[root@node2~]# drbdadm secondary dbcluster
[root@node2~]# drbd-overview
  0:dbcluster/0  Connected Secondary/Secondary UpToDate/UpToDate C r-----
[root@node2~]#

5: Installation and configuration of Corosync and Pacemaker (install on both node1 and node2)

Install the dependencies required by Pacemaker:

[root@node1~]#yum -y install automake autoconf libtool-ltdl-devel pkgconfig python glib2-devel libxml2-devel 
libxslt-devel python-devel gcc-c++ bzip2-devel gnutls-devel pam-devel libqb-devel

Install the Cluster Stack dependencies:

[root@node1~]# yum -y install clusterlib-devel corosynclib-devel

Install the optional Pacemaker dependencies:

[root@node1~]# yum -y install ncurses-devel openssl-devel cluster-glue-libs-devel docbook-style-xsl

Install Pacemaker:

[root@node1~]# yum -y install pacemaker

Install crmsh:

[root@node1~]# wget http://www.php.cn/:/ha-clustering:/Stable/CentOS_CentOS-6/network:ha-clustering:Stable.repo
[root@node1~]# yum -y install crmsh

1. Configure Corosync. Generate the key (authkey) used for secure communication between the nodes:

[root@node1~]# corosync-keygen
Copy authkey to node2 (keep its permissions at 400):

[root@node1~]# scp /etc/corosync/authkey node2:/etc/corosync/

2. Create the Corosync configuration from the example file:

[root@node1~]# cp /etc/corosync/corosync.conf.example /etc/corosync/corosync.conf

Edit /etc/corosync/corosync.conf:

# Please read the corosync.conf.5 manual page
compatibility: whitetank
aisexec {
        user: root
        group: root
}
totem {
        version: 2
        secauth: off
        threads: 0
        interface {
                ringnumber: 0
                bindnetaddr: 192.168.1.0
                mcastaddr: 226.94.1.1
                mcastport: 4000
                ttl: 1
        }
}
logging {
        fileline: off
        to_stderr: no
        to_logfile: yes
        to_syslog: yes
        logfile: /var/log/cluster/corosync.log
        debug: off
        timestamp: on
        logger_subsys {
                subsys: AMF
                debug: off
        }
}
amf {
        mode: disabled
}

Create and edit /etc/corosync/service.d/pcmk, adding the "pacemaker" service:

[root@node1~]# cat /etc/corosync/service.d/pcmk
service {
	# Load the Pacemaker Cluster Resource Manager
	name: pacemaker
	ver: 1
}

Copy the two configuration files above to the other node:

[root@node1]# scp /etc/corosync/corosync.conf node2:/etc/corosync/corosync.conf
[root@node1]# scp /etc/corosync/service.d/pcmk node2:/etc/corosync/service.d/pcmk

3. Start Corosync and Pacemaker. Start Corosync on each node and check it:

[root@node1]# /etc/init.d/corosync start
Starting Corosync Cluster Engine (corosync):               [  OK  ]
[root@node1~]# corosync-cfgtool -s
Printing ring status.
Local node ID -1123964736
RING ID 0
id = 192.168.1.189
status = ring 0 active with no faults
[root@node2]# /etc/init.d/corosync start
Starting Corosync Cluster Engine (corosync):               [  OK  ]

Start Pacemaker on both nodes:

[root@node1~]# /etc/init.d/pacemaker start
Starting Pacemaker Cluster Manager:                        [  OK  ]
[root@node2~]# /etc/init.d/pacemaker start
Starting Pacemaker Cluster Manager:

6: Resource configuration

Configure the resources and constraints, starting with the default cluster properties. It is worth viewing the existing configuration first; a quick sketch follows.
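With crmsh installed, the current (still empty) configuration can be listed before any changes are made:

[root@node1 ~]# crm configure show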

Disable STONITH (no fencing device is configured in this setup, so this avoids STONITH errors):

[root@node1 ~]# crm configure property stonith-enabled=false
[root@node1 ~]# crm_verify -L

Make the cluster ignore the loss of quorum (required for a two-node cluster):

[root@node1~]# crm configure property no-quorum-policy=ignore

Prevent resources from moving back after a node recovers (resource stickiness):

[root@node1~]# crm configure rsc_defaults resource-stickiness=100

Set the default operation timeout:

[root@node1~]# crm configure property default-action-timeout="180s"

Set whether a start failure is treated as fatal by default:

[root@node1~]# crm configure property start-failure-is-fatal="false"

Configure the DRBD resource. Stop DRBD first:

[root@node1~]# /etc/init.d/drbd stop
[root@node2~]# /etc/init.d/drbd stop

Configure the DRBD resource:

[root@node1~]# crm configure
crm(live)configure# primitive p_drbd_mysql ocf:linbit:drbd params drbd_resource="dbcluster" op monitor interval="15s" op start timeout="240s" op stop timeout="100s"

Configure the DRBD master/slave relationship (only one Master node is allowed):

crm(live)configure# ms ms_drbd_mysql p_drbd_mysql meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"

Configure the file system resource and define the mount point:

crm(live)configure# primitive p_fs_mysql ocf:heartbeat:Filesystem params device="/dev/drbd0" directory="/mysql/" fstype="ext4"

Configure the VIP resource:

crm(live)configure# primitive p_ip_mysql ocf:heartbeat:IPaddr2 params ip="192.168.1.39" cidr_netmask="24" op monitor interval="30s"

Configure the MySQL resource:

crm(live)configure# primitive p_mysql lsb:mysql op monitor interval="20s" timeout="30s" op start interval="0" timeout="180s" op stop interval="0" timeout="240s"

7: Resource group and constraints

Use a "group" to ensure that DRBD, MySQL and the VIP stay on the same node (the Master) and to fix the start/stop order of the resources:

Start order: p_fs_mysql -> p_ip_mysql -> p_mysql
Stop order:  p_mysql -> p_ip_mysql -> p_fs_mysql
crm(live)configure# group g_mysql p_fs_mysql p_ip_mysql p_mysql

The group g_mysql runs only on the DRBD Master node:

crm(live)configure# colocation c_mysql_on_drbd inf: g_mysql ms_drbd_mysql:Master

MySQL starts only after the DRBD Master has been promoted:

crm(live)configure# order o_drbd_before_mysql inf: ms_drbd_mysql:promote g_mysql:start

Verify and commit the configuration:

crm(live)configure# verify
crm(live)configure# commit
crm(live)configure# quit

Check the cluster status and test failover. View the status:

[root@node1 mysql]# crm_mon -1r

Failover test: put node1 into standby:

[root@node1 ~]# crm node standby

Check the cluster status after a few minutes; if the failover succeeded, crm status will show all resources running on node2:

[root@node1 ~]# crm status

Bring node1 back online and check the status again:

[root@node1 mysql]# crm node online
[root@node1 mysql]# crm status
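To confirm that clients always reach whichever node currently holds the resources, you can connect through the VIP; this assumes a MySQL account allowed to connect from a remote host (for example root@'%'), which the steps above did not create:

mysql -h 192.168.1.39 -u root -e "SELECT @@hostname;"   # should print the hostname of the current Master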

