
MySQL High Availability with MHA

WBOY
Release: 2016-06-01 12:58:49

MySQL has many high-availability solutions, such as MMM and MySQL Cluster, but few of them are truly production-ready. MHA (Master High Availability) is currently a relatively mature solution: it performs automatic failure detection and failover within a short time, typically 10 to 30 seconds, and in a replication environment it does a good job of keeping row data consistent across servers during failover. MHA can be deployed without changing the existing environment, and installation is very simple.

MHA consists of the MHA Manager (management node) and the MHA Node (data node). The Manager can be deployed on a dedicated machine to manage one or more master-slave clusters, or it can run on one of the slaves. The Node runs on every MySQL server. The Manager periodically probes the master of the cluster; when the master fails, it automatically promotes the slave with the most recent data to be the new master and repoints the remaining slaves to it.

How is data consistency maintained? Mainly through the following MHA Node tools, which are triggered by the MHA Manager:

save_binary_logs: if the master's binary logs are still accessible, saves and copies them so that as little data as possible is lost

apply_diff_relay_logs: generates the differential relay log events relative to the most up-to-date slave and applies them to all the other slaves

Note: what is compared is the relay log; the newer a slave's relay log, the closer that slave is to the master, which is how the freshest data is identified.

purge_relay_logs: deletes relay logs without blocking the SQL thread

That covers the basics; for details see the project site: https://code.google.com/p/mysql-master-ha. Now let's deploy it.

The MHA architecture is as follows:

master: 10.10.10.56  hostname: rd-mysql-test1  server-id: 1  (writes; data node)

slave1: 10.10.10.57  hostname: rd-mysql-test2  server-id: 2  (reads; data node; candidate master)

slave2: 10.10.10.58  hostname: rd-mysql-test3  server-id: 3  (reads; data node)

mha manager: 10.10.10.59  hostname: rd-mysql-test4  (management node)
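Since the scripts and the configuration below refer to these machines by both hostname and IP, every node should be able to resolve the hostnames. A sketch of the /etc/hosts entries each machine could carry, assuming no DNS is in place:

```
10.10.10.56  rd-mysql-test1
10.10.10.57  rd-mysql-test2
10.10.10.58  rd-mysql-test3
10.10.10.59  rd-mysql-test4
```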

1. Install MHA Node on all nodes

#Install MHA Node
[root@rd-mysql-test4 ~]# cd /usr/local/src/
[root@rd-mysql-test4 src]# yum install perl-DBD-MySQL -y
[root@rd-mysql-test4 src]# wget http://mysql-master-ha.googlecode.com/files/mha4mysql-node-0.53.tar.gz
[root@rd-mysql-test4 src]# tar -zxvf mha4mysql-node-0.53.tar.gz
[root@rd-mysql-test4 src]# cd mha4mysql-node-0.53
[root@rd-mysql-test4 mha4mysql-node-0.53]# perl Makefile.PL 
Can't locate ExtUtils/MakeMaker.pm in @INC (@INC contains: inc /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 .) at inc/Module/Install/Can.pm line 6.
BEGIN failed--compilation aborted at inc/Module/Install/Can.pm line 6.
Compilation failed in require at inc/Module/Install.pm line 307.
Can't locate ExtUtils/MakeMaker.pm in @INC (@INC contains: inc /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 .) at inc/Module/Install/Makefile.pm line 4.
BEGIN failed--compilation aborted at inc/Module/Install/Makefile.pm line 4.
Compilation failed in require at inc/Module/Install.pm line 307.
Can't locate ExtUtils/MM_Unix.pm in @INC (@INC contains: inc /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 .) at inc/Module/Install/Metadata.pm line 316.
[root@rd-mysql-test1 mha4mysql-node-0.53]# yum install perl-devel
*** Module::AutoInstall version 1.03
*** Checking for Perl dependencies...
Can't locate CPAN.pm in @INC (@INC contains: inc /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 .) at inc/Module/AutoInstall.pm line 279.
[root@rd-mysql-test4 mha4mysql-node-0.53]# yum -y install perl-CPAN
[root@rd-mysql-test4 mha4mysql-node-0.53]# perl Makefile.PL
[root@rd-mysql-test4 mha4mysql-node-0.53]# make && make install

Note: MHA is written in Perl and therefore depends on Perl modules; we use the EPEL repository to install them.

2. Install MHA Manager
[root@rd-mysql-test4 src]# yum install perl-DBD-MySQL perl-Config-Tiny perl-Log-Dispatch perl-Parallel-ForkManager perl-Time-HiRes -y
[root@rd-mysql-test4 src]# wget http://mysql-master-ha.googlecode.com/files/mha4mysql-manager-0.53.tar.gz
[root@rd-mysql-test4 src]# tar -zxvf mha4mysql-manager-0.53.tar.gz
[root@rd-mysql-test4 src]# cd mha4mysql-manager-0.53
[root@rd-mysql-test4 mha4mysql-manager-0.53]# perl Makefile.PL
[root@rd-mysql-test4 mha4mysql-manager-0.53]# make && make install
Note: 1. The MHA Node package must also be installed on the MHA Manager node.

2. The MHA Manager version may differ from the MHA Node version; using the latest versions is recommended.

After installation, the following scripts are generated under /usr/local/bin:

apply_diff_relay_logs
filter_mysqlbinlog
masterha_check_repl				check MySQL replication health
masterha_check_ssh				check MHA's SSH configuration
masterha_check_status			check the current MHA running status
masterha_conf_host				add or remove configured server entries
masterha_manager				start MHA
masterha_stop					stop MHA
masterha_master_monitor			monitor whether the master is down
masterha_master_switch			control failover (automatic or manual)
masterha_secondary_check		check master liveness over multiple network routes

In addition, /usr/local/src/mha4mysql-manager-0.53/samples/scripts contains the following scripts, which we copy to /usr/local/bin:

[root@rd-mysql-test4 mha4mysql-manager-0.53]#cd samples/scripts/
[root@rd-mysql-test4 scripts]# ll
total 32
-rwxr-xr-x 1 root root  3443 Jan  8  2012 master_ip_failover		VIP management script for automatic failover
-rwxr-xr-x 1 root root  9186 Jan  8  2012 master_ip_online_change	VIP script for online switchover
-rwxr-xr-x 1 root root 11867 Jan  8  2012 power_manager				script to power off the master after a failure
-rwxr-xr-x 1 root root  1360 Jan  8  2012 send_report				script to send an alert report after failover
[root@rd-mysql-test4 scripts]# cp * /usr/local/bin

3. Set up passwordless SSH logins between all nodes

#rd-mysql-test1
[root@rd-mysql-test1 mha4mysql-node-0.53]#ssh-keygen -t rsa
[root@rd-mysql-test1 mha4mysql-node-0.53]#ssh-copy-id -i /root/.ssh/id_rsa.pub root@10.10.10.57 
[root@rd-mysql-test1 mha4mysql-node-0.53]#ssh-copy-id -i /root/.ssh/id_rsa.pub root@10.10.10.58
[root@rd-mysql-test1 mha4mysql-node-0.53]#ssh-copy-id -i /root/.ssh/id_rsa.pub root@10.10.10.59
#rd-mysql-test2
[root@rd-mysql-test2 mha4mysql-node-0.53]#ssh-keygen -t rsa
[root@rd-mysql-test2 mha4mysql-node-0.53]#ssh-copy-id -i /root/.ssh/id_rsa.pub root@10.10.10.56 
[root@rd-mysql-test2 mha4mysql-node-0.53]#ssh-copy-id -i /root/.ssh/id_rsa.pub root@10.10.10.58
[root@rd-mysql-test2 mha4mysql-node-0.53]#ssh-copy-id -i /root/.ssh/id_rsa.pub root@10.10.10.59
#rd-mysql-test3
[root@rd-mysql-test3 mha4mysql-node-0.53]#ssh-keygen -t rsa
[root@rd-mysql-test3 mha4mysql-node-0.53]#ssh-copy-id -i /root/.ssh/id_rsa.pub root@10.10.10.56 
[root@rd-mysql-test3 mha4mysql-node-0.53]#ssh-copy-id -i /root/.ssh/id_rsa.pub root@10.10.10.57
[root@rd-mysql-test3 mha4mysql-node-0.53]#ssh-copy-id -i /root/.ssh/id_rsa.pub root@10.10.10.59
#rd-mysql-test4
[root@rd-mysql-test4 mha4mysql-manager-0.53]#ssh-keygen -t rsa
[root@rd-mysql-test4 mha4mysql-manager-0.53]#ssh-copy-id -i /root/.ssh/id_rsa.pub root@10.10.10.56 
[root@rd-mysql-test4 mha4mysql-manager-0.53]#ssh-copy-id -i /root/.ssh/id_rsa.pub root@10.10.10.57
[root@rd-mysql-test4 mha4mysql-manager-0.53]#ssh-copy-id -i /root/.ssh/id_rsa.pub root@10.10.10.58
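After distributing the keys, connectivity can be spot-checked from each node before involving MHA. A sketch (port 1022 matches the ssh_port we use in the MHA configuration below; adjust it if your sshd listens on the default port 22):

```shell
# verify passwordless SSH to every peer; BatchMode fails fast instead of prompting
for h in 10.10.10.56 10.10.10.57 10.10.10.58 10.10.10.59; do
  ssh -p 1022 -o BatchMode=yes -o ConnectTimeout=5 root@$h hostname \
    && echo "$h ok" || echo "$h FAILED"
done
```

masterha_check_ssh performs the same test more thoroughly later; this loop just catches key-distribution mistakes early.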
4. Set up MySQL master-slave replication

(1) Create and grant the replication user on the master and slaves

mysql> grant replication slave on *.* to 'rep'@'10.10.10.%' identified by '123456';
Query OK, 0 rows affected (0.01 sec)
mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)

(2) Configure both slaves

mysql> change master to master_host='10.10.10.56',master_port=3306,master_user='rep',master_password='123456',master_log_file='mysql-bin.XXXX',master_log_pos=XXXX;
mysql> start slave;
mysql> show slave status\G
Check that Slave_IO_Running and Slave_SQL_Running are both Yes.
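This check can be scripted. A small sketch (the `check_slave` helper is our own, not part of MHA) that greps the two thread states out of `show slave status\G` and fails unless both are Yes:

```shell
# exit non-zero unless both replication threads report Yes
check_slave() {
  awk '/Slave_(IO|SQL)_Running:/ { print; if ($2 != "Yes") bad=1 } END { exit bad }'
}

# real use: mysql -e 'show slave status\G' | check_slave
# demo with captured output:
printf 'Slave_IO_Running: Yes\nSlave_SQL_Running: Yes\n' | check_slave \
  && echo "replication healthy"
```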
(3) Set read_only on both slaves (slaves only serve reads; since a slave may later be promoted to master, we do not write read_only into the configuration file)

mysql -e 'set global read_only=1'

(4) Create the management account on the master and all slaves

mysql> grant all privileges on *.* to 'mha'@'10.10.10.%' identified by '123456';
Query OK, 0 rows affected (0.01 sec)
mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)
The replication setup itself is not covered in detail here; see the earlier Atlas posts. But note the following:

(1) log-bin must be enabled on the candidate master. If it is not, MHA Manager will not find a binary log during failover and that host will never become the new master, even with candidate_master=1; if no slave has log-bin enabled, MHA Manager will not start a failover at all.

(2) The replication user must exist on all slaves, otherwise masterha_check_repl will fail.

(3) Replication filtering rules (binlog-do-db, replicate-ignore-db, and so on) must be identical on all MySQL servers, otherwise masterha_check_repl will fail.
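Point (3) can be verified by hand before running masterha_check_repl. A sketch, reusing the mha admin account created above (Binlog_Do_DB/Binlog_Ignore_DB come from SHOW MASTER STATUS, the Replicate_* fields from SHOW SLAVE STATUS):

```shell
# the filter lines printed for each host should be identical
# (modulo the Replicate_* fields, which only appear on slaves)
for h in 10.10.10.56 10.10.10.57 10.10.10.58; do
  echo "== $h =="
  mysql -h "$h" -P 3306 -umha -p123456 \
    -e 'show master status\G show slave status\G' 2>/dev/null \
    | egrep 'Binlog_(Do|Ignore)_DB|Replicate_(Do|Ignore)_DB'
done
```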

5. MHA configuration

(1) Create the MHA configuration directory and edit the configuration file

[root@rd-mysql-test4 mha4mysql-manager-0.53]#mkdir -p /etc/mha/
[root@rd-mysql-test4 mha4mysql-manager-0.53]#cp samples/conf/app1.cnf /etc/mha
[root@rd-mysql-test4 mha4mysql-manager-0.53]#mkdir -p /var/log/masterha/app1
[root@rd-mysql-test4 mha4mysql-manager-0.53]#vim /etc/mha/app1.cnf
[server default]
#MHA Manager working directory, where state files are kept
manager_workdir=/var/log/masterha/app1
#MHA Manager log file
manager_log=/var/log/masterha/app1/manager.log
#working directory on the remote MySQL hosts; needs sufficient permissions
remote_workdir=/tmp
#directory where the master stores its binlogs; if the master fails but SSH still works, binlogs are copied from here to keep data consistent
master_binlog_dir=/data/mysql
#MySQL admin user and password; it needs enough privileges to run all required statements (stop slave, change master, and so on)
user=mha
password=123456
#replication user and password
repl_user=rep
repl_password=123456
#SSH user and port; we use root to make copying files easy
ssh_user=root
ssh_port=1022
#how often (in seconds) MHA Manager pings the master; after 3 consecutive failures the master is considered dead
ping_interval=1
#the default ping_type is SELECT; since 0.53 the more reliable CONNECT is also available
ping_type=CONNECT

#failover script
#master_ip_failover_script=/usr/local/bin/master_ip_failover
#online switchover script
#master_ip_online_change_script=/usr/local/bin/master_ip_online_change
#alert report script
#report_script=/usr/local/bin/send_report
#checks whether the master is reachable over several network routes; if one route is down, another is tried
#secondary_check_script=/usr/local/bin/masterha_secondary_check -s rd-mysql-test2 -s rd-mysql-test3 --user=mha --master_host=rd-mysql-test1 --master_ip=10.10.10.56 --master_port=3306
#script that shuts down the failed master to prevent split brain
#shutdown_script=/usr/local/bin/power_manager

[server1]
hostname=10.10.10.56
port=3306

[server2]
hostname=10.10.10.57
port=3306
#candidate master parameter
candidate_master=1
#by default MHA will not choose a slave as the new master if its relay logs are more than 100MB behind, because recovery would take a long time; this parameter tells MHA to ignore that check
check_repl_delay=0

[server3]
hostname=10.10.10.58
port=3306

Note: all the script lines above are commented out for now; we will run things manually first to understand them fully, and restore automatic execution once everything is clear.

(2) Configure how relay logs are purged

By default a slave's relay logs are deleted automatically once the SQL thread has executed them, but those relay logs may still be needed to recover other slaves. We therefore disable automatic relay log purging and purge periodically instead. When purging manually we have to consider replication lag: on an ext3 filesystem, deleting a large file takes a long time, which can cause serious replication delay. This can be avoided by creating hard links first.
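The hard-link trick works because file data survives as long as any name (link) points at it: unlinking the relay log's original name is instant, and the slow, real deletion is deferred until the last link is removed. A minimal local demonstration:

```shell
# demonstrate the deferred-deletion trick purge_relay_logs relies on
tmp=$(mktemp -d)
dd if=/dev/zero of="$tmp/relay-bin.000001" bs=1024 count=64 2>/dev/null
ln "$tmp/relay-bin.000001" "$tmp/archive.000001"  # second name, same inode
rm "$tmp/relay-bin.000001"                        # fast: only removes one name
[ -s "$tmp/archive.000001" ] && echo "data still on disk"
rm "$tmp/archive.000001"                          # the real deletion happens here
rm -rf "$tmp"
```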

MHA Node provides the purge_relay_logs tool for this. It creates the hard links, executes SET GLOBAL relay_log_purge=1, waits a few seconds so the SQL thread can switch to a new relay log, and finally executes SET GLOBAL relay_log_purge=0.

Its parameters are as follows:

--user                       MySQL user name
--password                   MySQL password
--port                       port number
--host                       defaults to 127.0.0.1; must not be any other IP address
--workdir                    where the hard links to the relay logs are created; defaults to /var/tmp. Creating a hard link across filesystems fails, so specify a location on the same partition as the relay logs; after the script succeeds, the hard-linked relay log files are deleted
--disable_relay_log_purge    by default, if relay_log_purge=1 the script purges nothing and simply exits; with this flag it purges the relay logs and then sets relay_log_purge=0, leaving automatic purging OFF
purge_relay_logs does not block the SQL thread while purging. It needs to be run periodically on every slave, for example from cron:

[root@rd-mysql-test2 scripts]# crontab -l
#here we run it as root
0 5 * * * /usr/local/bin/purge_relay_logs  --user=root  --disable_relay_log_purge >> /var/log/masterha/purge_relay_log.log 2>&1
[root@rd-mysql-test3 scripts]# crontab -l
#here we run it as root
0 5 * * * /usr/local/bin/purge_relay_logs  --user=root  --disable_relay_log_purge >> /var/log/masterha/purge_relay_log.log 2>&1

Note: make sure the user specified can actually log in to MySQL.

Let's run it once by hand:
[root@rd-mysql-test2 ~]# purge_relay_logs  --user=root  --host=127.0.0.1 --port=3306 --disable_relay_log_purge
2015-08-07 15:11:38: purge_relay_logs script started.
 Found relay_log.info: /data/mysql/relay-log.info
 Removing hard linked relay log files net-app1b-relay-bin* under /var/tmp.. done.
 Current relay log file: /data/mysql/net-app1b-relay-bin.000013
 Archiving unused relay log files (up to /data/mysql/net-app1b-relay-bin.000012) ...
 Creating hard link for /data/mysql/net-app1b-relay-bin.000012 under /var/tmp/net-app1b-relay-bin.000012 .. ok.
 Creating hard links for unused relay log files completed.
 Executing SET GLOBAL relay_log_purge=1; FLUSH LOGS; sleeping a few seconds so that SQL thread can delete older relay log files (if it keeps up); SET GLOBAL relay_log_purge=0; .. ok.
 Removing hard linked relay log files net-app1b-relay-bin* under /var/tmp.. done.
2015-08-07 15:11:41: All relay log purging operations succeeded.
6. Check that the SSH configuration is correct

Check that all nodes can reach each other over SSH:

[root@rd-mysql-test4 scripts]# masterha_check_ssh --conf=/etc/mha/app1.cnf 
Fri Aug  7 15:20:11 2015 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Fri Aug  7 15:20:11 2015 - [info] Reading application default configurations from /etc/mha/app1.cnf..
Fri Aug  7 15:20:11 2015 - [info] Reading server configurations from /etc/mha/app1.cnf..
Fri Aug  7 15:20:11 2015 - [info] Starting SSH connection tests..
Fri Aug  7 15:20:11 2015 - [debug] 
Fri Aug  7 15:20:11 2015 - [debug]  Connecting via SSH from root@10.10.10.56(10.10.10.56:1022) to root@10.10.10.57(10.10.10.57:1022)..
Fri Aug  7 15:20:11 2015 - [debug]   ok.
Fri Aug  7 15:20:11 2015 - [debug]  Connecting via SSH from root@10.10.10.56(10.10.10.56:1022) to root@10.10.10.58(10.10.10.58:1022)..
Fri Aug  7 15:20:11 2015 - [debug]   ok.
Fri Aug  7 15:20:12 2015 - [debug] 
Fri Aug  7 15:20:11 2015 - [debug]  Connecting via SSH from root@10.10.10.57(10.10.10.57:1022) to root@10.10.10.56(10.10.10.56:1022)..
Fri Aug  7 15:20:11 2015 - [debug]   ok.
Fri Aug  7 15:20:11 2015 - [debug]  Connecting via SSH from root@10.10.10.57(10.10.10.57:1022) to root@10.10.10.58(10.10.10.58:1022)..
Fri Aug  7 15:20:11 2015 - [debug]   ok.
Fri Aug  7 15:20:12 2015 - [debug] 
Fri Aug  7 15:20:12 2015 - [debug]  Connecting via SSH from root@10.10.10.58(10.10.10.58:1022) to root@10.10.10.56(10.10.10.56:1022)..
Fri Aug  7 15:20:12 2015 - [debug]   ok.
Fri Aug  7 15:20:12 2015 - [debug]  Connecting via SSH from root@10.10.10.58(10.10.10.58:1022) to root@10.10.10.57(10.10.10.57:1022)..
Fri Aug  7 15:20:12 2015 - [debug]   ok.
Fri Aug  7 15:20:12 2015 - [info] All SSH connection tests passed successfully.
7. Check the health of the entire replication setup
[root@rd-mysql-test4 scripts]# vim /etc/mha/app1.cnf 
[root@rd-mysql-test4 scripts]# masterha_check_repl --conf=/etc/mha/app1.cnf 
Fri Aug  7 15:24:54 2015 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Fri Aug  7 15:24:54 2015 - [info] Reading application default configurations from /etc/mha/app1.cnf..
Fri Aug  7 15:24:54 2015 - [info] Reading server configurations from /etc/mha/app1.cnf..
Fri Aug  7 15:24:54 2015 - [info] MHA::MasterMonitor version 0.53.
Fri Aug  7 15:24:54 2015 - [info] Dead Servers:
Fri Aug  7 15:24:54 2015 - [info] Alive Servers:
Fri Aug  7 15:24:54 2015 - [info]   10.10.10.56(10.10.10.56:3306)
Fri Aug  7 15:24:54 2015 - [info]   10.10.10.57(10.10.10.57:3306)
Fri Aug  7 15:24:54 2015 - [info]   10.10.10.58(10.10.10.58:3306)
Fri Aug  7 15:24:54 2015 - [info] Alive Slaves:
Fri Aug  7 15:24:54 2015 - [info]   10.10.10.57(10.10.10.57:3306)  Version=5.6.25-log (oldest major version between slaves) log-bin:enabled
Fri Aug  7 15:24:54 2015 - [info]     Replicating from 10.10.10.56(10.10.10.56:3306)
Fri Aug  7 15:24:54 2015 - [info]     Primary candidate for the new Master (candidate_master is set)
Fri Aug  7 15:24:54 2015 - [info]   10.10.10.58(10.10.10.58:3306)  Version=5.6.25-log (oldest major version between slaves) log-bin:enabled
Fri Aug  7 15:24:54 2015 - [info]     Replicating from 10.10.10.56(10.10.10.56:3306)
Fri Aug  7 15:24:54 2015 - [info] Current Alive Master: 10.10.10.56(10.10.10.56:3306)
Fri Aug  7 15:24:54 2015 - [info] Checking slave configurations..
Fri Aug  7 15:24:54 2015 - [info] Checking replication filtering settings..
Fri Aug  7 15:24:54 2015 - [info]  binlog_do_db= city, binlog_ignore_db= mysql
Fri Aug  7 15:24:54 2015 - [info]  Replication filtering check ok.
Fri Aug  7 15:24:54 2015 - [info] Starting SSH connection tests..
Fri Aug  7 15:24:56 2015 - [info] All SSH connection tests passed successfully.
Fri Aug  7 15:24:56 2015 - [info] Checking MHA Node version..
Fri Aug  7 15:24:56 2015 - [info]  Version check ok.
Fri Aug  7 15:24:56 2015 - [info] Checking SSH publickey authentication settings on the current master..
Fri Aug  7 15:24:56 2015 - [info] HealthCheck: SSH to 10.10.10.56 is reachable.
Fri Aug  7 15:24:57 2015 - [info] Master MHA Node version is 0.53.
Fri Aug  7 15:24:57 2015 - [info] Checking recovery script configurations on the current master..
Fri Aug  7 15:24:57 2015 - [info]   Executing command: save_binary_logs --command=test --start_pos=4 --binlog_dir=/data/mysql --output_file=/tmp/save_binary_logs_test --manager_version=0.53 --start_file=mysql-bin.000006 
Fri Aug  7 15:24:57 2015 - [info]   Connecting to root@10.10.10.56(10.10.10.56).. 
  Creating /tmp if not exists..    ok.
  Checking output directory is accessible or not..
   ok.
  Binlog found at /data/mysql, up to mysql-bin.000006
Fri Aug  7 15:24:57 2015 - [info] Master setting check done.
Fri Aug  7 15:24:57 2015 - [info] Checking SSH publickey authentication and checking recovery script configurations on all alive slave servers..
Fri Aug  7 15:24:57 2015 - [info]   Executing command : apply_diff_relay_logs --command=test --slave_user=mha --slave_host=10.10.10.57 --slave_ip=10.10.10.57 --slave_port=3306 --workdir=/tmp --target_version=5.6.25-log --manager_version=0.53 --relay_log_info=/data/mysql/relay-log.info  --relay_dir=/data/mysql/  --slave_pass=xxx
Fri Aug  7 15:24:57 2015 - [info]   Connecting to root@10.10.10.57(10.10.10.57:1022).. 
  Checking slave recovery environment settings..
    Opening /data/mysql/relay-log.info ... ok.
    Relay log found at /data/mysql, up to net-app1b-relay-bin.000014
    Temporary relay log file is /data/mysql/net-app1b-relay-bin.000014
    Testing mysql connection and privileges..Warning: Using a password on the command line interface can be insecure.
 done.
    Testing mysqlbinlog output.. done.
    Cleaning up test file(s).. done.
Fri Aug  7 15:24:57 2015 - [info]   Executing command : apply_diff_relay_logs --command=test --slave_user=mha --slave_host=10.10.10.58 --slave_ip=10.10.10.58 --slave_port=3306 --workdir=/tmp --target_version=5.6.25-log --manager_version=0.53 --relay_log_info=/data/mysql/relay-log.info  --relay_dir=/data/mysql/  --slave_pass=xxx
Fri Aug  7 15:24:57 2015 - [info]   Connecting to root@10.10.10.58(10.10.10.58:1022).. 
  Checking slave recovery environment settings..
    Opening /data/mysql/relay-log.info ... ok.
    Relay log found at /data/mysql, up to net-app1b-relay-bin.000011
    Temporary relay log file is /data/mysql/net-app1b-relay-bin.000011
    Testing mysql connection and privileges..Warning: Using a password on the command line interface can be insecure.
 done.
    Testing mysqlbinlog output.. done.
    Cleaning up test file(s).. done.
Fri Aug  7 15:24:57 2015 - [info] Slaves settings check done.
Fri Aug  7 15:24:57 2015 - [info] 
10.10.10.56 (current master)
 +--10.10.10.57
 +--10.10.10.58

Fri Aug  7 15:24:57 2015 - [info] Checking replication health on 10.10.10.57..
Fri Aug  7 15:24:57 2015 - [info]  ok.
Fri Aug  7 15:24:57 2015 - [info] Checking replication health on 10.10.10.58..
Fri Aug  7 15:24:57 2015 - [info]  ok.
Fri Aug  7 15:24:57 2015 - [info] Checking master_ip_failover_script status:
Fri Aug  7 15:24:57 2015 - [info]   /usr/local/bin/master_ip_failover --command=status --ssh_user=root --orig_master_host=10.10.10.56 --orig_master_ip=10.10.10.56 --orig_master_port=3306  --orig_master_ssh_port=1022
Bareword "FIXME_xxx" not allowed while "strict subs" in use at /usr/local/bin/master_ip_failover line 88.
Execution of /usr/local/bin/master_ip_failover aborted due to compilation errors.
Fri Aug  7 15:24:57 2015 - [error][/usr/local/share/perl5/MHA/MasterMonitor.pm, ln214]  Failed to get master_ip_failover_script status with return code 255:0.
Fri Aug  7 15:24:57 2015 - [error][/usr/local/share/perl5/MHA/MasterMonitor.pm, ln383] Error happend on checking configurations.  at /usr/local/bin/masterha_check_repl line 48
Fri Aug  7 15:24:57 2015 - [error][/usr/local/share/perl5/MHA/MasterMonitor.pm, ln478] Error happened on monitoring servers.
Fri Aug  7 15:24:57 2015 - [info] Got exit code 1 (Not master dead).

MySQL Replication Health is NOT OK!
The final result is NOT OK: the master_ip_failover script referenced in the configuration contains an error. We comment it out for now and run the check again.

[root@rd-mysql-test4 scripts]# masterha_check_repl --conf=/etc/mha/app1.cnf 
Fri Aug  7 15:27:22 2015 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Fri Aug  7 15:27:22 2015 - [info] Reading application default configurations from /etc/mha/app1.cnf..
Fri Aug  7 15:27:22 2015 - [info] Reading server configurations from /etc/mha/app1.cnf..
Fri Aug  7 15:27:22 2015 - [info] MHA::MasterMonitor version 0.53.
Fri Aug  7 15:27:22 2015 - [info] Dead Servers:
Fri Aug  7 15:27:22 2015 - [info] Alive Servers:
Fri Aug  7 15:27:22 2015 - [info]   10.10.10.56(10.10.10.56:3306)
Fri Aug  7 15:27:22 2015 - [info]   10.10.10.57(10.10.10.57:3306)
Fri Aug  7 15:27:22 2015 - [info]   10.10.10.58(10.10.10.58:3306)
Fri Aug  7 15:27:22 2015 - [info] Alive Slaves:
Fri Aug  7 15:27:22 2015 - [info]   10.10.10.57(10.10.10.57:3306)  Version=5.6.25-log (oldest major version between slaves) log-bin:enabled
Fri Aug  7 15:27:22 2015 - [info]     Replicating from 10.10.10.56(10.10.10.56:3306)
Fri Aug  7 15:27:22 2015 - [info]     Primary candidate for the new Master (candidate_master is set)
Fri Aug  7 15:27:22 2015 - [info]   10.10.10.58(10.10.10.58:3306)  Version=5.6.25-log (oldest major version between slaves) log-bin:enabled
Fri Aug  7 15:27:22 2015 - [info]     Replicating from 10.10.10.56(10.10.10.56:3306)
Fri Aug  7 15:27:22 2015 - [info] Current Alive Master: 10.10.10.56(10.10.10.56:3306)
Fri Aug  7 15:27:22 2015 - [info] Checking slave configurations..
Fri Aug  7 15:27:22 2015 - [info] Checking replication filtering settings..
Fri Aug  7 15:27:22 2015 - [info]  binlog_do_db= city, binlog_ignore_db= mysql
Fri Aug  7 15:27:22 2015 - [info]  Replication filtering check ok.
Fri Aug  7 15:27:22 2015 - [info] Starting SSH connection tests..
Fri Aug  7 15:27:24 2015 - [info] All SSH connection tests passed successfully.
Fri Aug  7 15:27:24 2015 - [info] Checking MHA Node version..
Fri Aug  7 15:27:24 2015 - [info]  Version check ok.
Fri Aug  7 15:27:24 2015 - [info] Checking SSH publickey authentication settings on the current master..
Fri Aug  7 15:27:25 2015 - [info] HealthCheck: SSH to 10.10.10.56 is reachable.
Fri Aug  7 15:27:25 2015 - [info] Master MHA Node version is 0.53.
Fri Aug  7 15:27:25 2015 - [info] Checking recovery script configurations on the current master..
Fri Aug  7 15:27:25 2015 - [info]   Executing command: save_binary_logs --command=test --start_pos=4 --binlog_dir=/data/mysql --output_file=/tmp/save_binary_logs_test --manager_version=0.53 --start_file=mysql-bin.000006 
Fri Aug  7 15:27:25 2015 - [info]   Connecting to root@10.10.10.56(10.10.10.56).. 
  Creating /tmp if not exists..    ok.
  Checking output directory is accessible or not..
   ok.
  Binlog found at /data/mysql, up to mysql-bin.000006
Fri Aug  7 15:27:25 2015 - [info] Master setting check done.
Fri Aug  7 15:27:25 2015 - [info] Checking SSH publickey authentication and checking recovery script configurations on all alive slave servers..
Fri Aug  7 15:27:25 2015 - [info]   Executing command : apply_diff_relay_logs --command=test --slave_user=mha --slave_host=10.10.10.57 --slave_ip=10.10.10.57 --slave_port=3306 --workdir=/tmp --target_version=5.6.25-log --manager_version=0.53 --relay_log_info=/data/mysql/relay-log.info  --relay_dir=/data/mysql/  --slave_pass=xxx
Fri Aug  7 15:27:25 2015 - [info]   Connecting to root@10.10.10.57(10.10.10.57:1022).. 
  Checking slave recovery environment settings..
    Opening /data/mysql/relay-log.info ... ok.
    Relay log found at /data/mysql, up to net-app1b-relay-bin.000014
    Temporary relay log file is /data/mysql/net-app1b-relay-bin.000014
    Testing mysql connection and privileges..Warning: Using a password on the command line interface can be insecure.
 done.
    Testing mysqlbinlog output.. done.
    Cleaning up test file(s).. done.
Fri Aug  7 15:27:25 2015 - [info]   Executing command : apply_diff_relay_logs --command=test --slave_user=mha --slave_host=10.10.10.58 --slave_ip=10.10.10.58 --slave_port=3306 --workdir=/tmp --target_version=5.6.25-log --manager_version=0.53 --relay_log_info=/data/mysql/relay-log.info  --relay_dir=/data/mysql/  --slave_pass=xxx
Fri Aug  7 15:27:25 2015 - [info]   Connecting to root@10.10.10.58(10.10.10.58:1022).. 
  Checking slave recovery environment settings..
    Opening /data/mysql/relay-log.info ... ok.
    Relay log found at /data/mysql, up to net-app1b-relay-bin.000011
    Temporary relay log file is /data/mysql/net-app1b-relay-bin.000011
    Testing mysql connection and privileges..Warning: Using a password on the command line interface can be insecure.
 done.
    Testing mysqlbinlog output.. done.
    Cleaning up test file(s).. done.
Fri Aug  7 15:27:26 2015 - [info] Slaves settings check done.
Fri Aug  7 15:27:26 2015 - [info] 
10.10.10.56 (current master)
 +--10.10.10.57
 +--10.10.10.58

Fri Aug  7 15:27:26 2015 - [info] Checking replication health on 10.10.10.57..
Fri Aug  7 15:27:26 2015 - [info]  ok.
Fri Aug  7 15:27:26 2015 - [info] Checking replication health on 10.10.10.58..
Fri Aug  7 15:27:26 2015 - [info]  ok.
Fri Aug  7 15:27:26 2015 - [warning] master_ip_failover_script is not defined.
Fri Aug  7 15:27:26 2015 - [warning] shutdown_script is not defined.
Fri Aug  7 15:27:26 2015 - [info] Got exit code 0 (Not master dead).

MySQL Replication Health is OK.
Now the check passes. So why does MHA provide this script hook at all?

According to the official documentation:

In common HA environments, many cases people allocate one virtual IP address on a master. If the master crashes, HA software like Pacemaker takes over the virtual IP address to the standby server.
Another common approach is creating a global catalog database that has all mappings between application names and writer/reader IP addresses (i.e. {app1_master1, 192.168.0.1}, {app_master2, 192.168.0.2}, …), instead of using virtual IP addresses. In this case, you need to update the catalog database when the current master dies.
Both approaches have advantages and disadvantages. MHA does not force one approach, but allow users to use any IP address failover solution. master_ip_failover_script parameter can be used for that purpose. In other words, you need to write a script to make applications transparently connect to the new master, and have to define at master_ip_failover_script parameter. Here is an example.
Roughly: in an HA environment, a virtual IP can be assigned to the master so that when it fails the VIP floats to the standby master and service continues without any application changes; alternatively, a global catalog database can map application names to writer/reader IPs. Both approaches have advantages and disadvantages, and MHA forces neither; whichever you choose, master_ip_failover_script is the hook where you implement it, with the relevant parameters set accordingly.
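For the VIP approach, the shipped master_ip_failover sample is a Perl stub that must be filled in with your own logic; on the servers themselves, what that logic ultimately executes amounts to something like the following sketch (10.10.10.100 and eth0 are hypothetical values, not part of this setup):

```shell
VIP=10.10.10.100   # hypothetical virtual IP for the writer

# on the old master (if still reachable): release the VIP
/sbin/ifconfig eth0:1 down

# on the new master: bring the VIP up and announce it
/sbin/ifconfig eth0:1 $VIP netmask 255.255.255.0 up
arping -c 3 -A -I eth0 $VIP   # gratuitous ARP so clients update their ARP caches
```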

To become fully familiar with the scripts involved, we have commented out all the script lines in /etc/mha/app1.cnf and will exercise them by hand; once everything is clear we will restore automatic execution.

8. Start MHA Manager monitoring

[root@rd-mysql-test4 scripts]# nohup masterha_manager --conf=/etc/mha/app1.cnf --remove_dead_master_conf --ignore_last_failover < /dev/null > /var/log/masterha/app1/manager.log 2>&1 &
nohup keeps the monitor running in the background.

Options used:

--remove_dead_master_conf      after a master failover, the dead master's entry is removed from the configuration file.

--manager_log                  log file location

--ignore_last_failover         by default, if MHA detects two failovers less than 8 hours apart it refuses to fail over again, to avoid a ping-pong effect. After each failover MHA writes an app1.failover.complete file into the manager working directory (/var/log/masterha/app1 here); as long as that file exists, another failover is refused unless the file is deleted first. --ignore_last_failover tells MHA to ignore that file; we set it here for convenience.

Note: MHA Manager is not running as a daemon here; once a failover completes, or if the manager process is killed unexpectedly, monitoring stops. We can use daemontools to run it as a daemon.

[root@rd-mysql-test4 yum.repos.d]# cd /usr/local/src
[root@rd-mysql-test4 src]# wget http://cr.yp.to/daemontools/daemontools-0.76.tar.gz
[root@rd-mysql-test4 src]# tar -zxvf daemontools-0.76.tar.gz
[root@rd-mysql-test4 src]# cd admin/daemontools-0.76/
[root@rd-mysql-test4 daemontools-0.76]# package/install 
Linking ./src/* into ./compile...
Compiling everything in ./compile...
./load envdir unix.a byte.a 
/usr/bin/ld: errno: TLS definition in /lib64/libc.so.6 section .tbss mismatches non-TLS reference in envdir.o
/lib64/libc.so.6: could not read symbols: Bad value
collect2: ld returned 1 exit status
make: *** [envdir] Error 1

####Fix####
Append -include /usr/include/errno.h to the end of src/conf-cc
[root@rd-mysql-test4 daemontools-0.76]# package/install
The install creates a command directory under /usr/local/src/admin/daemontools-0.76 containing the daemontools commands, which we can list like this:
[root@rd-mysql-test4 daemontools-0.76]# ls /usr/local/src/admin/daemontools-0.76/command
envdir     fghack    pgrphack       setlock    softlimit  svc   svscan      svstat  tai64nlocal
envuidgid  multilog  readproctitle  setuidgid  supervise  svok  svscanboot  tai64n

Symlinks to these commands are also created under /usr/local/bin for convenience. In addition, the install creates the /service directory for supervision, and /etc/inittab gains this line:

SV:123456:respawn:/command/svscanboot

so init itself keeps the supervisor alive.

[root@rd-mysql-test4 daemontools-0.76]#mkdir -p /service/masterha_app1
[root@rd-mysql-test4 daemontools-0.76]#vim /service/masterha_app1/run
#!/bin/bash
exec masterha_manager --conf=/etc/mha/app1.cnf --wait_on_monitor_error=60 --wait_on_failover_error=60 >> /var/log/masterha/app1/app1.log 2>&1
[root@rd-mysql-test4 daemontools-0.76]#chmod 755 /service/masterha_app1/run
##start monitoring
svc -u /service/masterha_app1
##stop monitoring
svc -d /service/masterha_app1
We will not use this startup method for now.

9. Check whether the manager is running

[root@rd-mysql-test4 /]# masterha_check_status --conf=/etc/mha/app1.cnf 
app1 (pid:21438) is running(0:PING_OK), master:10.10.10.56

After startup, two files are generated under /var/log/masterha/app1: app1.master_status.health and manager.log.

To stop monitoring, run masterha_stop --conf=/etc/mha/app1.cnf.

10. Testing

I. Automatic failover

1. Shut down MySQL on the master:

[root@rd-mysql-test1 mysql]# mysqladmin shutdown

Looking at /var/log/masterha/app1, we find the following:

[root@rd-mysql-test4 app1]# ls
app1.failover.complete  manager.log  saved_master_binlog_from_10.10.10.56_3306_20150807180251.binlog 
The app1.failover.complete file indicates that the failover has completed;

saved_master_binlog_from_10.10.10.56_3306_20150807180251.binlog is the binary log copied over from the master;

manager.log records the replication health checks and the whole failover process; it ends with the failover report:

----- Failover Report -----

app1: MySQL Master failover 10.10.10.56 to 10.10.10.57 succeeded

Master 10.10.10.56 is down!

Check MHA Manager logs at rd-mysql-test4:/var/log/masterha/app1/manager.log for details.

Started automated(non-interactive) failover.
The latest slave 10.10.10.57(10.10.10.57:3306) has all relay logs for recovery.
Selected 10.10.10.57 as a new master.
10.10.10.57: OK: Applying all logs succeeded.
Generating relay diff files from the latest slave succeeded.
10.10.10.57: Resetting slave info succeeded.
Master failover to 10.10.10.57(10.10.10.57:3306) completed successfully.

The report shows that the failover succeeded; we can also check MySQL's state to confirm it.
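A couple of quick checks (a sketch, reusing the mha account) that confirm the new topology: the remaining slave should now replicate from 10.10.10.57, and the promoted master should accept writes:

```shell
# the surviving slave should point at the new master
mysql -h 10.10.10.58 -umha -p123456 -e 'show slave status\G' | grep Master_Host
# expect Master_Host: 10.10.10.57

# MHA clears read_only on the promoted master; this should show OFF
mysql -h 10.10.10.57 -umha -p123456 -e "show variables like 'read_only'"
```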

The whole failover process is as follows:

(1) Configuration check phase: read the configuration file and verify which servers, including the master, are alive.

(2) Dead master shutdown phase: make sure applications can no longer connect to it, using the script configured in shutdown_script; if master_ip_failover_script is set, the VIP is also switched over.

(3) Master recovery phase, which includes:

identifying the most up-to-date slave and its log position;

saving the binary log from the dead master, diffing it against the most up-to-date slave's relay log, storing the difference under remote_workdir=/tmp on that slave, and then copying it to /var/log/masterha/app1 on the MHA Manager;

electing the new master: if candidate_master is set, that slave is preferred;

diffing against the new master's relay log, generating the difference, and copying the saved logs from the MHA Manager to remote_workdir=/tmp on the new master;

applying the differences on the new master.

(4) Slave recovery phase, which includes:

in parallel, comparing relay logs and generating the differences;

in parallel, copying the saved logs from the MHA Manager to remote_workdir=/tmp on each slave and applying the differential logs.

(5) Promoting the chosen slave to new master and removing the old master from the configuration file.

That is the rough flow; it is hard to summarize accurately, so see manager.log for the full details.

Notes: 1. After a failover completes successfully, MHA Manager stops monitoring, so if the new master later goes down no failover will happen. This is why it should run as a daemon under daemontools.

2. After a failover, the old master is removed from the configuration file. Once it has been repaired and we want to reuse it as a slave, we add it back with masterha_conf_host, e.g.:

#add
masterha_conf_host --command=add --conf=/etc/mha/app1.cnf --block=server100  --hostname=test4 --params="no_master=1;ignore_fail=1"
This generates the following block in the configuration file:
[server100]
hostname=test4
no_master=1
ignore_fail=1
#delete
masterha_conf_host --command=delete --conf=/etc/mha/app1.cnf --block=server100

That wraps up MHA for now. This is a lot of material and there may be mistakes, so corrections are welcome. There are also many details that can trip you up if you are not careful, and this post cannot cover them all, so please study the official documentation.
