There are many ways to build MySQL master-slave replication, and the traditional mysqldump approach is a common choice, but for larger databases it is far from ideal. With Xtrabackup you can build or repair a MySQL replication topology quickly and easily. This article describes how to set up a new slave for the master quickly by taking the backup from an existing slave, which spares the master the performance pressure the backup would otherwise put on it. The build uses a fast streaming backup to speed things up, and the options that accelerate streaming backups are described at the end for reference.
For background on streaming backups, see: Xtrabackup streaming backup and restore
1. Back up the existing slave
###The remote streaming backup relies on password-less (key-based) SSH between the two hosts, so configure it beforehand (a sketch follows the backup command below); here we run as the mysql OS user
$ innobackupex --user=root --password=xxx --slave-info --safe-slave-backup \
  --compress-threads=3 --parallel=3 --stream=xbstream \
  --compress /log | ssh -p50021 mysql@172.16.16.10 "xbstream -x -C /log/recover"
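###If the key-based SSH trust mentioned above is not in place yet, a minimal sketch, assuming it is run as the mysql OS user on the backup source and that sshd on the target listens on port 50021 as in the command above:
$ ssh-keygen -t rsa                              ### generate a key pair for the mysql user (accept the defaults)
$ ssh-copy-id -p 50021 mysql@172.16.16.10        ### install the public key on the new slave host
$ ssh -p 50021 mysql@172.16.16.10 "echo ok"      ### should print "ok" without prompting for a password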
###Because --safe-slave-backup is used, the slave SQL thread is stopped during the backup and restarted when it finishes, as shown below
$ mysql -uroot -p -e "show slave status \G" | egrep 'Slave_IO_Running|Slave_SQL_Running'
Enter password:
             Slave_IO_Running: Yes
            Slave_SQL_Running: No
      Slave_SQL_Running_State: Slave has read all relay log; waiting for the slave I/O thread to update it
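###--safe-slave-backup only starts copying once Slave_open_temp_tables has dropped to zero (see the option notes in section 6), so it can be worth checking that counter on the source slave first, for example:
$ mysql -uroot -p -e "show global status like 'Slave_open_temp_tables'"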
###Copy the my.cnf file to the new slave
$ scp -P50021 /etc/my.cnf mysql@172.16.16.10:/log/recover
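###Since xbstream extracts the stream directly on the remote side, it is worth confirming on the new slave that the backup metadata arrived before going further; for example, checking for the files written by --slave-info and by xtrabackup itself:
$ ssh -p50021 mysql@172.16.16.10 "ls -l /log/recover/xtrabackup_slave_info /log/recover/xtrabackup_checkpoints"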
2. Grant a replication account to the new slave on the master
master@MySQL> grant replication slave,replication client on *.* to repl@'172.16.%.%' identified by 'repl';
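###A quick sanity check of the grant can be run from the new slave host, assuming its address falls within 172.16.%.% and the master listens on 172.16.16.10:3306 as in the CHANGE MASTER statement later on:
$ mysql -urepl -prepl -h172.16.16.10 -P3306 -e "show grants"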
3. Prepare the backup on the new slave
###The streamed backup was compressed, so it must be decompressed first; this requires the qpress binary
###qpress download: http://www.php.cn/
# tar -xvf qpress-11-linux-x64.tar qpress
# cp qpress /usr/bin/
$ innobackupex --decompress /log/recover                  ### decompress the .qp files
$ innobackupex --apply-log --use-memory=2G /log/recover   ### prepare the backup
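###One way to confirm the prepare phase completed is to look at xtrabackup_checkpoints, which should now report the backup as fully prepared:
$ grep backup_type /log/recover/xtrabackup_checkpoints    ### expect backup_type = full-prepared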
4. Prepare the slave configuration file my.cnf
###Adjust the parameters as needed; the changes made here are as follows
skip-slave-start
datadir = /log/recover
port = 3307
server_id = 24
socket = /tmp/mysql3307.sock
pid-file = /log/recover/mysql3307.pid
log_error = /log/recover/recover.err
5. Start the new slave and run CHANGE MASTER
# chown -R mysql:mysql /log/recover
# /app/soft/mysql/bin/mysqld_safe --defaults-file=/log/recover/my.cnf &
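###A quick way to confirm the new instance came up with the intended settings is to connect through the socket defined in my.cnf above:
$ mysql -uroot -p -S /tmp/mysql3307.sock -e "select @@port, @@server_id"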
mysql> system more /log/recover/xtrabackup_slave_info
CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000658', MASTER_LOG_POS=925384099

mysql> CHANGE MASTER TO
    -> MASTER_HOST='172.16.16.10',          ### Author: Leshami
    -> MASTER_USER='repl',                  ### Blog  : http://www.php.cn/
    -> MASTER_PASSWORD='repl',
    -> MASTER_PORT=3306,
    -> MASTER_LOG_FILE='mysql-bin.000658',
    -> MASTER_LOG_POS=925384099;
Query OK, 0 rows affected, 2 warnings (0.31 sec)
mysql> start slave;
Query OK, 0 rows affected (0.02 sec)
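###After starting replication, verify on the new slave that both threads are running and that it is catching up, in the same way as in step 1, for example:
$ mysql -uroot -p -S /tmp/mysql3307.sock -e "show slave status \G" | egrep 'Slave_IO_Running|Slave_SQL_Running|Seconds_Behind_Master'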
6. Options for backing up from a slave and options that speed up streaming backups
The --slave-info option
This option is useful when backing up a replication slave server. It prints the binary log position and name of the master server, and also writes this information to the xtrabackup_slave_info file as a CHANGE MASTER statement. A new slave for this master can then be set up by starting a slave server on this backup and issuing the statement saved in the xtrabackup_slave_info file.
The --safe-slave-backup option
In order to assure a consistent replication state, this option stops the slave SQL thread and waits to start backing up until Slave_open_temp_tables in SHOW STATUS is zero. If there are no open temporary tables, the backup proceeds; otherwise the SQL thread is started and stopped until there are no open temporary tables. The backup fails if Slave_open_temp_tables does not reach zero within --safe-slave-backup-timeout seconds (300 by default). The slave SQL thread is restarted when the backup finishes. Using this option is always recommended when taking backups from a slave server.
Warning: Make sure your slave is a true replica of the master before using it as a source for backup. A good tool to validate a slave is pt-table-checksum.
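###A minimal sketch of such a check, run against the master; the checksum user and password below are placeholders, and pt-table-checksum records its per-table results in the percona.checksums table, which replicates to the slaves for comparison:
$ pt-table-checksum --replicate=percona.checksums h=172.16.16.10,P=3306,u=checksum_user,p=xxx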
--compress
This option instructs xtrabackup to compress backup copies of InnoDB data files. It is passed directly to the xtrabackup child process.
###Note: --compress is a relatively crude compression that produces .qp files; its compression ratio is lower than gzip's
--compress-threads
This option specifies the number of worker threads that will be used for parallel compression. It is passed directly to the xtrabackup child process. Try 'xtrabackup --help' for more details.
--decompress
Decompresses all files with the .qp extension in a backup previously made with the --compress option.
--parallel=NUMBER-OF-THREADS
On backup, this option specifies the number of threads the xtrabackup child process should use to back up files concurrently. The option accepts an integer argument. It is passed directly to xtrabackup's --parallel option. See the xtrabackup documentation for details. On --decrypt or --decompress it specifies the number of parallel forks that should be used to process the backup files.
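###Accordingly, the decompression done in step 3 on the new slave can also be parallelized; a minimal sketch, with the thread count chosen arbitrarily:
$ innobackupex --decompress --parallel=4 /log/recover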
That concludes building a MySQL master-slave setup with innobackupex from a backup of an existing slave.