Basic MySQL Master-Slave Replication Configuration on CentOS 6.4
General-purpose MySQL master-slave replication gives you a live backup of your data (if you want the slave to take over automatically when the master fails, a more complex setup is required, which we will not cover here). If the master node suffers a hardware failure, you can manually switch the database service over to the backup (slave) node and keep serving requests. The basic master-slave setup is very easy to configure; here we record a simple walkthrough.
We use two servers for this MySQL master-slave replication exercise: m1 as the master node and nn as the slave node.
MySQL must be installed on both machines. To find and remove a previously installed copy first, you can run:
yum list | grep mysql
sudo rpm -e --nodeps mysql
Now, on CentOS 6.4, you can install MySQL directly with:
sudo yum install -y mysql-server mysql mysql-devel
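The server has to be running before the next step. On CentOS 6 this looks like the following (a sketch assuming the stock SysV service name mysqld):

sudo service mysqld start     # start the MySQL server
sudo chkconfig mysqld on      # optional: also start it at boot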
Set a password for the root user:
mysqladmin -u root password 'shiyanjun'
Then you can log in directly with the MySQL client:
mysql -u root -p
Master node configuration
First, for database security and ease of management, we create a dedicated replication account on the master node m1, so that any slave that wants to replicate from the master must use this account:
CREATE USER repli_user;
GRANT REPLICATION SLAVE ON *.* TO 'repli_user'@'%' IDENTIFIED BY 'shiyanjun';
The GRANT statement gives this dedicated account the privilege it needs to perform replication. If you want to restrict the account to a specific IP range, you can express that in the grant as well, as shown below.
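For example, a sketch that limits the account to a hypothetical 192.168.4.0/24 subnet (the subnet here is an assumption; substitute your own range):

GRANT REPLICATION SLAVE ON *.* TO 'repli_user'@'192.168.4.%' IDENTIFIED BY 'shiyanjun';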
Then, on the master node m1, edit the MySQL configuration file /etc/my.cnf to enable the master side of replication. The modified file looks like this:
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
user=mysql
# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0
server-id=1
log-bin=m-bin
log-bin-index=m-bin.index

[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
server-id identifies this node; slaves use this server-id to recognize the node as the master (the source database server in the replication topology).
If MySQL is already running, restart the server after changing the replication configuration:
sudo service mysqld restart
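As a sanity check after the restart, you can confirm that the new settings took effect; both of these are standard MySQL statements:

SHOW VARIABLES LIKE 'server_id';
SHOW VARIABLES LIKE 'log_bin';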
Slave node configuration
Next, configure the slave node nn in a similar way: edit its /etc/my.cnf to enable the slave side of replication. The modified file looks like this:
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
user=mysql
# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0
server-id=2
relay-log=slave-relay-bin
relay-log-index=slave-relay-bin.index

[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
Again, if MySQL is already running, restart the server after changing the replication configuration:
sudo service mysqld restart
Then point the slave node nn at the master and start slave replication with the following statements:
CHANGE MASTER TO
  MASTER_HOST='m1',
  MASTER_PORT=3306,
  MASTER_USER='repli_user',
  MASTER_PASSWORD='shiyanjun';
START SLAVE;
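This works here because both nodes start out empty. If the master already held data, you would normally also pass explicit binlog coordinates read from the master while writes are paused. A minimal sketch, assuming the file name and position are placeholders taken from the SHOW MASTER STATUS output:

-- On the master: pause writes and record the current binlog coordinates
FLUSH TABLES WITH READ LOCK;
SHOW MASTER STATUS;
-- On the slave: include the recorded coordinates (placeholder values shown)
CHANGE MASTER TO
  MASTER_HOST='m1',
  MASTER_PORT=3306,
  MASTER_USER='repli_user',
  MASTER_PASSWORD='shiyanjun',
  MASTER_LOG_FILE='m-bin.000001',
  MASTER_LOG_POS=106;
START SLAVE;
-- Back on the master: release the read lock
UNLOCK TABLES;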
Verifying replication
At this point, we can perform operations on the master node m1 and verify that the slave node nn replicates the resulting database changes.
With master-slave replication configured, any change to the MySQL database on the master node m1 is replicated to the slave node nn, including creating databases and tables, inserts, updates, and so on. Let's start with creating a database:
Create a database and a table on the master node m1:
CREATE DATABASE workflow;
CREATE TABLE `workflow`.`project` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `name` varchar(100) NOT NULL,
  `type` tinyint(4) NOT NULL DEFAULT '0',
  `description` varchar(500) DEFAULT NULL,
  `create_at` date DEFAULT NULL,
  `update_at` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  `status` tinyint(4) NOT NULL DEFAULT '0',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
View the binlog contents on m1 with:
SHOW BINLOG EVENTS\G
The binlog contents look like this:
*************************** 1. row ***************************
   Log_name: m-bin.000001
        Pos: 4
 Event_type: Format_desc
  Server_id: 1
End_log_pos: 106
       Info: Server ver: 5.1.73-log, Binlog ver: 4
*************************** 2. row ***************************
   Log_name: m-bin.000001
        Pos: 106
 Event_type: Query
  Server_id: 1
End_log_pos: 197
       Info: CREATE DATABASE workflow
*************************** 3. row ***************************
   Log_name: m-bin.000001
        Pos: 197
 Event_type: Query
  Server_id: 1
End_log_pos: 671
       Info: CREATE TABLE `workflow`.`project` ( `id` int(11) NOT NULL AUTO_INCREMENT, `name` varchar(100) NOT NULL, `type` tinyint(4) NOT NULL DEFAULT '0', `description` varchar(500) DEFAULT NULL, `create_at` date DEFAULT NULL, `update_at` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP, `status` tinyint(4) NOT NULL DEFAULT '0', PRIMARY KEY (`id`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8
3 rows in set (0.00 sec)
From this output we can see roughly what information MySQL records in the binlog: one event per row. The fields are organized as follows (see the example query right after this list):
- Log_name: the binlog file that recorded the operation, here m-bin.000001, matching the log-bin setting we configured in /etc/my.cnf above
- Pos: the starting position of the event
- Event_type: the event type
- Server_id: the server identifier
- End_log_pos: the ending position of the event
- Info: descriptive information about the event
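When a binlog file grows long, SHOW BINLOG EVENTS also accepts a file name, a starting position, and a limit. For example, to read two events of m-bin.000001 starting from position 106 (the positions used here are the ones from the output above):

SHOW BINLOG EVENTS IN 'm-bin.000001' FROM 106 LIMIT 2\G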
Then we can check the replication results on the slave node nn. Use the following statements to inspect the databases and tables on nn:
SHOW DATABASES;
USE workflow;
SHOW TABLES;
DESC project;
Let's also see how an INSERT behaves. Execute the following SQL statement on the master node m1:
INSERT INTO `workflow`.`project` VALUES(1, 'Avatar-II', 1, 'Avatar-II project', '2014-02-16', '2014-02-16 11:09:54', 0);
Run a query on the slave to confirm that nn has replicated the row inserted by the INSERT statement on m1:
SELECT * FROM workflow.project;
Replication is verified.
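For a quick consistency spot check beyond eyeballing the rows (a suggestion, not part of the original walkthrough), you can run the same checksum statement on both m1 and nn and compare the results:

CHECKSUM TABLE workflow.project;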
Common replication commands
Below is a summary of several commands that come up frequently when working with MySQL master-slave replication:
- Disable binary logging on the master for the current session (note: MySQL has no STOP MASTER statement; this session-level switch is the closest equivalent)
SET sql_log_bin = 0;
- Clear the master's replication files (deletes all binlog files and resets the index)
RESET MASTER;
- Stop replication on the slave
STOP SLAVE;
- Clear the slave's replication files (removes the relay logs and resets the replication state)
RESET SLAVE;
- Check the master's replication status
SHOW MASTER STATUS\G
Sample output:
*************************** 1. row ***************************
            File: m-bin.000001
        Position: 956
    Binlog_Do_DB:
Binlog_Ignore_DB:
1 row in set (0.00 sec)
- Check the slave's replication status
SHOW SLAVE STATUS\G
Sample output:
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: m1
                  Master_User: repli_user
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: m-bin.000001
          Read_Master_Log_Pos: 956
               Relay_Log_File: slave-relay-bin.000002
                Relay_Log_Pos: 1097
        Relay_Master_Log_File: m-bin.000001
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
              Replicate_Do_DB:
          Replicate_Ignore_DB:
           Replicate_Do_Table:
       Replicate_Ignore_Table:
      Replicate_Wild_Do_Table:
  Replicate_Wild_Ignore_Table:
                   Last_Errno: 0
                   Last_Error:
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 956
              Relay_Log_Space: 1252
              Until_Condition: None
               Until_Log_File:
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File:
           Master_SSL_CA_Path:
              Master_SSL_Cert:
            Master_SSL_Cipher:
               Master_SSL_Key:
        Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error:
               Last_SQL_Errno: 0
               Last_SQL_Error:
1 row in set (0.00 sec)
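The fields worth watching are Slave_IO_Running and Slave_SQL_Running (both should be Yes) and Seconds_Behind_Master. A hedged shell one-liner to pull just those out of the full output (it prompts for the password set earlier; adjust the credentials to your setup):

mysql -u root -p -e 'SHOW SLAVE STATUS\G' | grep -E 'Slave_IO_Running|Slave_SQL_Running|Seconds_Behind_Master'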
- List the binary logs
SHOW BINARY LOGS\G