
MySQL distributed recovery instance analysis

王林
Release: 2023-04-17 20:25:01

1. Overview

Whenever a MySQL server newly joins or rejoins an MGR cluster, it must catch up on the transactions it is missing so that its data is synchronized with the latest data in the cluster. The process by which a newly added node catches up with the cluster's data, or by which a rejoining node catches up on the transactions committed between the time it left the cluster and the present, is called distributed recovery.

The node applying to join the cluster first checks the relay log of the group_replication_applier channel for transaction data it has received from the cluster but not yet applied. If it is a node rejoining the cluster, it finds the transactions that have not been replayed since it left the cluster, compared with the latest data in the cluster, and applies these unsynchronized transactions first. A node that is completely new to the cluster can perform a full data recovery directly from a boot node.

After that, the newly added node establishes a connection with an existing online node (the boot node) in the cluster for state transfer. The newly added node synchronizes from the boot node the data it missed before joining, or after leaving, the cluster; these missing transactions are provided by the boot node. Next, the newly added node applies the transactions synchronized from the boot node. At this point, the joining node also applies the transactions written to the cluster during the state transfer process. (These new transactions are temporarily held in a cache queue and are not yet written to disk.) After completing this process, the data of the newly added node is level with the data of the rest of the cluster, and the node is set to online status.

Note: A node joining the cluster, whether or not it has been part of this cluster before, randomly selects an online node from which to synchronize the transactions it is missing.

Group replication uses the following method to implement state transfer during distributed recovery:

Use the remote cloning functionality of the clone plugin, which is supported starting from MySQL 8.0.17. To use this method, the clone plugin must be installed in advance on both the boot node and the newly joining node. Group Replication automatically configures the required clone plugin settings and manages the remote clone operation.

Copy data from the boot node's binary log and apply those transactions on the newly joining node. This method uses a standard asynchronous replication channel named group_replication_recovery that is established between the boot node and the joining node.

After START GROUP_REPLICATION is executed on the joining node, Group Replication automatically selects the best combination of the above methods for state transfer. To do this, Group Replication checks which existing nodes in the cluster are suitable as boot nodes, how many transactions the joining node needs from the boot node, and whether any required transactions are no longer present in the binary log files of any node in the cluster. If there is a large transaction gap between the joining node and the boot node, or if some of the required transactions are not in the boot node's binary log files, Group Replication starts distributed recovery through a remote clone operation. If there is no large transaction gap, or if the clone plugin is not installed, Group Replication transfers state directly from the boot node's binary log.

During a remote clone operation, the existing data on the joining node is deleted and replaced with a copy of the boot node's data. When the remote clone operation completes and the newly joined node has restarted, state transfer from the boot node's binary log continues in order to obtain the incremental data written to the cluster during the remote clone operation.

During state transfer from the boot node's binary log, the newly joining node copies and applies the required transactions from the boot node's binary log, applying them as they are received, until the binary log records the view change event that marks the joining node's entry into the cluster. (When the joining node successfully joins the cluster, the corresponding view change event is recorded in the binary log.) During this process, the joining node buffers the new transaction data applied to the cluster. After state transfer from the binary log is complete, the newly joining node applies the buffered transactions.

When the joining node is up to date with all transactions for the cluster, the node will be set online and can join the cluster as a normal node, and distributed recovery is complete.

ps: State transfer from the binary log is the basic mechanism for distributed recovery with Group Replication, and it is used whenever the boot node and joining node in the replication group are not set up to support cloning. Since state transfer from the binary log is based on classic asynchronous replication, if the MySQL server joining the cluster has no data for the cluster at all, or its data comes from a very old backup, it may take a long time to recover the latest data. Therefore, in this case, it is recommended that before adding the MySQL server to the cluster, you seed it with the cluster's data by transferring a fairly recent snapshot of a node already in the cluster. This minimizes the time required for distributed recovery and reduces the impact on the boot node, which has to retain and transmit fewer binary log files.

2. Connection for distributed recovery

When the joining node connects to the boot node among the existing nodes for state transfer during distributed recovery, the joining node acts as a client and the boot node acts as a server. When state transfer from the boot node's binary log occurs over this connection (using the asynchronous replication channel group_replication_recovery), the joining node acts as the replica and the boot node acts as the source. When a remote clone operation is performed over this connection, the newly added node acts as the full-data recipient and the boot node acts as the full-data donor. Configuration settings that apply to these roles outside the context of Group Replication also apply to Group Replication, unless they are overridden by Group Replication-specific configuration settings or behavior.

The connections that existing nodes provide to newly joined nodes for distributed recovery are different from the connections that Group Replication uses for communication between nodes within the cluster.

The group communication engine used by Group Replication (XCom, a Paxos variant) uses the connection specified by the group_replication_local_address system variable for TCP communication between remote XCom instances. This connection is used for TCP/IP messaging between online nodes within the cluster; communication with the local instance uses an in-memory shared transport channel.

For distributed recovery, up to MySQL 8.0.20, nodes within the cluster offered the joining node their standard SQL client connection, as specified by MySQL Server's hostname and port system variables. If the report_port system variable specifies an alternative port number, that port number is used instead.

Starting with MySQL 8.0.21, group members can advertise an alternative list of distributed recovery endpoints as dedicated client connections for joining members, allowing connections that are independent of the member's regular client connections to be used for distributed recovery. This list is specified with the group_replication_advertise_recovery_endpoints system variable, and members transfer their list of distributed recovery endpoints to the group when they join it. The default is for members to continue offering the same standard SQL client connection as in earlier versions.

PS:

Distributed recovery may fail if the joining node cannot correctly identify the other nodes using the hostname defined by MySQL Server's system variables. It is recommended that the operating system running MySQL has a correctly configured unique hostname, using DNS or local settings. The hostname that a server offers for SQL client connections can be verified in the MEMBER_HOST column of the replication_group_members table in the performance_schema database. If multiple group members externalize the default hostname set by the operating system, the joining node may not resolve it to the correct address and may be unable to connect for distributed recovery. In this case, MySQL Server's report_host system variable can be used to configure a unique hostname to be externalized by each server.
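As an illustrative check (a minimal sketch; the hostnames returned depend entirely on your own topology), you can inspect the host and port each member externalizes:

-- Run on any online member; MEMBER_HOST should resolve correctly from the joining node.
SELECT MEMBER_HOST, MEMBER_PORT, MEMBER_STATE
  FROM performance_schema.replication_group_members;

-- report_host is not a dynamic variable; if it needs changing, set it in the
-- option file (for example, report_host=node1.example.com under [mysqld], a
-- hypothetical hostname) and restart the server.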

The steps for joining a node to establish a connection for distributed recovery are as follows:

When a node joins the cluster, it makes a connection to one of the seed nodes listed in the group_replication_group_seeds system variable, initially using the group_replication_local_address values specified in that list. The seed nodes may be a subset of the cluster's members.

Through this connection, the seed node uses Group Replication's membership service to provide the joining node with a list of all online nodes in the cluster in the form of a view. Membership information includes details of the distributed recovery endpoint or standard SQL client connection provided by each member for distributed recovery.

The joining node selects a suitable online node from this list as its boot node for distributed recovery.

The joining node attempts to connect to the boot node using the boot node's distributed recovery endpoints, trying each endpoint in turn in the order specified in the list. If the boot node provides no endpoints, the joining node attempts to connect using the boot node's standard SQL client connection. The SSL requirements for the connection are specified by the group_replication_recovery_ssl_* options.

If the joining node cannot connect to the selected boot node, it retries the connection with another suitable boot node. If the joining node exhausts a boot node's advertised endpoint list without establishing a connection, it does not fall back to that boot node's standard SQL client connection; instead, it switches to another boot node and tries to establish a connection there.

When the joining node establishes a distributed recovery connection with the boot node, it uses that connection for state transfer. The joining node's log shows the host and port of the connection used. If a remote clone operation is used, then when the joining node restarts at the end of the operation, it establishes a connection with a new boot node to perform state transfer from that boot node's binary log. This may be a connection to a different boot node than the one used for the remote clone operation, or it may be a connection to the same boot node. Either way, distributed recovery establishes the connection to the boot node in the same manner.

2.1 Selecting addresses for distributed recovery endpoints

The IP addresses advertised as distributed recovery endpoints by the group_replication_advertise_recovery_endpoints system variable do not have to be configured for MySQL Server itself (that is, they do not have to be set by the admin_address system variable or the bind_address system variable).

The ports offered as distributed recovery endpoints, however, must be configured for MySQL Server: they must be specified by the port, report_port, or admin_port system variables, and the server must listen for TCP/IP connections on these ports. If admin_port is used, the replication user for distributed recovery needs the SERVICE_CONNECTION_ADMIN privilege to connect. Choosing admin_port keeps distributed recovery connections separate from regular MySQL client connections.
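For example, if admin_port is chosen as a recovery endpoint, the replication user needs the extra privilege. A hedged sketch, assuming a replication user named rpl_user (substitute your own account):

GRANT SERVICE_CONNECTION_ADMIN ON *.* TO 'rpl_user'@'%';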

The joining node tries each endpoint in turn, in the order specified in the list. If group_replication_advertise_recovery_endpoints is set to DEFAULT instead of a list of endpoints, the standard SQL client connection is offered. The standard SQL client connection is not automatically included in the distributed recovery endpoint list and is not used as a fallback if the boot node's endpoint list is exhausted without a connection. If you want to offer the standard SQL client connection as one of several distributed recovery endpoints, you must explicitly include it in the list specified by group_replication_advertise_recovery_endpoints. You can place it at the end of the list as a connection of last resort.
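For illustration, the following sketch advertises two endpoints, the standard SQL port plus an administrative port; the addresses and ports are hypothetical and must correspond to ports the server actually listens on (port, report_port, or admin_port):

-- Advertise two recovery endpoints on this member (hypothetical addresses).
SET PERSIST group_replication_advertise_recovery_endpoints =
  '10.0.0.21:3306,10.0.0.21:33062';

-- Revert to offering only the standard SQL client connection.
SET PERSIST group_replication_advertise_recovery_endpoints = 'DEFAULT';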

It is not necessary to add a group member's distributed recovery endpoints (or its standard SQL client connection, if no endpoints are provided) to the Group Replication allowlist specified by the group_replication_ip_allowlist (from MySQL 8.0.22) or group_replication_ip_whitelist system variable. The allowlist applies only to the addresses specified by group_replication_local_address for each node. The joining node must have an initial connection to the cluster that is permitted by the allowlist in order to retrieve one or more addresses for distributed recovery.

After the system variable is set and the START GROUP_REPLICATION statement is executed, the listed distributed recovery endpoints are verified. If the list cannot be parsed correctly, or if any endpoint cannot be reached on the host because the server is not listening on it, Group Replication logs an error and fails to start.

2.2 Distributed Recovery Compression

From MySQL 8.0.18, you can optionally configure compression for distributed recovery that uses the state transfer method from the boot node's binary log. Compression benefits distributed recovery in situations where network bandwidth is limited and the boot node must transfer many transactions to the joining node. The group_replication_recovery_compression_algorithms and group_replication_recovery_zstd_compression_level system variables configure the permitted compression algorithms and the zstd compression level used when performing state transfer from the boot node's binary log.
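A minimal sketch of enabling zstd compression for binary-log-based recovery (the level shown is illustrative; tune it for your workload):

SET PERSIST group_replication_recovery_compression_algorithms = 'zstd';
SET PERSIST group_replication_recovery_zstd_compression_level = 3;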

These compression settings do not apply to remote clone operations. When a remote clone operation is used for distributed recovery, the clone plugin's clone_enable_compression setting is applied instead.

2.3 Users for Distributed Recovery

Distributed recovery requires a replication user with the correct permissions so that Group Replication can establish direct node-to-node replication channels. If the replication user also acts as the clone user in remote clone operations, it must additionally have the privilege related to remote cloning (the BACKUP_ADMIN privilege) on the boot node in order to act as the clone user there. Apart from that, the same replication user must be used for distributed recovery on every node within the cluster.
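A hedged sketch of creating such a replication user, assuming the account name rpl_user and a hypothetical password, executed with binary logging suppressed so the change is made locally on each node (adjust account, host, and password to your environment):

SET SQL_LOG_BIN = 0;
CREATE USER IF NOT EXISTS 'rpl_user'@'%' IDENTIFIED BY 'rpl_password';
GRANT REPLICATION SLAVE ON *.* TO 'rpl_user'@'%';
-- Needed only if this user also acts as the clone user for remote cloning:
GRANT BACKUP_ADMIN ON *.* TO 'rpl_user'@'%';
SET SQL_LOG_BIN = 1;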

2.4 Distributed recovery and SSL authentication

SSL for distributed recovery is configured separately from SSL for ordinary group communication, which is determined by the server's SSL settings and the group_replication_ssl_mode system variable. For distributed recovery connections, the dedicated Group Replication distributed recovery SSL system variables can be used to configure certificates and keys specifically for distributed recovery.

By default, SSL is not used for distributed recovery connections. To enable it, set group_replication_recovery_use_ssl=ON, then configure the Group Replication distributed recovery SSL system variables and set up the replication user to use SSL.

When distributed recovery is configured to use SSL, Group Replication applies this setting to remote clone operations as well as to state transfer from the boot node's binary log. Group Replication automatically configures the clone SSL options (clone_ssl_ca, clone_ssl_cert, and clone_ssl_key) to match the settings of the corresponding Group Replication distributed recovery options (group_replication_recovery_ssl_ca, group_replication_recovery_ssl_cert, and group_replication_recovery_ssl_key).
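A minimal sketch of enabling SSL for distributed recovery; the certificate paths are hypothetical:

SET PERSIST group_replication_recovery_use_ssl  = ON;
SET PERSIST group_replication_recovery_ssl_ca   = '/etc/mysql/ssl/ca.pem';
SET PERSIST group_replication_recovery_ssl_cert = '/etc/mysql/ssl/client-cert.pem';
SET PERSIST group_replication_recovery_ssl_key  = '/etc/mysql/ssl/client-key.pem';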

If SSL is not used for distributed recovery (group_replication_recovery_use_ssl is set to OFF), and the replication user account for Group Replication authenticates with the caching_sha2_password plugin (the default in MySQL 8.0) or the sha256_password plugin, an RSA key pair is used for password exchange. In this case, use the group_replication_recovery_public_key_path system variable to specify the RSA public key file, or set the group_replication_recovery_get_public_key system variable to request the public key; otherwise distributed recovery fails with an error.
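If SSL stays off and the replication user authenticates with caching_sha2_password, one of the following can be set on the joining node (a sketch; the key file path is hypothetical):

-- Request the RSA public key from the boot node during recovery:
SET PERSIST group_replication_recovery_get_public_key = ON;

-- Or point at a local copy of the boot node's RSA public key file:
SET PERSIST group_replication_recovery_public_key_path = '/etc/mysql/ssl/public_key.pem';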

3. Using the clone plug-in for distributed recovery

The clone plugin for MySQL Server is available from MySQL 8.0.17. If you want to use remote clone operations for distributed recovery in a cluster, the existing nodes and the joining nodes must be set up in advance to support this feature. If you do not want to use this feature in your cluster, do not set it up; in that case Group Replication only uses state transfer from the binary log.

To use the clone plugin, at least one existing cluster node and the joining node must be set up in advance to support remote cloning. At a minimum, the clone plugin must be installed on the boot and joining nodes, the BACKUP_ADMIN privilege must be granted to the replication user used for distributed recovery, and the group_replication_clone_threshold system variable must be set to an appropriate level. (By default it is the maximum value allowed for a GTID sequence number, which means that under normal circumstances state transfer based on the binary log is always preferred, unless a transaction requested by the joining node does not exist on any member of the group. In that case, if cloning support is configured, distributed recovery through cloning is triggered regardless of the value of the system variable, for example when a newly initialized server applies to join the group. If you do not want to use the cloning function, do not install or configure it.) To ensure maximum availability of boot nodes, it is recommended to set up all current and future cluster nodes to support remote cloning, so that when a server joins the cluster later, the remote clone operation can be used to quickly catch up with the latest data in the cluster.
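As a sketch of the minimum plugin preparation on a node (the library name mysql_clone.so is the usual one on Linux; adjust for your platform):

INSTALL PLUGIN clone SONAME 'mysql_clone.so';

-- Confirm the plugin is active:
SELECT PLUGIN_NAME, PLUGIN_STATUS
  FROM INFORMATION_SCHEMA.PLUGINS
 WHERE PLUGIN_NAME = 'clone';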

The remote clone operation deletes user-created tablespaces and data on the joining node before transferring data from the boot node. If the operation terminates unexpectedly partway through, the joining node may be left with partial data or no data. This can be fixed by the remote clone operation that Group Replication automatically performs again.

This applies mainly to the case where a data directory is specified with the DATA DIRECTORY clause during a remote clone. When the clause is specified, the data is saved in that directory, so the cloned data is not tied to the instance that performed the clone; you need to start an instance manually with datadir pointing to the directory where the cloned data was saved. Alternatively, the MGR plugin can automatically retry the remote clone operation (provided the clone operation does not specify the DATA DIRECTORY clause; in that case the cloned data overwrites the data of the server performing the remote clone, and after the remote clone operation completes, that server automatically restarts on the cloned data). In addition, although the clone plugin is used together with Group Replication to make management and maintenance of group replication more automated, the clone plugin does not require Group Replication to be running on the instance (but the Group Replication plugin must be installed).

3.1 Basic conditions for cloning

For group replication, you need to pay attention to the following points and differences when using the clone plug-in for distributed recovery:

Existing cluster nodes and joining nodes must have the clone plugin installed and active.

The boot node and the joining node must run on the same operating system and must have the same MySQL Server version (MySQL 8.0.17 or higher, to support the clone plugin). Therefore, cloning does not work for clusters whose members run different MySQL versions.

The boot node and the joining node must have the "Group Replication" plugin installed and activated, and all other plugins activated on the boot node (for example, keyring plugins) must also be active on the joining node.

If distributed recovery is configured to use SSL (group_replication_recovery_use_ssl=ON), Group Replication applies this setting to remote clone operations. Group Replication automatically configures the clone SSL options (clone_ssl_ca, clone_ssl_cert, and clone_ssl_key) to match the settings of the corresponding Group Replication distributed recovery options (group_replication_recovery_ssl_ca, group_replication_recovery_ssl_cert, and group_replication_recovery_ssl_key).

There is no need to set the list of valid boot nodes in the clone_valid_donor_list system variable on the joining node in order to join the cluster; Group Replication configures this setting automatically after it selects the boot node from the existing cluster nodes. Note that the remote clone operation uses the server's SQL protocol hostname and port, not the address and port used for internal communication between cluster members.

The clone plugin has a number of system variables for managing the network load and performance impact of remote clone operations. These settings are not configured by Group Replication, so you can review them and set them if necessary, or leave them at their defaults. When a remote clone operation is used for distributed recovery, the clone plugin's clone_enable_compression setting is applied to the operation instead of any configured Group Replication compression settings.
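For example, the following clone plugin settings can be reviewed and adjusted if needed (the values shown are illustrative, not recommendations):

SET GLOBAL clone_enable_compression = ON;    -- also used by MGR remote clone recovery
SET GLOBAL clone_max_concurrency    = 8;     -- concurrent clone threads on the recipient
SET GLOBAL clone_max_data_bandwidth = 500;   -- MiB/s limit on the donor, 0 = unlimited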

In order to invoke the remote clone operation on the joining node, Group Replication uses the internal mysql.session user, which already has CLONE_ADMIN privileges, so no special settings are required.

As the clone user on the boot node for remote clone operations, Group Replication uses the replication user set up for distributed recovery. Therefore, this replication user must be granted the BACKUP_ADMIN privilege on all cloning-capable cluster nodes. When configuring a joining node for Group Replication, you should also grant this privilege to the replication user on that node, because after joining the cluster it can act as a boot node for other joining nodes. The same replication user is used for distributed recovery on each cluster node. To grant this privilege to the replication user on existing nodes, execute the following statement on each cluster node individually with binary logging disabled, or once on the cluster's primary node with binary logging enabled:

GRANT BACKUP_ADMIN ON *.* TO 'rpl_user'@'%';

If you used CHANGE MASTER TO to specify replication user credentials on the server before using START GROUP_REPLICATION, be sure to remove those credentials from the replication metadata repositories before performing any remote clone operation, and make sure group_replication_start_on_boot=OFF is set on the joining member. If the user credentials are not unset, they are transferred to the joining member during the remote clone operation, and the group_replication_recovery channel might then be started accidentally with the stored credentials on the original member or on a member cloned from it. Starting Group Replication automatically when the server starts (including after a remote clone operation) uses the stored credentials, and they are also used if distributed recovery credentials are not specified on the START GROUP_REPLICATION statement.

3.2 Cloning threshold

After group members are set up to support cloning, the group_replication_clone_threshold system variable specifies a threshold, expressed as a number of transactions, for using a remote clone operation in distributed recovery. If the number of transactions between the boot node and the joining node is greater than this number, a remote clone operation is used to transfer state to the joining node when it is technically feasible. Group Replication calculates whether the threshold has been exceeded based on the gtid_executed sets of the existing group members. Using remote clone operations when the transaction gap is large means that new members can be added to the cluster without manually transferring the cluster's data to the server beforehand, and it also lets lagging nodes catch up more efficiently.

The default setting of the group_replication_clone_threshold system variable is very high (the maximum permitted sequence number of transactions in a GTID), so it effectively disables cloning whenever state transfer from the binary log is possible. To enable Group Replication to select a remote clone operation when it is more suitable for state transfer, set the system variable to the number of transactions above which cloning should be used as the transaction gap.
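A minimal sketch; the threshold value here is purely illustrative and should be chosen well above the number of transactions the cluster commits while a clone is running (see the note that follows):

SET PERSIST group_replication_clone_threshold = 1000000;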

PS:

Do not use a low setting for group_replication_clone_threshold in an active cluster. If more transactions than the threshold occur in the cluster while a remote clone operation is in progress, the joining member triggers another remote clone operation after restarting, and this can continue indefinitely. To avoid this, make sure to set the threshold to a number reliably greater than the number of transactions expected to occur in the cluster during the time the remote clone operation takes.

When state transfer from the boot node's binary log is not possible, for example because the transactions required by the joining member are not available in the binary log of any existing group member, Group Replication attempts a remote clone operation regardless of the threshold. Group Replication identifies this situation based on the gtid_purged sets of the existing group members. The group_replication_clone_threshold system variable cannot be used to deactivate cloning when the required transactions are not available in any member's binary log files, because in that case cloning is the only alternative to manually transferring data to the joining node.

3.3 Clone operation

After the cluster nodes and the joining node are set up for cloning, Group Replication manages the remote clone operation. The remote clone operation takes some time to complete, depending on the size of the data.

The performance_schema.clone_progress table records each stage of the clone operation and its corresponding state information, with one row per stage. (Note that this table records the progress of only one clone operation; when the next clone operation is performed, the previous information is overwritten.)

select * from clone_progress;
+------+-----------+-----------+----------------------------+----------------------------+---------+------------+------------+------------+------------+---------------+
| ID   | STAGE     | STATE     | BEGIN_TIME                 | END_TIME                   | THREADS | ESTIMATE   | DATA       | NETWORK    | DATA_SPEED | NETWORK_SPEED |
+------+-----------+-----------+----------------------------+----------------------------+---------+------------+------------+------------+------------+---------------+
|    1 | DROP DATA | Completed | 2019-10-08 16:46:58.757964 | 2019-10-08 16:46:59.128436 |       1 |          0 |          0 |          0 |          0 |             0 |
|    1 | FILE COPY | Completed | 2019-10-08 16:46:59.128766 | 2019-10-08 16:47:16.857536 |       8 | 8429731840 | 8429731840 | 8430190882 |          0 |             0 |
|    1 | PAGE COPY | Completed | 2019-10-08 16:47:16.857737 | 2019-10-08 16:47:17.159531 |       8 |          0 |          0 |        785 |          0 |             0 |
|    1 | REDO COPY | Completed | 2019-10-08 16:47:17.159748 | 2019-10-08 16:47:17.460516 |       8 |       2560 |       2560 |       3717 |          0 |             0 |
|    1 | FILE SYNC | Completed | 2019-10-08 16:47:17.460788 | 2019-10-08 16:47:20.926184 |       8 |          0 |          0 |          0 |          0 |             0 |
|    1 | RESTART   | Completed | 2019-10-08 16:47:20.926184 | 2019-10-08 16:47:28.623732 |       0 |          0 |          0 |          0 |          0 |             0 |
|    1 | RECOVERY  | Completed | 2019-10-08 16:47:28.623732 | 2019-10-08 16:47:34.898453 |       0 |          0 |          0 |          0 |          0 |             0 |
+------+-----------+-----------+----------------------------+----------------------------+---------+------------+------------+------------+------------+---------------+
7 rows in set (0.00 sec)

select * from clone_status\G
*************************** 1. row ***************************
             ID: 1
            PID: 0
          STATE: Completed
     BEGIN_TIME: 2019-10-08 16:46:58.758
       END_TIME: 2019-10-08 16:47:34.898
         SOURCE: 10.10.30.162:3306
    DESTINATION: LOCAL INSTANCE
       ERROR_NO: 0
  ERROR_MESSAGE:
    BINLOG_FILE: mysql-bin.000022
BINLOG_POSITION: 222104704
  GTID_EXECUTED: 320675e6-de7b-11e9-b3a9-5254002a54f2:1-4,
                 aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa:1-2771494
1 row in set (0.01 sec)

PS:

After the state transfer is complete, Group Replication restarts the joining node to finish the process. If group_replication_start_on_boot=OFF is set on the joining node, for example because the replication user credentials were specified on the START GROUP_REPLICATION statement, START GROUP_REPLICATION must be issued manually again after the restart. If group_replication_start_on_boot=ON and the other settings required to start Group Replication are set in the configuration file or with SET PERSIST statements, the process continues automatically and the joining node is brought online without intervention.
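For reference, a small sketch of the two possibilities on the joining node after its restart:

-- With group_replication_start_on_boot = OFF, restart group replication manually:
START GROUP_REPLICATION;

-- Or persist the setting so the node rejoins automatically after future restarts:
SET PERSIST group_replication_start_on_boot = ON;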

The remote clone operation clones the data files in the boot node's data directory to the joining node (tables may contain configuration information, user data, and so on). However, Group Replication membership settings saved in configuration files (such as the Group Replication local address) are not cloned, and no changes are made to them on the joining node. That is, the Group Replication-related configuration must be set up by yourself and must not conflict with existing members in the cluster. The remote clone operation is only responsible for cloning data files and does not clone configuration information (of course, if some configuration information is stored in tables, it is cloned as data).

If the remote clone procedure takes a long time, then in releases before MySQL 8.0.22, the set of certification information accumulated for the cluster during that period could become too large to transfer to the joining member. In that case, the joining member logs an error message and does not join the cluster. Starting with MySQL 8.0.22, Group Replication manages the garbage collection of applied transactions differently to avoid this situation. In earlier releases, if you do see this error, wait two minutes after the remote clone operation completes to allow a round of garbage collection to reduce the size of the cluster's certification information, then issue the following statement on the joining member to make it stop trying to apply the previous set of certification information:

RESET SLAVE FOR CHANNEL group_replication_recovery;
RESET REPLICA FOR CHANNEL group_replication_recovery;  -- from MySQL 8.0.22

The user credentials (replication user and password) that the boot node uses for the dedicated Group Replication channel group_replication_recovery are used by the new member after the clone operation completes, so that user, its password, and its privileges must also be valid on the new member. In this way, all group members can use the same replication user and password to receive state transfer through remote clone operations for distributed recovery. However, Group Replication preserves the Group Replication channel settings related to the use of SSL, and these settings can be unique to an individual member (that is, each group member can use a different replication user and password). If a PRIVILEGE_CHECKS_USER account has been used to help secure the replication applier thread (from MySQL 8.0.18, you can create a user account with specific privileges and designate it as the PRIVILEGE_CHECKS_USER account, which prevents a privileged account from being used for a Group Replication channel without authorization or by accident), the newly joined member does not use that account for the Group Replication channel after the clone operation completes. In that case, a suitable replication user must be specified manually for the Group Replication channel.

If the replication user credentials that the boot node uses for the group_replication_recovery replication channel were stored in the replication metadata repositories with a CHANGE MASTER TO statement, they are transferred to the joining member after cloning and used there, and they must be valid on that member. With stored credentials, all group members that receive state transfer through a remote clone operation therefore automatically receive the replication user and password for distributed recovery. If the replication user credentials were instead specified on the START GROUP_REPLICATION statement, they are used to start the remote clone operation, but they are not transferred to the joining node after cloning and are not used there. If you do not want the credentials transferred to new servers and recorded there, make sure you unset them before the remote clone operation takes place, and use START GROUP_REPLICATION to supply them instead.
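A hedged sketch of this pattern, assuming MySQL 8.0.21 or later (which allows credentials on START GROUP_REPLICATION) and a hypothetical rpl_user account; adjust channel handling to your own setup:

-- Clear any stored credentials for the recovery channel before cloning:
CHANGE MASTER TO MASTER_USER='', MASTER_PASSWORD=''
  FOR CHANNEL 'group_replication_recovery';

-- Supply the credentials only at start time, so they are not stored or cloned:
START GROUP_REPLICATION USER='rpl_user', PASSWORD='rpl_password';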

ps: If a PRIVILEGE_CHECKS_USER account has been used to help secure the replication applier, then from MySQL 8.0.19 the PRIVILEGE_CHECKS_USER account and the related settings are cloned from the boot node. If the joining node is set to start Group Replication on boot, it automatically uses that account for privilege checks on the corresponding replication channel. (In MySQL 8.0.18, because of a number of limitations, it is recommended not to use a PRIVILEGE_CHECKS_USER account for a Group Replication channel.)

3.4 Other uses of cloning

Group Replication initiates and manages clone operations for distributed recovery. Group members that are set up to support cloning can also take part in clone operations that a user initiates manually. For example, you might want to create a new MySQL instance by cloning from a group member acting as the donor, but not want the new server instance to join the cluster immediately, or perhaps ever.

In all releases that support cloning, you can manually initiate a clone operation involving a group member on which Group Replication is stopped. Since cloning requires the plugins on the donor node and the node receiving the data to match, the Group Replication plugin must be installed and activated on the other instance even if you do not want that instance to join the cluster. The plugin can be installed by issuing the following statement:

INSTALL PLUGIN group_replication SONAME 'group_replication.so';

In releases before MySQL 8.0.20, a clone operation cannot be started manually if it involves a group member on which Group Replication is running. From MySQL 8.0.20, this is possible as long as the clone operation does not remove and replace the data on the recipient. Therefore, if Group Replication is running, the statement used to start the clone operation must include the DATA DIRECTORY clause.
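For illustration, a manual clone to a separate directory might look like the following sketch. The donor address, account, and target path are hypothetical; the account used must hold BACKUP_ADMIN on the donor, the user running the statement needs CLONE_ADMIN on the recipient, and clone_valid_donor_list must include the donor:

-- On the recipient (Group Replication may keep running because DATA DIRECTORY is used):
SET GLOBAL clone_valid_donor_list = '10.0.0.21:3306';

CLONE INSTANCE FROM 'clone_user'@'10.0.0.21':3306
  IDENTIFIED BY 'clone_password'
  DATA DIRECTORY = '/var/lib/mysql-clone';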
