ClusterControl Module for Puppet


July 7, 2014

By Severalnines

If you are automating your infrastructure using Puppet, then this blog is for you. We are glad to announce the availability of a Puppet module for ClusterControl. For those using Chef, we already published Chef cookbooks for Galera Cluster and ClusterControl some time back.


ClusterControl on Puppet Forge

The initial release of the ClusterControl module is available on Puppet Forge; installing the module is as easy as:

$ puppet module install severalnines-clustercontrol
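
To verify the installation and confirm where the module landed, you can use Puppet's built-in commands (a quick check; output will vary with your setup):

$ puppet module list
$ puppet config print modulepath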

If you haven't changed the default module path, the module will be installed under /etc/puppet/modules/clustercontrol on your Puppet master host. ClusterControl supports the following database clusters:

  • Galera Cluster
    • MySQL Galera Cluster by Codership
    • Percona XtraDB Cluster by Percona
    • MariaDB Galera Cluster by MariaDB
  • MySQL Cluster
  • MySQL Replication
  • MongoDB or TokuMX Clusters
    • Sharded Cluster
    • Replica Set

Severalnines Package Repository

This module makes use of the Severalnines repository for yum and apt packages. This repository hosts the latest stable release of ClusterControl and all of its components.

ClusterControl and its components require post-installation procedures, such as setting up MySQL, granting users, and configuring Apache. This module automates most of these steps.

If you browse the Severalnines package repository, you will find the following packages:

  • clustercontrol - Severalnines ClusterControl Web Application. Frontend for clustercontrol-controller. Previously known as cc-ui.
  • clustercontrol-cmonapi - Severalnines ClusterControl REST API. Previously known as cc-cmonapi.
  • cmon-agent - Agent for ClusterControl. Manages and monitors MySQL, MySQL Cluster and Galera Cluster for MySQL.
  • cmon-controller - ClusterControl Controller. Manages and monitors MySQL, MySQL Cluster and Galera Cluster for MySQL.

The Severalnines repository installation instructions are available at http://repo.severalnines.com.
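
Once the repository is configured per those instructions, a quick sanity check that the packages are visible to your package manager (illustrative commands; use the one matching your distribution):

$ yum info clustercontrol          # RHEL/CentOS
$ apt-cache policy clustercontrol  # Debian/Ubuntu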

Installing ClusterControl with Puppet

We’ll now show you how to deploy ClusterControl on top of an existing database cluster using the ClusterControl Puppet module.

This module requires the following criteria to be met:

  • The node for ClusterControl must be a clean/dedicated host.
  • The ClusterControl node must run on a 64bit OS, on the same OS distribution family as the monitored DB hosts. Mixing Debian with Ubuntu, or CentOS with Red Hat, is acceptable.
  • The ClusterControl node must have an internet connection during the deployment. After the deployment, ClusterControl does not need internet access.
  • Make sure your database cluster is up and running before doing this deployment.

** Please review the module's requirements, available at Puppet Forge, for more details.

Now we should have the Puppet module installed. The first thing we need to do is generate an SSH key: ClusterControl requires properly configured passwordless SSH using an SSH key. It also needs an API token. The following are the two pre-deployment steps that you need to complete:

1. Generate an SSH key:

$ bash /etc/puppet/modules/clustercontrol/files/s9s_helper.sh --generate-key

** This step is compulsory. The above command generates an RSA key (if one does not already exist) to be used by the module; the key must exist in the module's directory on the Puppet master before the deployment begins.

2. Generate an API token:

$ bash /etc/puppet/modules/clustercontrol/files/s9s_helper.sh --generate-token
b7e515255db703c659677a66c4a17952515dbaf5

** Copy the generated token and specify it in the node definition under api_token.

Both steps described above only need to be executed once (unless you intentionally want to regenerate them). Now we can configure the database nodes to be managed, as per the example architecture below:

[Figure: example deployment architecture, a three-node Percona XtraDB Cluster monitored by a ClusterControl host]

As illustrated in the above figure, we have a three-node Percona XtraDB Cluster running on CentOS 6.5 64bit. The SSH user is root and the MySQL datadir is the default /var/lib/mysql.

Therefore, the node definition on the Puppet master would be as simple as:

# ClusterControl host
node "clustercontrol.local" {
	class { 'clustercontrol':
		is_controller          => true,
		email_address          => 'admin@localhost.xyz',
		mysql_server_addresses => '192.168.1.11,192.168.1.12,192.168.1.13',
		api_token              => 'b7e515255db703c659677a66c4a17952515dbaf5'
	}
}

# Monitored DB hosts
node "galera1.local", "galera2.local", "galera3.local" {
	class { 'clustercontrol':
		is_controller       => false,
		mysql_root_password => 'r00tpassword',
		clustercontrol_host => '192.168.1.10'
	}
}

Once done, you can either instruct the agent to pull the configuration from the Puppet master and apply it immediately:

$ puppet agent -t

Or, wait for the Puppet agent service to apply the catalog automatically; this depends on the runinterval value (default is 30 minutes), which you can shorten for testing, as sketched below. Once completed, open the ClusterControl UI page at http://[ClusterControl IP address]/clustercontrol and log in using the specified email address and the default password 'admin'.
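
A minimal sketch, assuming the agent's configuration lives at /etc/puppet/puppet.conf (the path may differ on your platform):

# /etc/puppet/puppet.conf on the monitored nodes
[agent]
# apply the catalog every 10 minutes instead of the 30-minute default
runinterval = 600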

You should see something similar to below:

[Screenshot: the ClusterControl UI after deployment]

Take note that this module installs the RSA key at $HOME/.ssh/id_rsa_s9s. Details are in the readme on the Puppet Forge page.
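
As a quick sanity check, you can confirm that passwordless SSH with that key works from the ClusterControl host; the address below is the first Galera node from the example above:

$ ssh -i $HOME/.ssh/id_rsa_s9s root@192.168.1.11 "echo SSH OK"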

Example Node Definition for Other Clusters

MySQL Cluster

For MySQL Cluster, extra options are needed to allow ClusterControl to manage your management and data nodes. You may also need to add the NDB data directory (e.g. /mysql/data) to the datadir list so ClusterControl knows which partitions to monitor. In the following example, /var/lib/mysql is the MySQL API node datadir and /mysql/data is the NDB datadir.

The following figure shows our MySQL Cluster architecture running on Debian 7 (Wheezy) 64bit:

[Figure: MySQL Cluster architecture on Debian 7 (Wheezy) 64bit]

The node definition would be:

# ClusterControl host
node "clustercontrol.local" {
	class { 'clustercontrol':
		is_controller          => true,
		email_address          => 'admin@localhost.xyz',
		cluster_type           => 'mysqlcluster',
		mysql_server_addresses => '192.168.1.11,192.168.1.12',
		mgmnode_addresses      => '192.168.1.11,192.168.1.12',
		datanode_addresses     => '192.168.1.13,192.168.1.14',
		datadir                => '/var/lib/mysql,/mysql/data',
		api_token              => 'b7e515255db703c659677a66c4a17952515dbaf5'
	}
}

# Monitored DB hosts
node "mysql1.local", "mysql2.local", "data1.local", "data2.local" {
	class { 'clustercontrol':
		is_controller       => false,
		mysql_root_password => 'dpassword',
		clustercontrol_host => '192.168.1.10'
	}
}

MySQL Replication

The MySQL Replication node definition is similar to the Galera cluster's. In the following example, we have a three-node MySQL Replication setup running on RHEL 6.5 64bit on Amazon AWS. The SSH user is ec2-user with passwordless sudo:

[Figure: three-node MySQL Replication setup on Amazon AWS]

The node definition would be:

# ClusterControl host
node "clustercontrol.local" {
	class { 'clustercontrol':
		is_controller          => true,
		email_address          => 'admin@localhost.xyz',
		ssh_user               => 'ec2-user',
		cluster_type           => 'replication',
		mysql_server_addresses => 'mysql-master.aws,mysql-slave1.aws,mysql-slave2.aws',
		api_token              => 'b7e515255db703c659677a66c4a17952515dbaf5'
	}
}

# Monitored DB hosts
node "mysql-master.aws", "mysql-slave1.aws", "mysql-slave2.aws" {
	class { 'clustercontrol':
		is_controller       => false,
		mysql_root_password => 'dpassword',
		clustercontrol_host => 'clustercontrol.aws'
	}
}

MongoDB/TokuMX Replica Set

The MongoDB Replica Set runs on Ubuntu 12.04 LTS 64bit with sudo user ubuntu and password 'mySuDOpassXXX'. There is also an arbiter node running on mongo3.local. For MongoDB, the module does not require mysql_cmon_password and mysql_root_password, which are specific to MySQL user grants.

[Figure: MongoDB Replica Set with an arbiter on mongo3.local]

The node definition would be:

# Monitored MongoDB hosts
node 'mongo1.local', 'mongo2.local', 'mongo3.local' {
	class { 'clustercontrol':
		is_controller       => false,
		ssh_user            => 'ubuntu',
		clustercontrol_host => '192.168.1.40'
	}
}

# ClusterControl host
node 'clustercontrol.local' {
	class { 'clustercontrol':
		is_controller                 => true,
		ssh_user                      => 'ubuntu',
		sudo_password                 => 'mySuDOpassXXX',
		email_address                 => 'admin@localhost.xyz',
		cluster_type                  => 'mongodb',
		mongodb_server_addresses      => 'mongo1.local:27017,mongo2.local:27017',
		mongoarbiter_server_addresses => 'mongo3.local:30000',
		datadir                       => '/var/lib/mongodb',
		api_token                     => 'b7e515255db703c659677a66c4a17952515dbaf5'
	}
}

MongoDB/TokuMX Sharded Cluster

A MongoDB Sharded Cluster needs to have the mongocfg_server_addresses and mongos_server_addresses options specified. The mongodb_server_addresses value should be set to the list of shard servers in the cluster. In the example below, we have a three-node MongoDB Sharded Cluster running on CentOS 5.6 64bit, with 2 mongos nodes, 3 shard servers and 3 config servers:

[Figure: MongoDB Sharded Cluster with 2 mongos nodes, 3 shard servers and 3 config servers]

The node definition would be:

# Monitored MongoDB hosts
node 'mongo1.local', 'mongo2.local', 'mongo3.local' {
	class { 'clustercontrol':
		is_controller       => false,
		clustercontrol_host => '192.168.1.40'
	}
}

# ClusterControl host
node 'clustercontrol.local' {
	class { 'clustercontrol':
		is_controller             => true,
		email_address             => 'admin@localhost.xyz',
		cluster_type              => 'mongodb',
		mongodb_server_addresses  => '192.168.1.41:27018,192.168.1.42:27018,192.168.1.43:27018',
		mongocfg_server_addresses => '192.168.1.41:27019,192.168.1.42:27019,192.168.1.43:27019',
		mongos_server_addresses   => '192.168.1.41:27017,192.168.1.42:27017',
		datadir                   => '/var/lib/mongodb',
		api_token                 => 'b7e515255db703c659677a66c4a17952515dbaf5'
	}
}

Please have a look at the documentation on the ClusterControl Puppet Forge page for more details. In our upcoming post, we will elaborate on how to deploy new database clusters with ClusterControl using existing modules available on Puppet Forge.
