So you’re interested in the “new” MySQL Group Replication feature but are concerned that configuring it is going to be difficult. Let me soothe those concerns with this blog post. Yes, the MySQL team has recently been discussing other ways to configure Group Replication, such as using the RC release of MySQL Shell, and how tools such as Puppet can provide enhanced provisioning of a Group, as Lefred has demonstrated in his MySQL Puppet post.
To start, I’ll post a sample my.cnf file below and highlight the related configurations that Group Replication requires. Yes, there are a number of them. I’ll try and lay that out below so it is easy to follow. But first….
Recap of Group Replication
This self-managed cluster of InnoDB-based MySQL nodes coordinates multi-master write activity using total order delivery. This ensures writes maintain their proper commit sequence across all members, no matter where in the group they originated. It also self-manages the Group memberships among server instances, always knowing who is in the group and who is not, all without the peer pressure that goes along with normal social “groups”. Plus, in this group they’re all actually leaders and followers at the same time. That doesn’t happen too often in society! …I digress. For a more comprehensive summary, check out this docs page.
You can set up InnoDB Group Replication under varying circumstances and needs.
- Maybe you’re building out a brand new set of highly available database servers
- Possibly a single instance is currently in use and you want to turn it into a Group Replication managed environment
- You have a standard replication setup that you wish to convert into a Group
- ….the list goes on
Let’s assume in this blog that we are just starting with 3 brand new server instances in a highly available setup. All 3 MySQL instances are fresh installs with no activity performed against them after the database files were initialized. Not even setting the root user password for the first time.
Let’s also assume that the network-resolvable host names for the 3 members are “gr1”, “gr2” and “gr3”.
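If DNS doesn’t already resolve those names, a simple way to handle it is with /etc/hosts entries on each server. The addresses below are assumptions for illustration, chosen to line up with the 192.168.56.x network used later in this post:

192.168.56.128 gr1
192.168.56.129 gr2
192.168.56.130 gr3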
Setting up the first Member
The first member named “gr1” becomes the “bootstrap” member. Once it is up and running all the other members can join. Here we start:
- Getting that root user password initially set.
There is a very simple way of doing this by using the mysql client from the command line with little intervention, but for the sake of sticking to plain SQL statements, it would be something like this (the password value below is just a placeholder, so pick your own):
Setting the root user password without generating a GTID:

root@gr1# mysql --user=root --password
mysql> SET SQL_LOG_BIN=0;
mysql> ALTER USER 'root'@'localhost' IDENTIFIED BY 'MyNewPass'; -- 'MyNewPass' is a placeholder
mysql> SET SQL_LOG_BIN=1;
From there, it’ll prompt you for the temporary password that was generated when the new database files were initialized with --initialize; you’ll find it in the error log. Use that to log in and set the root user’s new password. Don’t forget to use the other SQL statements both preceding and following the password change. The additional SQL_LOG_BIN statements do 2 things:
(A) Prevents the server from binary logging the password change
(B) Prevents the creation of a GTID event that is based on the individual server’s identity. We want to ensure GTID events are generated under the Group’s UUID string instead. I’ll speak more to this in a future blog.
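If you want to confirm that the password change really didn’t generate a GTID event, a quick sanity check on the fresh server is:

mysql> SELECT @@GLOBAL.gtid_executed;

An empty result means no GTID events were created under this server’s own identity.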
- Next we need to configure an initial replication channel for the Group Replication service.
It is interesting that we can do this without having yet defined the database user account and password in the database. It works though, and it is important. You can certainly use your own username for this replication channel. But “please” use better passwords than I am using here!

Defining a replication channel for Group Replication to use:

mysql> CHANGE MASTER TO MASTER_USER='rpl_user', MASTER_PASSWORD='rpl_pass' FOR CHANNEL 'group_replication_recovery';
It’s a pretty sparse-looking configuration: no need to identify a host or other details. It’s true, nothing else is needed. Next….
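If you’d like to see that the channel was registered, performance_schema exposes the channel configuration. A quick check would be something like:

mysql> SELECT CHANNEL_NAME, USER FROM performance_schema.replication_connection_configuration;

You should see a row for group_replication_recovery with rpl_user as the user.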
- Now for the first real initialization of the Group.
These few statements set up this first member to be the bootstrap member, meaning everyone else will follow it first to bring themselves online.

Bootstrapping a Group Replication cluster:

mysql> SET GLOBAL group_replication_bootstrap_group=ON;
mysql> STOP GROUP_REPLICATION;
mysql> START GROUP_REPLICATION;
mysql> SET GLOBAL group_replication_bootstrap_group=OFF;
Note: I’m running STOP GROUP_REPLICATION here as I assume that the configuration file either has group_replication_start_on_boot=ON set accordingly or omits it from the my.cnf file, as ON is the default setting. I also highly recommend that this value be left ON; the reason will be explained in another blog. Running STOP and then START recycles the service and ensures the Group is properly enabled.
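Once START GROUP_REPLICATION completes, you can confirm the bootstrap member is up by querying the group membership table:

mysql> SELECT MEMBER_HOST, MEMBER_STATE FROM performance_schema.replication_group_members;

A single row for gr1 with a MEMBER_STATE of ONLINE means the Group has been bootstrapped.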
Now that we’ve done these things….we can start generating transactions that “belong to the Group”. This is important, as the individual servers now lose their identity and uniqueness. It is the fear of every teenager and citizen in the world. But these servers have no feelings (that we know about) or concerns. The Group prevails and authors the transactions that come through the database instance and application.
- Creating our initial user accounts.
Items we create at this point will properly replicate across the group. So now is a good time to create the replication user account that we used above. The rpl_user account (or whatever you named it) is used to establish the group_replication_recovery replication channel. We’re sort of doing this in reverse: establishing the channel configuration first, then creating the user account and its password. Be sure to use the username and password defined in step #2 above.

Configuring initial user accounts:

mysql> CREATE USER 'rpl_user'@'192.168.56.%' IDENTIFIED BY 'rpl_pass';
mysql> GRANT REPLICATION SLAVE ON *.* TO 'rpl_user'@'192.168.56.%';
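To see the Group authoring transactions in action, look at gtid_executed again after running the statements above. The exact sequence numbers will vary, but the UUID portion should now be the Group’s UUID rather than this server’s:

mysql> SELECT @@GLOBAL.gtid_executed;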
That’s it….first member is up! That took 6 or 7 SQL commands….and they are easily scripted.
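For reference, here is the whole first-member sequence in one block, ready to drop into a script. The root password is the placeholder from step #1; substitute your own values throughout:

mysql> SET SQL_LOG_BIN=0;
mysql> ALTER USER 'root'@'localhost' IDENTIFIED BY 'MyNewPass'; -- placeholder password
mysql> SET SQL_LOG_BIN=1;
mysql> CHANGE MASTER TO MASTER_USER='rpl_user', MASTER_PASSWORD='rpl_pass' FOR CHANNEL 'group_replication_recovery';
mysql> SET GLOBAL group_replication_bootstrap_group=ON;
mysql> STOP GROUP_REPLICATION;
mysql> START GROUP_REPLICATION;
mysql> SET GLOBAL group_replication_bootstrap_group=OFF;
mysql> CREATE USER 'rpl_user'@'192.168.56.%' IDENTIFIED BY 'rpl_pass';
mysql> GRANT REPLICATION SLAVE ON *.* TO 'rpl_user'@'192.168.56.%';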
Configuring Subsequent Group Members
The steps to configure the remaining members are the same for both “gr2” and “gr3”. They are:
Step #1 above: To deal with resetting the root password
Step #2 above: Establish the group_replication_recovery channel
Last: Just recycle the Group Replication service by running these commands. Once again, STOP is only needed if the configuration file starts the Group Replication service on boot.
mysql> STOP GROUP_REPLICATION;
mysql> START GROUP_REPLICATION;
You now have a fully configured Group Replication cluster.
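To verify, run the membership query from any member:

mysql> SELECT MEMBER_HOST, MEMBER_STATE FROM performance_schema.replication_group_members;

Three rows (gr1, gr2 and gr3), each with a MEMBER_STATE of ONLINE, confirm the cluster is healthy.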
A Look at Needed Configurations
The “my.cnf” configuration file for the server can contain a number of configurations. Group Replication requires the base setup for binary logging and many of the associated standard replication configurations to be defined. This is because it completely reuses MySQL’s already mature replication processing, simply adding a distributed management facility on top of it, one which actually needs few specific configurations of its own….but a lot of supporting ones.
Here is the config, with an attempt to leave out the Group Replication configs that we don’t need when using a “single-primary” configuration. Settings followed by “#advised” aren’t strictly needed, but are highly advised. *some adjustments below from well-pointed comments*
# General Binary logging #
server_id=1 # between 1-9 for GR usage; value must be unique among members
log_bin=mysql-bin #advised, minimum is: "log_bin"
expire_logs_days = 10 #advised
# Replication configs #
relay_log_recovery = ON
binlog_rows_query_log_events = ON
log_bin_trust_function_creators = TRUE
# Multi-Threaded Slaving #advised
slave_parallel_workers=4 # can increase as per workload/benefit
# Group Replication needed settings #
slave_rows_search_algorithms = 'INDEX_SCAN,HASH_SCAN'
slave_type_conversions = ALL_NON_LOSSY
transaction_isolation=READ-COMMITTED #advised, but REPEATABLE-READ is ok
# Group Replication specific items #
### construct your own group_name by running this command on Linux $ uuidgen -t
### for the local_address and group_seeds, you can choose a port of your choice
### the port for the below items is what Group Communication System uses for consensus
group_replication_local_address = '192.168.56.128:16601'
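The group_name and group_seeds settings those comments refer to would look something like the lines below. These values are illustrative only: generate your own UUID with uuidgen -t and list your actual members’ host:port pairs as the seeds.

group_replication_group_name = 'aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee' # placeholder, generate your own
group_replication_group_seeds = '192.168.56.128:16601,192.168.56.129:16601,192.168.56.130:16601' # illustrative member addresses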