In this blog post, I’d like to show some of the relationships between “Group Replication” and “InnoDB Cluster“. My hope is that I can draw meaningful correlations and simplify the way we think about these setups.
The MySQL development team released the first GA version of Group Replication in MySQL 5.7.17, which turns the InnoDB storage engine into a distributed database platform. Since then, four patch releases of 5.7 have been produced, with new GR capabilities added in each one. In one of my early blogs, I mentioned the minimal set of SQL commands needed to configure the environment. That blog represents the least that needs to be done to build a Group Replication setup. In fact, if you deviate and over-complicate it instead of staying the path, the cluster doesn’t come together so easily. …Imagine that.
The steps in brief:
1. Have a configuration file setup with all expected Group Replication settings, including Group members listed in the group seeds configuration
2. Build a new set of database instance files (with something like mysqld --initialize ) and start the instance, turn off super_read_only (if on; note that read_only remains enabled), and reset the MySQL root user password (with binary logging disabled)
3. Configure the replication channel group_replication_recovery using a database user created in step 5.
4. Enable the group replication bootstrap configuration then stop & start group replication, and finally disable the bootstrap config
5. Create the replication database user that was utilized in step 3 above
6. Create additional members, only using steps (1), (2), (3)…then just (re)start group replication
Seems easy enough, right? …sigh.
Simplify The Complex
The reality is, it’s a complex system…which is so because it self-handles the even more complex tasks of things like auto-failover and transaction-safe multi-master writes. Thankfully, the MySQL development team has been building the tools to abstract away that complexity. The introduction of the MySQL Shell and the revamped MySQL Router act as the initial set of core tools to take on this task of building out a Group Replication (GR) distributed setup, on our behalf (sort of). The tools change the approach so much that when building this way, we call it an InnoDB Cluster (IC).
Building a Cluster, GR vs. IC – Compared
Reminder: GR is a plugin for MySQL that enables distributed write capability on InnoDB tables. Generally speaking, when configured properly and used in accordance with its limitations, it is a self-contained capability. Let’s compare those 6 “steps in brief” noted above and see what’s involved for GR vs. IC:
Steps 1 & 2: Configuring a GR setup vs. InnoDB Cluster (IC)
Step 2 first:
It’s worth noting here, initializing the database instance’s files can be done before or after getting our fully defined configurations in place. This is true for the core GR/IC setups at least…some other configurations “may” need to be established before initializing files.
- InnoDB Cluster will require the database instance’s files to be built first with the root user already protected with an intentionally set password to get that user unlocked.
- You will also need an admin-like user that can connect locally over 127.0.0.1, and another admin-like account (it can be the same name) to connect remotely to the other servers in your cluster.
- These extra accounts aren’t “needed” for a GR only setup! …but they’re useful for practical management of your group either way.
- Adding super_read_only=ON to your configuration is very important when you are building a GR or IC setup. Both will deal with it being on (as of MySQL 5.7.20), and it’ll protect your clustered setups.
- Just remember to disable it when resetting your root password and adding any initial user accounts.
- Interesting read_only fact:
- Setting super_read_only=ON also sets read_only=ON.
- Setting super_read_only=OFF still leaves read_only=ON.
- But setting read_only=OFF also sets super_read_only=OFF.
- Also remember to disable logging with SET SQL_LOG_BIN=0; …otherwise, reset your GTIDs on each server as part of your build process
```sql
SET SQL_LOG_BIN=0;
SET GLOBAL SUPER_READ_ONLY=OFF;
ALTER USER USER() IDENTIFIED BY '#GoodP@ssw0rdhErE';
## if not using the first command above, then use this after you are done
RESET MASTER;
```
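The read_only facts above can be checked directly against a test instance (these statements need a live server, so treat this as an illustrative fragment):

```sql
SET GLOBAL super_read_only = ON;
SELECT @@global.read_only;        -- 1: enabling super_read_only also enabled read_only
SET GLOBAL super_read_only = OFF;
SELECT @@global.read_only;        -- still 1: read_only is retained
SET GLOBAL read_only = OFF;
SELECT @@global.super_read_only;  -- 0: disabling read_only also disabled super_read_only
```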
Step 1: GR’s setup (and a review of the configuration entries):
- All members in any MySQL replication topology need a unique “server_id” value.
- Many of the other configurations listed here are also required for GR or IC to work properly.
```ini
server_id=1  # remember, every member in any replication setup needs a unique server_id > 0
port=3306
log_bin=mysql-bin  # name or no name, binary logging is required
log_slave_updates=ON
binlog_checksum=NONE
enforce_gtid_consistency=ON
gtid_mode=ON
master_info_repository=TABLE
relay_log_info_repository=TABLE
transaction_write_set_extraction=XXHASH64
super_read_only=ON
```
- Listed first below is the configuration that loads the group_replication plugin so that GR functionality is available to the server.
- Normally with MySQL, all instances have their own unique hash ID and that’s how we identify which server wrote which GTID transactions. GR members (when clustered) operate under a single unique hash ID, so all group members would need this below.
```ini
plugin_load_add="group_replication.so"
group_replication_group_name=550fa9ee-a1f8-4b6d-9bfe-c03c12cd1c72
# generate your own cluster ID on Linux with `uuidgen -t`; all members use this same value
```
- Here are a collection of GR configurations that add to the core of how members know “who’s who in the cluster”.
- “Group Seeds” are a list of possible candidates that can sponsor a new member coming online to provide it with transactions it doesn’t have.
- To note: I have listed all 3 members in my group_seeds configuration below, whereas only the 2 remote members are needed. This is more a matter of form, keeping configuration files different only where it matters. Having the local member’s entry in its own seed configuration doesn’t “seem” to cause any problems.
- The “Local Address” is the endpoint by which other members will know the current instance.
- Both Group Seeds and Local Address identify a network port that XCOM (member communication) occurs on…different from the typical MySQL port of 3306. Choose an XCOM port that works for your network (some networks may have different port numbering rules). It’s a good practice for all members to use the same XCOM port.
- The “Start on Boot” configuration is put in for good measure – we don’t want to start up a member without GR’s service coming online. This also aligns with the benefit of having super_read_only=ON in your config file, as the GR plugin will manage it.
```ini
group_replication_start_on_boot=ON
group_replication_local_address='192.168.56.128:13306'
group_replication_group_seeds='192.168.56.127:13306,192.168.56.128:13306,192.168.56.129:13306'
```
- This GR configuration documentation page from the MySQL team discusses all the needed configurations for your GR setup. Particularly useful as things evolve beyond the current time of this blog.
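XCOM traffic must be reachable between members on that port. On a firewalld host like the servers shown later in this post, opening it might look like the following (port 13306 and firewalld itself are assumptions; adapt to your environment):

```
[root@gr129 ~]# firewall-cmd --permanent --add-port=13306/tcp
[root@gr129 ~]# firewall-cmd --reload
```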
Step 1: IC’s setup approach
- This URL to the documentation for “Working with InnoDB Cluster” provides many of the commands we would need to use.
- The 2 little commands below will be used both at the beginning of our setup and again at the very end
Now remember to run these commands first on all the members you’d like to add to your IC setup. That way you’ll be certain that the configurations are ready to get the cluster going when it’s time.
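A likely form of those two commands, using the MySQL Shell AdminAPI of this era, is checkInstanceConfiguration() followed by configureLocalInstance() (the admin account shown is a placeholder; check the linked documentation for your version):

```
mysql-js> dba.checkInstanceConfiguration('admin@127.0.0.1:3306')
mysql-js> dba.configureLocalInstance('admin@127.0.0.1:3306')
```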
Steps 3, 4 & 5: Configuring a GR setup vs. InnoDB Cluster (IC)
GR’s setup (and a review of the configuration entries):
There is very little that needs to be done to get a GR setup up and running once all of the members are configured.
- The activity for “step 3” is to configure the group_replication_recovery channel as below. This is the channel subsequent members will use to retrieve missed transactions from a donor member.
- To “initiate” a NEW GR cluster setup, after the first member instance starts up, you need to use the following SQL statements to bootstrap the Group’s clustering capabilities:
```sql
CHANGE MASTER TO MASTER_USER='rpl_user', MASTER_PASSWORD='rpl_pass' FOR CHANNEL 'group_replication_recovery';
SET GLOBAL group_replication_bootstrap_group=ON;
STOP GROUP_REPLICATION;  # GR is set to start on boot by default, so the plugin just needs to be recycled
START GROUP_REPLICATION;
SET GLOBAL group_replication_bootstrap_group=OFF;
CREATE USER 'rpl_user'@'192.168.56.%' IDENTIFIED BY 'rpl_pass';
GRANT REPLICATION SLAVE ON *.* TO 'rpl_user'@'192.168.56.%';
```
- Then create your replication user account tied to the group_replication_recovery channel. By creating the account after the bootstrap process is done and the GR plugin is restarted, it will be replicated to the rest of the cluster automatically. But its initial use is for the remote members to connect to “this” bootstrapped master.
InnoDB Cluster’s cluster setup
As we move forward with building an IC setup, the MySQL Shell continues to handle our cluster configuration and setups.
- The cluster is initialized using this one command:

```
cluster = dba.createCluster('icBlog');
```

Here, the cluster is actually getting bootstrapped the same way Group Replication would when it initializes its first member…that is at least one of the core things that happens here.
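Once created, it’s worth verifying the first member actually came online. A hedged sketch, from the same MySQL Shell session, using the AdminAPI’s status() call:

```
mysql-js> cluster.status()
```

The output reports the cluster topology and each member’s state (e.g. ONLINE).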
Step 6: Adding subsequent members
GR, adding additional members:
- If your other GR members are configured properly with group_replication_group_seeds="IP-ADDRESS:xcom-port,hostname:xcom-port,..." then you’re most of the way there. If they are started up, you can either restart the instance or restart the GR plugin service as shown above.
Adding subsequent members in a fully configured GR setup:

```shell
## Either restart the database instance
[root@gr129 gr-files]# systemctl restart mysqld
## Or restart the GR service
[root@gr129 gr-files]# mysql -e "STOP GROUP_REPLICATION; START GROUP_REPLICATION;"
```
IC, adding additional members:
- Continuing to configure an IC cluster, adding members produces output reminiscent of what the createCluster() command showed.
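Members are added with the AdminAPI’s addInstance() call. A hedged sketch, reusing the member addresses from the GR configs above (the admin account name is a placeholder):

```
mysql-js> cluster.addInstance('admin@192.168.56.128:3306')
mysql-js> cluster.addInstance('admin@192.168.56.129:3306')
```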
** VERY IMPORTANT FINAL STEP **
- It is also VERY important in an InnoDB Clustered environment to go around “configuring” ALL members a 2nd time. Once the clustered setup is functioning, a final configuration needs to be persisted in the local my.cnf file of each member.
- This requires running dba.configureLocalInstance('admin@????:3306') again on each member.
I’m hoping that by showing how the core Group Replication layer is involved in its configured setup, it will give more “meaning” to what the InnoDB Cluster shell commands might be doing under the hood.