InnoDB Cluster, Managing Async Integration

September 12, 2019

MySQL 8.0.17 brought a lot of updates across the MySQL set of offerings: Cloning was introduced into InnoDB Cluster, MySQL Router gained new capabilities, and MySQL continues to expand its collection of automated management features. When Group Replication was first introduced in MySQL 5.7.17, there was considerably less to manage out of the box compared to a “3-member async replication setup with auto-failover”. Auto-failover tooling requires a lot of thought, determination and coding. With the advent of Group Replication, MySQL users can put aside all that work and let MySQL Server’s Group Replication deal with that quandary for them.

Managing Asynchronous Replication Between InnoDB Clusters

The focus of this blog post is to identify a solution for asynchronous replication between two InnoDB Cluster setups that allows some flexibility for changing the async members as the cluster topology changes. The solution presented here is certainly not the only approach; it is one that I am documenting based on a collaboration with others.

The solution is of particular interest because of two bug fixes implemented in MySQL Shell 8.0.17’s AdminAPI (Bug#29756457, Bug#29305551). These fixes block MySQL instances from joining or forming an InnoDB Cluster if they have async replication active, or any binlog or replication filtering configured, at that time. The fixes also make clear that solutions taking this approach are no longer implied to be supported by MySQL, but as you’ll see below, there is a way to make it work, and the MySQL team will generally help people with their “creative implementations”. And seriously, MySQL users can be very creative with how they use MySQL… and it’s really inspiring!

ERROR: 51150: Dba.createCluster: ic2a:3306: instance has replication filters configured, but they are not supported in InnoDB clusters.

A proposed Solution Architecture

For this blog post:

  • I have prepared 6 Oracle Cloud VMs using OL7.7, installed all the needed MySQL 8.0.17 binaries and configured any SELINUX setups, firewalls and Oracle Cloud VCN security rules as needed. The members communicate via private IPs in the Oracle Cloud.
  • Server host names are, cluster-1: ic1a, ic1b, ic1c – cluster-2: ic2a, ic2b, ic2c
  • Steps I will take for walking through this solution:

    1. The first cluster will get fully configured with 3 members and have the cluster name icBlog1.

    2. Then the first member of the 2nd cluster (ic2a) will have async replication and filtering set up (using an “async-config.sh” shell script). I will attempt to form the 2nd cluster with that member, which should fail. Once it fails as predicted, I will remove the async channel and its filtering (using the same “async-config.sh” script).

    3. Build out the 2nd cluster, named icBlog2, with all 3 members.

    4. In this step we configure each cluster’s members with a mysqlrouter installed and pointed at the opposite cluster. The idea is to make it simple for any cluster member to establish an async channel to the alternate cluster as needed.

      Please read my blog post on MySQL Router 8.0.17 for ideas on how to install and manage it. 

    5. Identify a possible process for noticing when the single-primary R/W member changes roles, so that the async setup can be migrated to the new single-primary R/W member.

I won’t repeat steps I’ve already demonstrated, but I will show all the unique steps and the resulting configuration. I will also use a shell script to add and remove the asynchronous replication configuration, and I will show the contents of that script.

Step 1 – Building the First Cluster

The InnoDB Clusters will be built according to the commands I used in my blog post: innodb-cluster-devops-mysqlsh-command-usage. I also added to that blog an update to the selinux statements due to changes in 8.0.17, adding setsebool -P nis_enabled 1, which I believe is needed due to a new local tcp_socket usage by xcom to improve local communication.

As usual, let’s add the next couple of instances too.
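The original command listing isn't reproduced here, so as a sketch (the icadmin account and host names follow this post's setup), adding the remaining members could look like:

```shell
# Hedged sketch: add the two remaining members to cluster icBlog1.
# The seed instance and account names are assumptions from this post's setup.
seed="icadmin@ic1a:3306"

add_instance_cmd() {
  # Build the mysqlsh one-liner that adds one member using clone recovery
  # (clone-based provisioning is new in 8.0.17).
  local member="$1"
  echo "mysqlsh ${seed} --js -e 'dba.getCluster().addInstance(\"icadmin@${member}:3306\", {recoveryMethod:\"clone\"})'"
}

for host in ic1b ic1c; do
  add_instance_cmd "$host"   # prints the command; run it against the live cluster
done
```

With 8.0.17's clone plugin, each joining member is provisioned automatically without manual backups.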

Step 2 – Adding Async replication Filtering for Testing to form an InnoDB Cluster – and resetting it when done

The first cluster is done.

On to the next cluster setup (which is intentionally not configured yet).

Let’s get an asynchronous replication channel with filtering configured on the member that will end up being the single-primary member of the new group.

Then we’ll try to initialize a new InnoDB Cluster setup with that member.

Our expectation is that it will fail. Once we confirm the predicted outcome, we will remove the asynchronous replication setup.

Part 1 of 4: Configure Async Replication & Filtering

First I’ll test to confirm that no replication is configured yet. 
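One way to run that check (a sketch; the host and account names are the ones assumed throughout this post):

```shell
# Hedged check: confirm ic2a has no replication channels configured yet.
# An empty SHOW SLAVE STATUS result (pre-8.0.22 syntax) means no channels exist.
check_cmd="mysql -h ic2a -u icadmin -p -e 'SHOW SLAVE STATUS\G'"
echo "$check_cmd"   # expect no rows; performance_schema.replication_connection_status should also be empty
```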

Now, let’s run a script to configure asynchronous replication, using a read-only member from the current cluster icBlog1 as the source.

I’ve named the script async-config.sh.  The script does the following things:

  • It accepts positional parameters, each of which has a default value.
  • Running the script with the first positional value of either “create” or “remove” may be all that is needed, as long as the other positional parameters have re-usable defaults.
    • “create” sets up an asynchronous channel with a name and an associated filter. It also adds the filter configuration to the [mysqld] section of the MySQL Server instance’s config file.
    • “remove” stops replication for the named channel and then removes that channel’s configuration, which removes the filter at the same time. It also comments out the channel-based filter configuration in the MySQL Server’s config file.
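Under the hood, the “create” path boils down to statements along these lines. This is a sketch only: the channel name, router endpoint, and replication account are my assumptions, while mysql_innodb_cluster_metadata is the schema you would typically filter out so the replica cluster’s own metadata is not overwritten.

```shell
# Hedged sketch of the SQL a "create" run would issue (8.0.17 syntax).
# 127.0.0.1:6447 is assumed to be the local router's read-only port,
# since the post replicates from a read-only icBlog1 member.
create_sql=$(cat <<'SQL'
CHANGE MASTER TO
  MASTER_HOST='127.0.0.1', MASTER_PORT=6447,
  MASTER_USER='repl', MASTER_AUTO_POSITION=1
  FOR CHANNEL 'async_icBlog1';
CHANGE REPLICATION FILTER
  REPLICATE_IGNORE_DB=(mysql_innodb_cluster_metadata)
  FOR CHANNEL 'async_icBlog1';
START SLAVE FOR CHANNEL 'async_icBlog1';
SQL
)
echo "$create_sql"
```

The channel-scoped CHANGE REPLICATION FILTER … FOR CHANNEL form keeps the filter from affecting any other replication the instance might do.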

Let’s see what this looks like when the script is run with “create”.
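An invocation might look like this; “create” is from the post, while the remaining positional parameters (channel name, local router endpoint) are my guesses at the script’s interface:

```shell
# Hedged example: only the "create" action keyword is confirmed by the post;
# the channel name and router endpoint arguments are assumptions.
create_invocation="./async-config.sh create async_icBlog1 127.0.0.1:6447"
echo "$create_invocation"
```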

*** Special Note:

  • The script’s output shows the hostname of the icBlog1 cluster member providing the transactions to the slave that was configured.
  • That member was selected by the script, which connects through a mysqlrouter instance co-located with the server instance.
  • A local router setup isn’t a core requirement for a process like this, but it sure is convenient, and quite possibly a necessary element for reliable replication setups when dealing with an operational system.

Part 2 of 4: Attempt to initialize the server as the initial member of an InnoDB Cluster

So here I am intentionally trying to get InnoDB Cluster creation to fail, to prove that the bug fix catches this scenario.
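The attempt itself is a one-liner (a sketch; the cluster name matches the post):

```shell
# Hedged sketch: try to seed the second cluster from ic2a while the async
# channel and filter are still active; 8.0.17 should refuse with error 51150.
attempt="mysqlsh icadmin@ic2a:3306 --js -e 'dba.createCluster(\"icBlog2\")'"
echo "$attempt"
```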

Sure enough, the last line of the output is the error message proving that the configuration is caught.

Part 3 of 4: Remove asynchronous replication and filtering from the instance

The script I created makes short work of this task. Sure, it is handy to have the script, but it becomes essential if you are operationally managing a large environment of MySQL technology stacks, let alone other technologies too.

You would run this command:
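Using the same assumed positional interface as the “create” run:

```shell
# Hedged: "remove" is from the post; the channel-name argument is an assumption.
remove_invocation="./async-config.sh remove async_icBlog1"
echo "$remove_invocation"
```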

Part 4 of 4: Configure ALL the members for the 2nd cluster

To finish the job, we now configure the rest of the cluster. As its members join, the data in both clusters “should” match. In reality, icBlog1 is the “master cluster” and icBlog2 is a “replica cluster”, so writes should all go to icBlog1’s members.

And the full status
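The status can be pulled with the same AdminAPI command-line form used elsewhere in this post, pointed at the new cluster (endpoint assumed):

```shell
# Hedged: same "mysqlsh ... -- cluster status" form as used for icBlog1.
status_cmd="mysqlsh icadmin@ic2a:3306 -- cluster status"
echo "$status_cmd"   # prints topology JSON: ic2a as R/W, ic2b and ic2c as R/O when healthy
```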

Step 4 – Add mysqlrouter instances to all Cluster members, configure to connect to the alternate cluster


Managing cluster topology changes can be troublesome, and mysqlrouter plus the scripting makes simple work of it. It is admittedly a manually initiated, but partially automated, re-provisioning process for your slaves.

You can always check either cluster’s status using the mysqlsh icadmin@ic1a:3306 -- cluster status command, which will show you which members in the alternate cluster are R/W or R/O. The slave process always needs to be provisioned on an R/W member. Although it’s easiest to have a mysqlrouter on each instance, that might not be optimal in terms of managing the extra router components.

Maybe it is enough to have a single mysqlrouter on a crontab server that runs health checks to see whether the R/W master on the “replica cluster” has changed (or the R/W member on either cluster, if doing bi-directional async replication). If so, a good cron script could remove async replication from the old host and re-provision it on the new R/W cluster member, for example via remote ssh execution of the script, with a parameter identifying the correct new host and/or replica where async replication and filtering should be provisioned.
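A minimal cron-driven check could look like this sketch. Everything here, from the state file and the query to the ssh step, is an assumed implementation based on the idea above, not a tested one:

```shell
#!/bin/bash
# Hedged sketch: notice when the replica cluster's single-primary changes
# and re-provision the async channel there. Host names follow this post;
# the state file, query, and async-config.sh interface are all assumptions.
STATE_FILE=/var/tmp/icBlog2.primary
QUERY="SELECT member_host FROM performance_schema.replication_group_members WHERE member_role='PRIMARY'"

current_primary() {
  # Ask any replica-cluster member who the current primary is
  # (credentials assumed to come from ~/.my.cnf).
  mysql -h ic2a -u icadmin -N -B -e "$QUERY"
}

reprovision_if_changed() {
  local new_primary="$1" old_primary=""
  [ -f "$STATE_FILE" ] && old_primary="$(cat "$STATE_FILE")"
  if [ -n "$new_primary" ] && [ "$new_primary" != "$old_primary" ]; then
    # Tear the channel down on the old primary, then build it on the new one.
    [ -n "$old_primary" ] && ssh "$old_primary" ./async-config.sh remove
    ssh "$new_primary" ./async-config.sh create
    echo "$new_primary" > "$STATE_FILE"
  fi
}

# A cron-launched wrapper would tie the two together:
#   reprovision_if_changed "$(current_primary)"
```

The ssh step assumes passwordless key-based access from the crontab server to the cluster hosts.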

One Last Parting Gift

Below is the script I used throughout this blog.

It is meant to be run on a Read-Write cluster member, but you could probably also use this script to configure regular channel-based replication. It would still be run on the R/W slave member; however, the filtering setup may not make sense in many situations.
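The original script is not reproduced here, so below is a reconstruction of its described behavior, a sketch only: the channel name, filter, router endpoint, credential handling, and config-file editing are all my assumptions based on the post’s description of “create” and “remove”.

```shell
#!/bin/bash
# async-config.sh (reconstruction, not the author's original): creates or
# removes an async replication channel plus a per-channel filter, as the
# post describes. All defaults below are assumptions.
ACTION="${1:-create}"
CHANNEL="${2:-async_icBlog1}"      # assumed channel name
SOURCE="${3:-127.0.0.1:6447}"      # assumed local router port to the source cluster
MYCNF="${4:-/etc/my.cnf}"

host="${SOURCE%:*}"; port="${SOURCE#*:}"

create_sql() {
  cat <<SQL
CHANGE MASTER TO MASTER_HOST='${host}', MASTER_PORT=${port},
  MASTER_USER='repl', MASTER_AUTO_POSITION=1 FOR CHANNEL '${CHANNEL}';
CHANGE REPLICATION FILTER REPLICATE_IGNORE_DB=(mysql_innodb_cluster_metadata)
  FOR CHANNEL '${CHANNEL}';
START SLAVE FOR CHANNEL '${CHANNEL}';
SQL
}

remove_sql() {
  cat <<SQL
STOP SLAVE FOR CHANNEL '${CHANNEL}';
RESET SLAVE ALL FOR CHANNEL '${CHANNEL}';
SQL
}

main() {
  case "$ACTION" in
    create)
      create_sql | mysql --login-path=local   # credentials via mysql_config_editor (assumed)
      # Persist the channel-scoped filter under [mysqld] (channel:db option format).
      echo "replicate-ignore-db=${CHANNEL}:mysql_innodb_cluster_metadata" >> "$MYCNF"
      ;;
    remove)
      remove_sql | mysql --login-path=local
      # Comment out, rather than delete, the persisted filter line.
      sed -i "s/^replicate-ignore-db=${CHANNEL}:/#&/" "$MYCNF"
      ;;
    *) echo "usage: $0 [create|remove] [channel] [host:port] [my.cnf]" >&2; return 1 ;;
  esac
}

# Only act when invoked with at least one argument.
if [ "$#" -ge 1 ]; then main "$@"; fi
```

RESET SLAVE ALL FOR CHANNEL drops the channel’s configuration and its filter together, matching the “remove” behavior described above.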

Happy Scripting with MySQL – and remember Enjoy Responsibly with your solutions!

 
