Group Replication minimal SQL based configuration

By | March 10, 2017

So you’re interested in the “new” MySQL Group Replication feature but are concerned that configuring it is going to be difficult.  Let me soothe those concerns with this blog post.  Yes, the MySQL team has recently been discussing other ways to configure Group Replication, such as the RC release of MySQL Shell, which can also be used with tools such as Puppet for enhanced provisioning of a Group, as Lefred has demonstrated in his MySQL Puppet post.

To start, I’ll post a sample my.cnf file below and highlight the configurations that Group Replication requires.  Yes, there are a number of them.  I’ll try to lay them out below so they are easy to follow. But first….

Recap of Group Replication

This self-managed cluster of MySQL InnoDB-based nodes coordinates multi-master write activity with total order delivery.  This ensures writes maintain their proper commit sequence across all members, no matter where in the group they originated.  It also self-manages the Group membership among server instances, always knowing who is in the group and who is not, all without the peer pressure that goes along with normal social “groups”.  Plus, in this group they’re all actually leaders and followers at the same time.  That doesn’t happen too often in society!  …I digress.  For a more comprehensive summary, check out this docs page.

You can set up InnoDB Group Replication under varying circumstances and needs.

  • Maybe you’re building out a brand-new set of highly available database servers
  • Possibly a single instance is currently in use and you want to turn it into a Group Replication managed environment
  • You have a standard replication setup that you wish to convert into a Group
  • ….the list goes on

Let’s assume in this blog that we are just starting with 3 brand new server instances in a highly available setup.  All 3 MySQL instances are fresh installs with no activities performed against them after the database files are initialized, not even setting the root user password for the first time.

Let’s also assume that the network resolvable server host names for the 3 members are “gr1“, “gr2” and “gr3“.

Setting up the first Member

The first member, “gr1”, becomes the “bootstrap” member.  Once it is up and running, all the other members can join.  Here we start:

  1. Getting the root user password initially set.
    There is a very simple way of doing this using the mysql client from the command line with little intervention, but for the sake of sticking to just SQL commands, it would be something like this:
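    A sketch of those statements (the replacement password here is just a placeholder; use your own):

    ```sql
    -- Connect with the mysql client; it will prompt for the temporary password:
    --   mysql -u root -p
    SET SQL_LOG_BIN=0;  -- keep the password change out of the binary log
    ALTER USER 'root'@'localhost' IDENTIFIED BY 'N3wR00tP4ss!';  -- placeholder password
    SET SQL_LOG_BIN=1;
    ```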

    From there, it’ll prompt you for the temporary password that --initialize generated when the new database files were created (you’ll find it in the error log).  Use that to log in and set the root user’s new password.  Don’t forget to use the other SQL statements both preceding and following the password change. The additional SQL_LOG_BIN statements do 2 things:

    (A) Prevents the server from binary logging the password change

    (B) Prevents the creation of a GTID event that is based on the individual server’s identity.  We want to ensure GTID events are generated under the Group’s UUID string instead.  I’ll speak more to this in a future blog.

  2. Next we need to configure an initial replication channel for the Group Replication service.
    It is interesting that we can do this before the database user account and password have even been defined in the database.  It works, though, and it is important.  You can certainly use your own username for this replication channel.  But “please” use better passwords than I am using here!
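    A sketch of that channel configuration, using “rpl_user”/“rpl_pass” as placeholder credentials:

    ```sql
    CHANGE MASTER TO
      MASTER_USER='rpl_user',      -- placeholder; use your own
      MASTER_PASSWORD='rpl_pass'   -- placeholder; use a strong one
      FOR CHANNEL 'group_replication_recovery';
    ```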

    It’s a pretty sparse-looking configuration: no need to identify a host or other details.  It’s true, nothing else is needed. Next….

  3. Now here is our first real initialization of the Group.
    These few statements set up this first member to be the bootstrap member. It means everyone else will follow it first to bring themselves online.
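    A sketch of that bootstrap sequence:

    ```sql
    STOP GROUP_REPLICATION;
    SET GLOBAL group_replication_bootstrap_group=ON;
    START GROUP_REPLICATION;
    SET GLOBAL group_replication_bootstrap_group=OFF;
    ```

    Turning group_replication_bootstrap_group back OFF right away is deliberate: only the very first member should ever bootstrap the Group.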

    Note:  I’m running STOP GROUP_REPLICATION here as I assume that the configuration file either has group_replication_start_on_boot=ON set accordingly or does not include it at all, since ON is the default setting.  I also highly recommend that this value be left ON.  The reason will be explained in another blog.  Running STOP and then START recycles the service and ensures the Group is properly enabled.

    Now that we’ve done these things, we can start generating transactions that “belong to the Group”.  This is important, as the individual servers now lose their identity and uniqueness.  That is the fear of every teenager and citizen in the world, but these servers have no feelings (that we know about) or concerns.  The Group prevails and authors the transactions that come through the database instance and application.

  4. Creating our initial user accounts.
    Items we create at this point will properly replicate across the group, so now is a good time to create the replication user account that we referenced above. The rpl_user account (or whatever you named it) is used to establish the group_replication_recovery replication channel.  We’re sort of doing this in reverse: establish the channel configuration first, then create the user account and its password. Be sure to use the username and password defined in step #2 above.
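    A sketch, matching the placeholder credentials from step #2 (note that we do not disable binary logging here, since we want this user to replicate to the Group):

    ```sql
    CREATE USER 'rpl_user'@'%' IDENTIFIED BY 'rpl_pass';  -- placeholder credentials
    GRANT REPLICATION SLAVE ON *.* TO 'rpl_user'@'%';
    FLUSH PRIVILEGES;
    ```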

    That’s it… the first member is up!  That took 6 or 7 SQL commands, and they are easily scripted.

Configuring Subsequent Group Members

The steps to configure the remaining members are the same for both “gr2” and “gr3”.  They are:

Step #1 above:  To deal with resetting the root password

Step #2 above:  Establish the group_replication_recovery channel

Last:  Just recycle the Group Replication service by running these commands.  Once again, STOP is only needed if the configuration file starts the Group Replication service on boot.
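The recycle itself is just a sketch of two statements:

```sql
STOP GROUP_REPLICATION;   -- only needed if the service started on boot
START GROUP_REPLICATION;
```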

You now have a fully configured Group Replication cluster implemented.


A Look at Needed Configurations

The “my.cnf” configuration file for the server can contain a number of settings.  Group Replication requires the base setup for binary logging and many of the associated standard replication configurations to be defined.  This is because it makes complete re-use of MySQL’s already mature replication processing.  It simply adds a distributed management facility on top, which actually needs few specific configurations of its own… but a lot of supporting ones.

Here is the config, with an attempt not to include Group Replication settings that we don’t need when using a “single-primary” configuration.   Settings followed by “#advised” aren’t strictly needed but are highly advised.  *some adjustments below from well-pointed comments*
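The original sample file is not reproduced here, so the following is a reconstruction in the same spirit for MySQL 5.7. The group name UUID, the server_id, and the 33061 ports are illustrative; server_id and group_replication_local_address must differ on each member.

```ini
[mysqld]
# Standard replication / binary log prerequisites
server_id=1
gtid_mode=ON
enforce_gtid_consistency=ON
master_info_repository=TABLE
relay_log_info_repository=TABLE
log_bin=binlog
log_slave_updates=ON
binlog_format=ROW
binlog_row_image=MINIMAL               #advised
binlog_checksum=NONE
slave_parallel_workers=0               #advised
transaction_write_set_extraction=XXHASH64

# Group Replication specifics
plugin_load=group_replication.so
group_replication_group_name="11111111-2222-3333-4444-555555555555"  # any valid UUID, same on all members
group_replication_start_on_boot=ON
group_replication_local_address="gr1:33061"
group_replication_group_seeds="gr1:33061,gr2:33061,gr3:33061"
group_replication_single_primary_mode=ON   #advised
```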

3 thoughts on “Group Replication minimal SQL based configuration”

  1. Simon J Mudd

    A couple of observations:
    * you mix underscores and dashes in the config. Why? It is not consistent and potentially confusing even if it will work.
    * slave-parallel-workers=0 , which is the default. 5.6 and later has had parallel replication yet you seem not to trust it as you do not enable it by default. Why? If it works better, and as we all want this functionality, why do you not leave it with a non-zero default value. It seems to make people shy away from new settings which probably are good. If there are exceptions point to them.
    * group_replication_group_name: mysql 5.6 managed to auto-generate server_uuid, and given this is supposed to be a unique value (which no-one will remember) it would seem better to auto-generate and remember this value. If not people will copy this specific value from one group to another which at some point in time will break something weirdly if two different groups meet each other and both share the same UUID. A user-friendly name might have been better here but since it looks like you use this as an alternative to the server_uuid then the option name is probably not really ideal. group_replication_uuid might be better?
    * minimal RBR? Reasoning? It certainly reduces the size of binlogs but for anyone doing binlog “mining” or pushing their binlogs out to external systems an extra Full RBR master may be needed. Perhaps most people will not care, so I am curious about the context, since optimisation for speed is one thing but it should not be done prematurely.

    I still feel that from a “do it the same way as normal replication point of view” it might have made more sense to have a CHANGE GROUP REPLICATION TO …. type command, but also to store this information in Group replication tables rather than in configuration files. It means that if I copy this server’s binaries somewhere else and start it up with the wrong or a different configuration file then it will not work as it did before. With mysql slaves, except for the auto.cnf file you can happily make a file system copy of a slave, remove the auto.cnf file and start up the server and it will carry on working pretty much as the old one did (as a new slave). Group replication configuration is completely different which will make the learning curve harder for those of us used to normal replication.

    Having said this thank you for sharing a simple sample configuration and procedure to get group replication up and running. This sort of information is definitely needed.

    1. Andrew Grimo Post author

      Hello Simon,
      That’s a good line of questions and it’s great to get your feedback. I’ll comment on your points sequentially as you wrote them.
      1. There is certainly a strong case for creating a MySQL configuration file with just underscores and refraining from using dashes. Actually, I had come across documentation recently that did state both were supported. Of course you’re right; having a config file with a mix of styles makes it hard to manage and identify duplicates in use, among other confusions. For myself, it started back when I was supporting MySQL 5.5: there was at least one setting that only seemed to work with dashes, while others worked with both. So I never got into a pattern of using just underscores. It’s a good idea to do so, though.

      2. Regarding slave-parallel-workers: I had included these since early labs versions, as it seemed support for LOGICAL_CLOCK and preserving commit order was derived from these settings; I will look to confirm. If that is not the case then it may not apply here. However, I do tend to add it in to remind folks I work with that it is available to use and in many cases will help to solve the problems it addresses.

      3. I can see your point on group_name and including a reference to uuid as it would align with the individual instance usage. I’m sure my team that has influence at this level will read this.

      4. Minimal RBR is purely for minimizing data that is moving through the network. If a use case has different needs of course that configuration can use the full image.

      I appreciate the feedback on the idea of using Group Replication members in the same way as regular master-slave members. It’s an interesting idea, and it may very well support the ideas you are mentioning. I look forward to the community exploring these ideas and pushing the limits of what might not have yet been considered.

