Terraform on OCI – Provisioning MySQL for InnoDB Cluster Setups

April 4, 2019

In my prior blog post on Terraform, I demonstrated building the dependent infrastructure that MySQL implementations need. Building MySQL itself isn’t much different, but it does require a web server to serve the configuration files that Terraform executes (as was done in my prior MySQL on OCI post), and a Yum repo web server to deliver install and upgrade capabilities to your instances.

Here we’ll focus on adding additional block volume storage to MySQL instances, and on configuring iterative Compute instances that span the “availability domains” of a region, so that clustered solutions get built-in HA at the infrastructure level.

Terraform OCI – Provision Configured MySQL Solutions

A note before we begin: Terraform does not necessarily follow the order in which resources appear in your code.
However…
1. I will show the code in logical order to make it easier to follow along.
2. I will add explicit dependencies on items that must be built in a particular order.

Block Storage Considerations

All of the MySQL setups I configure include at least one boot volume and a separate block volume for the MySQL data directory. To me, this is the minimum needed to isolate the boot/OS disk from the database files. Using block storage for backups, even if it’s just a temporary landing zone, is also ideal, and Terraform makes this a trivial and easily managed task. Cloud environments have an abundance of block volume to provide, so lap it up!

Building the Single MySQL Instance Setup

With that said, my Terraform scripts follow suit: for each Compute instance running MySQL, I also provision a block volume. I will also demonstrate a means of adding any number of additional block storage volumes for use cases like retaining temporary or longer-term backup files locally.

The code below provisions 3 resources: an oci_core_volume, an oci_core_instance, and an oci_core_volume_attachment. Each one has its own local name, unique within the project code. In order, those names are: “MyDataDirBV”, “MySQLInstance”, and “MyDataDirBVAttach”.

As you might expect from scanning the code below and the references above, it builds a compute instance and a block storage volume, attaches that block volume to the compute instance, and then runs some scripts to build a MySQL instance! How wonderful!
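The full listing is embedded in the blog post; here is a minimal sketch of what those three resources can look like. The variable names (compartment, subnet, image, SSH keys), the shape, the volume size, the web server URL, and the arguments passed to oci-mysql-install.sh are placeholders for illustration, not the blog’s exact values.

```hcl
# Sketch only: variables, shape, sizes, URL, and script arguments are placeholders.
resource "oci_core_volume" "MyDataDirBV" {
  availability_domain = "${var.availability_domain}"
  compartment_id      = "${var.compartment_ocid}"
  display_name        = "MyDataDirBV"
  size_in_gbs         = 100
}

resource "oci_core_instance" "MySQLInstance" {
  availability_domain = "${var.availability_domain}"
  compartment_id      = "${var.compartment_ocid}"
  display_name        = "MySQLInstance"
  shape               = "VM.Standard2.1"

  create_vnic_details {
    subnet_id        = "${var.subnet_ocid}"
    assign_public_ip = true
    hostname_label   = "mysqlinstance"
  }

  source_details {
    source_type = "image"
    source_id   = "${var.instance_image_ocid}"
  }

  metadata {
    ssh_authorized_keys = "${var.ssh_public_key}"
  }
}

resource "oci_core_volume_attachment" "MyDataDirBVAttach" {
  attachment_type = "iscsi"
  instance_id     = "${oci_core_instance.MySQLInstance.id}"
  volume_id       = "${oci_core_volume.MyDataDirBV.id}"

  # Once the volume is attached, pull the build script from the config
  # web server and let it handle the iSCSI login, mount, and MySQL install.
  provisioner "remote-exec" {
    connection {
      type        = "ssh"
      user        = "opc"
      private_key = "${var.ssh_private_key}"
      host        = "${oci_core_instance.MySQLInstance.public_ip}"
    }

    inline = [
      "wget http://config-webserver.example/oci-mysql-install.sh",
      "chmod +x oci-mysql-install.sh",
      "sudo ./oci-mysql-install.sh ${self.iqn} ${self.ipv4} ${self.port}",
    ]
  }
}
```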

Do you notice the “provisioner” scripting usage above? Great! The script I used there, oci-mysql-install.sh, is the same one I demonstrated in a previous blog, and the script output is here. Being able to use wget to pull from a collection of build scripts certainly helps; it seems to be a cleaner way to build with Terraform. Ideally, the Terraform code and the scripting code would also be merged into a single code base.

Adding a Destroy-Time (de)provisioner

“Destroy time” is a funny concept, but it’s a Terraform pattern. If we’re interested in preserving our block volume storage, or just cleanly disconnecting from it, this is the scripting we would use to do it. I found this pattern in an Oracle Developers blog post on iscsi-block-volume-attachments-with-terraform, and got a few good ideas from it!

Add the following when = "destroy" Terraform code block to the locally named resource “MyDataDirBVAttach”:
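As a rough sketch (assuming the same SSH connection details as above; the exact unmount and iSCSI logout steps depend on what your build script set up), the destroy-time block looks something like this:

```hcl
resource "oci_core_volume_attachment" "MyDataDirBVAttach" {
  # ... same arguments and creation-time provisioner as shown earlier ...

  # Runs only during "terraform destroy": stop MySQL, unmount the data
  # directory, and log out of the iSCSI target so the volume detaches cleanly.
  provisioner "remote-exec" {
    when = "destroy"

    connection {
      type        = "ssh"
      user        = "opc"
      private_key = "${var.ssh_private_key}"
      host        = "${oci_core_instance.MySQLInstance.public_ip}"
    }

    inline = [
      "sudo systemctl stop mysqld",
      "sudo umount /var/lib/mysql",
      "sudo iscsiadm -m node -T ${self.iqn} -p ${self.ipv4}:${self.port} -u",
      "sudo iscsiadm -m node -o delete -T ${self.iqn} -p ${self.ipv4}:${self.port}",
    ]
  }
}
```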

The only thing above that may need more careful implementation thought is tracking the mount point reference, so that the Terraform code can be predictable in its un-mounting. In the case above, "sudo umount /var/lib/mysql" works because that mount path is buried in one of my scripts.

In my next segment of code, you’ll notice this mount point is exposed in the Terraform code, which passes the value to the script as a parameter.

Adding a 2nd block storage volume

Let’s say we want to add another block volume as a landing zone for backup jobs. Let me demonstrate that next. The idea is that we’ll provision another block storage volume in Terraform and attach it to the pre-existing MySQL instance we already created. Its purpose will be to hold MySQL Enterprise Backups.

We are NOT implementing a new compute instance, just adding a 2nd block volume.

Technically, provisioning AND attaching a block volume requires 2 resources:

  1. A uniquely named “oci_core_volume” resource. The example below has a local name of “MyBackupDirBv”.
  2. A uniquely named “oci_core_volume_attachment” resource. The example below has a local name of “MyBackupDirBvAttach” <— this last item handles the attachment very nicely!

Note: dependency definitions should ensure the correct build order, so that the 2nd block volume is added after the first boot volume.
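The blog embeds the full listing; below is a hedged sketch of the two resources. The depends_on list, the script name oci-add-block-volume.sh, its argument order (the 1st argument is explained in the updates further down), and the /backups mount point are illustrative, not the blog’s exact code.

```hcl
resource "oci_core_volume" "MyBackupDirBv" {
  availability_domain = "${var.availability_domain}"
  compartment_id      = "${var.compartment_ocid}"
  display_name        = "MyBackupDirBv"
  size_in_gbs         = 200
}

resource "oci_core_volume_attachment" "MyBackupDirBvAttach" {
  attachment_type = "iscsi"
  instance_id     = "${oci_core_instance.MySQLInstance.id}"
  volume_id       = "${oci_core_volume.MyBackupDirBv.id}"

  # Force the build order: the instance and the new volume must exist first.
  depends_on = ["oci_core_instance.MySQLInstance", "oci_core_volume.MyBackupDirBv"]

  provisioner "remote-exec" {
    connection {
      type        = "ssh"
      user        = "opc"
      private_key = "${var.ssh_private_key}"
      host        = "${oci_core_instance.MySQLInstance.public_ip}"
    }

    # The mount point (/backups) is passed as a script parameter, so it is
    # visible in the Terraform code instead of buried inside the script.
    inline = [
      "wget http://config-webserver.example/oci-add-block-volume.sh",
      "chmod +x oci-add-block-volume.sh",
      "sudo ./oci-add-block-volume.sh none ${self.iqn} ${self.ipv4}:${self.port} /backups",
    ]
  }
}
```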

The "${self.ipv4}"   self referencing attributes are a unique characteristic that belongs to provisioner code sections ALONE.  This is a really great feature!  When it comes to oci_core_volume_attachment resources, these can make for very lengthy references to the associated attributes.  So having access to the self reference (even for loop-created) attributes is really great!

Not much is different in the Terraform code above, except that the MySQL Compute instance already exists now, so we just reference back to it by its “local name” of MySQLInstance. As before, we use a depends_on reference to ensure that both of those resources exist and are ready before the oci_core_volume_attachment resource named MyBackupDirBvAttach attaches the block volume to the compute instance.

A Different Script for Block Volume Additions

I am also using a different script here. Although it doesn’t vary much from the one referenced in the other blog, it does allow a certain flexibility: it can be re-used more easily.

Admittedly, I don’t like having scripting repeated across scripts (maintenance would be troublesome), but this is a blog post, for goodness’ sake: build your solutions to fit your needs.  🙂
I’m just proposing methods that might inspire valuable and interesting ways to solve technical and business problems!

Update: sometime in early 2019, Oracle Cloud updated Web Console-based block volume attachments to require that a “Consistent” (aliased) device path be selected and used. This provides a simpler means of setting up reliable block volume mounts. For scripted setups (Terraform or the OCI CLI) this requirement is NOT imposed, and the iSCSI mount options still work in those cases.

Consistent device paths are patterned like: /dev/oracleoci/oracle*

Update 2: my script below has been updated to support both methods; each option is listed below, with an invocation sketch after the list:

  • Using a fixed device path: provide all 4 positional arguments to the script.
  • Using the iSCSI attachment process: provide all arguments, but for the 1st positional value use something like “none” (or any non-empty value that does not match the pattern above).
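Here is roughly how the two invocations differ from the Terraform side; this slots into the MyBackupDirBvAttach provisioner shown earlier. Only the role of the 1st argument follows the rule above; the script name and the meaning of the remaining arguments (iSCSI target, portal, mount point) are assumptions for illustration.

```hcl
provisioner "remote-exec" {
  # (connection block as in the attachment resource above)

  inline = [
    # Option 1: consistent (aliased) device path - the script partitions and
    # mounts /dev/oracleoci/oraclevdb directly, skipping iSCSI discovery.
    "sudo ./oci-add-block-volume.sh /dev/oracleoci/oraclevdb ${self.iqn} ${self.ipv4}:${self.port} /backups",

    # Option 2: iSCSI attachment - pass "none" (anything not matching
    # /dev/oracleoci/oracle*) as the 1st argument so the script logs in to
    # the iSCSI target and discovers the device itself.
    # "sudo ./oci-add-block-volume.sh none ${self.iqn} ${self.ipv4}:${self.port} /backups",
  ]
}
```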

Personally, I’d rather put scripting into files to execute, rather than defaulting to “inline” commands in the Terraform provisioner block. However, there are trade-offs around maintainability. Inline single-line scripts keep the Terraform and scripting code tightly aligned, but require repeating that scripting everywhere. Script files avoid the repetition, but decouple the script from Terraform. You choose.

Using the sgdisk utility for the scripted setup and partitioning of the block storage made the most sense for the scripting above. It looks like a single developer supports that tool, so if you use sgdisk for work or contracting, throw the man a donation.

Preparing IaaS for an HA InnoDB Cluster w/Terraform

OK, let me explain what this section’s title means. Terraform can build identically provisioned resources in multiples: you define a count with any number, and Terraform will build that many replicas of those resources.

So let’s say, for a 3-member MySQL cluster setup, that we still have the core resources noted above: an oci_core_volume, an oci_core_instance, and an oci_core_volume_attachment, each with its own local name unique to the project code. In order, those names are “MyDataDirBVic”, “MySQLInstanceIC”, and “MyDataDirBVAttachic”, which matches the Terraform code below.

The list variable below not only controls the iterative build of the MySQL instances, it also explicitly assigns a distinct “availability domain” to each compute instance!
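As a sketch, the list can be declared like this (Terraform 0.11 syntax). Availability domain names carry a tenancy-specific prefix, so these values are placeholders; the list could equally be populated from the oci_identity_availability_domains data source.

```hcl
# Availability domains for the region; the tenancy prefix and region here
# are placeholders - use your own values (or a data source lookup).
variable "ad_list" {
  type = "list"

  default = [
    "XXXX:US-ASHBURN-AD-1",
    "XXXX:US-ASHBURN-AD-2",
    "XXXX:US-ASHBURN-AD-3",
  ]
}
```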

Usage of count.index

I re-use that ad_list object to drive my 3-member build. Notice that when I set the display_name or hostname_label attributes, I add one: ${count.index + 1}. That is because the count index starts at zero (0). See the display_name attributes in the compute listing in the picture below; all are appended with {1,2,3}, and each instance’s availability domain matches its trailing number.

I also wanted to manage the private IPs of the MySQL instances so they stay in a contiguous pattern, which makes it easy to iterate through their addresses sequentially for later activities: private_ip = "10.10.10.${count.index + 21}". I arbitrarily started the first IP at 10.10.10.21, so the subsequent ones are 10.10.10.22 and 10.10.10.23.
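A condensed sketch of the counted resources follows (Terraform 0.11 syntax). The shape, image, subnet, volume size, and exact display_name strings are placeholders; the parts that matter are count, element(var.ad_list, count.index), ${count.index + 1}, and the private_ip expression.

```hcl
resource "oci_core_instance" "MySQLInstanceIC" {
  count = 3

  # Instance 1 -> AD-1, instance 2 -> AD-2, instance 3 -> AD-3.
  availability_domain = "${element(var.ad_list, count.index)}"
  compartment_id      = "${var.compartment_ocid}"

  # count.index starts at 0, so add 1 for human-friendly names.
  display_name = "MySQLInstanceIC${count.index + 1}"
  shape        = "VM.Standard2.1"

  create_vnic_details {
    subnet_id      = "${var.subnet_ocid}"
    hostname_label = "mysqlic${count.index + 1}"

    # Contiguous private IPs: 10.10.10.21, .22, .23
    private_ip = "10.10.10.${count.index + 21}"
  }

  source_details {
    source_type = "image"
    source_id   = "${var.instance_image_ocid}"
  }

  metadata {
    ssh_authorized_keys = "${var.ssh_public_key}"
  }
}

resource "oci_core_volume" "MyDataDirBVic" {
  count = 3

  availability_domain = "${element(var.ad_list, count.index)}"
  compartment_id      = "${var.compartment_ocid}"
  display_name        = "MyDataDirBVic${count.index + 1}"
  size_in_gbs         = 100
}

resource "oci_core_volume_attachment" "MyDataDirBVAttachic" {
  count = 3

  attachment_type = "iscsi"
  instance_id     = "${element(oci_core_instance.MySQLInstanceIC.*.id, count.index)}"
  volume_id       = "${element(oci_core_volume.MyDataDirBVic.*.id, count.index)}"
}
```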

After running a Terraform script along these lines to create the infrastructure, it isn’t much extra work to run the same or similar InnoDB Cluster configuration as defined in this linked text. It is also possible, with a little more work, to have Terraform run those commands for you. In the end, you’ll need to manage the system with those commands anyhow. Both options are ideal!
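If you do want Terraform to drive the cluster bootstrap, one hedged approach (not the blog’s code) is a null_resource that runs MySQL Shell’s AdminAPI over SSH once all members exist. The icadmin account, password variable, and cluster name are placeholders, and the exact AdminAPI options and prompts can vary with your MySQL Shell version.

```hcl
# Sketch: bootstrap the InnoDB Cluster from member 1 after all members exist.
resource "null_resource" "BuildInnoDBCluster" {
  depends_on = ["oci_core_volume_attachment.MyDataDirBVAttachic"]

  provisioner "remote-exec" {
    connection {
      type        = "ssh"
      user        = "opc"
      private_key = "${var.ssh_private_key}"
      host        = "${element(oci_core_instance.MySQLInstanceIC.*.public_ip, 0)}"
    }

    inline = [
      # Prepare each member for Group Replication, then create the cluster
      # on the first member and add the other two (credentials are placeholders).
      "mysqlsh --js -e \"dba.configureInstance('icadmin:${var.ic_admin_pw}@10.10.10.21:3306')\"",
      "mysqlsh --js -e \"dba.configureInstance('icadmin:${var.ic_admin_pw}@10.10.10.22:3306')\"",
      "mysqlsh --js -e \"dba.configureInstance('icadmin:${var.ic_admin_pw}@10.10.10.23:3306')\"",
      "mysqlsh --js --uri icadmin:${var.ic_admin_pw}@10.10.10.21:3306 -e \"var c = dba.createCluster('mycluster'); c.addInstance('icadmin:${var.ic_admin_pw}@10.10.10.22:3306'); c.addInstance('icadmin:${var.ic_admin_pw}@10.10.10.23:3306');\"",
    ]
  }
}
```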

Thanks for following my journey with Terraform, and enjoy MySQL!