Terraform on OCI – Building MySQL On Compute – initial setups

April 2, 2019

I have written previous blog posts about Oracle Cloud OCI, and this series continues. My post titled IaaS Getting Started acquainted us with important security-focused items like Compartments and network services like NAT and Internet Gateways. Then I posted about building MySQL on Compute with scripting, using a mix of OCI web console navigation and shell scripting. The world moves too fast to wait for someone to fat-finger their way through a web console. Terraform to the rescue!

This post will focus on using Terraform’s automation capabilities to build the supporting infrastructure services that MySQL benefits from, and to do so in a predictable and repeatable manner.
A subsequent blog post will demonstrate building MySQL with Terraform and scripting combined.

Building with Terraform on the OCI Oracle Cloud

There is a lot of content online for implementing Terraform. The websites that stood out to me in terms of Oracle Cloud OCI reference material included the Terraform.io Best Practice Guide and the Terraform.io OCI Provider site in general. Some practical examples that were helpful came from the blogger That Finnish Guy, who did a series on implementing Oracle Cloud OCI resources with Terraform; starting at Part 1 and scanning through his series certainly helped me get going.

The Terraform Interpolation webpage is great for understanding Terraform coding structures, but it is only relevant up to Terraform v0.11.
If you happen to be running Terraform v0.12 or greater, you’ll want to refer instead to the pages on Terraform’s Configuration Language Expressions and Functions.
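As a quick illustration of the syntax difference between the two versions (the attribute reference below is a hypothetical example, not taken from this project’s code):

```hcl
# Terraform v0.11: interpolation syntax, references wrapped in "${...}"
subnet_id = "${oci_core_subnet.my_subnet.id}"

# Terraform v0.12+: first-class expressions, no wrapper needed
subnet_id = oci_core_subnet.my_subnet.id
```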

If you were to bookmark anything for reference, make it this page on the oci_core_instance resource. It was the only place that identified that a private_ip attribute even existed for a compute resource, shown in the “Example Usage” block, which was super useful! That site also lists the other OCI Terraform resources.

Other “example code” areas include the GitHub Terraform Provider OCI space, which is also very good!

Getting Started with Terraform

If you are looking to get started, the Terraform website has a getting started learning “Guide“. Reading the Intro to Terraform pages can be helpful, as can reading the Oracle Cloud OCI Provider page.

In this exercise, I used my Mac laptop to build out and test the Terraform code that makes up this blog post. Initially I was using a compute instance inside my cloud’s private-IP subnet, so it can be done either way.
I will defer your setup requirements for developing with Terraform to Terraform.io/docs. In general, this is my setup:


Reviewing the 4 core Project files

There are 4 main files in a Terraform project that one would likely use, and they are listed next. We’ll review their contents as we review the resources that are built!

  • env-vars: contains variables that are common to a particular cloud environment, or possibly just to that project.  They are deployed as environment variables (hence the name) and their values are captured through defined variables in your Terraform project files, likely in main.tf.
  • main.tf: contains the core “provider” variable setups and also the core infrastructure code constructs, like compute instances or virtual cloud network setups.
  • variables.tf: Other supporting variables or data sources are generally found here.
  • outputs.tf:  This file houses data sources that aid in providing output from new infrastructure, or possibly data sources that allow you to inspect their values.

The “env-vars” file

In the file below, the following variables are provisioned in your shell session by running `source env-vars`. But it doesn’t necessarily end there: the main.tf file will likely host the associated variable definitions so that these environment variables can be utilized in the project code.

The variables defined here (if it’s not noticeable, which it might not be) are:

  • tenancy_ocid = the cloud account associated with the code activities
  • user_ocid = the API user account that has been provisioned privileges and has access to the keys
  • fingerprint = verification of the keys used for access
  • private_key_path = OS file path to the PEM private key
  • region = the core location of the primary data centre you are targeting

Other variables captured here are the SSH keys used for accessing the built instances:
The variables are: ssh_public_key & ssh_private_key.
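A minimal sketch of what such an env-vars file can look like, using Terraform’s standard TF_VAR_ prefix convention (Terraform reads TF_VAR_tenancy_ocid into var.tenancy_ocid, and so on). Every OCID, fingerprint, path, and region value below is a placeholder for illustration, not the author’s actual value:

```shell
# env-vars -- load into your shell session with: source env-vars
export TF_VAR_tenancy_ocid="ocid1.tenancy.oc1..<placeholder>"
export TF_VAR_user_ocid="ocid1.user.oc1..<placeholder>"
export TF_VAR_fingerprint="xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx"
export TF_VAR_private_key_path="$HOME/.oci/oci_api_key.pem"
export TF_VAR_region="<your-region>"

# SSH keys used for accessing the built instances
export TF_VAR_ssh_public_key="$(cat $HOME/.ssh/id_rsa.pub)"
export TF_VAR_ssh_private_key="$(cat $HOME/.ssh/id_rsa)"
```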

The main.tf file

The first part of this file handles the environment variables so that they can be used in the code.
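A sketch of what that first part typically looks like, assuming the variable names from the env-vars file described above. The empty variable blocks are the “defined variables” that pick up the TF_VAR_-prefixed environment values:

```hcl
variable "tenancy_ocid" {}
variable "user_ocid" {}
variable "fingerprint" {}
variable "private_key_path" {}
variable "region" {}

# OCI provider wired up from the environment variables
provider "oci" {
  tenancy_ocid     = "${var.tenancy_ocid}"
  user_ocid        = "${var.user_ocid}"
  fingerprint      = "${var.fingerprint}"
  private_key_path = "${var.private_key_path}"
  region           = "${var.region}"
}
```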

Network Components

The next portion of the main.tf for my coding is the network components that need to be created before anything else can happen. I’ll also show associated settings from the variables.tf file too.

Also, you’ll start noticing “resources” being created, each of which has a formal code reference and then a name given to identify it.  The first “resource” in my code is an oci_core_virtual_network (VCN), and it has the nickname “MySQL_VCN_TF“.  This nickname can be re-used in other areas of the codebase to refer back to this VCN object. The image below is the VCN we are going to create.

VCN to be created
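A minimal sketch of that VCN resource; the compartment and CIDR variables and the dns_label value are assumptions for illustration:

```hcl
resource "oci_core_virtual_network" "MySQL_VCN_TF" {
  compartment_id = "${var.compartment_ocid}"
  cidr_block     = "${var.vcn_cidr}"
  display_name   = "MySQL_VCN_TF"
  dns_label      = "mysqlvcn"
}
```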

Also, note that in the resource oci_core_default_security_list I am able to re-define the network access security rules for the VCN.  Here I have configured the security list’s normal defaults, plus added the port access MySQL requires, which is TCP ports 3306 and 33060-33062. I also need to open up TCP port 80 for a webserver that serves up configuration files and MySQL RPM files.

Notice that the security rule for port 80 only allows traffic from within the VCN. Even though there are public IPs in the subnet, only traffic from within the VCN itself has access to that webserver.
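A hedged sketch of those rules, taking over the VCN’s default security list in place; the variable names are assumptions, and protocol "6" is TCP:

```hcl
resource "oci_core_default_security_list" "MySQL_VCN_TF_SL" {
  manage_default_resource_id = "${oci_core_virtual_network.MySQL_VCN_TF.default_security_list_id}"

  # Normal default: SSH in from anywhere
  ingress_security_rules {
    protocol = "6" # TCP
    source   = "0.0.0.0/0"

    tcp_options {
      min = 22
      max = 22
    }
  }

  # MySQL classic protocol, VCN-internal traffic only
  ingress_security_rules {
    protocol = "6"
    source   = "${var.vcn_cidr}"

    tcp_options {
      min = 3306
      max = 3306
    }
  }

  # MySQL X protocol and related ports, VCN-internal traffic only
  ingress_security_rules {
    protocol = "6"
    source   = "${var.vcn_cidr}"

    tcp_options {
      min = 33060
      max = 33062
    }
  }

  # HTTP for the repo/config webserver, VCN-internal traffic only
  ingress_security_rules {
    protocol = "6"
    source   = "${var.vcn_cidr}"

    tcp_options {
      min = 80
      max = 80
    }
  }

  egress_security_rules {
    protocol    = "all"
    destination = "0.0.0.0/0"
  }
}
```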

Included is an Internet Gateway, which is helpful and also part of the normal default setup when using the web console to build a default VCN. Among other benefits, it allows public-IP based communication between two OCI systems to remain within the boundaries of the OCI environment; it never sees the public internet.

Lastly, a single, region-wide subnet was created to house the servers I will build.

Region Wide Public Subnet
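A sketch of the Internet Gateway, its route rule, and the subnet, under the same assumed variable names; omitting availability_domain on the subnet is what makes it regional (region-wide):

```hcl
resource "oci_core_internet_gateway" "MySQL_VCN_TF_IG" {
  compartment_id = "${var.compartment_ocid}"
  display_name   = "MySQL_VCN_TF_IG"
  vcn_id         = "${oci_core_virtual_network.MySQL_VCN_TF.id}"
}

# Route internet-bound traffic through the gateway via the VCN's default route table
resource "oci_core_default_route_table" "MySQL_VCN_TF_RT" {
  manage_default_resource_id = "${oci_core_virtual_network.MySQL_VCN_TF.default_route_table_id}"

  route_rules {
    destination       = "0.0.0.0/0"
    network_entity_id = "${oci_core_internet_gateway.MySQL_VCN_TF_IG.id}"
  }
}

# No availability_domain attribute => a regional (region-wide) subnet
resource "oci_core_subnet" "MySQL_Subnet_TF" {
  compartment_id    = "${var.compartment_ocid}"
  vcn_id            = "${oci_core_virtual_network.MySQL_VCN_TF.id}"
  cidr_block        = "${var.subnet_cidr}"
  display_name      = "MySQL_Subnet_TF"
  dns_label         = "mysqlsub"
  route_table_id    = "${oci_core_virtual_network.MySQL_VCN_TF.default_route_table_id}"
  security_list_ids = ["${oci_core_virtual_network.MySQL_VCN_TF.default_security_list_id}"]
}
```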

The main.tf file – Continued…

The next core section of this file builds a compute instance for the Yum Repository Server.  I will highlight a few related things:

  • To identify the “availability_domain” that the compute instance should be created in, I’ve used a variable that is an array, referenced as var.ad_list[1], and we’ll see this setup in the variables.tf file.  That structure allows me to iterate through the names of the availability domains in our cloud environment in a predictable and simple manner.  These array lists start at zero (0), so the index is offset by one from the availability domain’s number: var.ad_list[1] refers to the second availability domain (AD-2).
  • Also to note here, the source_type is bootVolume. I’ve chosen this option as I’ve taken a clone of another Repo-Webserver’s boot volume (outside of terraform) and am just re-using it here.
    • Taking a clone of a boot volume can be a good way to create new servers without having to fully reconfigure them all the time.
  • I’ve also defined an explicit private IP address that matches a valid address in the subnet.  For this server I want control of the private IP, so it’s helpful that it is that easy.
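The points above can be sketched as an oci_core_instance resource. The resource nickname, shape, and the boot-volume and private-IP variables here are assumptions for illustration:

```hcl
resource "oci_core_instance" "YumRepo_TF" {
  availability_domain = "${var.ad_list[1]}" # second AD, zero-indexed list
  compartment_id      = "${var.compartment_ocid}"
  display_name        = "YumRepo_TF"
  shape               = "VM.Standard2.1" # assumed shape

  create_vnic_details {
    subnet_id      = "${oci_core_subnet.MySQL_Subnet_TF.id}"
    hostname_label = "yumrepo"
    private_ip     = "${var.repo_private_ip}" # explicit IP within the subnet's range
  }

  # Re-use a boot volume cloned (outside Terraform) from another repo webserver
  source_details {
    source_type = "bootVolume"
    source_id   = "${var.repo_boot_volume_ocid}"
  }

  metadata {
    ssh_authorized_keys = "${var.ssh_public_key}"
  }
}
```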

The variables.tf related code

Here are the associated variables and structures that help keep the main.tf structures organized.
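A sketch of what those supporting pieces can look like. The data source shown is the standard OCI lookup for availability domains; whether var.ad_list is populated from it or simply declared as a list of AD names is an assumption here:

```hcl
# Look up the tenancy's availability domains
data "oci_identity_availability_domains" "ADs" {
  compartment_id = "${var.tenancy_ocid}"
}

# Zero-indexed list of AD names, referenced elsewhere as var.ad_list[1]
variable "ad_list" {
  type        = "list"
  description = "Availability domain names, zero-indexed (ad_list[0] = AD-1)"
}

variable "compartment_ocid" {}
variable "vcn_cidr" {}
variable "subnet_cidr" {}
```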

To recap & conclude… for now, we have:

  • Created a “Virtual Cloud Network” and customized its:
    • CIDR Block of
    • VCN display name & DNS label
  • Took management of the default security list and:
    • Re-implemented the normal default security rules
      • includes internet inbound access for ssh access over port 20-22, ICMP checks
    • Added rules allowing VCN resource/compute members to use ports 80, 3306, 33060-33062 internally within the VCN.
  • Built an Internet Gateway resource named “MySQL_VCN_TF_IG“
    • Provisioned a Route Table for the IG, allowing internet traffic into the VCN.
  • Created a Subnet with a CIDR Block of
    • Assigned it all of the default VCN setups including the custom security rules
    • Established it to be a region-wide Subnet and customized its dns_label.
  • Lastly, we created a compute resource with a purpose to host Yum installation packages & other scripting collateral for building MySQL environments. To this end we:
    • Gave it a display name and a host label name
    • Gave it a custom private IP address of
    • Used a previously provisioned/cloned boot volume from another Repo Server (which was located in Availability Domain 2)
    • AND we provisioned the NEW compute VM in the same “availability domain” as the boot volume so that the resources could be used together.

We conclude this post with the following output from the one VM we have created so far.

In the next post we will build MySQL instances using shell scripting that I’ve blogged about before, but Terraform will do all of the work!
