Deploying a LAMP Stack with Terraform Modules

In a previous post in this series, we looked at how Terraform code should be modified to provide better protection and stability for our production environments. We looked at state, building out our Dev, Test, Staging and Production code, and protecting the crown jewels, the access and security keys to the environment, by using Vault to provide one-time-use credentials to AWS. In this post we take the next step: we are going to make the script more scalable, stable and reusable by introducing the concept of Terraform modules.

Modules enable the script to be broken down into sections, allowing the DevOps team to manipulate some parts of the environment at a greater cadence than other, less fluid sections of the deployment, such as an AWS VPC or the core networking.

Terraform

As mentioned, modularizing your code helps you scale by re-using sections of code across various environments, with only the module inputs changing to reflect the build required for the environment at hand.

What Is a Module in Terraform?

A module is a small component of a program that contains one or more routines. In Terraform, it means we separate out a section of code that is repeated numerous times, whether across different environments or within different scripts in the same environment; for instance, building a virtual machine using the enterprise template (which incorporates certain security and resource guidelines).

Terraform modules are containers for resources that are used together. Currently our main.tf script can technically be called a root module, as it contains all the resources for our environment: the root VPC, the networks, the security groups, and the details of the webserver and the RDS database instance.

Looking back on our last post, you can see that we created several subfolders to split our code into sections that reflected our environment. While this is a great way forward and aids in understanding how your architecture is built out, it does not stop massive repetition of code, and as a result human error can creep back into your environment.

Stage
 └ vpc
 └ services
    └ frontend-app
    └ backend-app
       └ main.tf
       └ outputs.tf
       └ variables.tf
 └ data-storage
    └ mysql
    └ redis

One of the advantages of a language such as Terraform is that your scripts can be broken down into smaller, more defined subscripts, or modules. These modules can be called multiple times, either within the same environment or from other configuration scripts, making your code reusable across multiple environments and reducing development time and human error. You also know that these sections of reusable code are proven and tested. Troubleshooting becomes easier, since there is only one piece of code to improve and test, instead of iterating through many, many duplicates of the same code.

To help you get started, HashiCorp provides a registry of Terraform modules, found at registry.terraform.io. There you will find modules that can be used against multiple providers, such as AWS, GCP, Azure, and VMware vSphere.
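As a taste of how little code a registry module needs, here is how the community AWS VPC module can be consumed (the version constraint and inputs below are illustrative; check the module's registry page for its current version and full list of arguments):

```hcl
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 2.0"

  name = "stage-vpc"
  cidr = "10.0.0.0/16"
}
```

Terraform downloads registry modules automatically during terraform init, so there is nothing to copy locally.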

How Do We Build a Terraform Module?

To start the process of building out Terraform modules, first create a new subfolder called modules in each of your pipeline folders.

Stage
 └ vpc
 └ services
    └ frontend-app
    └ backend-app
       └ main.tf
       └ outputs.tf
       └ variables.tf
 └ data-storage
    └ mysql
    └ redis
 └ modules

Let’s make our first module. At its base level, a module is just a set of Terraform configuration files in a subfolder. Copy the main.tf, variables.tf and outputs.tf files into the modules directory.

Next, open the new .tf file in your favourite editor and remove the provider sections, as these should be supplied by the consumer of the module, not the module itself. Save the file with a new name; in our example we have called it lamp.tf. The section to remove looks like this:

## downloads the relevant providers
provider "vault" {
  address = "${var.vault_addr}"
  token   = "${var.vault_token}"
}

data "vault_aws_access_credentials" "creds" {
  backend = "aws"
  role    = "ec2-admin-role"
}

provider "aws" {
  access_key = "${data.vault_aws_access_credentials.creds.access_key}"
  secret_key = "${data.vault_aws_access_credentials.creds.secret_key}"
  region     = "${var.region}"
}

So how do we utilize this new module? It is as simple as referring to it from your root main.tf file with the following syntax:

module "<NAME>" {
  source = "<SOURCE>" 
  [CONFIG ...]
}

A little explanation of the sections: <NAME> is an identifier that can be used throughout your Terraform code to refer to the module, <SOURCE> is the location of the module's code, and we will return to [CONFIG] later. To call our new module, our new main.tf file will look similar to this:

## Provider to be initialized.
provider "vault" {
  address = "${var.vault_addr}"
  token   = "${var.vault_token}"
}

data "vault_aws_access_credentials" "creds" {
  backend = "aws"
  role    = "ec2-admin-role"
}

provider "aws" {
  access_key = "${data.vault_aws_access_credentials.creds.access_key}"
  secret_key = "${data.vault_aws_access_credentials.creds.secret_key}"
  region     = "${var.region}"
}

module "lamp-stack" {
  source = "D:\\Terraform\\Stage\\modules"
}

Let’s see if this has run correctly. Remember to start up your Vault instance within your AWS environment.

Now, when we first run this new code with terraform validate, we receive the following error:

Error: Reference to undeclared input variable
  on main.tf line 18, in provider "vault":
  18:   token = "${var.vault_token}"
An input variable with the name "vault_token" has not been declared. This
variable can be declared with a variable "vault_token" {} block.

Why? This is because the root directory no longer has a variables.tf file in it. So let’s rectify that. Open the variables.tf file in your modules directory and copy the following variables into a new variables.tf file in the root of your pipeline directory:

variable "vault_addr" {default="http://ec2-52-90-103-17.compute-1.amazonaws.com:8200"}
variable "vault_token" {default = "Vault Token Here"}
variable "region" {default = "us-east-1"}

Re-run terraform validate; it should now pass. When we then run terraform apply, you will notice a difference in the initialization:

Our apply now shows our newly created module.

This is only a half-way house

This is only a half-way house: we still have all our configuration options inside the module, so this code is not yet a truly reusable block. To be fair, we have currently gained nothing here other than introducing complexity and more moving parts.

It is time to make some reusable code, and at the same time split the code into more useable chunks to allow sections of the infrastructure to be modified at a greater cadence.

We will first create a VPC module to show the power of modules. Later in the post we will build towards a full network module, an amalgam of the VPC creation and the subnets, coupled with:

  • A Subnet Module
  • A Gateway Module
  • A Default Route Module
  • A Route Association Module
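To give a flavour of where we are heading, some of these modules can be as small as a single resource plus an input variable (a sketch; the file layout, variable name and tags are our own choices):

```hcl
## modules/gateway/gateway.tf (illustrative)
## Attaches an internet gateway to the VPC whose ID is passed in.
resource "aws_internet_gateway" "igw" {
  vpc_id = "${var.vpc_id}"
  tags = {
    Name = "vpc_igw"
  }
}
```

Each such module declares its own inputs and exposes outputs for the next module in the chain, as we will see below with the VPC module.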

Let’s start with the VPC module. Currently it looks like this in the script:

## create VPC
resource "aws_vpc" "testvpc" {
  cidr_block           = "${var.vpc_cidr}"
  enable_dns_hostnames = true
  tags = {
    Name = "testvpc"
  }
}

So, taking what we have learned, let’s make this stanza into a module. Cut the section out of lamp.tf and create a new subfolder called vpc under the modules folder. In this folder create a new file called vpc.tf:

##-------------------------------------------
##  Terraform: Create a VPC in AWS  ##
##-------------------------------------------
##
## Author - Tom Howarth
##
## Date - 07-02-20
##
## Version - 0.1
## Create vpc.tf

## creates a VPC,
resource "aws_vpc" "testvpc" {
  cidr_block           = "${var.vpc_cidr}"
  enable_dns_hostnames = true
  tags = {
    Name = "testvpc"
  }
}

Note that I have placed a header on the file to show versioning and information about what the code does.

It is true that we have created the module but it is not yet reusable.

Modify your module to read:

##-------------------------------------------
##  Terraform: Create a VPC in AWS  ##
##-------------------------------------------
##
## Author - Tom Howarth
##
## Date - 07-02-20
##
## Version - 0.1
## Create vpc.tf
## creates a VPC,

resource "aws_vpc" "vpc" {
  cidr_block           = "${var.vpc_cidr}"
  enable_dns_hostnames = true
  tags = {
    Name = "${var.cluster_name}-vpc"
  }
}

Hey, stop right there Mr. Postman, where has that variable come from? Well noticed! We now need to declare a new variable called cluster_name in the module's variables.tf file. So let’s do this right from the start.

In your vpc directory, create a new variables.tf file and add the new variable. Rather than just adding the variable, we are now going to give it a description explaining what it actually does, which results in a slight change to the file’s format. At the same time, you will also have to copy the vpc_cidr variable from the lamp module’s variables.tf file into the newly created file for the VPC module. Your resultant file should look similar to this:

variable "vpc_cidr" {
  description = "The CIDR range for the VPC. We have a default range set to 10.0.0.0/16; set this input in the module block to change it."
  default     = "10.0.0.0/16"
}

variable "cluster_name" {
  description = "The name to use for all the cluster resources"
  type        = string
}

It is important to understand that whenever you create a new module, all the variables that module requires need to be declared in the module's own variables.tf file. The fact that variables are duplicated across different Terraform modules is not a problem, because you only pass each required value once, in the main.tf file in the root of your environment.

To use this, you now need to add the following to your main.tf file:

module "vpc" {
  source       = "D:\\Terraform\\Stage\\modules\\vpc"
  cluster_name = "Stage-LampStack"
}

Remember to save this change and remove the code for the VPC creation from the lamp.tf file in the module directory.

To remove any possibility of conflicts, create another subfolder called lamp and move the current module and its associated files into it. Remember to change the source location in the main.tf file to reflect the new location.

There is one more piece of the puzzle left: how to feed a variable in one module with an output from another module.

Passing Variables from One Module to Another

If you look at the code in lamp.tf you will see that there is a reference to a variable vpc_id in several of the remaining code stanzas.

If we ran the code now, we would receive several validation errors, one "reference to undeclared resource" for each code stanza where vpc_id is used, because those references point to the now-nonexistent VPC creation stanza that we have just moved into our new module.

To fix this issue, we first need to create an outputs.tf file in the root of the VPC module's folder and add the following:

output "vpc_id" {
   value = "${aws_vpc.vpc.id}"
}
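The same pattern chains across any number of modules. As a sketch (the module names, variables and outputs here are our own, not code from this series), a gateway module could expose the gateway's ID so that a default-route module can consume both it and the VPC ID:

```hcl
## modules/gateway/outputs.tf (illustrative)
output "gateway_id" {
  value = "${aws_internet_gateway.igw.id}"
}

## root main.tf (illustrative)
module "default-route" {
  source     = "D:\\Terraform\\Stage\\modules\\default-route"
  vpc_id     = "${module.vpc.vpc_id}"
  gateway_id = "${module.gateway.gateway_id}"
}
```

The rule of thumb: a module exposes values through outputs, and the root configuration wires those outputs into the input variables of other modules.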

Next, move to the folder containing the other module, open the variables.tf file in your favourite text editor, and add a new variable:

variable "vpc_id" {}

Open main.tf in your favourite text editor and add the following line to the module lamp-stack stanza:

vpc_id = "${module.vpc.vpc_id}"

Finally, in the lamp.tf file, replace every instance of aws_vpc.vpc.id with var.vpc_id. Here is an example of what it should look like:

## create public subnet
resource "aws_subnet" "vpc_public_subnet" {
  vpc_id                  = "${var.vpc_id}"
  cidr_block              = "${var.subnet_one_cidr}"
  availability_zone       = "${data.aws_availability_zones.availability_zones.names[0]}"
  map_public_ip_on_launch = true
  tags = {
    Name = "vpc_public_subnet"
  }
}

Run a terraform init to initialize the new module in your code.

Terraform Init

Everything looks dandy, so let’s see if terraform plan throws up any errors.

Running terraform apply will now deploy the new infrastructure to AWS. Remember to run terraform destroy afterwards, as you do not want any nasty bill shock.

Summary

We have shown how to create Terraform modules, the core building blocks of reusable code. First we turned our existing code into a module; then we split the creation of the VPC out into a second module, which introduced the concept of passing one module's output as an input to another module.

There is still a lot of work to do to complete the modularization of our current code:

  • A Subnet Module and a Security Group Module
  • An EC2 Instance Rule Module
  • A DB subnet Module
  • A DB Instance Module
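As a starting point for that homework, a DB subnet module could wrap the aws_db_subnet_group resource (a sketch; the file layout and variable names are placeholders that you will need to wire up to your own subnet modules' outputs):

```hcl
## modules/db-subnet/db-subnet.tf (illustrative)
resource "aws_db_subnet_group" "db_subnet_group" {
  name       = "db_subnet_group"
  subnet_ids = ["${var.subnet_one_id}", "${var.subnet_two_id}"]
}

## Expose the group name so the DB instance module can reference it.
output "db_subnet_group_name" {
  value = "${aws_db_subnet_group.db_subnet_group.name}"
}
```

The DB instance module would then take db_subnet_group_name as one of its inputs, exactly as we passed vpc_id between modules earlier.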

However, we will leave you to finalize that. Refer to the list of modules above, and do not hesitate to reach out with any questions.

We will review the new modularized code in the next post, where we will look at building out the environment across multiple availability zones.