Streamline Terraform Automation Using GitHub Actions

Kubernetes resources and definitions can be deployed to clusters using GitOps, which ensures the state of the cluster is tracked in Git.

Can we do the same for Terraform?

There is a way using GitHub Actions, which lets us deploy resources to our desired cloud provider straight from a repository.

This blog shows how to create Terraform resources using GitHub Actions. All files can be found in this repository - learn devops using projects.

Defining the cloud permissions

We are going to use AWS, so we first need an IAM user with the required permissions.

Define a policy containing the relevant permissions using the following JSON.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "ec2:DeleteSubnet",
                "rds:*",
                "ec2:CreateTags",
                "ec2:CreateVpc",
                "ssm:GetParameters",
                "ec2:Describe*",
                "ec2:CreateSecurityGroup",
                "ec2:DeleteSecurityGroup",
                "ec2:ModifyVpcAttribute",
                "ec2:DeleteVpc",
                "ec2:CreateSubnet",
                "ec2:RevokeSecurityGroupEgress",
                "ec2:AuthorizeSecurityGroupIngress",
                "ec2:AuthorizeSecurityGroupEgress",
                "iam:CreateServiceLinkedRole"
            ],
            "Resource": "*"
        }
    ]
}

Create an IAM user and attach the policy to it.

In the Security credentials tab, generate an AWS access key and secret access key; these will be required later for Terraform to create the resources.
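The same setup can also be done from the AWS CLI. A rough sketch, assuming the policy JSON above is saved as policy.json and using terraform-ci as a hypothetical user name (replace <account-id> with your own account ID):

# Create the policy from the JSON above, a dedicated user, and attach the policy
aws iam create-policy --policy-name terraform-rds-policy --policy-document file://policy.json
aws iam create-user --user-name terraform-ci
aws iam attach-user-policy --user-name terraform-ci \
  --policy-arn arn:aws:iam::<account-id>:policy/terraform-rds-policy

# Prints the access key ID and secret access key needed later
aws iam create-access-key --user-name terraform-ci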

Developing the Terraform files

For this example we are going to create an RDS database instance inside a VPC.

The following resources need to be created:

  • Create a fresh VPC in a region.

  • Create two subnets attached to the VPC, in different availability zones.

  • Create a subnet group from the two subnets.

  • Create a security group attached to the VPC that allows ingress and egress traffic.

  • Finally, deploy the RDS instance, attaching the subnet group and security group.

provider "aws" {
    region     = "ap-south-1"
}

resource "aws_vpc" "ditto_vpc" {
  cidr_block = "10.0.0.0/16"
  enable_dns_support   = true
  enable_dns_hostnames = true

  tags = {
    Name = "ditto_vpc"
  }
} 

resource "aws_subnet" "ditto_subnet1" {
  vpc_id            = aws_vpc.ditto_vpc.id
  cidr_block        = "10.0.1.0/24"
  availability_zone = "ap-south-1a"

  tags = {
    Name = "ditto_subnet1"
  }
}

resource "aws_subnet" "ditto_subnet2" {
  vpc_id            = aws_vpc.ditto_vpc.id
  cidr_block        = "10.0.2.0/24"
  availability_zone = "ap-south-1b"

  tags = {
    Name = "ditto_subnet2"
  }
}

resource "aws_db_subnet_group" "ditto_db_subnet_group" {
  name       = "ditto-db-subnet-group"
  subnet_ids = [aws_subnet.ditto_subnet1.id, aws_subnet.ditto_subnet2.id]

  tags = {
    Name = "Ditto DB Subnet Group"
  }
}

resource "aws_security_group" "rds_ditto" {
  name        = "rds-dt"
  description = "Security group for dittodb RDS MySQL instance"
  vpc_id      = aws_vpc.ditto_vpc.id

  ingress {
    description = "MySQL"
    from_port   = 3306
    to_port     = 3306
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"] # open to all IPs for the demo
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_db_instance" "dittodb" {
  allocated_storage    = 10
  engine               = "mysql"
  engine_version       = "8.0.33"
  instance_class       = "db.t3.micro"
  username             = "clusteradmin"
  password             = "cladminuser" # hardcoded demo credentials
  storage_type         = "gp2"
  db_name              = "dittodb"
  skip_final_snapshot  = true # delete without taking a final snapshot
  vpc_security_group_ids = [aws_security_group.rds_ditto.id]
  db_subnet_group_name = aws_db_subnet_group.ditto_db_subnet_group.name


  tags = {
    Name = "DittoDB"
  }
}

We also create an outputs.tf that prints the RDS instance endpoint; it is a good idea to keep outputs in a separate file.

output "db_instance_endpoint" {
  value       = aws_db_instance.dittodb.endpoint
  description = "The connection endpoint for the database instance."
}

Setting up the GitHub Actions workflow

Create a directory .github/workflows in the repository root.

Create a file terraform-apply.yaml, which will be responsible for creating the Terraform resources.

name: 'Create RDS Instance'

on:
  push:
    branches:
      - terr-action

jobs:
  terraform:
    runs-on: ubuntu-latest
    defaults:
      run:
        working-directory: ./terraform_github_actions
    steps:
      - name: Checkout code
        uses: actions/checkout@v3

      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v2

      - name: Terraform Init
        run: terraform init

      - name: Terraform Apply
        run: terraform apply -auto-approve -input=false
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}

The workflow initializes Terraform and then applies the configuration. We set the working directory to tell Actions which directory to run the Terraform commands in.

Note that working-directory only applies to run steps; steps that invoke an action with uses are not affected by it.

For the workflow to authenticate with AWS, we need to provide the access key and secret key.

Add these as repository secrets named AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY, matching the names referenced in the workflow.
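If you use the GitHub CLI, the secrets can also be set from a terminal; a quick sketch (each command prompts for the value, so the keys stay out of shell history):

# Run from inside the repository; paste each value when prompted
gh secret set AWS_ACCESS_KEY_ID
gh secret set AWS_SECRET_ACCESS_KEY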

Pushing the code

Push your code to the terr-action branch and GitHub Actions will pick up the push and start the workflow.

The init step initializes Terraform and downloads the AWS provider, which the apply step then uses.

The terraform apply step creates the resources one by one, in dependency order.

Once finished, it displays the RDS endpoint defined in outputs.tf.
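As a quick usage check, the endpoint can be used with any MySQL client from a host that has network access to the VPC (the instance is not publicly accessible by default). A sketch with the mysql CLI; the host below is a placeholder for your actual endpoint:

# The endpoint output is in host:port form; pass only the host part to -h
mysql -h dittodb.xxxxxxxx.ap-south-1.rds.amazonaws.com -P 3306 -u clusteradmin -p dittodb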

Verifying the resources in AWS

Click on VPC and we can see the VPC along with its subnet and route table associations.

The security group has been created as well.

Finally, click on RDS and the dittodb instance will be present.

And that is how to use GitHub Actions to create AWS resources with Terraform.

(This post does not illustrate how to store the state file remotely so that Terraform can keep track of the resources it has created; more details in this project issue.)
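As a pointer for that follow-up, a common approach is a remote S3 backend, so the state survives between workflow runs. A minimal sketch, assuming an S3 bucket named ditto-terraform-state (a hypothetical name) already exists in the account:

terraform {
  backend "s3" {
    bucket = "ditto-terraform-state"                      # hypothetical bucket, must already exist
    key    = "terraform_github_actions/terraform.tfstate" # path of the state object in the bucket
    region = "ap-south-1"
  }
}

With this block in place, the existing terraform init step in the workflow would configure the backend, and every subsequent run would read and update the same state file instead of starting from scratch.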