Aaron Wishnick edited this page Mar 15, 2022 · 2 revisions

AWS Proton GitHub Actions Sample

In this wiki we provide a detailed walkthrough of how the GitHub Actions workflow actually works. Keep in mind that none of this is a requirement. The beauty of the Self-managed provisioning option in AWS Proton is that it's really up to you how you want the flow to work. In the end, all you absolutely have to do is ensure that your automation (or manual process) includes a final call to NotifyResourceDeploymentStatusChange to provide AWS Proton with the result of the provisioning attempt.

Prerequisites

In general, this will be easiest to understand if you are already familiar with Terraform. We'll do our best to expand on the specific areas you need to know, but having at least a baseline understanding will help.

Flow Chart

The following flow chart provides an overview of what our GitHub Actions workflow accomplishes. Note that this is a slightly simplified overview; we've excluded some steps, which we'll cover later on, just to reduce the noise.

flow chart

Step 0. Determining Whether or not to Run

The first few steps of our flow chart determine whether or not, for a given event (pull request, push, etc.), we actually want to run our workflow. In our case we want to run on a push to our main branch, and only if the commit is associated with an AWS Proton deployment. AWS Proton will always update the deployment-metadata.json file, so we'll use that to distinguish whether or not the commit is for a deployment.

name: 'proton-run'

on:
  push:
    branches:
      - main
    paths:
      - '**/.proton/deployment-metadata.json'

Step 1. Pre-Terraform Setup

Now that we have decided this is a commit we wish to deploy, we have to get set up before we start running Terraform. Getting set up means parsing the deployment-metadata.json file for any information we need and determining the credentials to use. For credentials, our approach is to store a map in our repository from environment name to configuration, in a file called env_config.json. The reason for this is that in AWS Proton, environments are typically a "container" for your applications. They might include a VPC, an ECS cluster, and so on, but more importantly they dictate the account to which your applications get deployed.

env_config.json

Let's take an example. Say we are setting up a beta environment. We want beta to run in account 111111111111, deploy to us-west-2, and store its Terraform state files in a bucket titled acme_corp_beta_terraform_states. The env_config.json would look like the following (the comments are for illustration only; JSON does not support comments):

{
    "beta": {
        // This is the IAM role to use when deploying to beta
        "role": "arn:aws:iam::111111111111:role/TerraformGitHubActionsRole",
        // This is the AWS region to deploy to
        "region": "us-west-2",
        // This is the state bucket to use for persisting beta terraform states
        "state_bucket": "acme_corp_beta_terraform_states"
    }
}
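With the comments removed (JSON has no comment syntax), looking up a given environment's configuration is a small jq exercise. The following is a local sketch of that lookup, using the example file above:

```shell
# Recreate the example env_config.json, comment-free so it is valid JSON.
cat > env_config.json <<'EOF'
{
    "beta": {
        "role": "arn:aws:iam::111111111111:role/TerraformGitHubActionsRole",
        "region": "us-west-2",
        "state_bucket": "acme_corp_beta_terraform_states"
    }
}
EOF

# Look up each field for the environment we are deploying to.
environment="beta"
role_arn=$(jq -r --arg env "$environment" '.[$env].role' env_config.json)
region=$(jq -r --arg env "$environment" '.[$env].region' env_config.json)
state_bucket=$(jq -r --arg env "$environment" '.[$env].state_bucket' env_config.json)
```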

So, now jumping back to our flow chart: we need to determine the environment we are deploying to, retrieve the deployment ID (this will be necessary for notifying AWS Proton of the result), and then use that environment to retrieve the associated configuration from env_config.json.

This part is quite noisy, so rather than delve into anything specific, I'll just recommend that you go to this part of proton_run.yaml and read through it.
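As a rough local sketch of what that parsing step does: extract the deployment ID, resource ARN, and environment name from the metadata file with jq. Note that the field names below (deploymentId, resourceMetadata.arn, environmentName) are illustrative assumptions, not the documented deployment-metadata.json schema; check the actual file AWS Proton writes to your repository.

```shell
# Hypothetical sample of a deployment-metadata.json file; the
# field names are assumptions for illustration only.
cat > deployment-metadata.json <<'EOF'
{
  "deploymentId": "12345678-aaaa-bbbb-cccc-1234567890ab",
  "resourceMetadata": {
    "arn": "arn:aws:proton:us-west-2:111111111111:environment/beta",
    "environmentName": "beta"
  }
}
EOF

# Pull out the values the rest of the workflow needs.
deployment_id=$(jq -r '.deploymentId' deployment-metadata.json)
resource_arn=$(jq -r '.resourceMetadata.arn' deployment-metadata.json)
environment=$(jq -r '.resourceMetadata.environmentName' deployment-metadata.json)
```

The environment name extracted here is what keys the lookup into env_config.json.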

GitHub Actions Outputs

One specific thing I'll call your attention to is our GitHub Actions outputs. Outputs are a way for steps and jobs to pass information to each other. You set a step output with something like the following:

echo "::set-output name=OUTPUT_KEY::$OUTPUT_VALUE"
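Note that the `::set-output` workflow command has since been deprecated by GitHub; on current runners the equivalent is appending a `KEY=VALUE` line to the file named by the `GITHUB_OUTPUT` environment variable. A minimal sketch:

```shell
# On a real runner, GITHUB_OUTPUT is set by GitHub Actions;
# for this local sketch we point it at a scratch file.
GITHUB_OUTPUT="./github_output.txt"

OUTPUT_VALUE="some-value"
# Modern replacement for: echo "::set-output name=OUTPUT_KEY::$OUTPUT_VALUE"
echo "OUTPUT_KEY=$OUTPUT_VALUE" >> "$GITHUB_OUTPUT"
```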

Job outputs are declared at the top of the job definition, and you can make them depend on step outputs, e.g.

outputs:
  JOB_OUTPUT_KEY: ${{ steps.STEP-ID.outputs.OUTPUT_KEY }}

So if you look at the top of the first job, get-deployment-data, you'll see the data we are retrieving to perform the deployment.

outputs:
  role_arn: ${{ steps.get-data.outputs.role_arn }}
  environment: ${{ steps.get-data.outputs.environment }}
  resource_arn: ${{ steps.get-data.outputs.resource_arn }}
  working_directory: ${{ steps.get-data.outputs.working_directory }}
  deployment_id: ${{ steps.get-data.outputs.deployment_id }}
  target_region: ${{ steps.get-data.outputs.target_region }}
  proton_region: ${{ steps.get-data.outputs.proton_region }}
  state_bucket: ${{ steps.get-data.outputs.state_bucket }}

Step 2. Run Terraform

Now we're ready to run terraform!

Assume IAM Role

First we have to assume the role we retrieved from env_config.json. Thankfully, AWS provides a GitHub Action we can call that will assume the role for us. The step looks like this:

- name: Configure AWS Credentials
  id: assume_role
  uses: aws-actions/configure-aws-credentials@v1
  with:
    aws-region: ${{ needs.get-deployment-data.outputs.target_region }}
    role-to-assume: ${{ needs.get-deployment-data.outputs.role_arn }}
    role-session-name: TF-Github-Actions

One thing to point out is the usage of ${{ needs....}}. At the top of this job definition we have needs: get-deployment-data, which says that this job depends on the previous job. If you don't declare dependencies, GitHub Actions may execute your jobs in parallel, so if you need to access one job's outputs from another, make sure you set up those dependencies properly.
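For reference, the top of that job definition looks something like this (a minimal sketch; the job names mirror those used in this walkthrough):

```yaml
terraform:
  needs: get-deployment-data  # wait for, and gain access to, that job's outputs
  runs-on: ubuntu-latest
```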

Initialize Terraform

Now we have to get Terraform initialized; in this case, that means installing the Terraform CLI and then running terraform init. Thankfully, we are again able to reuse someone else's work: HashiCorp offers a GitHub Action for installing Terraform.

- name: Setup Terraform
  id: tf_setup
  uses: hashicorp/setup-terraform@v1
  with:
    terraform_version: 1.0.7
    terraform_wrapper: false

# Initialize a new or existing Terraform working directory by creating initial files, loading any remote state, downloading modules, etc.
- name: Terraform Init
  id: tf_init
  run: |
    terraform init \
      -backend-config="bucket=${{ needs.get-deployment-data.outputs.state_bucket }}" \
      -backend-config="key=${{ needs.get-deployment-data.outputs.working_directory }}terraform.tfstate" \
      -backend-config="region=${{ needs.get-deployment-data.outputs.target_region }}"

So what we're doing here: in the first step we set up the Terraform CLI in our GitHub Actions runner, and in the second we initialize Terraform to point at the region we want to deploy to and the S3 location of our state file.
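Note that passing -backend-config flags like this is Terraform's "partial configuration" mechanism: it assumes the Terraform code itself declares an (intentionally empty) S3 backend block, which the flags fill in at init time. A sketch of what that block looks like:

```hcl
terraform {
  # Partial configuration: bucket, key, and region are
  # supplied at `terraform init` time via -backend-config.
  backend "s3" {}
}
```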

Run!

And now we are ready to deploy our infrastructure. Again, for simplicity we've left out some steps that are included in the full workflow. We'll discuss some recommended best practices at the end.

- name: Terraform Apply
  id: tf_apply
  run: terraform apply -auto-approve

Step 3. Notify AWS Proton

Now for the final job: we need to notify AWS Proton of the result of provisioning. Essentially, we need to determine 1) whether the apply was a success, and 2) if it was, retrieve the outputs.

Again skipping over some sections: before the steps below, we need to once more assume the role and initialize Terraform. The reason we need to repeat this is that GitHub Actions does not share a runner across jobs, so anything a job needs to complete must be initialized within that job.

  - name: Notify Proton Success
    id: notify_success
    if: needs.terraform.result == 'success' && steps.tf_init.outcome == 'success'
    run: |
      # Get outputs as json
      outputs_json=$(terraform output -json)

      # The --outputs parameter expects a list like key=keyName,valueString=value key=key2Name,valueString=value2 etc.
      # So here we convert the output json into a shell array
      formatted_outputs=( $(echo $outputs_json | jq -r "to_entries|map(\"key=\(.key),valueString=\(.value.value|tostring)\")|.[]") )

      # Notify proton
      aws proton notify-resource-deployment-status-change --region ${{ needs.get-deployment-data.outputs.proton_region }} --resource-arn ${{ needs.get-deployment-data.outputs.resource_arn }} --status SUCCEEDED --deployment-id ${{ needs.get-deployment-data.outputs.deployment_id }} --outputs ${formatted_outputs[*]}

      echo "Notify success!"

  - name: Notify Proton Failure
    if: needs.terraform.result == 'failure' || needs.terraform.result == 'cancelled' || steps.tf_init.outcome != 'success'
    run: |
      aws proton notify-resource-deployment-status-change --region ${{ needs.get-deployment-data.outputs.proton_region }} --resource-arn ${{ needs.get-deployment-data.outputs.resource_arn }} --status FAILED --deployment-id ${{ needs.get-deployment-data.outputs.deployment_id }}
      echo "Notify failure!"
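The jq conversion used in the success step can be exercised locally with sample data. Here is a sketch, where the output names (vpc_id, subnet_count) are made up for illustration:

```shell
# Sample of what `terraform output -json` might emit:
outputs_json='{"vpc_id":{"value":"vpc-123","type":"string"},"subnet_count":{"value":2,"type":"number"}}'

# Same jq conversion as in the workflow: one key=...,valueString=...
# entry per Terraform output, split into a shell array.
formatted_outputs=( $(echo "$outputs_json" | jq -r 'to_entries|map("key=\(.key),valueString=\(.value.value|tostring)")|.[]') )
```

Note that splitting on whitespace like this assumes no output value itself contains whitespace.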

Recommended Best Practices

Validate Plan on Pull Request

Above, we only run the workflow when a push is completed. But it's good to validate the Terraform before we get to that point. To do that, we can update our overall workflow configuration to also run on pull requests, like the following.

on:
  pull_request:
    types:
      - opened
      - reopened
    paths:
      - '**/.proton/deployment-metadata.json'
  push:
    branches:
      - main
    paths:
      - '**/.proton/deployment-metadata.json'

Then we add some logic to our Terraform workflow that validates the code and generates the plan before executing the apply.

- name: Terraform Format
  id: tf_fmt
  run: terraform fmt -diff -check

- name: Terraform Plan
  id: tf_plan
  run: terraform plan -var="aws_region=${{ needs.get-deployment-data.outputs.target_region }}"

- name: Terraform Apply
  id: tf_apply
  if: github.ref == 'refs/heads/main' && github.event_name == 'push'
  run: terraform apply -auto-approve -var="aws_region=${{ needs.get-deployment-data.outputs.target_region }}"

Also notice that in the apply we've added an if, which executes that step only if the condition is met. In this case, we are saying to run the apply only if the event was a push to the main branch.

Handling Deletes

When a resource that uses Self-managed provisioning is deleted within AWS Proton, a final pull request is submitted. In that pull request some contents will be updated, but most importantly, a flag in deployment-metadata.json called isResourceDeleted will be set to true. You can build automation, if you like, to run a terraform destroy in response. We have included an example of how to do this in proton_run.yaml as well.
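A minimal sketch of how such automation might branch on that flag (the surrounding schema is simplified here for illustration):

```shell
# Hypothetical minimal metadata file for a deletion event:
cat > deployment-metadata.json <<'EOF'
{ "isResourceDeleted": true }
EOF

# Choose destroy vs. apply based on the isResourceDeleted flag
# (defaulting to false if the flag is absent).
if [ "$(jq -r '.isResourceDeleted // false' deployment-metadata.json)" = "true" ]; then
  tf_command="destroy"
else
  tf_command="apply"
fi
# The workflow would then run: terraform $tf_command -auto-approve
```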