
Azure Pipelines – Deploy AKS with Bicep

In this post we are going to look at deploying an AKS cluster using Azure Pipelines YAML and Bicep.

If you are new to AKS then take a look at the video series AKS Zero to Hero from Richard Hooper (aka PixelRobots) and Gregor Suttie as well as the learning path from Brendan Burns.

If you are new to Pipelines and Bicep then check out this Microsoft Learn course for an introduction.

So, on to creating the AKS cluster using Bicep.

The resources we are going to deploy are:

  • Virtual Network
  • Log Analytics Workspace
  • AKS Cluster
  • Container Registry

We are also going to add Azure AD groups to lock down cluster administration and connect the container registry so that AKS can pull containers from the registry.

Bicep

So let's start by creating a module for the virtual network. We need a name for the network and subnet, as well as some address prefixes and tags.

@description('The virtual network name')
param vnetName string
@description('The name of the subnet')
param subnetName string
@description('The virtual network address prefixes')
param vnetAddressPrefixes array
@description('The subnet address prefix')
param subnetAddressPrefix string
@description('Tags for the resources')
param tags object

resource vnet 'Microsoft.Network/virtualNetworks@2019-11-01' = {
  name: vnetName
  location: resourceGroup().location
  properties: {
    addressSpace: {
      addressPrefixes: vnetAddressPrefixes
    }
    subnets: [
      {
        name: subnetName
        properties: {
          addressPrefix: subnetAddressPrefix
        }
      }      
    ]
  }
  tags: tags
}

output subnetId string = '${vnet.id}/subnets/${subnetName}'

The next module is the AKS cluster itself. There are a lot of settings that you might want to control, but I've added defaults for some of them. This module also includes the creation of a Log Analytics workspace and renames the AKS node resource group, which is normally prefixed with MC_, to something in line with the naming convention used.

@description('The environment prefix of the Managed Cluster resource e.g. dev, prod, etc.')
param prefix string
@description('The name of the Managed Cluster resource')
param clusterName string
@description('Resource location')
param location string = resourceGroup().location
@description('Kubernetes version to use')
param kubernetesVersion string = '1.20.7'
@description('The VM Size to use for each node')
param nodeVmSize string
@minValue(1)
@maxValue(50)
@description('The number of nodes for the cluster.')
param nodeCount int
@maxValue(100)
@description('Max number of nodes to scale up to')
param maxNodeCount int
@description('The node pool name')
param nodePoolName string = 'linux1'
@minValue(0)
@maxValue(1023)
@description('Disk size (in GB) to provision for each of the agent pool nodes. This value ranges from 0 to 1023. Specifying 0 will apply the default disk size for that agentVMSize')
param osDiskSizeGB int
param nodeAdminUsername string
@description('Availability zones to use for the cluster nodes')
param availabilityZones array = [
  '1'
  '2'
  '3'
]
@description('Allow the cluster to auto scale to the max node count')
param enableAutoScaling bool = true
@description('SSH RSA public key for all the nodes')
@secure()
param sshPublicKey string
@description('Tags for the resources')
param tags object
@description('Log Analytics Workspace Tier')
@allowed([
  'Free'
  'Standalone'
  'PerNode'
  'PerGB2018'
  'Premium'
])
param workspaceTier string
@allowed([
  'azure'  
])
@description('Network plugin used for building Kubernetes network')
param networkPlugin string = 'azure'
@description('Subnet id to use for the cluster')
param subnetId string
@description('Cluster services IP range')
param serviceCidr string = '10.0.0.0/16'
@description('DNS Service IP address')
param dnsServiceIP string = '10.0.0.10'
@description('Docker Bridge IP range')
param dockerBridgeCidr string = '172.17.0.1/16'
@description('An array of AAD group object ids for administration')
param adminGroupObjectIDs array = []

resource logAnalyticsWorkspace 'Microsoft.OperationalInsights/workspaces@2020-10-01' = {
  name: '${prefix}-oms-${clusterName}-${resourceGroup().location}'
  location: location
  properties: {
    sku: {
      name: workspaceTier
    }
  }
  tags: tags
}

resource aksCluster 'Microsoft.ContainerService/managedClusters@2021-03-01' = {
  name: '${prefix}-aks-${clusterName}-${location}'
  location: location
  identity: {
    type: 'SystemAssigned'
  }
  tags: tags  
  properties: {
    nodeResourceGroup: 'rg-${prefix}-aks-nodes-${clusterName}-${location}'
    kubernetesVersion: kubernetesVersion
    dnsPrefix: '${clusterName}-dns'
    enableRBAC: true    
    agentPoolProfiles: [
      {        
        name: nodePoolName
        osDiskSizeGB: osDiskSizeGB
        osDiskType: 'Ephemeral'        
        count: nodeCount
        enableAutoScaling: enableAutoScaling
        minCount: nodeCount
        maxCount: maxNodeCount
        vmSize: nodeVmSize        
        osType: 'Linux'
        type: 'VirtualMachineScaleSets'
        mode: 'System'
        availabilityZones: availabilityZones
        enableEncryptionAtHost: true
        vnetSubnetID: subnetId
      }
    ]
    networkProfile: {      
      loadBalancerSku: 'standard'
      networkPlugin: networkPlugin
      serviceCidr: serviceCidr
      dnsServiceIP: dnsServiceIP
      dockerBridgeCidr: dockerBridgeCidr
    }
    aadProfile: !empty(adminGroupObjectIDs) ? {
      managed: true
      adminGroupObjectIDs: adminGroupObjectIDs
    } : null
    addonProfiles: {
      azurepolicy: {
        enabled: false
      }
      omsAgent: {
        enabled: true
        config: {
          logAnalyticsWorkspaceResourceID: logAnalyticsWorkspace.id
        }
      }   
    }
    linuxProfile: {      
      adminUsername: nodeAdminUsername
      ssh: {
        publicKeys: [
          {
            keyData: sshPublicKey
          }
        ]
      }      
    }
  }

  dependsOn: [
    logAnalyticsWorkspace    
  ]
}

output controlPlaneFQDN string = reference('${prefix}-aks-${clusterName}-${location}').fqdn
output clusterPrincipalID string = aksCluster.properties.identityProfile.kubeletidentity.objectId

The final module builds an Azure Container Registry and assigns the AcrPull role to the cluster.

@description('The name of the container registry')
param registryName string
@description('The principal ID of the AKS cluster')
param aksPrincipalId string
@description('Tags for the resources')
param tags object

@description('The role definition id of the AcrPull role')
@allowed([
  '7f951dda-4ed3-4680-a7ca-43fe172d538d' // AcrPull
])
param roleAcrPull string = '7f951dda-4ed3-4680-a7ca-43fe172d538d'

resource containerRegistry 'Microsoft.ContainerRegistry/registries@2019-05-01' = {
  name: registryName
  location: resourceGroup().location
  sku: {
    name: 'Standard'
  }
  properties: {
    adminUserEnabled: true
  }
  tags: tags
}

resource assignAcrPullToAks 'Microsoft.Authorization/roleAssignments@2020-04-01-preview' = {
  name: guid(resourceGroup().id, registryName, aksPrincipalId, 'AssignAcrPullToAks')
  scope: containerRegistry
  properties: {
    description: 'Assign AcrPull role to AKS'
    principalId: aksPrincipalId
    principalType: 'ServicePrincipal'
    roleDefinitionId: '/subscriptions/${subscription().subscriptionId}/providers/Microsoft.Authorization/roleDefinitions/${roleAcrPull}'
  }
}

output name string = containerRegistry.name

So now we have all the modules, let's set up the main Bicep file to put it all together.

@description('Naming prefix for the resources e.g. dev, test, prod')
param prefix string
@description('The public SSH key')
@secure()
param publicsshKey string
@description('The name of the cluster')
param clusterName string
@description('The location of the resources')
param location string = resourceGroup().location
@description('The admin username for the nodes in the cluster')
param nodeAdminUsername string
@description('An array of AAD group object ids to give administrative access.')
param adminGroupObjectIDs array = []
@description('The VM size to use in the cluster')
param nodeVmSize string
@minValue(1)
@maxValue(50)
@description('The number of nodes for the cluster.')
param nodeCount int = 1
@maxValue(100)
@description('Max number of nodes to scale up to')
param maxNodeCount int = 3
@description('Disk size (in GB) to provision for each of the agent pool nodes. This value ranges from 0 to 1023. Specifying 0 will apply the default disk size for that agentVMSize')
param osDiskSizeGB int
@description('Log Analytics Workspace Tier')
@allowed([
  'Free'
  'Standalone'
  'PerNode'
  'PerGB2018'
  'Premium'
])
param workspaceTier string
@description('The virtual network address prefixes')
param vnetAddressPrefixes array
@description('The subnet address prefix')
param subnetAddressPrefix string
@description('Tags for the resources')
param tags object

module vnet 'vnet.bicep' = {
  name: 'vnetDeploy'
  params: {
    vnetName: '${prefix}-vnet-${clusterName}-${location}'
    subnetName: '${prefix}-snet-${clusterName}-${location}'
    vnetAddressPrefixes: vnetAddressPrefixes
    subnetAddressPrefix: subnetAddressPrefix
    tags: tags
  }
}

module aks 'aks.bicep' = {
  name: 'aksDeploy'
  params: {
    prefix: prefix
    clusterName: clusterName    
    subnetId: vnet.outputs.subnetId
    nodeAdminUsername: nodeAdminUsername
    adminGroupObjectIDs: adminGroupObjectIDs
    nodeVmSize: nodeVmSize
    nodeCount: nodeCount
    maxNodeCount: maxNodeCount
    osDiskSizeGB: osDiskSizeGB
    sshPublicKey: publicsshKey
    workspaceTier: workspaceTier
    tags: tags
  }

  dependsOn: [
    vnet
  ]
}

module registry 'registry.bicep' = {
  name: 'registryDeploy'
  params: {
    registryName: 'acr${clusterName}'
    aksPrincipalId: aks.outputs.clusterPrincipalID
    tags: tags
  }

  dependsOn: [
    aks
  ]
}

As with ARM templates, you can use a JSON file to configure the parameters in Bicep, so I've added one for this deployment.

{
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "tags": {
            "value": {
                "project": "mjdemo",
                "resource": "AKS"
            }
        },
        "prefix": {
            "value": "dev"
        },
        "clusterName": {
            "value": "mjdemo"
        },
        "nodeVmSize": {
            "value": "Standard_D2s_V3"
        },
        "osDiskSizeGB": {
            "value": 50
        },
        "nodeCount": {
            "value": 1
        },
        "maxNodeCount": {
            "value": 3
        },
        "nodeAdminUsername": {
            "value": "aksAdminUser"
        },
        "adminGroupObjectIDs": {
            "value": []
        },
        "publicsshKey": {
            "value": ""
        },
        "workspaceTier": {
            "value": "PerGB2018"
        },
        "vnetAddressPrefixes": {
            "value": []
        },
        "subnetAddressPrefix": {
            "value": ""
        }
    }
}
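If you want to try the deployment locally before building the pipeline, the same template and parameters file can be used with the Azure CLI once the empty values (publicsshKey, vnetAddressPrefixes and subnetAddressPrefix) are filled in. A minimal sketch, using the dev/mjdemo values above, uksouth as an example location, and the rg-{prefix}-{clusterName}-{location} naming convention used later in the pipeline:

az group create --name rg-dev-mjdemo-uksouth --location uksouth
az deployment group create --name mjdemo-deploy --resource-group rg-dev-mjdemo-uksouth --template-file deploy.bicep --parameters deploy.parameters.json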

Pipeline

Now that we have all the Bicep files and a parameters file, we can create an Azure Pipeline. First, though, we need an SSH key to upload to Azure Pipelines. One way to generate an SSH key is to use the ssh-keygen command in Bash (I used Ubuntu in WSL), e.g.

ssh-keygen -q -t rsa -b 4096 -N '' -f aksKey

This will generate a private and public key pair. You can then upload the public key file, e.g. aksKey.pub, to Secure Files in Azure DevOps (Pipelines -> Library -> Secure files).

We are going to add Azure AD groups in this deployment and will need to assign the 'Azure Kubernetes Service Cluster User Role' role to each group; the Microsoft Docs detail how to do this.
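For reference, the role assignment can also be scripted with the Azure CLI; a sketch with placeholder values (the object ID of each group and the resource ID of the cluster) looks like this:

az role assignment create --assignee <group object id> --role "Azure Kubernetes Service Cluster User Role" --scope <AKS cluster resource id>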

Now that we have the SSH key uploaded, we can configure the parameters we want to set for our AKS cluster and network.

trigger: none
pr: none

pool:
  vmImage: ubuntu-latest

parameters:
  - name: azureSubscription
    type: string
    default: 'Sandbox'
  - name: location
    displayName: 'Resource Location'
    type: string
    default: 'uksouth'
  - name: prefix
    displayName: 'Environment Prefix'
    type: string
    default: 'prod'
  - name: clusterName
    displayName: 'Name of the AKS Cluster'
    type: string
    default: 'demo'
  - name: nodeVmSize
    displayName: 'VM Size for the Nodes'
    type: string
    default: 'Standard_D2s_V3'
    values:
      - 'Standard_D2s_V3'
      - 'Standard_DS2_v2'
      - 'Standard_D4s_V3'
      - 'Standard_DS3_v2'
      - 'Standard_DS4_v2'
      - 'Standard_D8s_v3'
  - name: osDiskSizeGB
    displayName: 'Size of OS disk (0 means use vm size)'
    type: number
    default: 50
  - name: nodeCount
    displayName: 'The number of nodes'
    type: number
    default: 3
  - name: maxNodeCount
    displayName: 'Max node to scale out to'
    type: number
    default: 10
  - name: workspaceTier
    displayName: Log Analytics Workspace Tier
    type: string
    default: 'PerGB2018'
    values:
      - 'Free'
      - 'Standalone'
      - 'PerNode'
      - 'PerGB2018'
      - 'Premium'
  - name: tags
    displayName: 'Tags'
    type: object
    default:
     Environment: "prod"
     Resource: "AKS"
     Project: "Demo"
  - name: nodeAdminUsername
    displayName: 'Admin username for the nodes'
    type: string
    default: 'adminUserName'
  - name: vnetAddressPrefixes
    displayName: 'Virtual Network Address Prefixes'
    type: object
    default: 
      - '10.240.0.0/16'
  - name: subnetAddressPrefix
    displayName: 'Subnet Address Prefix'
    type: string
    default: '10.240.0.0/20'
  - name: adGroupNames
    type: object
    default: 
      - 'demo-group'

variables:
  resourceGroupName: 'rg-${{ parameters.prefix }}-${{ parameters.clusterName }}-${{ parameters.location }}'

With the parameters set, the next part is to build up the steps, starting with downloading the SSH key from Secure Files using the DownloadSecureFile task.

steps:
- task: DownloadSecureFile@1
  displayName: 'Download Public SSH Key'
  name: SSHfile
  inputs:
    secureFile: 'aksKey.pub'
- bash: |
    value=`cat $(SSHfile.secureFilePath)`
    echo '##vso[task.setvariable variable=publicsshKey;issecret=true]'$value
  displayName: Obtain SSH key value

Next we can get the object IDs for the groups

- task: AzureCLI@2
  displayName: 'Get AD Group Object Ids'
  inputs:
    azureSubscription: ${{ parameters.azureSubscription }}
    scriptType: pscore
    scriptLocation: inlineScript
    inlineScript: |    
      $objectIds = '${{ join(',',parameters.adGroupNames) }}'.Split(',') | ForEach { 
        "$(az ad group list --query "[?displayName == '$_'].{objectId:objectId}" -o tsv)" 
      }

      $output = ConvertTo-Json -Compress @($objectIds)
      Write-Host '##vso[task.setvariable variable=groupIds]'$output

This next section takes those parameters and turns them into pipeline variables, which are then used to substitute the values in the parameters JSON file.

- template: objectparameters.yml
  parameters:
    tags: ${{ parameters.tags }}
    vnetAddressPrefixes: ${{ parameters.vnetAddressPrefixes }}
- template: parameters.yml
  parameters:
    prefix: ${{ parameters.prefix }}
    clusterName: ${{ parameters.clusterName }}
    nodeVmSize: ${{ parameters.nodeVmSize }}
    osDiskSizeGB: ${{ parameters.osDiskSizeGB }}
    nodeCount: ${{ parameters.nodeCount }}
    maxNodeCount: ${{ parameters.maxNodeCount }}
    nodeAdminUsername: ${{ parameters.nodeAdminUsername }}
    publicsshKey: $(publicsshKey)
    workspaceTier: ${{ parameters.workspaceTier }}    
    subnetAddressPrefix: ${{ parameters.subnetAddressPrefix }}
    adminGroupObjectIDs: $(groupIds)
- task: FileTransform@2
  displayName: "Transform Parameters"
  inputs:
    folderPath: '$(System.DefaultWorkingDirectory)'
    xmlTransformationRules: ''
    jsonTargetFiles: 'deploy.parameters.json'
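The FileTransform task performs JSON variable substitution: any pipeline variable whose name matches a dot-separated path in the target file replaces the value at that path. For example, the parameters.nodeCount.value variable created by the template above maps to this fragment of deploy.parameters.json (shown here after the transform, with the pipeline default of 3):

{
    "parameters": {
        "nodeCount": {
            "value": 3
        }
    }
}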

If you need to debug the transform, you can add another step to output the file contents. I find this a useful technique to make sure the transform worked as expected.

- bash: |
    cat deploy.parameters.json
  displayName: "Debug show parameters file"

If we put all that together then the final pipeline looks like this:

trigger: none
pr: none

pool:
  vmImage: ubuntu-latest

parameters:
  - name: azureSubscription
    type: string
    default: 'Sandbox'
  - name: location
    displayName: 'Resource Location'
    type: string
    default: 'uksouth'
  - name: prefix
    displayName: 'Environment Prefix'
    type: string
    default: 'prod'
  - name: clusterName
    displayName: 'Name of the AKS Cluster'
    type: string
    default: 'demo'
  - name: nodeVmSize
    displayName: 'VM Size for the Nodes'
    type: string
    default: 'Standard_D2s_V3'
    values:
      - 'Standard_D2s_V3'
      - 'Standard_DS2_v2'
      - 'Standard_D4s_V3'
      - 'Standard_DS3_v2'
      - 'Standard_DS4_v2'
      - 'Standard_D8s_v3'
  - name: osDiskSizeGB
    displayName: 'Size of OS disk (0 means use vm cache size)'
    type: number
    default: 50
  - name: nodeCount
    displayName: 'The number of nodes'
    type: number
    default: 3
  - name: maxNodeCount
    displayName: 'Max node to scale out to'
    type: number
    default: 10
  - name: workspaceTier
    displayName: Log Analytics Workspace Tier
    type: string
    default: 'PerGB2018'
    values:
      - 'Free'
      - 'Standalone'
      - 'PerNode'
      - 'PerGB2018'
      - 'Premium'
  - name: tags
    displayName: 'Tags'
    type: object
    default:
     Environment: "prod"
     Resource: "AKS"
     Project: "Demo"
  - name: nodeAdminUsername
    displayName: 'Admin username for the nodes'
    type: string
    default: 'adminUserName'
  - name: vnetAddressPrefixes
    displayName: 'Virtual Network Address Prefixes'
    type: object
    default: 
      - '10.240.0.0/16'
  - name: subnetAddressPrefix
    displayName: 'Subnet Address Prefix'
    type: string
    default: '10.240.0.0/20'
  - name: adGroupNames
    type: object
    default: 
      - 'demo-group'

variables:
  resourceGroupName: 'rg-${{ parameters.prefix }}-${{ parameters.clusterName }}-${{ parameters.location }}'

steps:
- task: DownloadSecureFile@1
  displayName: 'Download Public SSH Key'
  name: SSHfile
  inputs:
    secureFile: 'aksKey.pub'
- bash: |
    value=`cat $(SSHfile.secureFilePath)`
    echo '##vso[task.setvariable variable=publicsshKey;issecret=true]'$value
  displayName: Obtain SSH key value  
- task: AzureCLI@2
  displayName: 'Get AD Group Object Ids'
  inputs:
    azureSubscription: ${{ parameters.azureSubscription }}
    scriptType: pscore
    scriptLocation: inlineScript
    inlineScript: |    
      $objectIds = '${{ join(',',parameters.adGroupNames) }}'.Split(',') | ForEach { 
        "$(az ad group list --query "[?displayName == '$_'].{objectId:objectId}" -o tsv)" 
      }

      $output = ConvertTo-Json -Compress @($objectIds)
      Write-Host '##vso[task.setvariable variable=groupIds]'$output
- template: objectparameters.yml
  parameters:
    tags: ${{ parameters.tags }}
    vnetAddressPrefixes: ${{ parameters.vnetAddressPrefixes }}
- template: parameters.yml
  parameters:
    prefix: ${{ parameters.prefix }}
    clusterName: ${{ parameters.clusterName }}
    nodeVmSize: ${{ parameters.nodeVmSize }}
    osDiskSizeGB: ${{ parameters.osDiskSizeGB }}
    nodeCount: ${{ parameters.nodeCount }}
    maxNodeCount: ${{ parameters.maxNodeCount }}
    nodeAdminUsername: ${{ parameters.nodeAdminUsername }}
    publicsshKey: $(publicsshKey)
    workspaceTier: ${{ parameters.workspaceTier }}    
    subnetAddressPrefix: ${{ parameters.subnetAddressPrefix }}
    adminGroupObjectIDs: $(groupIds)
- task: FileTransform@2
  displayName: "Transform Parameters"
  inputs:
    folderPath: '$(System.DefaultWorkingDirectory)'
    xmlTransformationRules: ''
    jsonTargetFiles: 'deploy.parameters.json'
- task: AzureCLI@2
  displayName: 'Deploy AKS Cluster'
  inputs:
    azureSubscription: ${{ parameters.azureSubscription }}
    scriptType: bash
    scriptLocation: inlineScript
    inlineScript: |
      az group create --name "$(resourceGroupName)" --location ${{ parameters.location }} 
      az deployment group create --name "${{ parameters.clusterName }}-deploy" --resource-group "$(resourceGroupName)" --template-file deploy.bicep --parameters deploy.parameters.json

The template file objectparameters.yml looks like this:

parameters: 
  - name: tags
    type: object
  - name: vnetAddressPrefixes
    type: object

steps:
- ${{ each item in parameters }}: 
  - bash: |
      value='${{ convertToJson(item.value) }}'
      echo '##vso[task.setvariable variable=parameters.${{ item.key }}.value]'$value
    displayName: "Create Variable ${{ item.key }}"

And the template file parameters.yml looks like this:

parameters: 
  - name: prefix
    type: string
  - name: clusterName
    type: string
  - name: nodeVmSize
    type: string
  - name: osDiskSizeGB
    type: number
  - name: nodeCount
    type: number
  - name: maxNodeCount
    type: number
  - name: nodeAdminUsername
    type: string
  - name: publicsshKey
    type: string
  - name: workspaceTier
    type: string
  - name: subnetAddressPrefix
    type: string
  - name: adminGroupObjectIDs
    type: string

steps:
- ${{ each item in parameters }}:  
    - bash: |
        echo '##vso[task.setvariable variable=parameters.${{ item.key }}.value]${{ item.value }}'
      displayName: "Create Variable ${{ item.key }}"
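Once the pipeline has run successfully, a quick way to check the cluster from your own machine (assuming your account is in one of the AAD groups that was given the 'Azure Kubernetes Service Cluster User Role') is to pull the credentials and list the nodes. With the default parameter values, the names follow the conventions used in the Bicep files:

az aks get-credentials --resource-group rg-prod-demo-uksouth --name prod-aks-demo-uksouth
kubectl get nodes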

Now that we have an AKS cluster set up, we might want to deploy some applications to it. CoderDave has a great video tutorial on doing this with Azure Pipelines.

All the files shown above can be found on my GitHub


IaC with Containers

In a previous article about IaC co-located with your application I discussed keeping application-specific infrastructure with your code, and in following articles (IaC ARM templates, IaC Ansible and IaC Terraform) I discussed deploying IaC using various methods with Azure Pipelines. What I haven't discussed is the development environment I use to test out my infrastructure code, and that is what this article is about.

Lately I have been building various pieces of infrastructure using ARM templates, Ansible and Terraform, separately and sometimes together, to deploy to Azure, and I have lost track of what was installed in order to create and run my infrastructure code. For creating templates, playbooks, etc. I use Visual Studio Code; there are extensions for ARM templates, Ansible and Terraform which provide great help in creating infrastructure code.

For Ansible and Terraform I was using the WSL (Windows Subsystem for Linux) and running a script to install them into that environment and then using PowerShell Core and the Azure CLI for ARM templates.

I thought this was a good setup that provided me with everything I needed, but what if I want to share this environment with my team? I could simply put together a list of commands to install all of the tools I had and maybe create a page in our wiki. That would be a start, but then what about tool versions, different operating systems, and the fact that it can be time consuming to get it all set up?

To help solve this I decided that creating a container using Docker would provide a way of consistently building an IaC environment: one I can share with others and use in a CI/CD pipeline. The choice of editor is up to the developer, so I am not including Visual Studio Code, but I definitely recommend it. (If you are interested, Microsoft provide a guide to installing VS Code into containers: https://code.visualstudio.com/docs/remote/containers.)

I can imagine that most people use only one of these tools for their infrastructure code (PowerShell or the Azure CLI, and either Ansible or Terraform) even though they can work well together. RedHat and HashiCorp presented the concept of using Ansible and Terraform together in this video; it is very interesting to see how the strengths of each can be combined rather than treating it as a choice of one over the other.

So, on to creating some Dockerfiles. Microsoft has official images on Docker Hub for PowerShell and the Azure CLI, and HashiCorp has an official image for Terraform. I still prefer to create my own image for Terraform, though, so I can add other tools to it.
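For reference, those official images can be pulled directly; for example (the Microsoft images are served from mcr.microsoft.com, and the tags here are just illustrative):

docker pull mcr.microsoft.com/powershell
docker pull mcr.microsoft.com/azure-cli
docker pull hashicorp/terraform:0.12.26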

Note: All dockerfiles shown here are available from my GitHub repository.

Create Image

Terraform dockerfile

ARG IMAGE_VERSION=latest
ARG IMAGE_REPO=alpine

FROM ${IMAGE_REPO}:${IMAGE_VERSION} AS installer-env
ARG TERRAFORM_VERSION=0.12.26
ARG TERRAFORM_PACKAGE=terraform_${TERRAFORM_VERSION}_linux_amd64.zip
ARG TERRAFORM_URL=https://releases.hashicorp.com/terraform/${TERRAFORM_VERSION}/${TERRAFORM_PACKAGE}

# Install packages to get terraform
RUN apk upgrade --update && \
    apk add --no-cache wget
RUN wget --quiet ${TERRAFORM_URL} && \
    unzip terraform_${TERRAFORM_VERSION}_linux_amd64.zip && \
    mv terraform /usr/bin

# New stage to keep the downloaded zip layers out of the final image
FROM ${IMAGE_REPO}:${IMAGE_VERSION}

# Copy only the files we need from the previous stage
COPY --from=installer-env ["/usr/bin/terraform", "/usr/bin/terraform"]

# Install additional packages
RUN apk upgrade --update && \
    apk add --no-cache bash 

CMD tail -f /dev/null

Ansible dockerfile

ARG IMAGE_VERSION=latest
ARG IMAGE_REPO=alpine

FROM ${IMAGE_REPO}:${IMAGE_VERSION}

ENV ANSIBLE_VERSION=2.9.9
ENV ALPINE_ANSIBLE_VERSION=2.9.9-r0
ENV WHEEL_VERSION=0.30.0
ENV APK_ADD="bash py3-pip ansible=${ALPINE_ANSIBLE_VERSION}"

# Install core libs
RUN apk upgrade --update && \
    apk add --no-cache ${APK_ADD}

# Install Ansible 
RUN python3 -m pip install --upgrade pip && \
    pip install wheel==${WHEEL_VERSION} && \
    pip install ansible[azure]==${ANSIBLE_VERSION} mitogen && \
    pip install netaddr xmltodict openshift

CMD tail -f /dev/null

As I mentioned at the beginning, I've been using a mix of the tools and could do with an image that has several of them installed. I ended up with the following Dockerfile.

ARG IMAGE_VERSION=latest
ARG IMAGE_REPO=alpine

FROM ${IMAGE_REPO}:${IMAGE_VERSION} AS installer-env

ARG TERRAFORM_VERSION=0.12.26
ARG TERRAFORM_PACKAGE=terraform_${TERRAFORM_VERSION}_linux_amd64.zip
ARG TERRAFORM_URL=https://releases.hashicorp.com/terraform/${TERRAFORM_VERSION}/${TERRAFORM_PACKAGE}

ARG POWERSHELL_VERSION=7.0.1
ARG POWERSHELL_PACKAGE=powershell-${POWERSHELL_VERSION}-linux-alpine-x64.tar.gz
ARG POWERSHELL_DOWNLOAD_PACKAGE=powershell.tar.gz
ARG POWERSHELL_URL=https://github.com/PowerShell/PowerShell/releases/download/v${POWERSHELL_VERSION}/${POWERSHELL_PACKAGE}

# Install packages
RUN apk upgrade --update && \
    apk add --no-cache bash wget curl python3 libffi openssl

# Get Terraform
RUN wget --quiet ${TERRAFORM_URL} && \
    unzip terraform_${TERRAFORM_VERSION}_linux_amd64.zip && \
    mv terraform /usr/bin

# Get PowerShell Core
RUN curl -L ${POWERSHELL_URL} -o /tmp/${POWERSHELL_DOWNLOAD_PACKAGE} && \
    mkdir -p /opt/microsoft/powershell/7 && \
    tar zxf /tmp/${POWERSHELL_DOWNLOAD_PACKAGE} -C /opt/microsoft/powershell/7 && \
    chmod +x /opt/microsoft/powershell/7/pwsh

# New stage to remove tar.gz layers from the final image
FROM ${IMAGE_REPO}:${IMAGE_VERSION}

# Copy only the files we need from the previous stage
COPY --from=installer-env ["/usr/bin/terraform", "/usr/bin/terraform"]
COPY --from=installer-env ["/opt/microsoft/powershell/7", "/opt/microsoft/powershell/7"]
RUN ln -s /opt/microsoft/powershell/7/pwsh /usr/bin/pwsh

ENV ANSIBLE_VERSION=2.9.9
ENV ALPINE_ANSIBLE_VERSION=2.9.9-r0
ENV WHEEL_VERSION=0.30.0
ENV APK_ADD="bash py3-pip ansible=${ALPINE_ANSIBLE_VERSION}"
ENV APK_POWERSHELL="ca-certificates less ncurses-terminfo-base krb5-libs libgcc libintl libssl1.1 libstdc++ tzdata userspace-rcu zlib icu-libs"

# Install core packages
RUN apk upgrade --update && \
    apk add --no-cache ${APK_ADD} ${APK_POWERSHELL} && \
    apk -X https://dl-cdn.alpinelinux.org/alpine/edge/main add --no-cache lttng-ust

# Install Ansible and other packages
RUN python3 -m pip install --no-cache-dir --upgrade pip && \
    pip install --no-cache-dir wheel==${WHEEL_VERSION} && \
    pip install --no-cache-dir ansible[azure]==${ANSIBLE_VERSION} mitogen && \
    pip install --no-cache-dir netaddr xmltodict openshift

CMD tail -f /dev/null

Now that I have the Dockerfiles, I need to create the images; docker build is the command I need.

docker build -f <dockerfile to use> -t <image name> <build context>
e.g.
docker build -f terraform.dockerfile -t iac_terraform_image .

Run Image

Once I have the image, I need to test it out. As I installed bash as part of my image I can run an interactive command that will provide me with the bash shell.

e.g.
docker run -it --entrypoint=/bin/bash iac_terraform_image

From the shell I can then run commands like terraform --version

So now I have an image running in a container, but what about the infrastructure code I want to run? I could update the image to include git and pull my source code into the container, or I could mount a volume on the container pointing to my source code on the host machine. I will use a volume for this example.

docker run -it --entrypoint=/bin/bash --volume <my source code>:/<name of the mount> <image name>
e.g.
docker run -it --entrypoint=/bin/bash --volume c:\users\<your profile>\Source:/mycode iac_terraform_image

As I want to deploy infrastructure to Azure I need to define some credentials; for this example I will add them as environment variables in a separate local file. Note: this file should not be added to source control.

The file contents look like this for Terraform:

ARM_SUBSCRIPTION_ID=<subscription id>
ARM_TENANT_ID=<tenant id>
ARM_CLIENT_ID=<app id>
ARM_CLIENT_SECRET=<client secret>

The file contents look like this for Ansible:

AZURE_SUBSCRIPTION_ID=<subscription id>
AZURE_TENANT=<tenant id>
AZURE_CLIENT_ID=<app id>
AZURE_SECRET=<client secret>

The image that includes both Terraform and Ansible needs both sets of values in the same file.
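For example, a combined file for that image would simply contain all eight variables:

ARM_SUBSCRIPTION_ID=<subscription id>
ARM_TENANT_ID=<tenant id>
ARM_CLIENT_ID=<app id>
ARM_CLIENT_SECRET=<client secret>
AZURE_SUBSCRIPTION_ID=<subscription id>
AZURE_TENANT=<tenant id>
AZURE_CLIENT_ID=<app id>
AZURE_SECRET=<client secret>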

I’ve called the file docker_env.txt and can now add that to the docker run command to include those environment variables.

e.g.
docker run -it --entrypoint=/bin/bash --volume c:\users\<your profile>\Source:/mycode --env-file docker_env.txt iac_terraform_image

Share Image

I now have an image, have attached my source code, added my environment variables, and can run commands. The next step is to share this image with others: I can upload it to Docker Hub (public) or use another container registry such as Azure Container Registry (private). I will use Azure Container Registry as it's private.

I need to log in to the registry, so I'll use the Azure CLI to do this.

az acr login --name <your container registry>

Next I need to tag the image I want to push with the Azure Container Registry name, using docker tag.

e.g.
docker tag iac_terraform_image yourcontainerregistry.azurecr.io/iac_terraform_image

And now I can push the image to the Azure Container Registry using docker push.

e.g.
docker push yourcontainerregistry.azurecr.io/iac_terraform_image

For others to now use the image (assuming they have access to the Azure Container Registry) they can use docker pull.

e.g.
docker pull yourcontainerregistry.azurecr.io/iac_terraform_image

Running the newly pulled image.

docker run -it --entrypoint=/bin/bash --volume c:\users\<your profile>\Source:/mycode --env-file docker_env.txt yourcontainerregistry.azurecr.io/iac_terraform_image

Summary

Using containers really helps you get an environment set up quickly and easily, and allows you to concentrate on the task at hand rather than on setting up the tools. If you have many developers this can save a lot of time, and the image can be easily updated and versioned.

I hope this was useful and helps you create your own IaC development environments.