Architecture, Azure Pipelines, Diagrams

Azure Pipelines – Diagrams as Code

Following on from my previous post on Architecture Diagrams, I thought I would share my experiences with another tool: Diagrams.

Diagrams uses the Python language to describe diagrams. Python is not a language I generally use, but it was simple enough to pick up while building diagrams.

The documentation describes how to get started and set up. I am a big fan of containers, so I created a container for using Diagrams.

The following dockerfile will create an environment:

FROM python:alpine3.13
ENV APK_ADD "bash py3-pip graphviz ttf-freefont"

RUN apk upgrade --update && \
    apk add --no-cache --virtual .pipeline-deps readline linux-pam && \
    apk add --no-cache ${APK_ADD} && \
    # Install Diagrams
    pip --no-cache-dir install --upgrade pip && \
    pip --no-cache-dir install diagrams && \
    # Tidy up
    apk del .pipeline-deps

RUN echo "PS1='\n\[\033[01;35m\][\[\033[0m\]Diagrams\[\033[01;35m\]]\[\033[0m\]\n\[\033[01;35m\][\[\033[0m\]\[\033[01;32m\]\w\[\033[0m\]\[\033[01;35m\]]\[\033[0m\]\n \[\033[01;33m\]->\[\033[0m\] '" >> ~/.bashrc

CMD tail -f /dev/null

To build and run (I used a Windows 10 machine with PowerShell):

docker build -f diagrams.dockerfile -t my-diagrams .
docker run -it --entrypoint=/bin/bash --volume $env:USERPROFILE\source\repos:/mycode my-diagrams

Diagram

With my new environment I can use an editor of my choice to create the diagrams. My current go-to is Visual Studio Code, which has an extension for Python.

The purpose of using this tool was to draw a diagram of an Azure Tenant and Subscription setup. I needed something that would allow the diagram to be changed quickly, as multiple people were collaborating on it.

The code below shows a simple example of the diagram being created:

from diagrams import Cluster, Diagram
from diagrams.azure.general import Managementgroups, Subscriptions
from diagrams.azure.identity import ActiveDirectory

with Diagram("Azure Tenant Design", show=False, direction="TB"):
    tenant = ActiveDirectory("Tenant AD")
    topGroup = Managementgroups("Main\r\nManagement Group")
    sandbox = Subscriptions("Sandbox\r\nSubscription")

    with Cluster("Business Units"):
        with Cluster("Unit1"):
            mainGroup = Managementgroups("Unit1\r\nManagement Group")
            topGroup >> mainGroup

            with Cluster("Project1"):
                group = Managementgroups("Project1\r\nManagement Group")
                sub = [Subscriptions("Project1\r\nDev/Test\r\nSubscription"), Subscriptions("Project1\r\nProduction\r\nSubscription")]
                group - sub
                mainGroup >> group

            with Cluster("Project2"):
                group = Managementgroups("Project2\r\nManagement Group")
                sub = [Subscriptions("Project2\r\nDev/Test\r\nSubscription"), Subscriptions("Project2\r\nProduction\r\nSubscription")]
                group - sub
                mainGroup >> group

        with Cluster("Infrastructure"):
            group = Managementgroups("Infrastructure\r\nManagement Group")
            sub = [Subscriptions("Test\r\nSubscription"), Subscriptions("Infrastructure\r\nProduction\r\nSubscription")]
            group - sub
            topGroup >> group

    tenant >> topGroup >> sandbox

The diagram can be created by simply calling python with the name of the file, e.g. python tenant.py (this was executed from inside the container).

The code produces the following diagram:

Azure Pipelines

Now that the diagram code is created, it would be good to have a pipeline build the diagram and provide the image. Building the diagrams in an Azure Pipeline would be easier if I could use the container created earlier.

Fortunately I can: Azure Pipelines allows container jobs, but that means the dockerfile needs a few modifications before it can be used in Azure Pipelines. The Microsoft Docs explain this in more detail, but in short I need Node installed, a special label, and some additional packages.

The new dockerfile looks like this:

FROM node:lts-alpine3.13 AS node_base

RUN echo "NODE Version:" && node --version
RUN echo "NPM Version:" && npm --version

FROM python:alpine3.13

ENV NODE_HOME /usr/local/bin/node
COPY --from=node_base ["${NODE_HOME}", "${NODE_HOME}"] 

LABEL maintainer="Tazmainiandevil"
LABEL "com.azure.dev.pipelines.agent.handler.node.path"="${NODE_HOME}" 

ENV APK_ADD "bash sudo shadow py3-pip graphviz ttf-freefont"

RUN apk upgrade --update && \
    apk add --no-cache --virtual .pipeline-deps readline linux-pam && \
    apk add --no-cache ${APK_ADD} && \
    # Install Diagrams
    pip --no-cache-dir install --upgrade pip && \
    pip --no-cache-dir install diagrams && \
    # Tidy up
    apk del .pipeline-deps

RUN echo "PS1='\n\[\033[01;35m\][\[\033[0m\]Diagrams\[\033[01;35m\]]\[\033[0m\]\n\[\033[01;35m\][\[\033[0m\]\[\033[01;32m\]\w\[\033[0m\]\[\033[01;35m\]]\[\033[0m\]\n \[\033[01;33m\]->\[\033[0m\] '" >> ~/.bashrc

CMD tail -f /dev/null

Container Build

Using Azure Pipelines I can build the container and add it to an Azure Container Registry (if you need to know how to set up ACR, see my previous post on Configuring ACR).

trigger:
  branches:
    include:
    - main
  paths:
    include:
    - diagrams.dockerfile

pr: none

variables:
- group: Azure Connections
- name: dockerFilePath
  value: diagrams.dockerfile
- name: imageRepository
  value: dac/diagrams

pool:
  vmImage: "ubuntu-latest"

steps:
  - task: Docker@2
    displayName: "Build Diagram Image"
    inputs:
      containerRegistry: "$(myContainerRegistry)"
      repository: '$(imageRepository)'
      command: 'buildAndPush'
      Dockerfile: '$(dockerFilePath)'
      tags: |
        $(Build.BuildNumber)
        latest

With the container added to my registry I can use it in a pipeline to create my diagrams.

Image Build

The pipeline needs to create the image as an artifact, and only when on the main branch, to make sure that only the final diagrams are published and not ones in progress.

The YAML below defines the pipeline:

trigger: 
   - main

pr: none

variables:
  isMain: $[eq(variables['Build.SourceBranch'], 'refs/heads/main')]

jobs:
- job: creatediagram
  displayName: Create Diagram  
  pool:
    vmImage: ubuntu-latest
  container:
    image: $(myContainerRegistry)/dac/diagrams:latest
    endpoint: 'My Registry Service Connection'
  variables:
    workspaceFolder: 'TenantDesign'
  steps:
  - script: | 
      python tenant.py
      cp *.png $(Build.ArtifactStagingDirectory)
    displayName: Run Python
  - publish: '$(Build.ArtifactStagingDirectory)'
    displayName: Publish Diagrams
    artifact: $(workspaceFolder)
    condition: eq(variables.isMain, true)

Conclusion

I found Diagrams simple to use, and the documentation was good enough to pick up what was needed quickly. I will certainly be looking at using it for other diagrams in the future. I like the fact that it is easy to use, open source, and supports custom images, so you are not limited to the provided icons (Custom Docs).

Diagrams GitHub

Diagrams Docs

I hope that others find this useful and use Diagrams as Code for their projects.

Azure Pipelines, IaC

Azure Pipelines – Object Parameters and Terraform

I’ve been using Terraform for a while with Azure Pipelines and have always passed pipeline parameters or variables to Terraform using the -var command line option. This worked really well until I needed to pass more complex objects into Terraform, which supports objects, maps, and lists.

The Problem

When attempting to pass complex objects via -var, Azure Pipelines outputs errors like ‘object is not a string’. After trying a number of workarounds that failed, I ended up changing my Terraform code to take strings and then perform actions on them, e.g. passing an array as a string and then using split in Terraform to re-create the array.
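As an illustration of that string workaround (a sketch only; these variable names are invented for the example, not taken from my real code), a list can be passed as a single comma-separated string and rebuilt with split:

variable "network_source_addresses_csv" {
  type = string
}

locals {
  # Rebuild the list from the comma-separated string passed via -var
  network_source_addresses = split(",", var.network_source_addresses_csv)
}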

This led me to thinking “There has to be a better way”. Naturally, one option is to create a .tfvars.json file and then substitute the variables using the same technique I used in my previous article, Azure Pipelines – Parameters + JSON File Substitution. This would work for the most part but would not solve using array types.

I started thinking: could I get a parameter into a script, somehow work out whether it was a complex object, and then write code to extract the value into something useful like JSON? This led me to a community post that mentioned a function called convertToJson.

A Solution

Based on using convertToJson and combining the technique from my previous article, I came up with a step to create an HCL-formatted .auto.tfvars file. The only thing is that for objects the colons ‘:’ need converting to equals ‘=’.

- ${{ each item in parameters }}:
     - script: echo '${{ item.key }}=${{ replace(convertToJson(item.value), ':', '=')}}' >> parameters.auto.tfvars
       displayName: "JsonVar ${{ item.key }}"

The .auto.tfvars file is automatically loaded by Terraform, which removes the need to specify any -var or -var-file options.
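To make that concrete, with the tags object and address list used in the example pipeline below, the generated parameters.auto.tfvars ends up looking something like this (convertToJson pretty-prints objects, so the exact whitespace may vary):

resourceGroup="terraform-test"
resourceLocation="uksouth"
tags={
  "Project"= "Demo",
  "Environment"= "Dev"
}
network_source_addresses=[
  "192.168.1.20",
  "192.168.1.254"
]

One caveat: replace converts every colon, so this works best when the values themselves do not contain ‘:’ (URLs, for example, would be mangled).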

Example Pipeline

For my example pipeline I have used an object for Tags and an array for a list of network addresses for use with a Network Security Group.
The initial pipeline sets up the parameters and the Azure Storage Account for my Terraform state files.

trigger: none
pr: none
parameters:
  - name: resourceGroup
    displayName: Resource Group
    type: string
    default: 'terraform-test'
  - name: resourceLocation
    displayName: Resource Location
    type: string
    default: 'uksouth'
  - name: projectName
    displayName: Project Tag Name
    type: string
    default: 'Demo'
  - name: tags
    type: object
    default:
      Project: "Demo"
      Environment: "Dev"
  - name: network_source_addresses
    displayName: Network Address List
    type: object
    default:
      - "192.168.1.20"
      - "192.168.1.254"
variables:
  subscription: 'My Subscription'
  terraformVersion: '0.14.6'
  terraformResourceGroup: 'test-deployment'
  terraformStorageName: 'demoterraformstore'
  terraformStorageSku: Standard_LRS
  terraformContainerName: 'terraform'
  terraformStateFilename: test.tfstate
pool:
  vmImage: "ubuntu-latest"
steps:
- task: AzureCLI@2
  displayName: 'Azure CLI'
  inputs:
    azureSubscription: $(subscription)
    scriptType: bash
    scriptLocation: inlineScript
    inlineScript: |
      az group create --location ${{ parameters.resourceLocation }} --name $(terraformResourceGroup)
      az storage account create --name $(terraformStorageName) --resource-group $(terraformResourceGroup) --location ${{ parameters.resourceLocation }} --sku $(terraformStorageSku) --tags "project=${{ parameters.projectName }}"
      az storage container create --name $(terraformContainerName) --account-name $(terraformStorageName)
    addSpnToEnvironment: false
- template: deploy.yml
  parameters:
    resourceGroup: ${{ parameters.resourceGroup }}
    resourceLocation: ${{ parameters.resourceLocation }}
    tags: ${{ parameters.tags }}
    network_source_addresses: ${{ parameters.network_source_addresses }}
    secret_value: $(secret_value)

I separated the Terraform parts into a template so that the loop only uses the parameters that are needed for Terraform and not any others that are in the main pipeline.
Note: I use the Microsoft Terraform Azure Pipelines Extension to deploy the Terraform scripts.

parameters:
  - name: resourceGroup
    type: string
  - name: resourceLocation
    type: string
  - name: tags
    type: object
  - name: network_source_addresses
    type: object
  - name: secret_value
    type: string
steps:
- ${{ each item in parameters }}:
     - script: echo '${{ item.key }}=${{ replace(convertToJson(item.value), ':', '=')}}' >> parameters.auto.tfvars
       displayName: "JsonVar ${{ item.key }}"
- bash: |
    cat parameters.auto.tfvars
  displayName: "Debug show new file"
- task: TerraformInstaller@0
  displayName: 'Install Terraform'
  inputs:
    terraformVersion: $(terraformVersion)
  
- task: TerraformTaskV1@0
  displayName: 'Terraform Init'
  inputs:
    command: init
    backendServiceArm: $(subscription)
    backendAzureRmResourceGroupName: '$(terraformResourceGroup)'
    backendAzureRmStorageAccountName: '$(terraformStorageName)'
    backendAzureRmContainerName: '$(terraformContainerName)'
    backendAzureRmKey: '$(terraformStateFilename)'
- task: TerraformTaskV1@0
  displayName: 'Terraform Plan'
  inputs:
    command: plan    
    environmentServiceNameAzureRM: $(subscription)
- task: TerraformTaskV1@0
  displayName: 'Terraform Apply'
  inputs:
    command: apply
    commandOptions: -auto-approve
    environmentServiceNameAzureRM: $(subscription)
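
For reference, the Terraform side might declare the matching variables like this (a sketch based on the parameter names above; the actual Terraform code is not shown in this post):

variable "resourceGroup" {
  type = string
}

variable "resourceLocation" {
  type = string
}

variable "tags" {
  type = map(string)
}

variable "network_source_addresses" {
  type = list(string)
}

variable "secret_value" {
  type      = string
  sensitive = true # available from Terraform 0.14
}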

Conclusion

I think this is a nice technique for using complex types in Azure Pipelines with Terraform deployments, or anything else that would benefit from this idea. This was a fun problem to solve, and I hope that sharing it helps others who have encountered the same issue.