Azure, IaC

Azure ACI – SonarQube

After moving into a new role I found we needed a SonarQube server to perform code analysis. I thought about revisiting ACI (Azure Container Instances), as when I had previously tried ACI with an external database I found that any version of SonarQube after 7.7 throws an error:

ERROR: [1] bootstrap checks failed
[1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]

After doing some reading and investigation I found that this is due to Elasticsearch being embedded in SonarQube. Fixing it would mean changing the host OS settings to increase the max_map_count; on a Linux OS this means updating the /etc/sysctl.conf file with:

vm.max_map_count=262144
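
On a Linux host you control, the same change can also be applied immediately, without a reboot, using sysctl:

sudo sysctl -w vm.max_map_count=262144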

The problem with ACI is that there is no access to the host, so how can the latest SonarQube (the latest version at the time of writing was 8.6.0) be run in ACI if this setting cannot be changed?

In this article I am going to detail a way of running SonarQube in ACI with an external database.

What do we need to do?

The first thing is to address the max_map_count issue. For this we need a sonar.properties file that contains the following setting:

sonar.search.javaAdditionalOpts=-Dnode.store.allow_mmap=false

This setting disables memory mapping in Elasticsearch, which is needed when running SonarQube inside containers where you cannot change the host's vm.max_map_count (see the Elasticsearch documentation).

Now that we have our sonar.properties file, we need to build a custom container image that includes it. A small Dockerfile can achieve this:

FROM sonarqube:8.6.0-community
COPY sonar.properties /opt/sonarqube/conf/sonar.properties
RUN chown sonarqube:sonarqube /opt/sonarqube/conf/sonar.properties

This Dockerfile can now be built using Docker and the image pushed to an ACR (Azure Container Registry) ready to be used. If you are not sure how to build a container image and/or push it to an ACR, have a look at the Docker and Microsoft documentation, which have easy-to-follow instructions.
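
As a rough sketch, assuming an ACR named myregistry and an image name of my-sonar (substitute your own registry, image name and tag), the build and push would look something like:

az acr login --name myregistry
docker build -t myregistry.azurecr.io/my-sonar:8.6.0 .
docker push myregistry.azurecr.io/my-sonar:8.6.0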

Build Infrastructure

Now that we have a container image uploaded to a container registry, we can look at the rest of the configuration.

There are a number of parts to create:

  • File shares
  • External Database
  • Container Group
    • SonarQube
    • Reverse Proxy

Being a big advocate of IaC (Infrastructure as Code) I am going to use Terraform to configure the SonarQube deployment.

File Shares

The SonarQube documentation mentions setting up volume mounts for data, extensions and logs; for this we can use an Azure Storage Account and File Shares.

To make sure that the storage account has a unique name, a random string is generated and appended to the storage account name.

resource "random_string" "random" {
  length  = 16
  special = false
  upper   = false
}

resource "azurerm_storage_account" "storage" {
  name                     = lower(substr("${var.storage_config.name}${random_string.random.result}", 0, 24))
  resource_group_name      = var.resource_group_name
  location                 = var.resource_group_location
  account_kind             = var.storage_config.kind
  account_tier             = var.storage_config.tier
  account_replication_type = var.storage_config.sku
  tags                     = var.tags
}

resource "azurerm_storage_share" "data-share" {
  name                 = "data"
  storage_account_name = azurerm_storage_account.storage.name
  quota                = var.storage_share_quota_gb.data
}

resource "azurerm_storage_share" "extensions-share" {
  name                 = "extensions"
  storage_account_name = azurerm_storage_account.storage.name
  quota                = var.storage_share_quota_gb.extensions
}

resource "azurerm_storage_share" "logs-share" {
  name                 = "logs"
  storage_account_name = azurerm_storage_account.storage.name
  quota                = var.storage_share_quota_gb.logs
}

External Database

For the external database we can use an Azure SQL Server and SQL Database, and set up a firewall rule to allow Azure services to access the database. Normally you would add specific IP addresses, but as the container's IP address is not guaranteed to be the same after it is stopped and restarted, it cannot be added here. If you want to create a static IP then this article might help.

SQL Server and Firewall configuration:

resource "azurerm_sql_server" "sql" {
  name                         = lower("${var.sql_server_config.name}${random_string.random.result}")
  resource_group_name          = var.resource_group_name
  location                     = var.resource_group_location
  version                      = var.sql_server_config.version
  administrator_login          = var.sql_server_credentials.admin_username
  administrator_login_password = var.sql_server_credentials.admin_password
  tags                         = var.tags
}

resource "azurerm_sql_firewall_rule" "sqlfirewall" {
  name                = "AllowAllWindowsAzureIps"
  resource_group_name = var.resource_group_name
  server_name         = azurerm_sql_server.sql.name
  start_ip_address    = "0.0.0.0"
  end_ip_address      = "0.0.0.0"
}

For the database we can use the serverless tier, which provides scaling when needed. Check out the Microsoft Docs for more information.

# SQL Database
resource "azurerm_mssql_database" "sqldb" {
  name                        = var.sql_database_config.name
  server_id                   = azurerm_sql_server.sql.id
  collation                   = "SQL_Latin1_General_CP1_CS_AS"
  license_type                = "LicenseIncluded"
  max_size_gb                 = var.sql_database_config.max_db_size_gb
  min_capacity                = var.sql_database_config.min_cpu_capacity
  read_scale                  = false
  sku_name                    = "${var.sql_database_config.sku}_${var.sql_database_config.max_cpu_capacity}"
  zone_redundant              = false
  auto_pause_delay_in_minutes = var.sql_database_config.auto_pause_delay_in_minutes
  tags                        = var.tags
}

Container Group

Setting up the container group requires credentials to access the Azure Container Registry that holds the custom SonarQube container. Using a data resource allows the details to be retrieved without passing them in as variables:

data "azurerm_container_registry" "registry" {
  name                = var.container_registry_config.name
  resource_group_name = var.container_registry_config.resource_group
}

For this setup we are going to have two containers: the custom SonarQube container and a Caddy container. Caddy can be used as a reverse proxy; it is small, lightweight and manages certificates automatically with Let's Encrypt. Note: Let's Encrypt has some rate limits, see their website for more information.

The SonarQube container configuration connects to the SQL Database and the Azure Storage Account Shares configured earlier.

The Caddy container configuration sets up the reverse proxy to the SonarQube instance.

resource "azurerm_container_group" "container" {
  name                = var.sonar_config.container_group_name
  resource_group_name = var.resource_group_name
  location            = var.resource_group_location
  ip_address_type     = "public"
  dns_name_label      = var.sonar_config.dns_name
  os_type             = "Linux"
  restart_policy      = "OnFailure"
  tags                = var.tags
  
  image_registry_credential {
      server = data.azurerm_container_registry.registry.login_server
      username = data.azurerm_container_registry.registry.admin_username
      password = data.azurerm_container_registry.registry.admin_password
  }

  container {
    name   = "sonarqube-server"
    image  = "${data.azurerm_container_registry.registry.login_server}/${var.sonar_config.image_name}"
    cpu    = var.sonar_config.required_vcpu
    memory = var.sonar_config.required_memory_in_gb
    environment_variables = {
      WEBSITES_CONTAINER_START_TIME_LIMIT = 400
    }    
    secure_environment_variables = {
      SONARQUBE_JDBC_URL      = "jdbc:sqlserver://${azurerm_sql_server.sql.name}.database.windows.net:1433;database=${azurerm_mssql_database.sqldb.name};user=${azurerm_sql_server.sql.administrator_login}@${azurerm_sql_server.sql.name};password=${azurerm_sql_server.sql.administrator_login_password};encrypt=true;trustServerCertificate=false;hostNameInCertificate=*.database.windows.net;loginTimeout=30;"
      SONARQUBE_JDBC_USERNAME = var.sql_server_credentials.admin_username
      SONARQUBE_JDBC_PASSWORD = var.sql_server_credentials.admin_password
    }

    ports {
      port     = 9000
      protocol = "TCP"
    }

    volume {
      name                 = "data"
      mount_path           = "/opt/sonarqube/data"
      share_name           = "data"
      storage_account_name = azurerm_storage_account.storage.name
      storage_account_key  = azurerm_storage_account.storage.primary_access_key
    }

    volume {
      name                 = "extensions"
      mount_path           = "/opt/sonarqube/extensions"
      share_name           = "extensions"
      storage_account_name = azurerm_storage_account.storage.name
      storage_account_key  = azurerm_storage_account.storage.primary_access_key
    }

    volume {
      name                 = "logs"
      mount_path           = "/opt/sonarqube/logs"
      share_name           = "logs"
      storage_account_name = azurerm_storage_account.storage.name
      storage_account_key  = azurerm_storage_account.storage.primary_access_key
    }   
  }

  container {
    name     = "caddy-ssl-server"
    image    = "caddy:latest"
    cpu      = "1"
    memory   = "1"
    commands = ["caddy", "reverse-proxy", "--from", "${var.sonar_config.dns_name}.${var.resource_group_location}.azurecontainer.io", "--to", "localhost:9000"]

    ports {
      port     = 443
      protocol = "TCP"
    }

    ports {
      port     = 80
      protocol = "TCP"
    }
  }
}

You have no doubt noticed that many variables are used in the configuration, so here they all are along with their defaults:

variable "resource_group_name" {
  type = string
  description = "(Required) Resource Group to deploy to"
}

variable "resource_group_location" {
  type = string
  description = "(Required) Resource Group location"
}

variable "tags" {
  description = "(Required) Tags for SonarQube"
}

variable "container_registry_config" {
    type = object({
        name           = string
        resource_group = string
    })
    description = "(Required) Container Registry Configuration"
}

variable "sonar_config" {
    type = object({
        image_name            = string
        container_group_name  = string
        dns_name              = string
        required_memory_in_gb = string
        required_vcpu         = string
    })

    description = "(Required) SonarQube Configuration"
}

variable "sql_server_credentials" {
    type = object({
        admin_username = string
        admin_password = string
    })
    sensitive = true
}

variable "sql_database_config" {
    type = object({
        name                        = string
        sku                         = string
        auto_pause_delay_in_minutes = number
        min_cpu_capacity            = number
        max_cpu_capacity            = number
        max_db_size_gb              = number
    })
    default = {
        name                        = "sonarqubedb"
        sku                         = "GP_S_Gen5"
        auto_pause_delay_in_minutes = 60
        min_cpu_capacity            = 0.5
        max_cpu_capacity            = 1
        max_db_size_gb              = 50
    }
}

variable "sql_server_config" {
   type = object({
        name    = string
        version = string
   })
   default = {
       name    = "sql-sonarqube"
       version = "12.0"
   }
}

variable "storage_share_quota_gb" {
  type = object({
    data       = number
    extensions = number
    logs       = number
  })
  default = {
      data       = 10
      extensions = 10
      logs       = 10
  }
}

variable "storage_config" {
    type = object({
        name = string
        kind = string
        sku  = string        
        tier = string
    })
    default = {
        name = "sonarqubestore"
        kind = "StorageV2"
        sku  = "LRS"
        tier = "Standard"
    }
}

To make this easy to configure I added all of this to a Terraform module, so the main Terraform file would be something like:

terraform {  
  required_version = ">= 0.14"
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "=2.37.0"
    }
  }
}

provider "azurerm" {  
  features {}
}

# Create a resource group
resource "azurerm_resource_group" "instance" {
  name     = "test-sonar"
  location = "uksouth"
}

# Generate Password
resource "random_password" "password" {
  length = 24
  special = true
  override_special = "_%@"
}

# Module
module "sonarqube" {
    depends_on                        = [azurerm_resource_group.instance]
    source                            = "./modules/sonarqube"
    tags                              = { Project = "Sonar", Environment = "Dev" }
    resource_group_name               = azurerm_resource_group.instance.name
    resource_group_location           = azurerm_resource_group.instance.location
    
    sql_server_credentials            = {
        admin_username = "sonaradmin"
        admin_password = random_password.password.result
    }

    container_registry_config         = {
        name           = "myregistry"
        resource_group = "my-registry-rg"
    }

    sonar_config                      = {
        container_group_name  = "sonarqubecontainer"
        required_memory_in_gb = "4"
        required_vcpu         = "2"
        image_name            = "my-sonar:latest"
        dns_name              = "my-custom-sonar"
    }

    sql_server_config                = {
       name    = "sql-sonarqube"
       version = "12.0"
    }

    sql_database_config              = {
        name                        = "sonarqubedb"
        sku                         = "GP_S_Gen5"
        auto_pause_delay_in_minutes = 60
        min_cpu_capacity            = 0.5
        max_cpu_capacity            = 2
        max_db_size_gb              = 250
    }

    storage_share_quota_gb            = {  
        data       = 50
        extensions = 10
        logs       = 20
    }
}

By using the random_password resource to create the SQL password, no secrets are included in the configuration and nobody needs to know the password as long as the SonarQube server does.
The full code used here can be found in my GitHub repo.

I am sure there are still improvements that could be made to this setup but hopefully it will help anyone wanting to use ACI for running a SonarQube server.

Next Steps

Once the container instance is running you might not want it running 24/7, so using an Azure Function or Logic App to stop and start the instance when it's not needed will definitely save money. I plan to use Azure Functions to start the container at 08:00 and stop it at 18:00, Monday to Friday.
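
As a minimal sketch, using the resource group and container group names from the example above, the function (or any scheduled script) could simply invoke the Azure CLI:

az container stop --resource-group test-sonar --name sonarqubecontainer
az container start --resource-group test-sonar --name sonarqubecontainer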

As this setup is public, a version that uses your own network and is private might be a good next step.

Azure, Azure Pipelines

Azure Pipelines – Multistage YAML

Azure Pipelines YAML allows us to create PaC (Pipeline as Code) to build and deploy applications to multiple stages, e.g. Staging and Production.

To demonstrate this process I will cover the following:

  • Build a simple web application with UI tests
  • Publish the web application to an ACR (Azure Container Registry)
  • Create an Azure Web App with IaC (Infrastructure as Code)
  • Deploy the web application container to the Azure Web App
  • Run basic UI tests on multiple stages

This article assumes that you are familiar with building YAML pipelines in Azure DevOps Pipelines.

The Web Application

For simplicity I have used the default ASP.NET Core Web Application in Visual Studio 2019 with Docker Support enabled for Linux to create the web application.

The only things added to the default web application are a few UI tests using Selenium. You can find all the code used and the deployment files on my GitHub.

The Pipeline

After creating a new pipeline in Azure Pipelines, I need to configure the Azure and ACR connection variables in the pipeline UI.

If you need to know how to configure the ACR service connection see my previous article Configure ACR – Azure DevOps.

Build Image

Now that everything is configured, I can create the initial YAML to build and push the application to an ACR.

As this will be a multistage pipeline, I will create the first stage to build and push the image.

trigger:
- master

resources:
- repo: self

variables:  
  imageRepository: 'multistagepipelines'   
  tag: '$(Build.BuildId)' 
  vmImageName: 'ubuntu-latest'
  uiTestFolder: 'uitests'

stages:
- stage: Build
  displayName: Build and push stage
  jobs:  
  - job: Build
    displayName: Build
    pool:
      vmImage: $(vmImageName)  
    steps:
      - task: Docker@2
        displayName: Build and push an image to container registry
        inputs:
          containerRegistry: 'ACR Connection'
          repository: '$(imageRepository)'
          command: 'buildAndPush'
          Dockerfile: '**/Dockerfile'
          tags: |
            latest
            $(tag)

Now I can run this pipeline and see if it was successful.

And I can check the ACR in Azure to confirm the image has successfully been created.

Define the Web App

Now that I have the image uploaded to the ACR, I need to define the Azure Web App that I will be deploying to.

For this I will use an ARM (Azure Resource Manager) template.

{
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "siteName": {
            "type": "string",
            "metadata": {
                "description": "The unique name of your Web Site."
            }
        },
        "appImageName": {
            "type": "string",
            "metadata": {
                "description": "The name of the container image for this web app"
            }
        },
        "containerRegistryName": {
            "type": "string",
            "metadata": {
                "description": "The name of the azure container registry that contains the webapp"
            }
        },
        "containerRegistryUserName": {
            "type": "string",
            "metadata": {
                "description": "The user name to access the azure container registry"
            }
        },
        "containerRegistryPassword": {
            "type": "string",
            "metadata": {
                "description": "The password to access the azure container registry"
            }
        }

    },
    "variables": {
        "hostingPlanName": "[concat('hpn-',  parameters('siteName'))]",
        "siteApiVersion": "2019-08-01"
    },
    "resources": [
        {
            "type": "Microsoft.Web/serverfarms",
            "apiVersion": "[variables('siteApiVersion')]",
            "name": "[variables('hostingPlanName')]",
            "location": "[resourceGroup().location]",
            "properties": {
                "name": "[variables('hostingPlanName')]",
                "workerSizeId": "1",
                "reserved": true,
                "numberOfWorkers": "1"
            },
            "sku": {
                "Tier": "Standard",
                "Name": "S1"
            },
            "kind": "linux"
        },
        {
            "name": "[parameters('siteName')]",
            "type": "Microsoft.Web/sites",
            "apiVersion": "[variables('siteApiVersion')]",
            "kind": "app,linux,container",
            "location": "[resourceGroup().location]",
            "tags": {
                "hostingPlan": "[variables('hostingPlanName')]",
                "displayName": "[parameters('siteName')]"
            },
            "dependsOn": [
                "[variables('hostingPlanName')]"
            ],
            "properties": {
                "name": "[parameters('siteName')]",
                "serverFarmId": "[variables('hostingPlanName')]",
                "siteConfig": {
                    "use32BitWorkerProcess": false,
                    "http20Enabled": true,
                    "minTlsVersion": "1.2",
                    "alwaysOn": true,
                    "linuxFxVersion": "[concat('DOCKER|', parameters('appImageName'))]",
                    "appSettings": [
                        {
                            "name": "DOCKER_REGISTRY_SERVER_USERNAME",
                            "value": "[parameters('containerRegistryUserName')]"
                        },
                        {
                            "name": "DOCKER_REGISTRY_SERVER_URL",
                            "value": "[concat('https://',parameters('containerRegistryName'))]"
                        },
                        {
                            "name": "DOCKER_REGISTRY_SERVER_PASSWORD",
                            "value": "[parameters('containerRegistryPassword')]"
                        }
                    ]
                }
            }
        }
    ],
    "outputs": {}
}

There are a few things to note in this template. Firstly, we are deploying to a Linux container, so the website configuration is a little different to normal. The kind property needs to include more information than just app.

"kind": "app,linux,container"

And the reserved property must be set to true.

 "reserved": true

There are also a couple of settings, not really documented in the Microsoft Docs, for configuring the app settings so that the Web App can connect to the ACR and retrieve the image. Adding these appSettings will set up the connection.

"appSettings": [
 {
   "name": "DOCKER_REGISTRY_SERVER_USERNAME",
   "value": "[parameters('containerRegistryUserName')]"
 },
 {
   "name": "DOCKER_REGISTRY_SERVER_URL",
   "value": "[concat('https://',parameters('containerRegistryName'))]"
 },
 {
   "name": "DOCKER_REGISTRY_SERVER_PASSWORD",
   "value": "[parameters('containerRegistryPassword')]"
  }
]

Publish Template

The first thing to change in the pipeline is to add a step to publish the ARM template as an artifact for use later in the deployment.

Adding a PublishBuildArtifacts task to the build steps will perform the artifact creation.

- task: PublishBuildArtifacts@1
  displayName: Publish ARM template
  inputs:
    PathtoPublish: 'deploy.json'
    ArtifactName: 'template'
    publishLocation: 'Container'

Publish Tests

You may have noticed in the pipeline that I used "Jobs" and created a single job. This could be seen as unnecessary, but now I am going to add another job that will run in parallel with the Build job.

So I need to add some tasks to build my UI tests. I've also added a variable "vmWindowsImageName", as for this job I am going to use a Windows image. The test project is .NET Core 3.1, so I will use the DotNetCoreCLI tasks to restore packages and build the tests.

- job: BuildTests
  displayName: Build UI Tests
  pool:
    vmImage: $(vmWindowsImageName)
  steps:
    - task: DotNetCoreCLI@2
      displayName: Restore Packages
      inputs:
        command: 'restore'
        projects: 'multistagepipelinestests/*.csproj'
    - task: DotNetCoreCLI@2
      displayName: Build Tests
      inputs:
        command: 'build'
        projects: '**/multistagepipelinestests.csproj'
        arguments: '--configuration Release -o $(Build.ArtifactStagingDirectory)/uitests'

As with the ARM template, the UI tests need publishing so they can be used later.

- task: PublishBuildArtifacts@1
  displayName: Publish UI Tests
  inputs:
    PathtoPublish: '$(Build.ArtifactStagingDirectory)/$(uiTestFolder)'
    ArtifactName: $(uiTestFolder)
    publishLocation: 'Container'

Deployment

Now that the pipeline builds and publishes the necessary artifacts to the pipeline and the ACR, I can add a new stage to deploy the application.

This new stage uses a special kind of job, a 'deployment' job, which uses a strategy. The Microsoft Docs have a lot of information about the different strategies; here I will use the 'runOnce' strategy, as the other strategies are not supported here.

- stage: Staging
  displayName: Deploy to Staging
  jobs:
  - deployment: DeployWeb
    displayName: Deploy Web App
    pool:
     vmImage: $(vmWindowsImageName)
    environment: Staging
    variables:
      siteName: staging-taz-app
      siteResourceGroup: stag-taz-webapp
      siteLocation: UK South
      appImageName: $(containerRegistryName)/$(imageRepository):latest
      baseSiteUrl: 'https://$(siteName).azurewebsites.net/'
    strategy:
      runOnce:       
        deploy:
          steps:

With the job and strategy configured, I can now add the first step to execute the ARM template and create the Web App.

- task: AzureResourceManagerTemplateDeployment@3
  displayName: Create or Update Azure Web App
  inputs:
    deploymentScope: 'Resource Group'
    azureResourceManagerConnection: $(SubscriptionName)
    subscriptionId: $(subscriptionId)
    action: 'Create Or Update Resource Group'
    resourceGroupName: $(siteResourceGroup)
    location: $(siteLocation)
    templateLocation: 'Linked artifact'
    csmFile: '$(Pipeline.Workspace)/template/deploy.json'
    overrideParameters: '-siteName $(siteName) -appImageName $(appImageName) -containerRegistryName $(containerRegistryName) -containerRegistryUserName $(containerRegistryUserName) -containerRegistryPassword $(containerRegistryPassword)'
    deploymentMode: 'Incremental'

Once the Web App is created I can deploy the application container into the new Web App. As this is a container application I will use the AzureWebAppContainer task.

- task: AzureWebAppContainer@1
  displayName: Deploy Application
  inputs:
    azureSubscription: $(SubscriptionName)
    appName: '$(siteName)'
    containers: '$(appImageName)'

Once the app is deployed I can then run the UI tests, but first I'll need to add a FileTransform task to make sure my settings file has the correct URL configured to run the tests against.

- task: FileTransform@2
  displayName: Configure Staging
  inputs:
    folderPath: '$(Pipeline.Workspace)'
    xmlTransformationRules: ''
    jsonTargetFiles: '**/*settings.json'

If you want to check that the settings file was correctly transformed, you can add a simple PowerShell task to output the file contents.

- task: PowerShell@2
  inputs:
    targetType: 'inline'
    script: 'Get-Content -Path $(Pipeline.Workspace)/$(uiTestFolder)/testsettings.json'
    pwsh: true

And now a task to run the UI tests. For this I will use the VSTest task to run the tests and publish the results to the Azure Pipelines UI.

- task: VSTest@2
  displayName: Run UI Tests
  inputs:
    testSelector: 'testAssemblies'
    testAssemblyVer2: |
      **\*tests.dll
      !**\*TestAdapter.dll
      !**\obj\**
    searchFolder: '$(Pipeline.Workspace)/$(uiTestFolder)'
    uiTests: true
    testRunTitle: 'Basic UI Tests'

There have been a lot of changes added, so let’s see the full pipeline so far:

trigger:
- master

resources:
- repo: self

variables:  
  imageRepository: 'multistagepipelines'   
  tag: '$(Build.BuildId)' 
  vmImageName: 'ubuntu-latest'
  vmWindowsImageName: 'windows-latest'
  uiTestFolder: 'uitests'

stages:
- stage: Build
  displayName: Build and push stage
  jobs:  
  - job: Build
    displayName: Build
    pool:
      vmImage: $(vmImageName)
    steps:
      - task: Docker@2
        displayName: Build and push an image to container registry
        inputs:
          containerRegistry: 'ACR Connection'
          repository: '$(imageRepository)'
          command: 'buildAndPush'
          Dockerfile: '**/Dockerfile'
          tags: |
            latest
            $(tag)
      - task: PublishBuildArtifacts@1
        displayName: Publish ARM template
        inputs:
          PathtoPublish: 'deploy.json'
          ArtifactName: 'template'
          publishLocation: 'Container'
  - job: BuildTests
    displayName: Build UI Tests
    pool:
      vmImage: $(vmWindowsImageName)
    steps:
      - task: DotNetCoreCLI@2
        displayName: Restore Packages
        inputs:
          command: 'restore'
          projects: 'multistagepipelinestests/*.csproj'
      - task: DotNetCoreCLI@2
        displayName: Build Tests
        inputs:
          command: 'build'
          projects: '**/multistagepipelinestests.csproj'
          arguments: '--configuration Release -o $(Build.ArtifactStagingDirectory)/uitests'
      - task: PublishBuildArtifacts@1
        displayName: Publish UI Tests
        inputs:
          PathtoPublish: '$(Build.ArtifactStagingDirectory)/$(uiTestFolder)'
          ArtifactName: $(uiTestFolder)
          publishLocation: 'Container'
- stage: Staging
  displayName: Deploy to Staging
  jobs:
  - deployment: DeployWeb
    displayName: Deploy Web App
    pool:
     vmImage: $(vmWindowsImageName)
    environment: Staging
    variables:
      siteName: staging-taz-app
      siteResourceGroup: stag-taz-webapp
      siteLocation: UK South
      appImageName: $(containerRegistryName)/$(imageRepository):latest
      baseSiteUrl: 'https://$(siteName).azurewebsites.net/'
    strategy:
      runOnce:       
        deploy:
          steps:
          - task: AzureResourceManagerTemplateDeployment@3
            displayName: Create or Update Azure Web App
            inputs:
              deploymentScope: 'Resource Group'
              azureResourceManagerConnection: $(SubscriptionName)
              subscriptionId: $(subscriptionId)
              action: 'Create Or Update Resource Group'
              resourceGroupName: $(siteResourceGroup)
              location: $(siteLocation)
              templateLocation: 'Linked artifact'
              csmFile: '$(Pipeline.Workspace)/template/deploy.json'
              overrideParameters: '-siteName $(siteName) -appImageName $(appImageName) -containerRegistryName $(containerRegistryName) -containerRegistryUserName $(containerRegistryUserName) -containerRegistryPassword $(containerRegistryPassword)'
              deploymentMode: 'Incremental'
          - task: AzureWebAppContainer@1
            displayName: Deploy Application
            inputs:
              azureSubscription: $(SubscriptionName)
              appName: '$(siteName)'
              containers: '$(appImageName)'
          - task: FileTransform@2
            displayName: Configure Staging
            inputs:
              folderPath: '$(Pipeline.Workspace)'
              xmlTransformationRules: ''
              jsonTargetFiles: '**/*settings.json'
          - task: VSTest@2
            displayName: Run UI Tests
            inputs:
              testSelector: 'testAssemblies'
              testAssemblyVer2: |
                **\*tests.dll
                !**\*TestAdapter.dll
                !**\obj\**
              searchFolder: '$(Pipeline.Workspace)/$(uiTestFolder)'
              uiTests: true
              testRunTitle: 'Basic UI Tests'

Enhance the Pipeline

Currently the pipeline:

  • Builds a web application image and uploads it to an ACR
  • Deploys an Azure Web App using an ARM Template
  • Deploys the image into the Azure Web App
  • And runs UI tests against the newly deployed application

This is great, but I would guess most of us don't have just one environment to deploy to; we will need at least one more, and maybe a manual intervention step too.

To create another environment I could just copy and paste the ‘Staging’ stage, rename it and update the variables. Whilst this approach would work, it would introduce a maintenance overhead we don’t want.

Fortunately Azure Pipelines YAML includes Templates for variables, jobs, steps and stages to handle this.

So, I will move the steps for the ‘Staging’ deployment into a template and call it web-deploy-steps.yml. The template file will look like:

steps:
- task: AzureResourceManagerTemplateDeployment@3
  displayName: Create or Update Azure Web App
  inputs:
    deploymentScope: 'Resource Group'
    azureResourceManagerConnection: $(SubscriptionName)
    subscriptionId: $(subscriptionId)
    action: 'Create Or Update Resource Group'
    resourceGroupName: $(siteResourceGroup)
    location: $(siteLocation)
    templateLocation: 'Linked artifact'
    csmFile: '$(Pipeline.Workspace)/template/deploy.json'
    overrideParameters: '-siteName $(siteName) -appImageName $(appImageName) -containerRegistryName $(containerRegistryName) -containerRegistryUserName $(containerRegistryUserName) -containerRegistryPassword $(containerRegistryPassword)'
    deploymentMode: 'Incremental'
- task: AzureWebAppContainer@1
  displayName: Deploy Application
  inputs:
    azureSubscription: $(SubscriptionName)
    appName: '$(siteName)'
    containers: '$(appImageName)'
- task: FileTransform@2
  displayName: Configure Staging
  inputs:
    folderPath: '$(Pipeline.Workspace)'
    xmlTransformationRules: ''
    jsonTargetFiles: '**/*settings.json'
- task: PowerShell@2
  inputs:
    targetType: 'inline'
    script: 'Get-Content -Path $(Pipeline.Workspace)/$(uiTestFolder)/testsettings.json'
    pwsh: true
- task: VSTest@2
  displayName: Run UI Tests
  inputs:
    testSelector: 'testAssemblies'
    testAssemblyVer2: |
      **\*tests.dll
      !**\*TestAdapter.dll
      !**\obj\**
    searchFolder: '$(Pipeline.Workspace)/$(uiTestFolder)'
    uiTests: true
    testRunTitle: 'Basic UI Tests'

Now I can update the ‘Staging’ stage to use the new template.

- stage: Staging
  displayName: Deploy to Staging
  jobs:
  - deployment: DeployWeb
    displayName: Deploy Web App
    pool:
     vmImage: $(vmWindowsImageName)
    environment: Staging
    variables:
      siteName: staging-taz-app
      siteResourceGroup: stag-taz-webapp
      siteLocation: UK South
      appImageName: $(containerRegistryName)/$(imageRepository):latest
      baseSiteUrl: 'https://$(siteName).azurewebsites.net/'
    strategy:
      runOnce:       
        deploy:
          steps:
          - template: web-deploy-steps.yml

It is now easy to add another stage using the same steps. I’ll add a production stage and update the variables.

- stage: Production
  displayName: Deploy to Production
  jobs:
  - deployment: DeployWeb
    displayName: Deploy Web App
    pool:
     vmImage: $(vmWindowsImageName)
    environment: Production
    variables:
      siteName: production-taz-app
      siteResourceGroup: prod-taz-webapp
      siteLocation: UK South
      appImageName: $(containerRegistryName)/$(imageRepository):latest
      baseSiteUrl: 'https://$(siteName).azurewebsites.net/'
    strategy:
      runOnce:       
        deploy:
          steps:
          - template: web-deploy-steps.yml

The full pipeline with the template now looks like:

trigger:
- master

resources:
- repo: self

variables:  
  imageRepository: 'multistagepipelines'   
  tag: '$(Build.BuildId)' 
  vmImageName: 'ubuntu-latest'
  vmWindowsImageName: 'windows-latest'
  uiTestFolder: 'uitests'

stages:
- stage: Build
  displayName: Build and push stage
  jobs:  
  - job: Build
    displayName: Build
    pool:
      vmImage: $(vmImageName)  
    steps:
      - task: Docker@2
        displayName: Build and push an image to container registry
        inputs:
          containerRegistry: 'ACR Connection'
          repository: '$(imageRepository)'
          command: 'buildAndPush'
          Dockerfile: '**/Dockerfile'
          tags: |
            latest
            $(tag)
      - task: PublishBuildArtifacts@1
        displayName: Publish ARM template
        inputs:
          PathtoPublish: 'deploy.json'
          ArtifactName: 'template'
          publishLocation: 'Container'
  - job: BuildTests
    displayName: Build UI Tests
    pool:
      vmImage: $(vmWindowsImageName)
    steps:
      - task: DotNetCoreCLI@2
        displayName: Restore Packages
        inputs:
          command: 'restore'
          projects: 'multistagepipelinestests/*.csproj'
      - task: DotNetCoreCLI@2
        displayName: Build Tests
        inputs:
          command: 'build'
          projects: '**/multistagepipelinestests.csproj'
          arguments: '--configuration Release -o $(Build.ArtifactStagingDirectory)/uitests'
      - task: PublishBuildArtifacts@1
        displayName: Publish UI Tests
        inputs:
          PathtoPublish: '$(Build.ArtifactStagingDirectory)/$(uiTestFolder)'
          ArtifactName: $(uiTestFolder)
          publishLocation: 'Container'
- stage: Staging
  displayName: Deploy to Staging
  jobs:
  - deployment: DeployWeb
    displayName: Deploy Web App
    pool:
     vmImage: $(vmWindowsImageName)
    environment: Staging
    variables:
      siteName: staging-taz-app
      siteResourceGroup: stag-taz-webapp
      siteLocation: UK South
      appImageName: $(containerRegistryName)/$(imageRepository):latest
      baseSiteUrl: 'https://$(siteName).azurewebsites.net/'
    strategy:
      runOnce:       
        deploy:
          steps:
          - template: web-deploy-steps.yml
- stage: Production
  displayName: Deploy to Production
  jobs:
  - deployment: DeployWeb
    displayName: Deploy Web App
    pool:
     vmImage: $(vmWindowsImageName)
    environment: Production
    variables:
      siteName: production-taz-app
      siteResourceGroup: prod-taz-webapp
      siteLocation: UK South
      appImageName: $(containerRegistryName)/$(imageRepository):latest
      baseSiteUrl: 'https://$(siteName).azurewebsites.net/'
    strategy:
      runOnce:       
        deploy:
          steps:
          - template: web-deploy-steps.yml

Review Output

Now the pipeline has run, let's check the results.

And let’s see if the resources were deployed into Azure.

Approvals and Checks

If a stage needs a manual intervention or approval step, you can configure this in Azure Pipelines by selecting 'Environments'.

Once the list of environments is displayed, you can select the one you need to add approvals and checks to, e.g. Production.

Clicking the 3 dots on the right-hand side and then selecting 'Approvals and checks' will show a variety of options that can be added.

There are a number of checks that can be added; here I will just select Approvals.

Approvals simply need the users or groups that can approve the stage you want to control.

There are a few more settings for approvals (how many approvers are needed, the approval timeout, etc.) but I am not going to go into detail about them.

Conclusion

Azure Pipelines YAML provides a flexible way to create build and deployment pipelines that can be source controlled. Changes can be approved, tracked and are visible to everyone, instead of a change made via the UI that goes unnoticed and is difficult to trace if it causes a problem.

Being able to control the full application deployment flow this way is very powerful and allows the whole team to understand how their application is built and deployed.