Azure Pipelines, DevOps

Dynamic Multistage Azure Pipelines Part 1

In a previous post I looked at multistage YAML pipelines. In this post I am going to look at dynamic multistage YAML pipelines.

What do I mean by dynamic multistage? I mean running multiple stages where all of the configuration is loaded dynamically from one or more sources, e.g. parameters, variable templates, variable groups, etc.

Why?

What problem am I trying to solve with this? Firstly, to reduce duplication: in a lot of cases the difference between dev and prod is just the configuration. Secondly, to provide the groundwork for a base setup so that I can concentrate on what steps are needed in the pipeline and not worry about the environments.

Anything else? Well, I often have multiple projects that all need to deploy to the same set of environments, so it would be good to share that configuration between projects as well.

Next Steps

Ok, I need a pipeline. Let's start with something simple: a pipeline with an initial build stage and then multiple deployment stages defined by a parameter:

trigger: none 
pr: none 

pool:  
  vmImage: 'ubuntu-latest' 

parameters:
- name: stages
  type: object
  default:
    - 'dev'
    - 'prod'

stages:
- stage: build
  displayName: 'Build/Package Code or IaC'  
  jobs:  
  - job: build
    displayName: 'Build/Package Code'
    steps:
    # Steps to perform the build and/or package of code or IaC

- ${{ each stage in parameters.stages }}:
  - stage: ${{ stage }}
    displayName: 'Deploy to ${{ stage }}'
    jobs:
    - deployment: deploy_${{ stage }}
      displayName: 'Deploy app to ${{ stage }}'
      environment: ${{ stage }}
      strategy:
        runOnce:
          deploy:
            steps:
            # Steps to perform the deployment

This very small example configures multiple deployment stages, and adding another stage is very easy: just update the parameter to include the new stage name.
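For example, adding a hypothetical test environment is just one more entry in the list (assuming a matching test environment exists in Azure DevOps):

parameters:
- name: stages
  type: object
  default:
    - 'dev'
    - 'test'
    - 'prod'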

Now we have the basic configuration, let's add loading of a variable group. This could be done by using dynamic naming or by changing the stages parameter.

I have a variable group for each environment, groupvars_dev and groupvars_prod, each with a single variable mygroupvar.
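If you prefer scripting the setup, the groups can be created with the Azure DevOps CLI. A sketch, assuming the azure-devops extension is installed and defaults are configured; the values are placeholders:

# Create a variable group per environment with the single variable mygroupvar
az pipelines variable-group create --name groupvars_dev --variables mygroupvar=dev-value
az pipelines variable-group create --name groupvars_prod --variables mygroupvar=prod-value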

Dynamic Naming

I'll add the variable group to the variables at the stage level (this could also be done at the job level) and include the stage name dynamically.

- ${{ each stage in parameters.stages }}:
  - stage: ${{ stage }}
    displayName: 'Deploy to ${{ stage }}'
    variables:
      - group: groupvars_${{ stage }}
    jobs:
    - deployment: deploy_${{ stage }}
      displayName: 'Deploy app to ${{ stage }}'
      environment: ${{ stage }}
      strategy:
        runOnce:
          deploy:
            steps:
            - bash: |
                echo '$(mygroupvar)'
              displayName: 'Deploy Steps'

Parameter Change

Another way to define the dynamic group is to update the parameter object to provide additional configuration e.g.

parameters:
- name: stages
  type: object
  default:
    - name: 'dev'
      group: 'groupvars_dev'
    - name: 'prod'
      group: 'groupvars_prod'

   ...

- ${{ each stage in parameters.stages }}:
  - stage: ${{ stage.name }}
    displayName: 'Deploy to ${{ stage.name }}'
    variables:
      - group: ${{ stage.group }}
    jobs:
    - deployment: deploy_${{ stage.name }}
      displayName: 'Deploy app to ${{ stage.name }}'
      environment: ${{ stage.name }}
      strategy:
        runOnce:
          deploy:
            steps:
            - bash: |
                echo '$(mygroupvar)'
              displayName: 'Deploy Steps'

Both ways of adding the variable group dynamically achieved the same goal and loaded the expected group when each stage ran.

Variable Templates

Variable groups are not the only way to dynamically load variables; you could also use variable templates. Let's say I have a variable template for each environment, vars_dev.yml and vars_prod.yml.
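A variable template is just a YAML file with a variables block; a minimal vars_dev.yml might look like this (the value is a placeholder):

# vars_dev.yml
variables:
  myfilevar: 'dev-value'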

Using dynamic naming you can load the variables like this:

- ${{ each stage in parameters.stages }}:
  - stage: ${{ stage }}
    displayName: 'Deploy to ${{ stage }}'
    variables:
      - template: vars_${{ stage }}.yml
    jobs:
    - deployment: deploy_${{ stage }}
      displayName: 'Deploy app to ${{ stage }}'
      environment: ${{ stage }}
      strategy:
        runOnce:
          deploy:
            steps:
            - bash: |
                echo '$(myfilevar)'
              displayName: 'Deploy Steps'

Now, with variable files and groups in the mix, adding a new stage becomes a little more complex, as I would need to add a matching file and group as well.

Shared Template

Now I have a dynamic multistage pipeline, how can I create a template to share with other projects?

Before I answer that I should say that I usually use a separate repository for shared templates; that way I can version them. I covered this in a previous post if you want some more information.

Ok, on to the how. Based on the above scenario, wouldn't it be great to have a really simple pipeline that concentrated on just the steps, like this?

trigger: none
pr: none

pool: 
  vmImage: 'ubuntu-latest'

resources:
  repositories:
    - repository: templates
      type: git
      name: shared-templates
      ref: main

extends:
  template: environments.yml@templates
  parameters:
    variableFilePrefix: 'vars'
    buildSteps:
      # Steps to perform the build and/or package of code or IaC
    releaseSteps:
      # Steps to perform the deployment

This could be your boilerplate code for multiple projects extending from a base template. You might be asking: how do I create such a template?

Let's convert what we started with into a template, a bit at a time.

Firstly, create a new file, e.g. environments.yml, to be the base template and add the parameters that make up the stage configuration:

parameters:
- name: stages
  type: object
  default:
    - 'dev'
    - 'prod'

Next, add the build stage up to the steps:

stages:
- stage: build
  displayName: 'Build/Package Code or IaC'  
  jobs:  
  - job: build
    displayName: 'Build/Package Code'
    steps:

At this point we need to be able to pass in the build steps; using the Azure Pipelines built-in type stepList we can add a parameter 'buildSteps':

parameters:
- name: stages
  type: object
  default:
    - 'dev'
    - 'prod'
- name: buildSteps  
  type: stepList  
  default: []

stages:
- stage: build
  displayName: 'Build/Package Code or IaC'  
  jobs:  
  - job: build
    displayName: 'Build/Package Code'
    steps: ${{ parameters.buildSteps }}

Next, add the dynamic stages up to the steps:

- ${{ each stage in parameters.stages }}:
  - stage: ${{ stage }}
    displayName: 'Deploy to ${{ stage }}'
    jobs:
    - deployment: deploy_${{ stage }}
      displayName: 'Deploy app to ${{ stage }}'
      environment: ${{ stage }}
      strategy:
        runOnce:
          deploy:
            steps:

And then, as before, add a stepList for the release steps:

parameters:
- name: stages
  type: object
  default:
    - 'dev'
    - 'prod'
- name: buildSteps  
  type: stepList  
  default: []
- name: releaseSteps  
  type: stepList  
  default: []

stages:
- stage: build
  displayName: 'Build/Package Code or IaC'  
  jobs:  
  - job: build
    displayName: 'Build/Package Code'
    # Steps to perform the build and/or package of code or IaC
    steps: ${{ parameters.buildSteps }}

- ${{ each stage in parameters.stages }}:
  - stage: ${{ stage }}
    displayName: 'Deploy to ${{ stage }}'
    variables:
      - template: vars_${{ stage }}.yml
    jobs:
    - deployment: deploy_${{ stage }}
      displayName: 'Deploy app to ${{ stage }}'
      environment: ${{ stage }}
      strategy:
        runOnce:
          deploy:
            steps: ${{ parameters.releaseSteps }}

The next part is adding support for variable groups and/or templates. This can be achieved by adding two parameters for the name prefixes, e.g.

- name: variableGroupPrefix  
  type: string  
  default: ''  
- name: variableFilePrefix  
  type: string  
  default: ''  

There will also need to be a check so that the group and/or file is only loaded if the parameter is not an empty string ('').

parameters:
- name: stages
  type: object
  default:
    - 'dev'
    - 'prod'
- name: buildSteps  
  type: stepList  
  default: []
- name: releaseSteps  
  type: stepList  
  default: []
- name: variableGroupPrefix  
  type: string  
  default: ''  
- name: variableFilePrefix  
  type: string  
  default: ''

stages:
- stage: build
  displayName: 'Build/Package Code or IaC'  
  jobs:  
  - job: build
    displayName: 'Build/Package Code'
    # Steps to perform the build and/or package of code or IaC
    steps: ${{ parameters.buildSteps }}

- ${{ each stage in parameters.stages }}:
  - stage: ${{ stage }}
    displayName: 'Deploy to ${{ stage }}'
    variables:
      - ${{ if ne(parameters.variableGroupPrefix, '') }}:
        - group: ${{ parameters.variableGroupPrefix }}_${{ stage }}
      - ${{ if ne(parameters.variableFilePrefix, '') }}:
        - template: ${{ parameters.variableFilePrefix }}_${{ stage }}.yml
    jobs:
    - deployment: deploy_${{ stage }}
      displayName: 'Deploy app to ${{ stage }}'
      environment: ${{ stage }}
      strategy:
        runOnce:
          deploy:
            steps: ${{ parameters.releaseSteps }}

Note: If I were running this template from the same repository, loading the variable file would be fine, but when it's in a separate repository there needs to be a slight adjustment: append @self so the file loads from the calling repository instead of the remote template repository.

- template: ${{ parameters.variableFilePrefix }}_${{ stage }}.yml@self

And that is it: one base template that handles the desired configuration, ready for reuse.
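For example, a caller could load both a variable group and a variable template per stage by supplying both prefixes. A sketch reusing the groupvars/vars names from earlier; the steps are placeholders:

extends:
  template: environments.yml@templates
  parameters:
    variableGroupPrefix: 'groupvars'
    variableFilePrefix: 'vars'
    buildSteps:
    - bash: echo 'build'
      displayName: 'Build'
    releaseSteps:
    - bash: |
        echo '$(mygroupvar)'
        echo '$(myfilevar)'
      displayName: 'Deploy Steps'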

Expanding the Concept

Let's say you had a requirement to deploy multiple projects' IaC (Infrastructure as Code) and applications to multiple subscriptions and multiple regions in your Azure estate. How nice would it be to define all of that in a central configuration? Here is one possible configuration for such a requirement:

parameters:
- name: environments
  type: object
  default:
  - name: 'dev'
    subscriptions:
      - subscription: 'Dev Subscription'
        regions:
          - location: 'westus'
            locationShort: 'wus'
  - name: 'prod'
    subscriptions:
      - subscription: 'Prod Subscription'
        regions:
          - location: 'eastus'
            locationShort: 'eus'
          - location: 'westus'
            locationShort: 'wus'
- name: buildSteps
  type: stepList
  default: []
- name: releaseSteps
  type: stepList
  default: []
- name: customReleaseTemplate
  type: string
  default: ''
- name: variableGroupPrefix
  type: string
  default: ''
- name: variableFilePrefix
  type: string
  default: ''

stages:
- stage: build
  displayName: 'Build/Package Code or IaC'
  jobs:
  - job: build
    displayName: 'Build/Package Code'
    steps: ${{ parameters.buildSteps }}

- ${{ each env in parameters.environments }}:
  - stage: ${{ env.name }}
    displayName: 'Deploy to ${{ env.name }}'
    condition: succeeded()
    variables:
      - ${{ if ne(parameters.variableFilePrefix, '') }}:
        - template: ${{ parameters.variableFilePrefix }}_${{ env.name }}.yml@self
      - ${{ if ne(parameters.variableGroupPrefix, '') }}:
        - group: ${{ parameters.variableGroupPrefix }}_${{ env.name }}
    jobs:
    - ${{ each sub in env.subscriptions }}:
      - ${{ each region in sub.regions }}:
        - ${{ if ne(parameters.customReleaseTemplate, '') }}:
          - template: ${{ parameters.customReleaseTemplate }}
            parameters:
               env: ${{ env.name }}
               location: ${{ region.location }}
               locationShort: ${{ region.locationShort }}
               subscription: ${{ sub.subscription }}
        - ${{ else }}:
          - deployment: deploy_${{ region.locationShort }}
            displayName: 'Deploy app to ${{ env.name }} in ${{ region.location }}'
            environment: ${{ env.name }}_${{ region.locationShort }}
            strategy:
              runOnce:
                deploy:
                  steps:
                  - ${{ parameters.releaseSteps }}

You may notice that this configuration has an option for a custom release template, which lets you override the job(s) required; you would just need to make sure the template includes the parameters supplied from the base template:

parameters:
- name: env
  type: string
- name: location
  type: string
- name: locationShort
  type: string
- name: subscription
  type: string

Then you can add the custom jobs for a given project.
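For illustration, a minimal custom release template could define a deployment job from those parameters. This is only a sketch, with placeholder steps; the file name and step contents are up to you:

# A hypothetical custom release template referenced via customReleaseTemplate
parameters:
- name: env
  type: string
- name: location
  type: string
- name: locationShort
  type: string
- name: subscription
  type: string

jobs:
- deployment: deploy_${{ parameters.locationShort }}
  displayName: 'Custom deploy to ${{ parameters.env }} in ${{ parameters.location }}'
  environment: ${{ parameters.env }}_${{ parameters.locationShort }}
  strategy:
    runOnce:
      deploy:
        steps:
        - bash: |
            echo 'Deploying to ${{ parameters.subscription }} (${{ parameters.location }})'
          displayName: 'Deploy Steps'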

Final Thoughts

Shared templates are very powerful, and combined with the often-forgotten built-in types step, stepList, job, jobList, deployment, deploymentList, stage and stageList, they really allow for some interesting templates to be created.
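As a quick illustration (a hypothetical fragment, not from the pipelines above), a jobList parameter can be iterated just like the stages parameter used earlier, letting callers inject whole jobs:

parameters:
- name: extraJobs
  type: jobList
  default: []

jobs:
- job: base
  displayName: 'Base job'
  steps:
  - bash: echo 'base job'
# Insert any caller-supplied jobs after the base job
- ${{ each extraJob in parameters.extraJobs }}:
  - ${{ extraJob }}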

For additional information see the Azure Pipelines Parameters docs.

You are no doubt thinking this all sounds very good, but what about a real application of such a template? In the next post I will use this last template to deploy some Infrastructure as Code to Azure and then deploy an application into that infrastructure to show real usage.

Azure, IaC

Azure ACI – SonarQube

After moving into a new role I found we needed a SonarQube server to perform code analysis. I thought of looking again at using ACI (Azure Container Instances), as when I previously tried ACI with an external database I found that any version of SonarQube after 7.7 throws an error:

ERROR: [1] bootstrap checks failed
[1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]

After doing some reading and investigation I found that this is due to Elasticsearch being embedded into SonarQube. Fixing it would mean changing the host OS settings to increase the max_map_count; on a Linux OS this means updating the /etc/sysctl.conf file:

vm.max_map_count=262144
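On a host you control, the same setting can also be applied immediately, without a reboot:

sudo sysctl -w vm.max_map_count=262144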

The problem with ACI is that there is no access to the host, so how can the latest SonarQube (the latest version at the time of writing was 8.6.0) be run in ACI if this cannot be changed?

In this article I am going to detail a way of running SonarQube in ACI with an external database.

What do we need to do?

The first thing is to address the max_map_count issue; for this we need a sonar.properties file that contains the following setting:

sonar.search.javaAdditionalOpts=-Dnode.store.allow_mmap=false

This setting disables memory mapping in Elasticsearch, which is needed when running SonarQube inside containers where you cannot change the host's vm.max_map_count (see the Elasticsearch documentation).

Now we have our sonar.properties file, we need to create a custom container image so we can add it into the setup. A small Dockerfile can achieve this:

FROM sonarqube:8.6.0-community
COPY sonar.properties /opt/sonarqube/conf/sonar.properties
RUN chown sonarqube:sonarqube /opt/sonarqube/conf/sonar.properties

This Dockerfile can now be built using Docker and the image pushed to an ACR (Azure Container Registry) ready to be used. If you are not sure how to build a container and/or push to an ACR then have a look at the Docker and Microsoft documentation, which have easy-to-follow instructions.
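For reference, the build and push boils down to a few commands. A sketch, assuming the myregistry registry and my-sonar image names used later in this post:

# Authenticate against the registry, then build and push the custom image
az acr login --name myregistry
docker build -t myregistry.azurecr.io/my-sonar:latest .
docker push myregistry.azurecr.io/my-sonar:latest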

Build Infrastructure

So now that we have a container image uploaded to a container registry we can look at the rest of the configuration.

There are a number of parts to create:

  • File shares
  • External Database
  • Container Group
    • SonarQube
    • Reverse Proxy

Being a big advocate of IaC (Infrastructure as Code) I am going to use Terraform to configure the SonarQube deployment.

File Shares

The SonarQube documentation mentions setting up volume mounts for data, extensions and logs; for this we can use an Azure Storage Account and file shares.

To make sure that the storage account has a unique name, a random string is created and appended to the storage name.

resource "random_string" "random" {
  length  = 16
  special = false
  upper   = false
}

resource "azurerm_storage_account" "storage" {
  name                     = lower(substr("${var.storage_config.name}${random_string.random.result}", 0, 24))
  resource_group_name      = var.resource_group_name
  location                 = var.resource_group_location
  account_kind             = var.storage_config.kind
  account_tier             = var.storage_config.tier
  account_replication_type = var.storage_config.sku
  tags                     = var.tags
}

resource "azurerm_storage_share" "data-share" {
  name                 = "data"
  storage_account_name = azurerm_storage_account.storage.name
  quota                = var.storage_share_quota_gb.data
}

resource "azurerm_storage_share" "extensions-share" {
  name                 = "extensions"
  storage_account_name = azurerm_storage_account.storage.name
  quota                = var.storage_share_quota_gb.extensions
}

resource "azurerm_storage_share" "logs-share" {
  name                 = "logs"
  storage_account_name = azurerm_storage_account.storage.name
  quota                = var.storage_share_quota_gb.logs
}

External Database

For the external database we can use Azure SQL Server and a SQL Database, and set up a firewall rule to allow Azure services to access the database. Normally you would add specific IP addresses, but as the IP address is not guaranteed when a container is stopped and restarted, it cannot be added here. If you want to create a static IP then this article might help.

SQL Server and Firewall configuration:

resource "azurerm_sql_server" "sql" {
  name                         = lower("${var.sql_server_config.name}${random_string.random.result}")
  resource_group_name          = var.resource_group_name
  location                     = var.resource_group_location
  version                      = var.sql_server_config.version
  administrator_login          = var.sql_server_credentials.admin_username
  administrator_login_password = var.sql_server_credentials.admin_password
  tags                         = var.tags
}

resource "azurerm_sql_firewall_rule" "sqlfirewall" {
  name                = "AllowAllWindowsAzureIps"
  resource_group_name = var.resource_group_name
  server_name         = azurerm_sql_server.sql.name
  start_ip_address    = "0.0.0.0"
  end_ip_address      = "0.0.0.0"
}

For the database we can use the serverless tier, which provides scaling when needed; note that sku_name is constructed from the sku and max CPU capacity, so with the defaults below it resolves to GP_S_Gen5_1. Check out the Microsoft Docs for more information.

# SQL Database
resource "azurerm_mssql_database" "sqldb" {
  name                        = var.sql_database_config.name
  server_id                   = azurerm_sql_server.sql.id
  collation                   = "SQL_Latin1_General_CP1_CS_AS"
  license_type                = "LicenseIncluded"
  max_size_gb                 = var.sql_database_config.max_db_size_gb
  min_capacity                = var.sql_database_config.min_cpu_capacity
  read_scale                  = false
  sku_name                    = "${var.sql_database_config.sku}_${var.sql_database_config.max_cpu_capacity}"
  zone_redundant              = false
  auto_pause_delay_in_minutes = var.sql_database_config.auto_pause_delay_in_minutes
  tags                        = var.tags
}

Container Group

Setting up the container group requires credentials to access the Azure Container Registry that hosts the custom SonarQube container. Using a data resource allows retrieval of the details without passing them in as variables:

data "azurerm_container_registry" "registry" {
  name                = var.container_registry_config.name
  resource_group_name = var.container_registry_config.resource_group
}

For this setup we are going to have two containers: the custom SonarQube container and a Caddy container. Caddy can be used as a reverse proxy; it is small, lightweight, and manages certificates automatically with Let's Encrypt. Note: there are some rate limits with Let's Encrypt; see the website for more information.

The SonarQube container configuration connects the SQL Database and Azure Storage Account Shares configured earlier.

The Caddy container configuration sets up the reverse proxy to the SonarQube instance.

resource "azurerm_container_group" "container" {
  name                = var.sonar_config.container_group_name
  resource_group_name = var.resource_group_name
  location            = var.resource_group_location
  ip_address_type     = "public"
  dns_name_label      = var.sonar_config.dns_name
  os_type             = "Linux"
  restart_policy      = "OnFailure"
  tags                = var.tags
  
  image_registry_credential {
      server = data.azurerm_container_registry.registry.login_server
      username = data.azurerm_container_registry.registry.admin_username
      password = data.azurerm_container_registry.registry.admin_password
  }

  container {
    name   = "sonarqube-server"
    image  = "${data.azurerm_container_registry.registry.login_server}/${var.sonar_config.image_name}"
    cpu    = var.sonar_config.required_vcpu
    memory = var.sonar_config.required_memory_in_gb
    environment_variables = {
      WEBSITES_CONTAINER_START_TIME_LIMIT = 400
    }    
    secure_environment_variables = {
      SONARQUBE_JDBC_URL      = "jdbc:sqlserver://${azurerm_sql_server.sql.name}.database.windows.net:1433;database=${azurerm_mssql_database.sqldb.name};user=${azurerm_sql_server.sql.administrator_login}@${azurerm_sql_server.sql.name};password=${azurerm_sql_server.sql.administrator_login_password};encrypt=true;trustServerCertificate=false;hostNameInCertificate=*.database.windows.net;loginTimeout=30;"
      SONARQUBE_JDBC_USERNAME = var.sql_server_credentials.admin_username
      SONARQUBE_JDBC_PASSWORD = var.sql_server_credentials.admin_password
    }

    ports {
      port     = 9000
      protocol = "TCP"
    }

    volume {
      name                 = "data"
      mount_path           = "/opt/sonarqube/data"
      share_name           = "data"
      storage_account_name = azurerm_storage_account.storage.name
      storage_account_key  = azurerm_storage_account.storage.primary_access_key
    }

    volume {
      name                 = "extensions"
      mount_path           = "/opt/sonarqube/extensions"
      share_name           = "extensions"
      storage_account_name = azurerm_storage_account.storage.name
      storage_account_key  = azurerm_storage_account.storage.primary_access_key
    }

    volume {
      name                 = "logs"
      mount_path           = "/opt/sonarqube/logs"
      share_name           = "logs"
      storage_account_name = azurerm_storage_account.storage.name
      storage_account_key  = azurerm_storage_account.storage.primary_access_key
    }   
  }

  container {
    name     = "caddy-ssl-server"
    image    = "caddy:latest"
    cpu      = "1"
    memory   = "1"
    commands = ["caddy", "reverse-proxy", "--from", "${var.sonar_config.dns_name}.${var.resource_group_location}.azurecontainer.io", "--to", "localhost:9000"]

    ports {
      port     = 443
      protocol = "TCP"
    }

    ports {
      port     = 80
      protocol = "TCP"
    }
  }
}

You have no doubt noticed that there are many variables used in the configuration, so here they all are, along with their defaults:

variable "resource_group_name" {
  type = string
  description = "(Required) Resource Group to deploy to"
}

variable "resource_group_location" {
  type = string
  description = "(Required) Resource Group location"
}

variable "tags" {
  description = "(Required) Tags for SonarQube"
}

variable "container_registry_config" {
    type = object({
        name           = string
        resource_group = string
    })
    description = "(Required) Container Registry Configuration"
}

variable "sonar_config" {
    type = object({
        image_name            = string
        container_group_name  = string
        dns_name              = string
        required_memory_in_gb = string
        required_vcpu         = string
    })

    description = "(Required) SonarQube Configuration"
}

variable "sql_server_credentials" {
    type = object({
        admin_username = string
        admin_password = string
    })
    sensitive = true
}

variable "sql_database_config" {
    type = object({
        name                        = string
        sku                         = string
        auto_pause_delay_in_minutes = number
        min_cpu_capacity            = number
        max_cpu_capacity            = number
        max_db_size_gb              = number
    })
    default = {
        name                        = "sonarqubedb"
        sku                         = "GP_S_Gen5"
        auto_pause_delay_in_minutes = 60
        min_cpu_capacity            = 0.5
        max_cpu_capacity            = 1
        max_db_size_gb              = 50
    }
}

variable "sql_server_config" {
   type = object({
        name    = string
        version = string
   })
   default = {
       name    = "sql-sonarqube"
       version = "12.0"
   }
}

variable "storage_share_quota_gb" {
  type = object({
    data       = number
    extensions = number
    logs       = number
  })
  default = {
      data       = 10
      extensions = 10
      logs       = 10
  }
}

variable "storage_config" {
    type = object({
        name = string
        kind = string
        sku  = string        
        tier = string
    })
    default = {
        name = "sonarqubestore"
        kind = "StorageV2"
        sku  = "LRS"
        tier = "Standard"
    }
}

To make this easy to configure I added all of this to a Terraform module; the main Terraform file would then be something like:

terraform {  
  required_version = ">= 0.14"
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "=2.37.0"
    }
  }
}

provider "azurerm" {  
  features {}
}

# Create a resource group
resource "azurerm_resource_group" "instance" {
  name     = "test-sonar"
  location = "uksouth"
}

# Generate Password
resource "random_password" "password" {
  length = 24
  special = true
  override_special = "_%@"
}

# Module
module "sonarqube" {
    depends_on                        = [azurerm_resource_group.instance]
    source                            = "./modules/sonarqube"
    tags                              = { Project = "Sonar", Environment = "Dev" }
    resource_group_name               = azurerm_resource_group.instance.name
    resource_group_location           = azurerm_resource_group.instance.location
    
    sql_server_credentials            = {
        admin_username = "sonaradmin"
        admin_password = random_password.password.result
    }

    container_registry_config         = {
        name           = "myregistry"
        resource_group = "my-registry-rg"
    }

    sonar_config                      = {
        container_group_name  = "sonarqubecontainer"
        required_memory_in_gb = "4"
        required_vcpu         = "2"
        image_name            = "my-sonar:latest"
        dns_name              = "my-custom-sonar"
    }

    sql_server_config                = {
       name    = "sql-sonarqube"
       version = "12.0"
    }

    sql_database_config              = {
        name                        = "sonarqubedb"
        sku                         = "GP_S_Gen5"
        auto_pause_delay_in_minutes = 60
        min_cpu_capacity            = 0.5
        max_cpu_capacity            = 2
        max_db_size_gb              = 250
    }

    storage_share_quota_gb            = {  
        data       = 50
        extensions = 10
        logs       = 20
    }
}
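From the folder containing this file, the standard Terraform workflow brings everything up:

terraform init
terraform plan -out sonar.tfplan
terraform apply sonar.tfplan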

By using the random_password resource to create the SQL password, no secrets are included in the configuration and nobody needs to know the password; only the SonarQube server does.
The full code used here can be found in my GitHub repo.

I am sure there are still improvements that could be made to this setup but hopefully it will help anyone wanting to use ACI for running a SonarQube server.

Next Steps

Once the container instance is running you might not want it running 24/7, so using an Azure Function or Logic App to stop and start the instance when it's not needed will definitely save money. I plan to run Azure Functions to start the container at 08:00 and stop it at 18:00, Monday to Friday.
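Whichever scheduler you choose, the underlying operations are simple; using the example names from the module configuration above:

# Stop the container group out of hours, start it again when needed
az container stop --resource-group test-sonar --name sonarqubecontainer
az container start --resource-group test-sonar --name sonarqubecontainer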

As this setup is public, a version that uses your own network and is private might be a good next step.