Azure, Azure Functions

Migrate Azure Functions from .NET Core 3.1 to .NET 5

Migrating Azure Functions v3 from .NET Core 3.1 to .NET 5 (isolated) requires a number of changes. In this post I am going to walk through the steps I took to upgrade an Azure Function.

The Microsoft Docs and the examples on Microsoft’s GitHub are well worth looking at as they give more details about what has changed.

The function that I am going to convert is a very simple one: it uses a Timer Trigger to check for messages every hour, a blob to track when messages were last processed, and an Event Grid topic to publish the messages to.

Here is the function code:

using Microsoft.Azure.EventGrid.Models;
using Microsoft.Azure.Storage.Blob;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.EventGrid;
using Microsoft.Extensions.Logging;
using System;
using System.IO;
using System.Linq;
using System.Text.Json;
using System.Threading.Tasks;
using JsonSerializer = System.Text.Json.JsonSerializer;

namespace MessagePoller
{
    public class MessageFunction
    {
        private readonly IMessageProcessor _messageProcessor;

        public MessageFunction(IMessageProcessor messageProcessor)
        {
            _messageProcessor = messageProcessor;
        }

        [FunctionName("MessageFunction")]
        public async Task RunAsync([TimerTrigger("0 0 * * * *")] TimerInfo timerInfo,
            [Blob("tracking/trackingdate.json", FileAccess.ReadWrite, Connection = "StorageConnection")] ICloudBlob trackingBlob,
            [EventGrid(TopicEndpointUri = "MessageEventGridEndpoint", TopicKeySetting = "MessageEventGridTopicKey")] IAsyncCollector<EventGridEvent> outputEvents,
            ILogger log)
        {
            log.LogInformation($"C# Timer trigger function executed at: {DateTime.Now}");

            await using var stream = await trackingBlob.OpenReadAsync().ConfigureAwait(false);
            var tracking = await JsonSerializer.DeserializeAsync<TrackingObject>(stream, new JsonSerializerOptions
            {
                PropertyNameCaseInsensitive = true
            }).ConfigureAwait(false);

            var messages = await _messageProcessor.GetMessagesAsync(tracking).ConfigureAwait(false);
            await foreach (var message in messages.ToAsyncEnumerable())
            {
                var currentEvent = new EventGridEvent
                {
                    Id = Guid.NewGuid().ToString("D"),
                    Subject = "Message event",
                    EventTime = DateTime.UtcNow,
                    EventType = "Message",
                    Data = message,
                    DataVersion = "1"
                };

                await outputEvents.AddAsync(currentEvent).ConfigureAwait(false);
            }

            var bytes = JsonSerializer.SerializeToUtf8Bytes(new TrackingObject { ModificationDate = DateTime.UtcNow }, new JsonSerializerOptions
            {
                WriteIndented = true,
                PropertyNamingPolicy = JsonNamingPolicy.CamelCase
            });

            await trackingBlob.UploadFromByteArrayAsync(bytes, 0, bytes.Length).ConfigureAwait(false);
        }
    }
}
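
Two types referenced above, TrackingObject and IMessageProcessor, are not shown in the post. Judging by how they are used, they are presumably simple shapes along these lines (an assumption, included here only to make the snippets easier to follow):

using System;
using System.Collections.Generic;
using System.Threading.Tasks;

namespace MessagePoller
{
    // Assumed shape: the blob simply tracks when messages were last processed.
    public class TrackingObject
    {
        public DateTime ModificationDate { get; set; }
    }

    // Assumed shape: returns the messages that arrived since the tracked date.
    public interface IMessageProcessor
    {
        Task<IEnumerable<object>> GetMessagesAsync(TrackingObject tracking);
    }
}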

Step 1 – Update Project Files

The current .NET Core 3.1 project file looks like this:

<Project Sdk="Microsoft.NET.Sdk">
	<PropertyGroup>
		<TargetFramework>netcoreapp3.1</TargetFramework>
		<AzureFunctionsVersion>v3</AzureFunctionsVersion>
	</PropertyGroup>
	<ItemGroup>
		<PackageReference Include="Microsoft.Extensions.DependencyInjection" Version="3.1.16" />
		<PackageReference Include="Microsoft.Azure.Functions.Extensions" Version="1.1.0" />
		<PackageReference Include="Microsoft.NET.Sdk.Functions" Version="3.0.13" />
		<PackageReference Include="Microsoft.Azure.EventGrid" Version="3.2.0" />
		<PackageReference Include="Microsoft.Azure.WebJobs.Extensions.EventGrid" Version="2.1.0" />
		<PackageReference Include="Microsoft.Azure.WebJobs.Extensions.Storage" Version="4.0.4" />
        <PackageReference Include="System.Linq.Async" Version="5.0.0" />
	</ItemGroup>
	<ItemGroup>
		<None Update="host.json">
			<CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
		</None>
		<None Update="local.settings.json">
			<CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
			<CopyToPublishDirectory>Never</CopyToPublishDirectory>
		</None>
	</ItemGroup>
</Project>

In order to update it to use .NET 5 we need to:

  • Update the target framework to net5.0
  • Add the OutputType of Exe
  • Upgrade Microsoft.Extensions.DependencyInjection to 5.x
  • Remove Packages
    • Microsoft.Azure.Functions.Extensions
    • Microsoft.NET.Sdk.Functions
    • Microsoft.Azure.WebJobs.Extensions.EventGrid
    • Microsoft.Azure.WebJobs.Extensions.Storage
  • Add Packages
    • Microsoft.Azure.Functions.Worker
    • Microsoft.Azure.Functions.Worker.Sdk
    • Microsoft.Azure.Functions.Worker.Extensions.EventGrid
    • Microsoft.Azure.Functions.Worker.Extensions.Storage
    • Microsoft.Azure.Functions.Worker.Extensions.Timer

After changing those, the csproj file now looks like this:

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>net5.0</TargetFramework>
    <AzureFunctionsVersion>v3</AzureFunctionsVersion>
    <OutputType>Exe</OutputType>
  </PropertyGroup>
  <ItemGroup>
  <PackageReference Include="Microsoft.Azure.Functions.Worker" Version="1.3.0" />
    <PackageReference Include="Microsoft.Azure.Functions.Worker.Sdk" Version="1.0.3" />
    <PackageReference Include="Microsoft.Extensions.DependencyInjection" Version="5.0.1" />
    <PackageReference Include="Microsoft.Azure.EventGrid" Version="3.2.0" />
    <PackageReference Include="Microsoft.Azure.Functions.Worker.Extensions.EventGrid" Version="2.1.0" />
    <PackageReference Include="Microsoft.Azure.Functions.Worker.Extensions.Storage" Version="4.0.4" />
    <PackageReference Include="Microsoft.Azure.Functions.Worker.Extensions.Timer" Version="4.1.0" />
    <PackageReference Include="System.Linq.Async" Version="5.0.0" />
  </ItemGroup>
  <ItemGroup>
    <None Update="host.json">
      <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
    </None>
    <None Update="local.settings.json">
      <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
      <CopyToPublishDirectory>Never</CopyToPublishDirectory>
    </None>
  </ItemGroup>
</Project>

NOTE: Microsoft.Azure.Functions.Worker.Extensions.Timer has been added because the Timer trigger now ships as a separate package.

Step 2 – Create Entry Point

First, a .NET 5 (isolated) function app needs a Program.cs file to act as the entry point:

using Microsoft.Extensions.Hosting;

namespace MessagePoller
{
    public class Program
    {
        public static void Main()
        {
            var host = new HostBuilder()
                .ConfigureFunctionsWorkerDefaults()
                .Build();

            host.Run();
        }
    }
}

The function uses Dependency Injection and the dependencies are currently in Startup.cs and look like this:

using Microsoft.Azure.Functions.Extensions.DependencyInjection;
using Microsoft.Extensions.DependencyInjection;

[assembly: FunctionsStartup(typeof(MessagePoller.Startup))]

namespace MessagePoller
{
    public class Startup : FunctionsStartup
    {
        public override void Configure(IFunctionsHostBuilder builder)
        {
           builder.Services.AddTransient<IMessageProcessor, MessageProcessor>();
        }
    }
}

This configuration needs moving to Program.cs in ConfigureServices:

using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

namespace MessagePoller
{
    public class Program
    {
        public static void Main()
        {
            var host = new HostBuilder()
                .ConfigureFunctionsWorkerDefaults()
                .ConfigureServices(s =>
                {
                    s.AddTransient<IMessageProcessor, MessageProcessor>();
                })
                .Build();

            host.Run();
        }
    }
}

Now Startup.cs can be removed as it is no longer required.

Step 3 – Update Local Settings

The local.settings.json file needs updating to change the Functions runtime to dotnet-isolated:

"FUNCTIONS_WORKER_RUNTIME": "dotnet"

becomes

"FUNCTIONS_WORKER_RUNTIME": "dotnet-isolated"
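
For reference, a minimal local.settings.json for this function might then look like this, with placeholder values for the settings the bindings reference:

{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet-isolated",
    "StorageConnection": "<storage-connection-string>",
    "MessageEventGridEndpoint": "<topic-endpoint-uri>",
    "MessageEventGridTopicKey": "<topic-key>"
  }
}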

Step 4 – Code Changes

The first thing to change is the FunctionName attribute, which becomes Function.

e.g.

[FunctionName("MessageFunction")]

becomes

[Function("MessageFunction")]

The next thing to change is ILogger. It is no longer passed to the function method; instead, a FunctionContext is passed, and the ILogger instance is retrieved from the context.

e.g.

[Function("MessageFunction")]
public async Task<MessageOutputType> RunAsync([TimerTrigger("0 0 * * * *")] TimerInfo timerInfo,
       ...,
       FunctionContext executionContext)
{
    // Call to get the logger
    var log = executionContext.GetLogger("MessageFunction");
    ...
}

Now on to the function itself. The code uses ICloudBlob and IAsyncCollector<EventGridEvent>, neither of which is supported in the isolated model. Because the isolated worker runs in a separate process and exchanges binding data with the Functions host, bindings are limited to simple serializable types (strings, byte arrays and POCOs) rather than SDK types; see the note in the Microsoft Docs on the limitations of binding classes.

The code also uses Blob as both an input and an output binding, which now needs splitting into BlobInput and BlobOutput.

So if we change the current function declaration

[FunctionName("MessageFunction")]
public async Task RunAsync([TimerTrigger("0 0 * * * *")] TimerInfo timerInfo,
       [Blob("tracking/trackingdate.json", FileAccess.ReadWrite, Connection = "StorageConnection")] ICloudBlob trackingBlob,
       [EventGrid(TopicEndpointUri = "MessageEventGridEndpoint", TopicKeySetting = "MessageEventGridTopicKey")] IAsyncCollector<EventGridEvent> outputEvents,
       ILogger log)
{
    ...
}

to add the outputs (Blob and Event Grid), and change ICloudBlob to something supported, such as the class to deserialize the blob into, we end up with this:

[Function("MessageFunction")]
[EventGridOutput(TopicEndpointUri = "MessageEventGridEndpoint", TopicKeySetting = "MessageEventGridTopicKey")]
[BlobOutput("tracking/trackingdate.json", Connection = "StorageConnection")]
public async Task RunAsync([TimerTrigger("0 0 * * * *")] TimerInfo timerInfo,
       [BlobInput("tracking/trackingdate.json", Connection = "StorageConnection")] TrackingObject tracking,
       FunctionContext executionContext)
{
    ...
}

This, of course, will not work: in the isolated model a method-level output binding attribute applies to the function's return value, and only one can be used that way.

To solve this we create a custom return type that carries both output bindings, and use it as the function's return type:

public class MessageOutputType
{
    [EventGridOutput(TopicEndpointUri = "MessageEventGridEndpoint", TopicKeySetting = "MessageEventGridTopicKey")]
    public IList<EventGridEvent> Events { get; set; }

    [BlobOutput("tracking/trackingdate.json", Connection = "StorageConnection")]
    public TrackingObject Tracking { get; set; }
}

[Function("MessageFunction")]
public async Task<MessageOutputType> RunAsync([TimerTrigger("0 0 * * * *")] TimerInfo timerInfo,
       [BlobInput("tracking/trackingdate.json", Connection = "StorageConnection")] TrackingObject tracking,
       FunctionContext executionContext)
{
    ...
}

Now that all the code has been converted, the final result looks like this:

using Microsoft.Azure.EventGrid.Models;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Logging;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

namespace MessagePoller
{
    public class MessageFunction
    {
        private readonly IMessageProcessor _messageProcessor;

        public MessageFunction(IMessageProcessor messageProcessor)
        {
            _messageProcessor = messageProcessor;
        }

        [Function("MessageFunction")]
        public async Task<MessageOutputType> RunAsync([TimerTrigger("0 0 * * * *")] TimerInfo timerInfo,
            [BlobInput("tracking/trackingdate.json", Connection = "StorageConnection")] TrackingObject trackingBlob,
            FunctionContext executionContext)
        {
            var log = executionContext.GetLogger("MessageFunction");
            log.LogInformation($"C# Timer trigger function executed at: {DateTime.Now}");

            var messages = await _messageProcessor.GetMessagesAsync(trackingBlob).ConfigureAwait(false);

            var eventMessages = await messages.ToAsyncEnumerable()
                .Select(message => new EventGridEvent
                {
                    Id = Guid.NewGuid().ToString("D"),
                    Subject = "Message event",
                    EventTime = DateTime.UtcNow,
                    EventType = "Message",
                    Data = message,
                    DataVersion = "1"
                })
                .ToListAsync().ConfigureAwait(false);

            return new MessageOutputType
            {
                Events = eventMessages,
                Tracking = new TrackingObject { ModificationDate = DateTime.UtcNow }
            };
        }
    }

    public class MessageOutputType
    {
        [EventGridOutput(TopicEndpointUri = "MessageEventGridEndpoint", TopicKeySetting = "MessageEventGridTopicKey")]
        public IList<EventGridEvent> Events { get; set; }

        [BlobOutput("tracking/trackingdate.json", Connection = "StorageConnection")]
        public TrackingObject Tracking { get; set; }
    }
}

Conclusion

Upgrading this particular function was surprisingly easy despite needing multiple output bindings. The Microsoft Docs and GitHub examples were extremely helpful in understanding the changes for each binding.

I have to say I like the multiple-output return type, as well as the binding attributes being explicit about whether they are input or output.

I deployed my new function from Visual Studio 2019 and it worked as expected. Now it is time to update the IaC (Infrastructure as Code) and deploy using my pipeline; hopefully there will be no surprises there.

I hope functions with other bindings are as easy to convert, and I can't wait to see what other changes come to Azure Functions with the isolated process, .NET 6 and Functions v4.

Happy Azure Functions upgrading!!

Azure, IaC

Azure ACI – SonarQube

After moving into a new role I found we needed a SonarQube server to perform code analysis. I thought I would look again at ACI (Azure Container Instances), because when I previously tried ACI with an external database I found that any version of SonarQube after 7.7 throws an error:

ERROR: [1] bootstrap checks failed
[1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]

After some reading and investigation I found that this is due to Elasticsearch being embedded in SonarQube. Fixing it would mean changing the host OS settings to increase max_map_count; on Linux this means updating /etc/sysctl.conf:

vm.max_map_count=262144

The problem with ACI is that there is no access to the host, so how can the latest SonarQube (8.6.0 at the time of writing) be run in ACI if this setting cannot be changed?

In this article I am going to detail a way of running SonarQube in ACI with an external database.

What do we need to do?

The first thing is to address the max_map_count issue. For this we need a sonar.properties file that contains the following setting:

sonar.search.javaAdditionalOpts=-Dnode.store.allow_mmap=false

This setting disables memory mapping in Elasticsearch, which is needed when running SonarQube inside a container where you cannot change the host's vm.max_map_count (see the Elasticsearch documentation).

Now that we have a sonar.properties file, we need a custom container image that includes it. A small Dockerfile achieves this:

FROM sonarqube:8.6.0-community
COPY sonar.properties /opt/sonarqube/conf/sonar.properties
RUN chown sonarqube:sonarqube /opt/sonarqube/conf/sonar.properties

This image can now be built with Docker and pushed to an ACR (Azure Container Registry), ready to be used. If you are not sure how to build a container image or push it to an ACR, have a look at the Docker and Microsoft documentation, which have easy-to-follow instructions.

Build Infrastructure

Now that we have an image in a container registry, we can look at the rest of the configuration.

There are a number of parts to create:

  • File shares
  • External Database
  • Container Group
    • SonarQube
    • Reverse Proxy

Being a big advocate of IaC (Infrastructure as Code) I am going to use Terraform to configure the SonarQube deployment.

File Shares

The SonarQube documentation mentions setting up volume mounts for data, extensions and logs; for these we can use an Azure Storage Account and file shares.

To make sure the storage account has a unique name, a random string is generated and appended to the storage name.

resource "random_string" "random" {
  length  = 16
  special = false
  upper   = false
}

resource "azurerm_storage_account" "storage" {
  name                     = lower(substr("${var.storage_config.name}${random_string.random.result}", 0, 24))
  resource_group_name      = var.resource_group_name
  location                 = var.resource_group_location
  account_kind             = var.storage_config.kind
  account_tier             = var.storage_config.tier
  account_replication_type = var.storage_config.sku
  tags                     = var.tags
}

resource "azurerm_storage_share" "data-share" {
  name                 = "data"
  storage_account_name = azurerm_storage_account.storage.name
  quota                = var.storage_share_quota_gb.data
}

resource "azurerm_storage_share" "extensions-share" {
  name                 = "extensions"
  storage_account_name = azurerm_storage_account.storage.name
  quota                = var.storage_share_quota_gb.extensions
}

resource "azurerm_storage_share" "logs-share" {
  name                 = "logs"
  storage_account_name = azurerm_storage_account.storage.name
  quota                = var.storage_share_quota_gb.logs
}

External Database

For the external database we can use an Azure SQL Server and SQL Database, with a firewall rule that allows Azure services access. Normally you would add specific IP addresses, but as the container's IP address is not guaranteed to be the same after a stop and restart, that is not possible here. If you want to create a static IP then this article might help.

SQL Server and Firewall configuration:

resource "azurerm_sql_server" "sql" {
  name                         = lower("${var.sql_server_config.name}${random_string.random.result}")
  resource_group_name          = var.resource_group_name
  location                     = var.resource_group_location
  version                      = var.sql_server_config.version
  administrator_login          = var.sql_server_credentials.admin_username
  administrator_login_password = var.sql_server_credentials.admin_password
  tags                         = var.tags
}

resource "azurerm_sql_firewall_rule" "sqlfirewall" {
  name                = "AllowAllWindowsAzureIps"
  resource_group_name = var.resource_group_name
  server_name         = azurerm_sql_server.sql.name
  start_ip_address    = "0.0.0.0"
  end_ip_address      = "0.0.0.0"
}

For the database we can use the serverless tier, which provides scaling when needed. Check out the Microsoft Docs for more information.

# SQL Database
resource "azurerm_mssql_database" "sqldb" {
  name                        = var.sql_database_config.name
  server_id                   = azurerm_sql_server.sql.id
  collation                   = "SQL_Latin1_General_CP1_CS_AS"
  license_type                = "LicenseIncluded"
  max_size_gb                 = var.sql_database_config.max_db_size_gb
  min_capacity                = var.sql_database_config.min_cpu_capacity
  read_scale                  = false
  sku_name                    = "${var.sql_database_config.sku}_${var.sql_database_config.max_cpu_capacity}"
  zone_redundant              = false
  auto_pause_delay_in_minutes = var.sql_database_config.auto_pause_delay_in_minutes
  tags                        = var.tags
}

Container Group

Setting up the container group requires credentials to access the Azure Container Registry that holds the custom SonarQube image. Using a data resource lets us retrieve those details without passing them in as variables:

data "azurerm_container_registry" "registry" {
  name                = var.container_registry_config.name
  resource_group_name = var.container_registry_config.resource_group
}

For this setup we are going to have two containers: the custom SonarQube container and a Caddy container. Caddy can act as a reverse proxy; it is small, lightweight and manages certificates automatically with Let's Encrypt. Note: Let's Encrypt has rate limits, so see its website for more information.

The SonarQube container configuration connects to the SQL database and mounts the storage shares configured earlier.

The Caddy container configuration sets up the reverse proxy to the SonarQube instance.

resource "azurerm_container_group" "container" {
  name                = var.sonar_config.container_group_name
  resource_group_name = var.resource_group_name
  location            = var.resource_group_location
  ip_address_type     = "public"
  dns_name_label      = var.sonar_config.dns_name
  os_type             = "Linux"
  restart_policy      = "OnFailure"
  tags                = var.tags
  
  image_registry_credential {
      server = data.azurerm_container_registry.registry.login_server
      username = data.azurerm_container_registry.registry.admin_username
      password = data.azurerm_container_registry.registry.admin_password
  }

  container {
    name   = "sonarqube-server"
    image  = "${data.azurerm_container_registry.registry.login_server}/${var.sonar_config.image_name}"
    cpu    = var.sonar_config.required_vcpu
    memory = var.sonar_config.required_memory_in_gb
    environment_variables = {
      WEBSITES_CONTAINER_START_TIME_LIMIT = 400
    }    
    secure_environment_variables = {
      SONARQUBE_JDBC_URL      = "jdbc:sqlserver://${azurerm_sql_server.sql.name}.database.windows.net:1433;database=${azurerm_mssql_database.sqldb.name};user=${azurerm_sql_server.sql.administrator_login}@${azurerm_sql_server.sql.name};password=${azurerm_sql_server.sql.administrator_login_password};encrypt=true;trustServerCertificate=false;hostNameInCertificate=*.database.windows.net;loginTimeout=30;"
      SONARQUBE_JDBC_USERNAME = var.sql_server_credentials.admin_username
      SONARQUBE_JDBC_PASSWORD = var.sql_server_credentials.admin_password
    }

    ports {
      port     = 9000
      protocol = "TCP"
    }

    volume {
      name                 = "data"
      mount_path           = "/opt/sonarqube/data"
      share_name           = "data"
      storage_account_name = azurerm_storage_account.storage.name
      storage_account_key  = azurerm_storage_account.storage.primary_access_key
    }

    volume {
      name                 = "extensions"
      mount_path           = "/opt/sonarqube/extensions"
      share_name           = "extensions"
      storage_account_name = azurerm_storage_account.storage.name
      storage_account_key  = azurerm_storage_account.storage.primary_access_key
    }

    volume {
      name                 = "logs"
      mount_path           = "/opt/sonarqube/logs"
      share_name           = "logs"
      storage_account_name = azurerm_storage_account.storage.name
      storage_account_key  = azurerm_storage_account.storage.primary_access_key
    }   
  }

  container {
    name     = "caddy-ssl-server"
    image    = "caddy:latest"
    cpu      = "1"
    memory   = "1"
    commands = ["caddy", "reverse-proxy", "--from", "${var.sonar_config.dns_name}.${var.resource_group_location}.azurecontainer.io", "--to", "localhost:9000"]

    ports {
      port     = 443
      protocol = "TCP"
    }

    ports {
      port     = 80
      protocol = "TCP"
    }
  }
}

You have no doubt noticed that the configuration uses many variables, so here they all are, along with their defaults:

variable "resource_group_name" {
  type = string
  description = "(Required) Resource Group to deploy to"
}

variable "resource_group_location" {
  type = string
  description = "(Required) Resource Group location"
}

variable "tags" {
  description = "(Required) Tags for SonarQube"
}

variable "container_registry_config" {
    type = object({
        name           = string
        resource_group = string
    })
    description = "(Required) Container Registry Configuration"
}

variable "sonar_config" {
    type = object({
        image_name            = string
        container_group_name  = string
        dns_name              = string
        required_memory_in_gb = string
        required_vcpu         = string
    })

    description = "(Required) SonarQube Configuration"
}

variable "sql_server_credentials" {
    type = object({
        admin_username = string
        admin_password = string
    })
    sensitive = true
}

variable "sql_database_config" {
    type = object({
        name                        = string
        sku                         = string
        auto_pause_delay_in_minutes = number
        min_cpu_capacity            = number
        max_cpu_capacity            = number
        max_db_size_gb              = number
    })
    default = {
        name                        = "sonarqubedb"
        sku                         = "GP_S_Gen5"
        auto_pause_delay_in_minutes = 60
        min_cpu_capacity            = 0.5
        max_cpu_capacity            = 1
        max_db_size_gb              = 50
    }
}

variable "sql_server_config" {
   type = object({
        name    = string
        version = string
   })
   default = {
       name    = "sql-sonarqube"
       version = "12.0"
   }
}

variable "storage_share_quota_gb" {
  type = object({
    data       = number
    extensions = number
    logs       = number
  })
  default = {
      data       = 10
      extensions = 10
      logs       = 10
  }
}

variable "storage_config" {
    type = object({
        name = string
        kind = string
        sku  = string        
        tier = string
    })
    default = {
        name = "sonarqubestore"
        kind = "StorageV2"
        sku  = "LRS"
        tier = "Standard"
    }
}

To make this easy to configure, I added all of this to a Terraform module, so the main Terraform file is then something like:

terraform {  
  required_version = ">= 0.14"
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "=2.37.0"
    }
  }
}

provider "azurerm" {  
  features {}
}

# Create a resource group
resource "azurerm_resource_group" "instance" {
  name     = "test-sonar"
  location = "uksouth"
}

# Generate Password
resource "random_password" "password" {
  length = 24
  special = true
  override_special = "_%@"
}

# Module
module "sonarqube" {
    depends_on                        = [azurerm_resource_group.instance]
    source                            = "./modules/sonarqube"
    tags                              = { Project = "Sonar", Environment = "Dev" }
    resource_group_name               = azurerm_resource_group.instance.name
    resource_group_location           = azurerm_resource_group.instance.location
    
    sql_server_credentials            = {
        admin_username = "sonaradmin"
        admin_password = random_password.password.result
    }

    container_registry_config         = {
        name           = "myregistry"
        resource_group = "my-registry-rg"
    }

    sonar_config                      = {
        container_group_name  = "sonarqubecontainer"
        required_memory_in_gb = "4"
        required_vcpu         = "2"
        image_name            = "my-sonar:latest"
        dns_name              = "my-custom-sonar"
    }

    sql_server_config                = {
       name    = "sql-sonarqube"
       version = "12.0"
    }

    sql_database_config              = {
        name                        = "sonarqubedb"
        sku                         = "GP_S_Gen5"
        auto_pause_delay_in_minutes = 60
        min_cpu_capacity            = 0.5
        max_cpu_capacity            = 2
        max_db_size_gb              = 250
    }

    storage_share_quota_gb            = {  
        data       = 50
        extensions = 10
        logs       = 20
    }
}

By using the random_password resource to create the SQL password, no secrets appear in the configuration, and nobody needs to know the password as long as the SonarQube server does.
The full code used here can be found in my GitHub repo.

I am sure there are still improvements that could be made to this setup, but hopefully it will help anyone wanting to run a SonarQube server on ACI.

Next Steps

Once the container instance is running you might not want it running 24/7, so using an Azure Function or Logic App to stop and start the instance when it's not needed will definitely save money. I plan to use Azure Functions to start the container at 08:00 and stop it at 18:00, Monday to Friday.
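
As a sketch of that plan (an assumption, not part of the original setup): two timer-triggered isolated functions using the Azure.ResourceManager.ContainerInstance SDK, with the function app's identity granted rights on the container group. The subscription, resource group and container group names are placeholders.

using System.Threading.Tasks;
using Azure;
using Azure.Identity;
using Azure.ResourceManager;
using Azure.ResourceManager.ContainerInstance;
using Microsoft.Azure.Functions.Worker;

public class SonarQubeScheduleFunctions
{
    // Builds a handle to the container group; all names here are placeholders.
    private static ContainerGroupResource GetContainerGroup()
    {
        var client = new ArmClient(new DefaultAzureCredential());
        var id = ContainerGroupResource.CreateResourceIdentifier(
            "<subscription-id>", "test-sonar", "sonarqubecontainer");
        return client.GetContainerGroupResource(id);
    }

    // 08:00 Monday to Friday
    [Function("StartSonarQube")]
    public Task StartAsync([TimerTrigger("0 0 8 * * 1-5")] TimerInfo timerInfo)
        => GetContainerGroup().StartAsync(WaitUntil.Completed);

    // 18:00 Monday to Friday
    [Function("StopSonarQube")]
    public Task StopAsync([TimerTrigger("0 0 18 * * 1-5")] TimerInfo timerInfo)
        => GetContainerGroup().StopAsync();
}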

As this setup is public, a private version that runs inside your own network might be a good next step, sketched below.
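
One hedged sketch of that direction, assuming the azurerm 2.x network profile API and an existing subnet delegated to Microsoft.ContainerInstance/containerGroups (the subnet resource here is a placeholder):

resource "azurerm_network_profile" "sonar" {
  name                = "sonar-net-profile"
  location            = var.resource_group_location
  resource_group_name = var.resource_group_name

  container_network_interface {
    name = "sonar-nic"

    ip_configuration {
      name      = "sonar-ipconfig"
      subnet_id = azurerm_subnet.sonar.id # assumed pre-existing, delegated subnet
    }
  }
}

# In the container group, swap the public address for a private one:
#   ip_address_type    = "Private"
#   network_profile_id = azurerm_network_profile.sonar.id
# (dns_name_label no longer applies, and Caddy's automatic Let's Encrypt
# certificates rely on public reachability, so TLS would need another approach.)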