After moving into a new role I found we needed a SonarQube server to perform code analysis. I thought of looking again at ACI (Azure Container Instances), because when I had previously tried ACI with an external database I found that any version of SonarQube after 7.7 throws an error:
ERROR: [1] bootstrap checks failed [1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
After some reading and investigation I found that this is due to Elasticsearch being embedded in SonarQube. Fixing it would normally mean changing the host OS settings to increase max_map_count; on a Linux OS this means updating the /etc/sysctl.conf file:
vm.max_map_count=262144
The problem with ACI is that there is no access to the host, so how can the latest SonarQube (8.6.0 at the time of writing) be run in ACI if this setting cannot be changed?
In this article I am going to detail a way of running SonarQube in ACI with an external database.
What do we need to do?
The first thing is to address the max_map_count issue. For this we need a sonar.properties file that contains the following setting:
sonar.search.javaAdditionalOpts=-Dnode.store.allow_mmap=false
This setting disables memory mapping in Elasticsearch, which is needed when running SonarQube inside containers where you cannot change the host's vm.max_map_count (see the Elasticsearch documentation).
Now that we have our sonar.properties file we need to create a custom container image so we can include it in the setup. A small Dockerfile can achieve this:
FROM sonarqube:8.6.0-community
COPY sonar.properties /opt/sonarqube/conf/sonar.properties
RUN chown sonarqube:sonarqube /opt/sonarqube/conf/sonar.properties
This Dockerfile can now be built using Docker and the image pushed to an ACR (Azure Container Registry) ready to be used. If you are not sure how to build a container image and/or push it to an ACR, have a look at the Docker and Microsoft documentation, which both have easy-to-follow instructions.
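As a rough sketch of the build-and-push step, assuming a registry named myregistry and an image tag of my-sonar:latest (both placeholders, adjust to your own registry and naming):

```shell
# Log in to the Azure Container Registry (requires the Azure CLI)
az acr login --name myregistry

# Build the custom image from the Dockerfile in the current directory
docker build -t myregistry.azurecr.io/my-sonar:latest .

# Push the image so ACI can pull it later
docker push myregistry.azurecr.io/my-sonar:latest
```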
Build Infrastructure
So now that we have a container image uploaded to a container registry, we can look at the rest of the configuration.
There are a number of parts to create:
- File shares
- External Database
- Container Group
- SonarQube
- Reverse Proxy
Being a big advocate of IaC (Infrastructure as Code) I am going to use Terraform to configure the SonarQube deployment.
File Shares
The SonarQube documentation mentions setting up volume mounts for data, extensions and logs; for this we can use an Azure Storage Account and File Shares.
To make sure that the storage account has a unique name, a random string is generated and appended to the storage name.
resource "random_string" "random" {
  length  = 16
  special = false
  upper   = false
}

resource "azurerm_storage_account" "storage" {
  name                     = lower(substr("${var.storage_config.name}${random_string.random.result}", 0, 24))
  resource_group_name      = var.resource_group_name
  location                 = var.resource_group_location
  account_kind             = var.storage_config.kind
  account_tier             = var.storage_config.tier
  account_replication_type = var.storage_config.sku
  tags                     = var.tags
}

resource "azurerm_storage_share" "data-share" {
  name                 = "data"
  storage_account_name = azurerm_storage_account.storage.name
  quota                = var.storage_share_quota_gb.data
}

resource "azurerm_storage_share" "extensions-share" {
  name                 = "extensions"
  storage_account_name = azurerm_storage_account.storage.name
  quota                = var.storage_share_quota_gb.extensions
}

resource "azurerm_storage_share" "logs-share" {
  name                 = "logs"
  storage_account_name = azurerm_storage_account.storage.name
  quota                = var.storage_share_quota_gb.logs
}
External Database
For the external database we can use an Azure SQL Server and SQL Database, with a firewall rule to allow Azure services to access the database. Normally you would add specific IP addresses, but as the container's IP address is not guaranteed to stay the same when it is stopped and restarted, it cannot be added here. If you want to create a static IP then this article might help.
SQL Server and Firewall configuration:
resource "azurerm_sql_server" "sql" {
  name                         = lower("${var.sql_server_config.name}${random_string.random.result}")
  resource_group_name          = var.resource_group_name
  location                     = var.resource_group_location
  version                      = var.sql_server_config.version
  administrator_login          = var.sql_server_credentials.admin_username
  administrator_login_password = var.sql_server_credentials.admin_password
  tags                         = var.tags
}

resource "azurerm_sql_firewall_rule" "sqlfirewall" {
  name                = "AllowAllWindowsAzureIps"
  resource_group_name = var.resource_group_name
  server_name         = azurerm_sql_server.sql.name
  start_ip_address    = "0.0.0.0"
  end_ip_address      = "0.0.0.0"
}
For the database we can use the serverless tier, which provides scaling when needed. Check out the Microsoft Docs for more information.
# SQL Database
resource "azurerm_mssql_database" "sqldb" {
  name                        = var.sql_database_config.name
  server_id                   = azurerm_sql_server.sql.id
  collation                   = "SQL_Latin1_General_CP1_CS_AS"
  license_type                = "LicenseIncluded"
  max_size_gb                 = var.sql_database_config.max_db_size_gb
  min_capacity                = var.sql_database_config.min_cpu_capacity
  read_scale                  = false
  sku_name                    = "${var.sql_database_config.sku}_${var.sql_database_config.max_cpu_capacity}"
  zone_redundant              = false
  auto_pause_delay_in_minutes = var.sql_database_config.auto_pause_delay_in_minutes
  tags                        = var.tags
}
Container Group
Setting up the container group requires credentials to access the Azure Container Registry so it can pull the custom SonarQube image. Using a data resource allows retrieval of these details without passing them in as variables:
data "azurerm_container_registry" "registry" {
  name                = var.container_registry_config.name
  resource_group_name = var.container_registry_config.resource_group
}
For this setup we are going to have two containers: the custom SonarQube container and a Caddy container. Caddy can be used as a reverse proxy; it is small and lightweight, and it manages certificates automatically with Let's Encrypt. Note: Let's Encrypt has some rate limits, see their website for more information.
The SonarQube container configuration connects to the SQL Database and the Azure Storage Account Shares configured earlier.
The Caddy container configuration sets up the reverse proxy to the SonarQube instance.
resource "azurerm_container_group" "container" {
  name                = var.sonar_config.container_group_name
  resource_group_name = var.resource_group_name
  location            = var.resource_group_location
  ip_address_type     = "public"
  dns_name_label      = var.sonar_config.dns_name
  os_type             = "Linux"
  restart_policy      = "OnFailure"
  tags                = var.tags

  image_registry_credential {
    server   = data.azurerm_container_registry.registry.login_server
    username = data.azurerm_container_registry.registry.admin_username
    password = data.azurerm_container_registry.registry.admin_password
  }

  container {
    name   = "sonarqube-server"
    image  = "${data.azurerm_container_registry.registry.login_server}/${var.sonar_config.image_name}"
    cpu    = var.sonar_config.required_vcpu
    memory = var.sonar_config.required_memory_in_gb

    environment_variables = {
      WEBSITES_CONTAINER_START_TIME_LIMIT = 400
    }

    secure_environment_variables = {
      SONARQUBE_JDBC_URL      = "jdbc:sqlserver://${azurerm_sql_server.sql.name}.database.windows.net:1433;database=${azurerm_mssql_database.sqldb.name};user=${azurerm_sql_server.sql.administrator_login}@${azurerm_sql_server.sql.name};password=${azurerm_sql_server.sql.administrator_login_password};encrypt=true;trustServerCertificate=false;hostNameInCertificate=*.database.windows.net;loginTimeout=30;"
      SONARQUBE_JDBC_USERNAME = var.sql_server_credentials.admin_username
      SONARQUBE_JDBC_PASSWORD = var.sql_server_credentials.admin_password
    }

    ports {
      port     = 9000
      protocol = "TCP"
    }

    volume {
      name                 = "data"
      mount_path           = "/opt/sonarqube/data"
      share_name           = "data"
      storage_account_name = azurerm_storage_account.storage.name
      storage_account_key  = azurerm_storage_account.storage.primary_access_key
    }

    volume {
      name                 = "extensions"
      mount_path           = "/opt/sonarqube/extensions"
      share_name           = "extensions"
      storage_account_name = azurerm_storage_account.storage.name
      storage_account_key  = azurerm_storage_account.storage.primary_access_key
    }

    volume {
      name                 = "logs"
      mount_path           = "/opt/sonarqube/logs"
      share_name           = "logs"
      storage_account_name = azurerm_storage_account.storage.name
      storage_account_key  = azurerm_storage_account.storage.primary_access_key
    }
  }

  container {
    name   = "caddy-ssl-server"
    image  = "caddy:latest"
    cpu    = "1"
    memory = "1"

    commands = ["caddy", "reverse-proxy", "--from", "${var.sonar_config.dns_name}.${var.resource_group_location}.azurecontainer.io", "--to", "localhost:9000"]

    ports {
      port     = 443
      protocol = "TCP"
    }

    ports {
      port     = 80
      protocol = "TCP"
    }
  }
}
You have no doubt noticed that there are many variables used in the configuration, so here they all are along with their defaults:
variable "resource_group_name" {
  type        = string
  description = "(Required) Resource Group to deploy to"
}

variable "resource_group_location" {
  type        = string
  description = "(Required) Resource Group location"
}

variable "tags" {
  description = "(Required) Tags for SonarQube"
}

variable "container_registry_config" {
  type = object({
    name           = string
    resource_group = string
  })
  description = "(Required) Container Registry Configuration"
}

variable "sonar_config" {
  type = object({
    image_name            = string
    container_group_name  = string
    dns_name              = string
    required_memory_in_gb = string
    required_vcpu         = string
  })
  description = "(Required) SonarQube Configuration"
}

variable "sql_server_credentials" {
  type = object({
    admin_username = string
    admin_password = string
  })
  sensitive = true
}

variable "sql_database_config" {
  type = object({
    name                        = string
    sku                         = string
    auto_pause_delay_in_minutes = number
    min_cpu_capacity            = number
    max_cpu_capacity            = number
    max_db_size_gb              = number
  })
  default = {
    name                        = "sonarqubedb"
    sku                         = "GP_S_Gen5"
    auto_pause_delay_in_minutes = 60
    min_cpu_capacity            = 0.5
    max_cpu_capacity            = 1
    max_db_size_gb              = 50
  }
}

variable "sql_server_config" {
  type = object({
    name    = string
    version = string
  })
  default = {
    name    = "sql-sonarqube"
    version = "12.0"
  }
}

variable "storage_share_quota_gb" {
  type = object({
    data       = number
    extensions = number
    logs       = number
  })
  default = {
    data       = 10
    extensions = 10
    logs       = 10
  }
}

variable "storage_config" {
  type = object({
    name = string
    kind = string
    sku  = string
    tier = string
  })
  default = {
    name = "sonarqubestore"
    kind = "StorageV2"
    sku  = "LRS"
    tier = "Standard"
  }
}
To make this easy to configure I added all of this to a Terraform module, so the main Terraform file would be something like:
terraform {
  required_version = ">= 0.14"
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "=2.37.0"
    }
  }
}

provider "azurerm" {
  features {}
}

# Create a resource group
resource "azurerm_resource_group" "instance" {
  name     = "test-sonar"
  location = "uksouth"
}

# Generate Password
resource "random_password" "password" {
  length           = 24
  special          = true
  override_special = "_%@"
}

# Module
module "sonarqube" {
  depends_on = [azurerm_resource_group.instance]
  source     = "./modules/sonarqube"

  tags = {
    Project     = "Sonar",
    Environment = "Dev"
  }

  resource_group_name     = azurerm_resource_group.instance.name
  resource_group_location = azurerm_resource_group.instance.location

  sql_server_credentials = {
    admin_username = "sonaradmin"
    admin_password = random_password.password.result
  }

  container_registry_config = {
    name           = "myregistry"
    resource_group = "my-registry-rg"
  }

  sonar_config = {
    container_group_name  = "sonarqubecontainer"
    required_memory_in_gb = "4"
    required_vcpu         = "2"
    image_name            = "my-sonar:latest"
    dns_name              = "my-custom-sonar"
  }

  sql_server_config = {
    name    = "sql-sonarqube"
    version = "12.0"
  }

  sql_database_config = {
    name                        = "sonarqubedb"
    sku                         = "GP_S_Gen5"
    auto_pause_delay_in_minutes = 60
    min_cpu_capacity            = 0.5
    max_cpu_capacity            = 2
    max_db_size_gb              = 250
  }

  storage_share_quota_gb = {
    data       = 50
    extensions = 10
    logs       = 20
  }
}
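With the module in place, the usual Terraform workflow applies. A sketch, run from the directory containing the main file (it assumes you are already authenticated to Azure, for example via az login):

```shell
# Download the azurerm and random providers and initialise the module
terraform init

# Review what will be created, then apply the saved plan
terraform plan -out=tfplan
terraform apply tfplan
```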
By using the random_password resource to create the SQL password, no secrets are committed to the code, and nobody needs to know the password as long as the SonarQube server does.
The full code used here can be found in my GitHub repo.
I am sure there are still improvements that could be made to this setup but hopefully it will help anyone wanting to use ACI for running a SonarQube server.
Next Steps
Once the container instance is running you might not want it running 24/7, so using an Azure Function or Logic App to stop and start the instance when it's not needed will definitely save money. I plan to use Azure Functions to start the container at 08:00 and stop it at 18:00, Monday to Friday.
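Until such automation is in place, the container group can also be stopped and started manually with the Azure CLI. A sketch, assuming the resource group and container group names used in the example configuration above:

```shell
# Stop the container group (deallocates it, so compute is no longer billed)
az container stop --name sonarqubecontainer --resource-group test-sonar

# Start it again when needed
az container start --name sonarqubecontainer --resource-group test-sonar
```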
As this setup is public, a version that uses your own network and is private might be a good next step.