
Azure ACI – SonarQube

After moving into a new role I found we needed a SonarQube server to perform code analysis. I thought about looking again at ACI (Azure Container Instances), as the last time I tried ACI with an external database I found that any version of SonarQube after 7.7 throws an error:

ERROR: [1] bootstrap checks failed
[1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]

After doing some reading and investigation I found that this is caused by the Elasticsearch instance embedded in SonarQube. Fixing it means changing the host OS settings to increase max_map_count; on a Linux host this would mean updating the /etc/sysctl.conf file with:

vm.max_map_count=262144
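
On a VM or any other host where you do have root access, the change can be applied and verified straight away; a minimal sketch of the usual Linux workflow, shown here only for contrast with ACI:

# apply the new limit without a reboot
sudo sysctl -w vm.max_map_count=262144

# persist it across reboots in /etc/sysctl.conf
echo "vm.max_map_count=262144" | sudo tee -a /etc/sysctl.conf

# confirm the running value
sysctl vm.max_map_count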

The problem with ACI is that there is no access to the host, so how can the latest SonarQube (8.6.0 at the time of writing) be run in ACI if this setting cannot be changed?

In this article I am going to detail a way of running SonarQube in ACI with an external database.

What do we need to do?

The first thing to address is the max_map_count issue. For this we need a sonar.properties file that contains the following setting:

sonar.search.javaAdditionalOpts=-Dnode.store.allow_mmap=false

This setting disables memory mapping in Elasticsearch, which is needed when running SonarQube inside containers where you cannot change the host's vm.max_map_count (see the Elasticsearch documentation).

Now that we have our sonar.properties file, we need to build a custom container image so we can include it in the setup. A small Dockerfile achieves this:

FROM sonarqube:8.6.0-community
COPY sonar.properties /opt/sonarqube/conf/sonar.properties
RUN chown sonarqube:sonarqube /opt/sonarqube/conf/sonar.properties

This Dockerfile can now be built with Docker and the image pushed to an ACR (Azure Container Registry) ready to be used. If you are not sure how to build a container image and/or push it to an ACR, the Docker and Microsoft documentation both have easy-to-follow instructions.
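
As a rough sketch, assuming a registry called myregistry and an image name of my-sonar:latest (the same names used in the Terraform example later in this article), the build and push would look something like:

# log in to the Azure Container Registry
az acr login --name myregistry

# build the custom image from the Dockerfile above
docker build -t myregistry.azurecr.io/my-sonar:latest .

# push the image to the registry
docker push myregistry.azurecr.io/my-sonar:latest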

Build Infrastructure

So now that we have a container image uploaded to a container registry, we can look at the rest of the configuration.

There are a number of parts to create:

  • File shares
  • External Database
  • Container Group
    • SonarQube
    • Reverse Proxy

Being a big advocate of IaC (Infrastructure as Code) I am going to use Terraform to configure the SonarQube deployment.

File Shares

The SonarQube documentation mentions setting up volume mounts for data, extensions and logs; for this we can use an Azure Storage Account and file shares.

To make sure that the storage account has a unique name, a random string is generated and appended to the storage name.

resource "random_string" "random" {
  length  = 16
  special = false
  upper   = false
}

resource "azurerm_storage_account" "storage" {
  name                     = lower(substr("${var.storage_config.name}${random_string.random.result}", 0, 24))
  resource_group_name      = var.resource_group_name
  location                 = var.resource_group_location
  account_kind             = var.storage_config.kind
  account_tier             = var.storage_config.tier
  account_replication_type = var.storage_config.sku
  tags                     = var.tags
}

resource "azurerm_storage_share" "data-share" {
  name                 = "data"
  storage_account_name = azurerm_storage_account.storage.name
  quota                = var.storage_share_quota_gb.data
}

resource "azurerm_storage_share" "extensions-share" {
  name                 = "extensions"
  storage_account_name = azurerm_storage_account.storage.name
  quota                = var.storage_share_quota_gb.extensions
}

resource "azurerm_storage_share" "logs-share" {
  name                 = "logs"
  storage_account_name = azurerm_storage_account.storage.name
  quota                = var.storage_share_quota_gb.logs
}

External Database

For the external database we can use an Azure SQL Server, a SQL Database and a firewall rule that allows Azure services to access the database. Normally you would add specific IP addresses, but as the container's IP address is not guaranteed to stay the same when it is stopped and restarted, it cannot be added here. If you want to create a static IP then this article might help.

SQL Server and Firewall configuration:

resource "azurerm_sql_server" "sql" {
  name                         = lower("${var.sql_server_config.name}${random_string.random.result}")
  resource_group_name          = var.resource_group_name
  location                     = var.resource_group_location
  version                      = var.sql_server_config.version
  administrator_login          = var.sql_server_credentials.admin_username
  administrator_login_password = var.sql_server_credentials.admin_password
  tags                         = var.tags
}

resource "azurerm_sql_firewall_rule" "sqlfirewall" {
  name                = "AllowAllWindowsAzureIps"
  resource_group_name = var.resource_group_name
  server_name         = azurerm_sql_server.sql.name
  start_ip_address    = "0.0.0.0"
  end_ip_address      = "0.0.0.0"
}

For the database we can use the serverless tier, which provides scaling when needed. Check out the Microsoft Docs for more information.

# SQL Database
resource "azurerm_mssql_database" "sqldb" {
  name                        = var.sql_database_config.name
  server_id                   = azurerm_sql_server.sql.id
  collation                   = "SQL_Latin1_General_CP1_CS_AS"
  license_type                = "LicenseIncluded"
  max_size_gb                 = var.sql_database_config.max_db_size_gb
  min_capacity                = var.sql_database_config.min_cpu_capacity
  read_scale                  = false
  sku_name                    = "${var.sql_database_config.sku}_${var.sql_database_config.max_cpu_capacity}"
  zone_redundant              = false
  auto_pause_delay_in_minutes = var.sql_database_config.auto_pause_delay_in_minutes
  tags                        = var.tags
}

Container Group

Setting up the container group requires credentials to access the Azure Container Registry so it can pull the custom SonarQube image. Using a data resource allows the details to be retrieved without passing them in as variables:

data "azurerm_container_registry" "registry" {
  name                = var.container_registry_config.name
  resource_group_name = var.container_registry_config.resource_group
}
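
One assumption in this setup is that the registry's admin user is enabled, because the container group below authenticates with the registry's admin username and password. If it is not enabled it can be switched on first (again using the hypothetical registry name from the example later on):

az acr update --name myregistry --admin-enabled true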

For this setup we are going to have two containers: the custom SonarQube container and a Caddy container. Caddy is used as a reverse proxy; it is small, lightweight, and manages certificates automatically with Let's Encrypt. Note: Let's Encrypt has some rate limits, see their website for more information.

The SonarQube container configuration connects to the SQL Database and mounts the Azure Storage Account shares configured earlier.

The Caddy container configuration sets up the reverse proxy to the SonarQube instance.

resource "azurerm_container_group" "container" {
  name                = var.sonar_config.container_group_name
  resource_group_name = var.resource_group_name
  location            = var.resource_group_location
  ip_address_type     = "public"
  dns_name_label      = var.sonar_config.dns_name
  os_type             = "Linux"
  restart_policy      = "OnFailure"
  tags                = var.tags
  
  image_registry_credential {
    server   = data.azurerm_container_registry.registry.login_server
    username = data.azurerm_container_registry.registry.admin_username
    password = data.azurerm_container_registry.registry.admin_password
  }

  container {
    name   = "sonarqube-server"
    image  = "${data.azurerm_container_registry.registry.login_server}/${var.sonar_config.image_name}"
    cpu    = var.sonar_config.required_vcpu
    memory = var.sonar_config.required_memory_in_gb
    environment_variables = {
      WEBSITES_CONTAINER_START_TIME_LIMIT = 400
    }    
    secure_environment_variables = {
      SONARQUBE_JDBC_URL      = "jdbc:sqlserver://${azurerm_sql_server.sql.name}.database.windows.net:1433;database=${azurerm_mssql_database.sqldb.name};user=${azurerm_sql_server.sql.administrator_login}@${azurerm_sql_server.sql.name};password=${azurerm_sql_server.sql.administrator_login_password};encrypt=true;trustServerCertificate=false;hostNameInCertificate=*.database.windows.net;loginTimeout=30;"
      SONARQUBE_JDBC_USERNAME = var.sql_server_credentials.admin_username
      SONARQUBE_JDBC_PASSWORD = var.sql_server_credentials.admin_password
    }

    ports {
      port     = 9000
      protocol = "TCP"
    }

    volume {
      name                 = "data"
      mount_path           = "/opt/sonarqube/data"
      share_name           = "data"
      storage_account_name = azurerm_storage_account.storage.name
      storage_account_key  = azurerm_storage_account.storage.primary_access_key
    }

    volume {
      name                 = "extensions"
      mount_path           = "/opt/sonarqube/extensions"
      share_name           = "extensions"
      storage_account_name = azurerm_storage_account.storage.name
      storage_account_key  = azurerm_storage_account.storage.primary_access_key
    }

    volume {
      name                 = "logs"
      mount_path           = "/opt/sonarqube/logs"
      share_name           = "logs"
      storage_account_name = azurerm_storage_account.storage.name
      storage_account_key  = azurerm_storage_account.storage.primary_access_key
    }   
  }

  container {
    name     = "caddy-ssl-server"
    image    = "caddy:latest"
    cpu      = "1"
    memory   = "1"
    commands = ["caddy", "reverse-proxy", "--from", "${var.sonar_config.dns_name}.${var.resource_group_location}.azurecontainer.io", "--to", "localhost:9000"]

    ports {
      port     = 443
      protocol = "TCP"
    }

    ports {
      port     = 80
      protocol = "TCP"
    }
  }
}

You have no doubt noticed that there are many variables used in the configuration, so here they all are, along with their defaults:

variable "resource_group_name" {
  type = string
  description = "(Required) Resource Group to deploy to"
}

variable "resource_group_location" {
  type = string
  description = "(Required) Resource Group location"
}

variable "tags" {
  description = "(Required) Tags for SonarQube"
}

variable "container_registry_config" {
    type = object({
        name           = string
        resource_group = string
    })
    description = "(Required) Container Registry Configuration"
}

variable "sonar_config" {
    type = object({
        image_name            = string
        container_group_name  = string
        dns_name              = string
        required_memory_in_gb = string
        required_vcpu         = string
    })

    description = "(Required) SonarQube Configuration"
}

variable "sql_server_credentials" {
    type = object({
        admin_username = string
        admin_password = string
    })
    sensitive = true
}

variable "sql_database_config" {
    type = object({
        name                        = string
        sku                         = string
        auto_pause_delay_in_minutes = number
        min_cpu_capacity            = number
        max_cpu_capacity            = number
        max_db_size_gb              = number
    })
    default = {
        name                        = "sonarqubedb"
        sku                         = "GP_S_Gen5"
        auto_pause_delay_in_minutes = 60
        min_cpu_capacity            = 0.5
        max_cpu_capacity            = 1
        max_db_size_gb              = 50
    }
}

variable "sql_server_config" {
   type = object({
        name    = string
        version = string
   })
   default = {
       name    = "sql-sonarqube"
       version = "12.0"
   }
}

variable "storage_share_quota_gb" {
  type = object({
    data       = number
    extensions = number
    logs       = number
  })
  default = {
      data       = 10
      extensions = 10
      logs       = 10
  }
}

variable "storage_config" {
    type = object({
        name = string
        kind = string
        sku  = string        
        tier = string
    })
    default = {
        name = "sonarqubestore"
        kind = "StorageV2"
        sku  = "LRS"
        tier = "Standard"
    }
}

To make this easy to configure I added all of this to a Terraform module, so the main Terraform file would be something like:

terraform {  
  required_version = ">= 0.14"
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "=2.37.0"
    }
  }
}

provider "azurerm" {  
  features {}
}

# Create a resource group
resource "azurerm_resource_group" "instance" {
  name     = "test-sonar"
  location = "uksouth"
}

# Generate Password
resource "random_password" "password" {
  length = 24
  special = true
  override_special = "_%@"
}

# Module
module "sonarqube" {
    depends_on                        = [azurerm_resource_group.instance]
    source                            = "./modules/sonarqube"
    tags                              = { Project = "Sonar", Environment = "Dev" }
    resource_group_name               = azurerm_resource_group.instance.name
    resource_group_location           = azurerm_resource_group.instance.location
    
    sql_server_credentials            = {
        admin_username = "sonaradmin"
        admin_password = random_password.password.result
    }

    container_registry_config         = {
        name           = "myregistry"
        resource_group = "my-registry-rg"
    }

    sonar_config                      = {
        container_group_name  = "sonarqubecontainer"
        required_memory_in_gb = "4"
        required_vcpu         = "2"
        image_name            = "my-sonar:latest"
        dns_name              = "my-custom-sonar"
    }

    sql_server_config                = {
       name    = "sql-sonarqube"
       version = "12.0"
    }

    sql_database_config              = {
        name                        = "sonarqubedb"
        sku                         = "GP_S_Gen5"
        auto_pause_delay_in_minutes = 60
        min_cpu_capacity            = 0.5
        max_cpu_capacity            = 2
        max_db_size_gb              = 250
    }

    storage_share_quota_gb            = {  
        data       = 50
        extensions = 10
        logs       = 20
    }
}

By using the random_password resource to create the SQL password, no secrets are included in the configuration and nobody needs to know the password, as long as the SonarQube server does.
The full code used here can be found in my GitHub repo.
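
With the module and the main file in place, the usual Terraform workflow should bring everything up; a minimal sketch, assuming you are already authenticated to Azure (for example via az login):

terraform init
terraform plan -out sonar.tfplan
terraform apply sonar.tfplan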

I am sure there are still improvements that could be made to this setup, but hopefully it will help anyone wanting to use ACI to run a SonarQube server.

Next Steps

Once the container instance is running you might not want it running 24/7, so using an Azure Function or Logic App to stop and start the instance when it's not needed will definitely save money. I plan to use Azure Functions to start the container at 08:00 and stop it at 18:00, Monday to Friday.
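
Whatever ends up doing the scheduling, the underlying operations are simple. With the Azure CLI they would be something like this, using the container group and resource group names from the example above:

# stop the container group out of hours
az container stop --resource-group test-sonar --name sonarqubecontainer

# start it again in the morning
az container start --resource-group test-sonar --name sonarqubecontainer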

As this setup is public, a version that uses your own network and is private might be a good next step.


Getting Started with Azure Front Door and Terraform

What is Azure Front Door?

Azure Front Door is basically a layer 7 global load balancer, global router with URL-based routing, WAF (Web Application Firewall) and web traffic manager all in one.

I recommend reading the Azure Front Door documentation for further details.

Create Azure Front Door

To create an Azure Front Door you can use the Azure Portal, and there are a couple of examples in the Microsoft documentation you can follow to do that.

Creating an Azure Front Door via the Azure Portal is a good starting point to understand how it works, but for this example I am going to use IaC (Infrastructure as Code) to set up a basic Azure Front Door.

I have recently started using Terraform for building Azure resources and so I will use that here to create an Azure Front Door.

Requirements

  • Make sure I can build Terraform configurations (I am using a Docker container from my previous article – IaC with Containers)
  • Update Terraform to latest (at the time of writing it was 0.12.26)
  • Make sure the configuration is shareable
  • Support multiple configurations and rules

Right, I’ve got my container, updated Terraform and now need to look up sharing Terraform configurations.

Terraform uses modules for sharing configurations and the documentation is quite good. This seems a lot nicer than building linked ARM (Azure Resource Manager) templates, as you can have shareable modules locally without having to use blob storage.

You can also take advantage of the Terraform Public Registry or sign up for Terraform Cloud which supports using a Private Registry.

Creation

So I need to create a folder for the module (I’ll name it frontdoor), a main.tf, variables.tf, outputs.tf and README.md.
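
The module layout then looks like this:

modules/frontdoor/
├── main.tf
├── variables.tf
├── outputs.tf
└── README.md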

Main.tf

Terraform's azurerm provider includes an azurerm_frontdoor resource for creating an Azure Front Door.

Azure Front Door has a lot of settings and there are many parts, so let’s go through them a bit at a time.

Note: a lot of the sections accept a list of items (Load Balancing, Routing Rule, Backend Pool, Frontend Endpoint, etc.); this allows multiple configurations and rules to be set up in one go.

Basic
  • Azure Front Door name
  • Resource Group name for Azure Front Door
  • Load balancer enabled
  • Backend pools
    • Certificate name check – enforce name check on HTTPS requests
    • Send/Receive Timeout – timeout forwarding the request to the backend
  • Tags – always good to tag your resources
# Create front door
resource "azurerm_frontdoor" "instance" {
  name                                         = var.frontdoor_name
  resource_group_name                          = var.frontdoor_resource_group_name
  enforce_backend_pools_certificate_name_check = var.enforce_backend_pools_certificate_name_check
  load_balancer_enabled                        = var.frontdoor_loadbalancer_enabled
  backend_pools_send_receive_timeout_seconds   = var.backend_pools_send_receive_timeout_seconds
  tags                                         = var.tags
}
Load Balancing
  • Name
  • Sample size – number of samples to use for load balancing decisions
  • Successful samples required – how many samples must succeed to be considered successful
  • Additional latency – how many milliseconds for probes to fall into the low latency bucket
  dynamic "backend_pool_load_balancing" {
    for_each = var.frontdoor_loadbalancer
    content {
      name                            = backend_pool_load_balancing.value.name
      sample_size                     = backend_pool_load_balancing.value.sample_size
      successful_samples_required     = backend_pool_load_balancing.value.successful_samples_required
      additional_latency_milliseconds = backend_pool_load_balancing.value.additional_latency_milliseconds
    }
  }
Routing Rule
  • Name
  • Accepted protocols – e.g. Http, Https
  • Patterns for route match – e.g. “/*”, “/mypath”, “/mypath/*”
  • Enabled
  • Forwarding or Redirect configuration
  dynamic "routing_rule" {
    for_each = var.frontdoor_routing_rule
    content {
        name               = routing_rule.value.name
        accepted_protocols = routing_rule.value.accepted_protocols
        patterns_to_match  = routing_rule.value.patterns_to_match        
        frontend_endpoints = values({for x, endpoint in var.frontend_endpoint : x => endpoint.name})
        dynamic "forwarding_configuration" {
          for_each = routing_rule.value.configuration == "Forwarding" ? routing_rule.value.forwarding_configuration : []
          content {
            backend_pool_name                     = forwarding_configuration.value.backend_pool_name
            cache_enabled                         = forwarding_configuration.value.cache_enabled                           
            cache_use_dynamic_compression         = forwarding_configuration.value.cache_use_dynamic_compression 
            cache_query_parameter_strip_directive = forwarding_configuration.value.cache_query_parameter_strip_directive
            custom_forwarding_path                = forwarding_configuration.value.custom_forwarding_path
            forwarding_protocol                   = forwarding_configuration.value.forwarding_protocol
          }
        }
        dynamic "redirect_configuration" {
          for_each = routing_rule.value.configuration == "Redirecting" ? routing_rule.value.redirect_configuration : []
          content {
            custom_host         = redirect_configuration.value.custom_host
            redirect_protocol   = redirect_configuration.value.redirect_protocol
            redirect_type       = redirect_configuration.value.redirect_type
            custom_fragment     = redirect_configuration.value.custom_fragment
            custom_path         = redirect_configuration.value.custom_path
            custom_query_string = redirect_configuration.value.custom_query_string
          }
        }
    }
  }

As the Frontend Endpoints are configured separately, finding a way to reuse their names when configuring frontend_endpoints on the routing rule was invaluable. The for expression builds a map of index => name, very much like a C# lambda projecting just the name field, and the values function then reads just the values (the names) out of that map.

frontend_endpoints = values({for x, endpoint in var.frontend_endpoint : x => endpoint.name})
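
A quick way to see what the expression produces is terraform console; here is a throwaway check using two hypothetical endpoint names rather than the module variables:

echo 'values({ for i, endpoint in [{ name = "fe-one" }, { name = "fe-two" }] : i => endpoint.name })' | terraform console

which should print something like ["fe-one", "fe-two"].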
Health Probe
  • Name
  • Enabled
  • Path
  • Protocol – e.g. Http, Https
  • Probe method – e.g. HEAD, GET
  • Interval – interval between each health probe
 dynamic "backend_pool_health_probe" {
    for_each = var.frontdoor_health_probe
    content {
      name                = backend_pool_health_probe.value.name
      enabled             = backend_pool_health_probe.value.enabled
      path                = backend_pool_health_probe.value.path
      protocol            = backend_pool_health_probe.value.protocol
      probe_method        = backend_pool_health_probe.value.probe_method
      interval_in_seconds = backend_pool_health_probe.value.interval_in_seconds
    }  
  }
Backend Pool
  • Name
  • Load Balancer name
  • Health probe name
  • Backend
    • Enabled
    • Host Header
    • Address
    • HTTP port
    • HTTPS port
    • Priority
    • Weight
  dynamic "backend_pool" {
    for_each = var.frontdoor_backend
    content {
       name                = backend_pool.value.name      
       load_balancing_name = backend_pool.value.loadbalancing_name
       health_probe_name   = backend_pool.value.health_probe_name

       dynamic "backend" {
        for_each = backend_pool.value.backend
        content {
          enabled     = backend.value.enabled
          address     = backend.value.address
          host_header = backend.value.host_header
          http_port   = backend.value.http_port
          https_port  = backend.value.https_port
          priority    = backend.value.priority
          weight      = backend.value.weight
        }
      }
    }
  }

Frontend Endpoint
  • Name
  • Host Name
  • Custom Domain
  • Session Affinity
  • WAF Policy ID
  dynamic "frontend_endpoint" {
    for_each = var.frontend_endpoint
    content {
      name                                    = frontend_endpoint.value.name
      host_name                               = frontend_endpoint.value.host_name
      custom_https_provisioning_enabled       = frontend_endpoint.value.custom_https_provisioning_enabled    
      session_affinity_enabled                = frontend_endpoint.value.session_affinity_enabled
      session_affinity_ttl_seconds            = frontend_endpoint.value.session_affinity_ttl_seconds
      web_application_firewall_policy_link_id = frontend_endpoint.value.waf_policy_link_id
      dynamic "custom_https_configuration" {
        for_each = frontend_endpoint.value.custom_https_provisioning_enabled == false ? [] : list(frontend_endpoint.value.custom_https_configuration.certificate_source)
        content {
          certificate_source = custom_https_configuration.value
        }
      }
    }
  }

Variables.tf

All the variables defined for the module.

variable "frontdoor_resource_group_name" {
  description = "(Required) Resource Group name"
  type = string
}

variable "frontdoor_name" {
  description = "(Required) Name of the Azure Front Door to create"
  type = string
}

variable "frontdoor_loadbalancer_enabled" {
  description = "(Required) Enable the load balancer for Azure Front Door"
  type = bool
}

variable "enforce_backend_pools_certificate_name_check" {
  description = "Enforce the certificate name check for Azure Front Door"
  type = bool
  default = false
}

variable "backend_pools_send_receive_timeout_seconds" {
  description = "Set the send/receive timeout for Azure Front Door"
  type = number
  default = 60
}

variable "tags" {
  description = "(Required) Tags for Azure Front Door"  
}

variable "frontend_endpoint" {
  description = "(Required) Frontend Endpoints for Azure Front Door"
}

variable "frontdoor_routing_rule" {
  description = "(Required) Routing rules for Azure Front Door"
}

variable "frontdoor_loadbalancer" {
  description = "(Required) Load Balancer settings for Azure Front Door"
}

variable "frontdoor_health_probe" {
  description = "(Required) Health Probe settings for Azure Front Door"
}

variable "frontdoor_backend" {
  description = "(Required) Backend settings for Azure Front Door"
}

Example of Use

Make sure that the Terraform version is not less than 0.12.x and that the provider (azurerm) is using the latest version (2.14.0 at the time of writing).
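
A quick check of what is installed (once terraform init has run, the same command also lists the provider versions in use):

terraform version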

terraform {
  required_version = ">= 0.12"
}
# Configure the Azure Provider
provider "azurerm" {
  # whilst the `version` attribute is optional, we recommend pinning to a given version of the Provider
  version = "=2.14.0"
  features {}
}

# Create a resource group
resource "azurerm_resource_group" "instance" {
  name     = "my-frontdoor-rg"
  location = "westeurope"
}

# Create Front Door
module "front-door" {
  source                                            = "./modules/frontdoor"    
  tags                                              = { Department = "Ops"}
  frontdoor_resource_group_name                     = azurerm_resource_group.instance.name
  frontdoor_name                                    = "my-frontdoor"
  frontdoor_loadbalancer_enabled                    = true
  backend_pools_send_receive_timeout_seconds        = 240
    
  frontend_endpoint      = [{
      name                                    = "my-frontdoor-frontend-endpoint"
      host_name                               = "my-frontdoor.azurefd.net"
      custom_https_provisioning_enabled       = false
      custom_https_configuration              = { certificate_source = "FrontDoor"}
      session_affinity_enabled                = false
      session_affinity_ttl_seconds            = 0
      waf_policy_link_id                      = ""
  }]

  frontdoor_routing_rule = [{
      name               = "my-routing-rule"
      accepted_protocols = ["Http", "Https"] 
      patterns_to_match  = ["/*"]
      enabled            = true              
      configuration      = "Forwarding"
      forwarding_configuration = [{
        backend_pool_name                     = "backendBing"
        cache_enabled                         = false       
        cache_use_dynamic_compression         = false       
        cache_query_parameter_strip_directive = "StripNone" 
        custom_forwarding_path                = ""
        forwarding_protocol                   = "MatchRequest"   
      }]      
  }]

  frontdoor_loadbalancer =  [{      
      name                            = "loadbalancer"
      sample_size                     = 4
      successful_samples_required     = 2
      additional_latency_milliseconds = 0
  }]

  frontdoor_health_probe = [{      
      name                = "healthprobe"
      enabled             = true
      path                = "/"
      protocol            = "Http"
      probe_method        = "HEAD"
      interval_in_seconds = 60
  }]

  frontdoor_backend =  [{
      name               = "backendBing"
      loadbalancing_name = "loadbalancer"
      health_probe_name  = "healthprobe"
      backend = [{
        enabled     = true
        host_header = "www.bing.com"
        address     = "www.bing.com"
        http_port   = 80
        https_port  = 443
        priority    = 1
        weight      = 50
      }]
  }]
}

The code for this article and full module can be found in my GitHub repository.

I've run the example with the newly created module, so let's take a look at the Azure Portal to see if an Azure Front Door was created.

It looks like everything was set up and working; selecting the link my-frontdoor.azurefd.net forwarded to Bing.com, as the example was configured to do.

Note: Azure Front Door configuration can be viewed and updated via the Azure CLI.
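
For example, with the Azure CLI's front-door extension installed, the configuration deployed above could be inspected with something like the following (resource names taken from the example; exact commands depend on your CLI and extension versions):

# the Front Door commands live in a CLI extension
az extension add --name front-door

# show the configuration created by the example
az network front-door show --resource-group my-frontdoor-rg --name my-frontdoor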

Summary

I am sure there is more to learn about Terraform and Azure Front Door and this configuration may well get updated in the future as I learn more. I’ve not only gained a better understanding of what Terraform has to offer, but what Azure Front Door has to offer as well.