Getting Started with Azure Front Door and Terraform

What is Azure Front Door?

Azure Front Door is essentially a layer 7 global load balancer, a global router with URL-based routing, a WAF (Web Application Firewall) and a web traffic manager all in one.

I recommend reading the Azure Front Door documentation for further details.

Create Azure Front Door

To create an Azure Front Door you can use the Azure Portal; the documentation has a couple of examples you can follow to do that.

Creating Azure Front Door via the Azure Portal is a good starting point for understanding how it works, but for this example I am going to create IaC (Infrastructure as Code) to set up a basic Azure Front Door.

I have recently started using Terraform for building Azure resources and so I will use that here to create an Azure Front Door.

Requirements

  • Make sure I can build Terraform configurations (I am using a Docker container from my previous article – IaC with Containers)
  • Update Terraform to the latest version (at the time of writing it was 0.12.26)
  • Make sure the configuration is shareable
  • Support multiple configurations and rules

Right, I’ve got my container, updated Terraform and now need to look up sharing Terraform configurations.

Terraform uses modules for sharing configurations, and the documentation is quite good. This seems a lot nicer than building linked ARM (Azure Resource Manager) templates, as you can have shareable modules locally without having to use blob storage.

You can also take advantage of the Terraform Public Registry or sign up for Terraform Cloud which supports using a Private Registry.

Creation

So I need to create a folder for the module (I’ll name it frontdoor), a main.tf, variables.tf, outputs.tf and README.md.
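
The layout for the module ends up like this, matching the source path used in the example later on:

modules/
  frontdoor/
    main.tf
    variables.tf
    outputs.tf
    README.md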

Main.tf

Terraform's azurerm provider includes the azurerm_frontdoor resource for creating an Azure Front Door.

Azure Front Door has a lot of settings and there are many parts, so let’s go through them a bit at a time.

Note: a lot of the sections accept a list of items (Load Balancing, Routing Rule, Backend Pool, Frontend Endpoint, etc.); this allows multiple configurations and rules to be set up in one go.

Basic
  • Azure Front Door name
  • Resource Group name for Azure Front Door
  • Load balancer enabled
  • Backend pools
    • Certificate name check – enforce name check on HTTPS requests
    • Send/Receive Timeout – timeout forwarding the request to the backend
  • Tags – always good to tag your resources
# Create front door
resource "azurerm_frontdoor" "instance" {
  name                                         = var.frontdoor_name
  resource_group_name                          = var.frontdoor_resource_group_name
  enforce_backend_pools_certificate_name_check = var.enforce_backend_pools_certificate_name_check
  load_balancer_enabled                        = var.frontdoor_loadbalancer_enabled
  backend_pools_send_receive_timeout_seconds   = var.backend_pools_send_receive_timeout_seconds
  tags                                         = var.tags
}
Load Balancing
  • Name
  • Sample size – number of samples to use for load balancing decisions
  • Successful samples required – how many samples must succeed to be considered successful
  • Additional latency – how many milliseconds for probes to fall into the low latency bucket
  dynamic "backend_pool_load_balancing" {
    for_each = var.frontdoor_loadbalancer
    content {
      name                            = backend_pool_load_balancing.value.name
      sample_size                     = backend_pool_load_balancing.value.sample_size
      successful_samples_required     = backend_pool_load_balancing.value.successful_samples_required
      additional_latency_milliseconds = backend_pool_load_balancing.value.additional_latency_milliseconds
    }
  }
Routing Rule
  • Name
  • Accepted protocols – e.g. Http, Https
  • Patterns for route match – e.g. “/*”, “/mypath”, “/mypath/*”
  • Enabled
  • Forwarding or Redirect configuration
  dynamic "routing_rule" {
    for_each = var.frontdoor_routing_rule
    content {
        name               = routing_rule.value.name
        accepted_protocols = routing_rule.value.accepted_protocols
        patterns_to_match  = routing_rule.value.patterns_to_match        
        frontend_endpoints = values({for x, endpoint in var.frontend_endpoint : x => endpoint.name})
        dynamic "forwarding_configuration" {
          for_each = routing_rule.value.configuration == "Forwarding" ? routing_rule.value.forwarding_configuration : []
          content {
            backend_pool_name                     = forwarding_configuration.value.backend_pool_name
            cache_enabled                         = forwarding_configuration.value.cache_enabled                           
            cache_use_dynamic_compression         = forwarding_configuration.value.cache_use_dynamic_compression 
            cache_query_parameter_strip_directive = forwarding_configuration.value.cache_query_parameter_strip_directive
            custom_forwarding_path                = forwarding_configuration.value.custom_forwarding_path
            forwarding_protocol                   = forwarding_configuration.value.forwarding_protocol
          }
        }
        dynamic "redirect_configuration" {
          for_each = routing_rule.value.configuration == "Redirecting" ? routing_rule.value.redirect_configuration : []
          content {
            custom_host         = redirect_configuration.value.custom_host
            redirect_protocol   = redirect_configuration.value.redirect_protocol
            redirect_type       = redirect_configuration.value.redirect_type
            custom_fragment     = redirect_configuration.value.custom_fragment
            custom_path         = redirect_configuration.value.custom_path
            custom_query_string = redirect_configuration.value.custom_query_string
          }
        }
    }
  }

As the Frontend Endpoints are configured separately, finding a way to reuse their names to configure the frontend_endpoints for the routing was invaluable. The values function reads just the values from the given map. The expression is very similar to C#, using a lambda-style arrow (=>) to project just the name field for values to pick up.

frontend_endpoints = values({for x, endpoint in var.frontend_endpoint : x => endpoint.name})
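
To make this concrete, here is roughly how the expression evaluates for the single endpoint used in the example later on:

# var.frontend_endpoint = [{ name = "my-frontdoor-frontend-endpoint", ... }]
# The for expression maps each index to the endpoint name:
#   { "0" = "my-frontdoor-frontend-endpoint" }
# values() then returns just the names, in key order:
#   ["my-frontdoor-frontend-endpoint"]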
Health Probe
  • Name
  • Enabled
  • Path
  • Protocol – e.g. Http, Https
  • Probe method – e.g. HEAD, GET
  • Interval – interval between each health probe
 dynamic "backend_pool_health_probe" {
    for_each = var.frontdoor_health_probe
    content {
      name                = backend_pool_health_probe.value.name
      enabled             = backend_pool_health_probe.value.enabled
      path                = backend_pool_health_probe.value.path
      protocol            = backend_pool_health_probe.value.protocol
      probe_method        = backend_pool_health_probe.value.probe_method
      interval_in_seconds = backend_pool_health_probe.value.interval_in_seconds
    }  
  }
Backend Pool
  • Name
  • Load Balancer name
  • Health probe name
  • Backend
    • Enabled
    • Host Header
    • Address
    • HTTP port
    • HTTPS port
    • Priority
    • Weight
  dynamic "backend_pool" {
    for_each = var.frontdoor_backend
    content {
       name                = backend_pool.value.name      
       load_balancing_name = backend_pool.value.loadbalancing_name
       health_probe_name   = backend_pool.value.health_probe_name

       dynamic "backend" {
        for_each = backend_pool.value.backend
        content {
          enabled     = backend.value.enabled
          address     = backend.value.address
          host_header = backend.value.host_header
          http_port   = backend.value.http_port
          https_port  = backend.value.https_port
          priority    = backend.value.priority
          weight      = backend.value.weight
        }
      }
    }
  }

Frontend Endpoint
  • Name
  • Host Name
  • Custom Domain
  • Session Affinity
  • WAF Policy ID
  dynamic "frontend_endpoint" {
    for_each = var.frontend_endpoint
    content {
      name                                    = frontend_endpoint.value.name
      host_name                               = frontend_endpoint.value.host_name
      custom_https_provisioning_enabled       = frontend_endpoint.value.custom_https_provisioning_enabled    
      session_affinity_enabled                = frontend_endpoint.value.session_affinity_enabled
      session_affinity_ttl_seconds            = frontend_endpoint.value.session_affinity_ttl_seconds
      web_application_firewall_policy_link_id = frontend_endpoint.value.waf_policy_link_id
      dynamic "custom_https_configuration" {
        for_each = frontend_endpoint.value.custom_https_provisioning_enabled ? [frontend_endpoint.value.custom_https_configuration] : []
        content {
          certificate_source = custom_https_configuration.value.certificate_source
        }
      }
    }
  }
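
Outputs.tf

A couple of outputs make the module easier to consume. A minimal sketch might expose the resource id and the cname (the azurefd.net host name, which I understand the azurerm_frontdoor resource exports as an attribute):

output "frontdoor_id" {
  description = "The ID of the Azure Front Door"
  value       = azurerm_frontdoor.instance.id
}

output "frontdoor_cname" {
  description = "Host name of the Azure Front Door"
  value       = azurerm_frontdoor.instance.cname
}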

Variables.tf

All the variables defined for the module.

variable "frontdoor_resource_group_name" {
  description = "(Required) Resource Group name"
  type = string
}

variable "frontdoor_name" {
  description = "(Required) Name of the Azure Front Door to create"
  type = string
}

variable "frontdoor_loadbalancer_enabled" {
  description = "(Required) Enable the load balancer for Azure Front Door"
  type = bool
}

variable "enforce_backend_pools_certificate_name_check" {
  description = "Enforce the certificate name check for Azure Front Door"
  type = bool
  default = false
}

variable "backend_pools_send_receive_timeout_seconds" {
  description = "Set the send/receive timeout for Azure Front Door"
  type = number
  default = 60
}

variable "tags" {
  description = "(Required) Tags for Azure Front Door"  
}

variable "frontend_endpoint" {
  description = "(Required) Frontend Endpoints for Azure Front Door"
}

variable "frontdoor_routing_rule" {
  description = "(Required) Routing rules for Azure Front Door"
}

variable "frontdoor_loadbalancer" {
  description = "(Required) Load Balancer settings for Azure Front Door"
}

variable "frontdoor_health_probe" {
  description = "(Required) Health Probe settings for Azure Front Door"
}

variable "frontdoor_backend" {
  description = "(Required) Backend settings for Azure Front Door"
}
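
Note: the list-shaped variables above are left untyped, so Terraform treats them as any. For stricter validation, a typed declaration for one of them could look like this sketch using 0.12 object types:

variable "frontdoor_loadbalancer" {
  description = "(Required) Load Balancer settings for Azure Front Door"
  type = list(object({
    name                            = string
    sample_size                     = number
    successful_samples_required     = number
    additional_latency_milliseconds = number
  }))
}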

Example of Use

Make sure that Terraform is at least version 0.12.x and that the provider (azurerm) is using the latest version (at the time of writing this was 2.14.0).

terraform {
  required_version = ">= 0.12"
}
# Configure the Azure Provider
provider "azurerm" {
  # whilst the `version` attribute is optional, we recommend pinning to a given version of the Provider
  version = "=2.14.0"
  features {}
}

# Create a resource group
resource "azurerm_resource_group" "instance" {
  name     = "my-frontdoor-rg"
  location = "westeurope"
}

# Create Front Door
module "front-door" {
  source                                            = "./modules/frontdoor"    
  tags                                              = { Department = "Ops"}
  frontdoor_resource_group_name                     = azurerm_resource_group.instance.name
  frontdoor_name                                    = "my-frontdoor"
  frontdoor_loadbalancer_enabled                    = true
  backend_pools_send_receive_timeout_seconds        = 240
    
  frontend_endpoint      = [{
      name                                    = "my-frontdoor-frontend-endpoint"
      host_name                               = "my-frontdoor.azurefd.net"
      custom_https_provisioning_enabled       = false
      custom_https_configuration              = { certificate_source = "FrontDoor"}
      session_affinity_enabled                = false
      session_affinity_ttl_seconds            = 0
      waf_policy_link_id                      = ""
  }]

  frontdoor_routing_rule = [{
      name               = "my-routing-rule"
      accepted_protocols = ["Http", "Https"] 
      patterns_to_match  = ["/*"]
      enabled            = true              
      configuration      = "Forwarding"
      forwarding_configuration = [{
        backend_pool_name                     = "backendBing"
        cache_enabled                         = false       
        cache_use_dynamic_compression         = false       
        cache_query_parameter_strip_directive = "StripNone" 
        custom_forwarding_path                = ""
        forwarding_protocol                   = "MatchRequest"   
      }]      
  }]

  frontdoor_loadbalancer =  [{      
      name                            = "loadbalancer"
      sample_size                     = 4
      successful_samples_required     = 2
      additional_latency_milliseconds = 0
  }]

  frontdoor_health_probe = [{      
      name                = "healthprobe"
      enabled             = true
      path                = "/"
      protocol            = "Http"
      probe_method        = "HEAD"
      interval_in_seconds = 60
  }]

  frontdoor_backend =  [{
      name               = "backendBing"
      loadbalancing_name = "loadbalancer"
      health_probe_name  = "healthprobe"
      backend = [{
        enabled     = true
        host_header = "www.bing.com"
        address     = "www.bing.com"
        http_port   = 80
        https_port  = 443
        priority    = 1
        weight      = 50
      }]
  }]
}

The code for this article and full module can be found in my GitHub repository.
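
Running the example is the usual Terraform workflow from the configuration folder:

terraform init
terraform plan
terraform apply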

I’ve run the example with the newly created module; let’s take a look at the Azure Portal to see if an Azure Front Door was created.

It looks like everything was set up and working: selecting the link my-frontdoor.azurefd.net forwarded to Bing.com, as the example was configured to do.

Note: Azure Front Door configuration can be viewed and updated via the Azure CLI.
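
For example (at the time of writing the front-door commands live in a CLI extension, so that may need adding first):

az extension add --name front-door
az network front-door show --name my-frontdoor --resource-group my-frontdoor-rg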

Summary

I am sure there is more to learn about Terraform and Azure Front Door and this configuration may well get updated in the future as I learn more. I’ve not only gained a better understanding of what Terraform has to offer, but what Azure Front Door has to offer as well.

IaC with Containers

In a previous article about IaC co-located with your application I discussed having application specific infrastructure with your code and then in following articles (IaC ARM templates, IaC Ansible and IaC Terraform) I discussed deploying IaC using various methods with Azure Pipelines. What I haven’t discussed is the development environment used in order to test out my infrastructure code, and that is what this article is about.

Lately I have been building various infrastructure using ARM templates, Ansible and Terraform, separately and sometimes together, to deploy to Azure, and I have lost track of what was installed in order to create and run my infrastructure code. For creating templates, playbooks, etc. I use Visual Studio Code; there are extensions for ARM templates, Ansible and Terraform which provide great help in creating infrastructure code.

For Ansible and Terraform I was using the WSL (Windows Subsystem for Linux) and running a script to install them into that environment and then using PowerShell Core and the Azure CLI for ARM templates.

I thought this was a good setup that provided me with everything I needed, but what if I want to share this environment with my team? I could simply put together a list of commands to install all of the tools I had and maybe create a page in our wiki. That would be a start, but then what about tool versions, different operating systems, and the time it takes to get it all set up?

To help solve this I decided that creating a container using Docker would provide a way of consistently building an IaC environment – an environment I can share with others and use in a CI/CD pipeline. I decided that the editor is up to the developer, so I am not including Visual Studio Code, but I definitely recommend it (if you are interested, Microsoft provide a guide to installing VS Code into containers: https://code.visualstudio.com/docs/remote/containers).

I can imagine that most people use only one of the tools for their infrastructure code – PowerShell or the Azure CLI, and either Ansible or Terraform – even though they can work well together. RedHat and HashiCorp presented the concept of using Ansible and Terraform together in this video; it is very interesting to see how the strengths of each can be combined rather than treated as a pro or con for choosing one or the other.

So, on to creating some dockerfiles. Microsoft have official images on Docker Hub for PowerShell and the Azure CLI, and HashiCorp have an official image for Terraform. I still prefer to create my own image for Terraform though, so I can add other tools to the image.

Note: All dockerfiles shown here are available from my GitHub repository.

Create Image

Terraform dockerfile

ARG IMAGE_VERSION=latest
ARG IMAGE_REPO=alpine

FROM ${IMAGE_REPO}:${IMAGE_VERSION} AS installer-env
ARG TERRAFORM_VERSION=0.12.26
ARG TERRAFORM_PACKAGE=terraform_${TERRAFORM_VERSION}_linux_amd64.zip
ARG TERRAFORM_URL=https://releases.hashicorp.com/terraform/${TERRAFORM_VERSION}/${TERRAFORM_PACKAGE}

# Install packages to get terraform
RUN apk upgrade --update && \
    apk add --no-cache wget
RUN wget --quiet ${TERRAFORM_URL} && \
    unzip terraform_${TERRAFORM_VERSION}_linux_amd64.zip && \
    mv terraform /usr/bin

# New stage to remove the downloaded zip layers from the final image
FROM ${IMAGE_REPO}:${IMAGE_VERSION}

# Copy only the files we need from the previous stage
COPY --from=installer-env ["/usr/bin/terraform", "/usr/bin/terraform"]

# Install additional packages
RUN apk upgrade --update && \
    apk add --no-cache bash 

CMD tail -f /dev/null

Ansible dockerfile

ARG IMAGE_VERSION=latest
ARG IMAGE_REPO=alpine

FROM ${IMAGE_REPO}:${IMAGE_VERSION}

ENV ANSIBLE_VERSION=2.9.9
ENV ALPINE_ANSIBLE_VERSION=2.9.9-r0
ENV WHEEL_VERSION=0.30.0
ENV APK_ADD="bash py3-pip ansible=${ALPINE_ANSIBLE_VERSION}"

# Install core libs
RUN apk upgrade --update && \
    apk add --no-cache ${APK_ADD}

# Install Ansible 
RUN python3 -m pip install --upgrade pip && \
    pip install wheel==${WHEEL_VERSION} && \
    pip install ansible[azure]==${ANSIBLE_VERSION} mitogen && \
    pip install netaddr xmltodict openshift

CMD tail -f /dev/null

As I mentioned at the beginning, I’ve been using a mix of the tools and could do with an image containing multiple tools. I ended up with the following dockerfile.

ARG IMAGE_VERSION=latest
ARG IMAGE_REPO=alpine

FROM ${IMAGE_REPO}:${IMAGE_VERSION} AS installer-env

ARG TERRAFORM_VERSION=0.12.26
ARG TERRAFORM_PACKAGE=terraform_${TERRAFORM_VERSION}_linux_amd64.zip
ARG TERRAFORM_URL=https://releases.hashicorp.com/terraform/${TERRAFORM_VERSION}/${TERRAFORM_PACKAGE}

ARG POWERSHELL_VERSION=7.0.1
ARG POWERSHELL_PACKAGE=powershell-${POWERSHELL_VERSION}-linux-alpine-x64.tar.gz
ARG POWERSHELL_DOWNLOAD_PACKAGE=powershell.tar.gz
ARG POWERSHELL_URL=https://github.com/PowerShell/PowerShell/releases/download/v${POWERSHELL_VERSION}/${POWERSHELL_PACKAGE}

# Install packages
RUN apk upgrade --update && \
    apk add --no-cache bash wget curl python3 libffi openssl

# Get Terraform
RUN wget --quiet ${TERRAFORM_URL} && \
    unzip terraform_${TERRAFORM_VERSION}_linux_amd64.zip && \
    mv terraform /usr/bin

# Get PowerShell Core
RUN curl -L ${POWERSHELL_URL} -o /tmp/${POWERSHELL_DOWNLOAD_PACKAGE} && \
    mkdir -p /opt/microsoft/powershell/7 && \
    tar zxf /tmp/${POWERSHELL_DOWNLOAD_PACKAGE} -C /opt/microsoft/powershell/7 && \
    chmod +x /opt/microsoft/powershell/7/pwsh

# New stage to remove tar.gz layers from the final image
FROM ${IMAGE_REPO}:${IMAGE_VERSION}

# Copy only the files we need from the previous stage
COPY --from=installer-env ["/usr/bin/terraform", "/usr/bin/terraform"]
COPY --from=installer-env ["/opt/microsoft/powershell/7", "/opt/microsoft/powershell/7"]
RUN ln -s /opt/microsoft/powershell/7/pwsh /usr/bin/pwsh

ENV ANSIBLE_VERSION=2.9.9
ENV ALPINE_ANSIBLE_VERSION=2.9.9-r0
ENV WHEEL_VERSION=0.30.0
ENV APK_ADD="bash py3-pip ansible=${ALPINE_ANSIBLE_VERSION}"
ENV APK_POWERSHELL="ca-certificates less ncurses-terminfo-base krb5-libs libgcc libintl libssl1.1 libstdc++ tzdata userspace-rcu zlib icu-libs"

# Install core packages
RUN apk upgrade --update && \
    apk add --no-cache ${APK_ADD} ${APK_POWERSHELL} && \
    apk -X https://dl-cdn.alpinelinux.org/alpine/edge/main add --no-cache lttng-ust

# Install Ansible and other packages
RUN python3 -m pip install --no-cache-dir --upgrade pip && \
    pip install --no-cache-dir wheel==${WHEEL_VERSION} && \
    pip install --no-cache-dir ansible[azure]==${ANSIBLE_VERSION} mitogen && \
    pip install --no-cache-dir netaddr xmltodict openshift

CMD tail -f /dev/null

Now that I have the dockerfiles, I need to create the images; docker build is the command I need.

docker build -f <dockerfile to use> -t <image name> .
e.g.
docker build -f terraform.dockerfile -t iac_terraform_image .

Run Image

Once I have the image, I need to test it out. As I installed bash as part of my image I can run an interactive command that will provide me with the bash shell.

e.g.
docker run -it --entrypoint=/bin/bash iac_terraform_image

From the shell I can then run commands like terraform --version.

So now I have an image running in a container; what about the infrastructure code I want to run? I could update the image to include git and pull my source code into the container, or I could mount a volume in the container pointing to my source code on the host machine. I will use a volume for this example.

docker run -it --entrypoint=/bin/bash --volume <my source code>:/<name of the mount> <image name>
e.g.
docker run -it --entrypoint=/bin/bash --volume c:\users\<your profile>\Source:/mycode iac_terraform_image

As I want to deploy infrastructure to Azure I need to define some credentials; for this example I will add them as environment variables in a separate local file. Note: this file should not be added to source control.

The file contents look like this for Terraform:

ARM_SUBSCRIPTION_ID=<subscription id>
ARM_TENANT_ID=<tenant id>
ARM_CLIENT_ID=<app id>
ARM_CLIENT_SECRET=<client secret>

The file contents look like this for Ansible:

AZURE_SUBSCRIPTION_ID=<subscription id>
AZURE_TENANT=<tenant id>
AZURE_CLIENT_ID=<app id>
AZURE_SECRET=<client secret>

For the image that uses both tools, the file simply combines the Terraform and Ansible environment values, as shown below.
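
e.g. a combined file contains both sets of values:

ARM_SUBSCRIPTION_ID=<subscription id>
ARM_TENANT_ID=<tenant id>
ARM_CLIENT_ID=<app id>
ARM_CLIENT_SECRET=<client secret>
AZURE_SUBSCRIPTION_ID=<subscription id>
AZURE_TENANT=<tenant id>
AZURE_CLIENT_ID=<app id>
AZURE_SECRET=<client secret>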

I’ve called the file docker_env.txt and can now add that to the docker run command to include those environment variables.

e.g.
docker run -it --entrypoint=/bin/bash --volume c:\users\<your profile>\Source:/mycode --env-file docker_env.txt iac_terraform_image

Share Image

I now have an image, attached my source code, added my environment and I can run commands. The next step is to share this image with others: I can upload it to Docker Hub (public) or use another container registry like Azure Container Registry (private). I will use the Azure Container Registry as it's private.

I need to log in to the registry, so I'll use the Azure CLI to do this.

az acr login --name <your container registry>

Next I need to tag the image I want to push using the Azure Container Registry name with docker tag.

e.g.
docker tag iac_terraform_image yourcontainerregistry.azurecr.io/iac_terraform_image

And now I can push the image to the Azure Container Registry using docker push.

e.g.
docker push yourcontainerregistry.azurecr.io/iac_terraform_image

For others to now use the image (assuming they have access to the Azure Container Registry) they can use docker pull.

e.g.
docker pull yourcontainerregistry.azurecr.io/iac_terraform_image

Running the newly pulled image.

docker run -it --entrypoint=/bin/bash --volume c:\users\<your profile>\Source:/mycode --env-file docker_env.txt yourcontainerregistry.azurecr.io/iac_terraform_image

Summary

Using containers really helps get an environment set up quickly and easily and allows you to concentrate on the task at hand rather than setting up the tools. If you have many developers this can save a lot of time, and the image can be easily updated and versioned.

I hope this was useful and helps create your own IaC development environments.