DevOps, GitHub Actions, Security

Building Securely with Nuke.Build: Integrating Snyk Scans for .NET Projects

Introduction

As a .NET developer familiar with crafting CI/CD pipelines in YAML using platforms like Azure Pipelines or GitHub Actions, my recent discovery of Nuke.Build sparked considerable interest. This alternative offers the ability to build pipelines in the familiar C# language, complete with IntelliSense for auto-completion, and the flexibility to run and debug pipelines locally—a notable departure from the constraints of YAML pipelines, which often rely on remote build agents.

While exploring Nuke.Build’s developer-centric features, it became apparent that this tool not only enhances the developer experience but also provides an opportunity to seamlessly integrate security practices into the development workflow. As someone deeply invested in promoting developer-first application security, the prospect of incorporating security scans directly into the development lifecycle resonated strongly with me, aligning perfectly with my desire for rapid feedback on application security.

Given my role as a Snyk Ambassador, it was only natural to explore how I could leverage Snyk’s robust security scanning capabilities within the Nuke.Build pipeline to bolster the security posture of .NET projects.

In this blog post, I’ll demonstrate the creation of a pipeline using Nuke.Build and showcase the addition of Snyk scan capability, along with Software Bill of Materials (SBOM) generation. Through this proactive approach, we’ll highlight the ease of integrating a layer of security within the development lifecycle.

Getting Started

Following the Getting Started Guide on the Nuke.Build website, I swiftly integrated a build project into my solution. For the sake of simplicity, I opted to consolidate the .NET Restore, Build, and Test actions into a single target, along with adding a Clean target.

In Nuke.Build, a “target” refers to individual build steps that can be executed independently or in sequence. By combining multiple actions into a single target, I aimed to streamline the build process and eliminate unnecessary complexity.

The resulting build code looks like this:

using Nuke.Common;
using Nuke.Common.IO;
using Nuke.Common.ProjectModel;
using Nuke.Common.Tools.DotNet;

class Build : NukeBuild
{
    public static int Main() => Execute<Build>(x => x.BuildTestCode);

    [Parameter("Configuration to build - Default is 'Debug' (local) or 'Release' (server)")]
    readonly Configuration Configuration = IsLocalBuild ? Configuration.Debug : Configuration.Release;

    [Solution(GenerateProjects = true)] readonly Solution Solution;

    AbsolutePath SourceDirectory => RootDirectory / "src";
    AbsolutePath TestsDirectory => RootDirectory / "tests";

    Target Clean => _ => _
        .Executes(() =>
        {
            SourceDirectory.GlobDirectories("*/bin", "*/obj").DeleteDirectories();
        });

    Target BuildTestCode => _ => _
        .DependsOn(Clean)
        .Executes(() =>
        {
            DotNetTasks.DotNetRestore(_ => _
                .SetProjectFile(Solution)
            );
            DotNetTasks.DotNetBuild(_ => _
                .EnableNoRestore()
                .SetProjectFile(Solution)
                .SetConfiguration(Configuration)
                .SetProperty("SourceLinkCreate", true)
            );
            DotNetTasks.DotNetTest(_ => _
                .EnableNoRestore()
                .EnableNoBuild()
                .SetConfiguration(Configuration)
                .SetTestAdapterPath(TestsDirectory / "*.Tests")
            );
        });
}

Running nuke from my Windows terminal built the solution and ran the tests.

Adding Security Scans

With the basic pipeline set up to build the code and run the tests, the next step is to integrate the Snyk scan into the pipeline. While Nuke supports a variety of CLI tools, unfortunately, Snyk is not among them.

To begin, you’ll need to create a free account on the Snyk platform if you haven’t already done so. Once registered, you can install the Snyk CLI using npm. If you have Node.js installed locally, install it by running:

npm install snyk@latest -g

Given that the Snyk CLI isn’t directly supported by Nuke, I turned to the Nuke documentation to explore possible solutions for running the CLI. Two options caught my attention: PowerShellTasks and DockerTasks.

To execute the necessary tasks for the Snyk scan, a few steps are required. These include authorizing a connection to Snyk, performing an open-source scan, potentially conducting a code scan, and generating a Software Bill of Materials (SBOM).

Let’s delve into each of these tasks using PowerShellTasks in Nuke. Firstly, let’s tackle authorization. The CLI command for authorization is:

snyk auth

Running this command typically opens a web browser to the Snyk platform, allowing you to authorize access. However, this method isn’t suitable for automated builds on a remote agent. Instead, we need to provide credentials. If you’re using a free account, your user will have an API Token available, which you can find on your account settings page under “API Token.” For enterprise accounts, you can create a service account specifically for this purpose.

To incorporate the Snyk Token into our application, let’s add a parameter to the code:

[Parameter("Snyk Token to interact with the API")] readonly string SnykToken;

Next, we’ll create a new target to execute the authorization command using PowerShellTasks and pass in the Snyk Token:

Target SnykAuth => _ => _
    .DependsOn(BuildTestCode)
    .Executes(() =>
    {          
        PowerShellTasks.PowerShell(_ => _
            .SetCommand("npm install snyk@latest -g")
        );
        PowerShellTasks.PowerShell(_ => _
            .SetCommand($"snyk auth {SnykToken}")
        );
    });

NOTE: This assumes that the build agent does not have the Snyk CLI installed

With authorization complete, our next task is to add a target for the Snyk Open Source scan, ensuring it depends on the Snyk Auth target:

Target SnykTest => _ => _
    .DependsOn(SnykAuth)
    .Executes(() =>
    {
        // Snyk Test
        PowerShellTasks.PowerShell(_ => _
          .SetCommand("snyk test --all-projects --exclude=build")
        );
    });

Including the --all-projects flag ensures that all projects are scanned, which is good practice for .NET projects. Additionally, I’ve added an exclusion for the build project to focus the scan on application issues. I typically rely on Snyk Monitor attached to my GitHub Repo to detect issues in the entire repository, leaving this scan to concentrate solely on the application being deployed.

Finally, we need to update the Execute method to include the Snyk Test:

public static int Main() => Execute<Build>(x => x.BuildTestCode, x => x.SnykTest);

Running nuke again from the Windows terminal now prompts for Snyk authentication in the browser and continues once authenticated.

To prevent this interactive prompt we need to pass the API token value to nuke. It’s a good idea to set an environment variable for your API token, e.g. with PowerShell:

$env:snykApiToken = "<your api token>"
# or using the Snyk CLI
$env:snykApiToken = snyk config get api
# Run nuke passing in the parameter
nuke --snykToken $snykApiToken

Upon executing the scan, it promptly identified several issues, flagging vulnerabilities and failing the SnykTest target.

To control whether the build fails based on the severity of vulnerabilities found, we can add another parameter:

[Parameter("Snyk Severity Threshold (critical, high, medium or low)")] readonly string SnykSeverityThreshold = "high";

The .Requires() call ensures that the value has been set before the target runs. Note that the Snyk CLI expects the threshold in lowercase.

Target SnykTest => _ => _
    .DependsOn(SnykAuth)
    .Requires(() => SnykSeverityThreshold)
    .Executes(() =>
    {
        // Snyk Test
        PowerShellTasks.PowerShell(_ => _
          .SetCommand($"snyk test --all-projects --exclude=build --severity-threshold={SnykSeverityThreshold.ToLowerInvariant()}")
        );
    });
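Outside of Nuke, the same normalization can be sketched in a POSIX shell (hypothetical variable names; `tr` does the lowercasing that ToLowerInvariant() performs in the target above):

```shell
# Value as it might arrive from a parameter, in the wrong case
SNYK_SEVERITY_THRESHOLD="High"

# Normalize to the lowercase form the Snyk CLI expects
threshold=$(printf '%s' "$SNYK_SEVERITY_THRESHOLD" | tr '[:upper:]' '[:lower:]')

echo "snyk test --all-projects --exclude=build --severity-threshold=$threshold"
```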

Now, let’s address running Snyk Code for a SAST scan, which will also need a parameter to control the severity threshold:

[Parameter("Snyk Code Severity Threshold (high, medium or low)")] readonly string SnykCodeSeverityThreshold = "high";

We’ll create another target for the Code test:

Target SnykCodeTest => _ => _
    .DependsOn(SnykAuth)
    .Requires(() => SnykCodeSeverityThreshold)
    .Executes(() =>
    {
        PowerShellTasks.PowerShell(_ => _
            .SetCommand($"snyk code test --all-projects --exclude=build --severity-threshold={SnykCodeSeverityThreshold.ToLowerInvariant()}")
        );
    });

Update the Execute method to include the code test:

public static int Main() => Execute<Build>(x => x.BuildTestCode, x => x.SnykTest, x => x.SnykCodeTest);

With the severity threshold set for both scans, SnykTest continues to find high-severity vulnerabilities, while SnykCodeTest passes.

To generate an SBOM (Software Bill of Materials) using Snyk and publish it as a build artifact, let’s add an output path:

AbsolutePath OutputDirectory => RootDirectory / "outputs";

And include a Produces entry to ensure the artifact is generated and stored in the specified directory:

Target GenerateSbom => _ => _
   .DependsOn(SnykAuth)
   .Produces(OutputDirectory / "*.json")
   .Executes(() =>
   {
       OutputDirectory.CreateOrCleanDirectory();
       PowerShellTasks.PowerShell(_ => _
           .SetCommand($"snyk sbom --all-projects --format spdx2.3+json --json-file-output={OutputDirectory / "sbom.json"}")
       );
   });

Lastly, update the Execute method to include the generation of the SBOM:

public static int Main() => Execute<Build>(x => x.BuildTestCode, x => x.SnykTest, x => x.SnykCodeTest, x => x.GenerateSbom);

Now, when executing Nuke, the SBOM will be generated and stored in the specified directory, ready to be published as a build artifact.
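As a quick sanity check, the generated SBOM can be inspected locally. This is a sketch, assuming jq is installed and using the output path from the target above; SPDX 2.3 JSON documents carry a top-level name and a packages array:

```shell
sbom="outputs/sbom.json"

# Only attempt inspection when the file and jq are actually available
if [ -f "$sbom" ] && command -v jq >/dev/null 2>&1; then
  # Print the document name and the number of packages it lists
  jq -r '.name, (.packages | length)' "$sbom"
fi
```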

Earlier, I mentioned that PowerShellTasks and DockerTasks were both viable options for integrating the Snyk CLI into the Nuke build. Here’s how you can achieve the same tasks using DockerTasks:

using Nuke.Common.Tools.Docker;
...
  Target SnykTest => _ => _
     .DependsOn(BuildTestCode)
     .Requires(() => SnykToken, () => SnykSeverityThreshold)
     .Executes(() =>
     {
         // Snyk Test
         DockerTasks.DockerRun(_ => _
             .EnableRm()
             .SetVolume($"{RootDirectory}:/app")
             .SetEnv($"SNYK_TOKEN={SnykToken}")
             .SetImage("snyk/snyk:dotnet")
             .SetCommand($"snyk test --all-projects --exclude=build --severity-threshold={SnykSeverityThreshold.ToLowerInvariant()}")
         );
     });
 Target SnykCodeTest => _ => _
     .DependsOn(BuildTestCode)
     .Requires(() => SnykToken, () => SnykCodeSeverityThreshold)
     .Executes(() =>
     {
         DockerTasks.DockerRun(_ => _
             .EnableRm()
             .SetVolume($"{RootDirectory}:/app")
             .SetEnv($"SNYK_TOKEN={SnykToken}")
             .SetImage("snyk/snyk:dotnet")
             .SetCommand($"snyk code test --all-projects --exclude=build --severity-threshold={SnykCodeSeverityThreshold.ToLowerInvariant()}")
         );
     });
 Target GenerateSbom => _ => _
     .DependsOn(BuildTestCode)
     .Produces(OutputDirectory / "*.json")
     .Requires(() => SnykToken)
     .Executes(() =>
     {
         OutputDirectory.CreateOrCleanDirectory();
         DockerTasks.DockerRun(_ => _
             .EnableRm()
             .SetVolume($"{RootDirectory}:/app")
             .SetEnv($"SNYK_TOKEN={SnykToken}")
             .SetImage("snyk/snyk:dotnet")
             .SetCommand($"snyk sbom --all-projects --format spdx2.3+json --json-file-output={OutputDirectory.Name}/sbom.json")
         );
     });

NOTE: Snyk Auth is not required as a separate target because authentication happens inside the Snyk container via the SNYK_TOKEN environment variable
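For reference, the DockerTasks calls above correspond roughly to this manual invocation (a sketch in a POSIX shell; the image and mount come from the code above, and SNYK_TOKEN is assumed to be set in the environment):

```shell
image="snyk/snyk:dotnet"
mount="$PWD:/app"

# Guarded so the sketch is a no-op on machines without Docker
if command -v docker >/dev/null 2>&1; then
  docker run --rm -v "$mount" -e "SNYK_TOKEN=$SNYK_TOKEN" "$image" \
    snyk test --all-projects --exclude=build --severity-threshold=high
fi
```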

Automating Nuke.Build with GitHub Actions: Generating YAML

Nuke comes with another useful feature: the ability to see a plan showing which targets are executed and when. Simply running nuke --plan produces an HTML visualisation of the plan.

With everything configured for local execution, it’s time to think about running this in a pipeline. Nuke supports various CI platforms, but for this demonstration, I’ll be using GitHub Actions. Nuke provides attributes to automatically generate the file to run the code:

using Nuke.Common.CI.GitHubActions;

[GitHubActions(
    "continuous",
    GitHubActionsImage.UbuntuLatest,
    On = new[] { GitHubActionsTrigger.Push },
    ImportSecrets = new[] { nameof(SnykToken), nameof(SnykSeverityThreshold), nameof(SnykCodeSeverityThreshold) },
    InvokedTargets = new[] { nameof(BuildTestCode), nameof(SnykTest), nameof(SnykCodeTest), nameof(GenerateSbom) })]
class Build : NukeBuild
...

To pass in the parameters for GitHub Actions, we’ll need to designate the token as a Secret:

[Parameter("Snyk Token to interact with the API")][Secret] readonly string SnykToken;

Next, let’s remove the default values for the threshold parameters:

[Parameter("Snyk Severity Threshold (critical, high, medium or low)")] readonly string SnykSeverityThreshold;
[Parameter("Snyk Code Severity Threshold (high, medium or low)")] readonly string SnykCodeSeverityThreshold;

We’ll then set the values in the .nuke/parameters.json file:

{
  "$schema": "./build.schema.json",
  "Solution": "Useful.Extensions.sln",
  "SnykSeverityThreshold": "high",
  "SnykCodeSeverityThreshold": "high"
}

Running Nuke again auto-generates the following GitHub Actions YAML at .github/workflows/continuous.yml:

# ------------------------------------------------------------------------------
# <auto-generated>
#
#     This code was generated.
#
#     - To turn off auto-generation set:
#
#         [GitHubActions (AutoGenerate = false)]
#
#     - To trigger manual generation invoke:
#
#         nuke --generate-configuration GitHubActions_continuous --host GitHubActions
#
# </auto-generated>
# ------------------------------------------------------------------------------

name: continuous

on: [push]

jobs:
  ubuntu-latest:
    name: ubuntu-latest
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: 'Cache: .nuke/temp, ~/.nuget/packages'
        uses: actions/cache@v3
        with:
          path: |
            .nuke/temp
            ~/.nuget/packages
          key: ${{ runner.os }}-${{ hashFiles('**/global.json', '**/*.csproj', '**/Directory.Packages.props') }}
      - name: 'Run: BuildTestCode, SnykTest, SnykCodeTest, GenerateSbom'
        run: ./build.cmd BuildTestCode SnykTest SnykCodeTest GenerateSbom
        env:
          SnykToken: ${{ secrets.SNYK_TOKEN }}
          SnykSeverityThreshold: ${{ secrets.SNYK_SEVERITY_THRESHOLD }}
          SnykCodeSeverityThreshold: ${{ secrets.SNYK_CODE_SEVERITY_THRESHOLD }}
      - name: 'Publish: outputs'
        uses: actions/upload-artifact@v3
        with:
          name: outputs
          path: outputs

This YAML file is automatically generated by Nuke and is ready to be used in your GitHub Actions workflow. It sets up the necessary steps to run your build, including caching dependencies, executing targets, and publishing artifacts.

NOTE: When I first committed the nuke build files, GitHub Actions gave me a permission denied error when running build.cmd. Running these commands and committing them resolved the problem:

git update-index --chmod=+x .\build.cmd
git update-index --chmod=+x .\build.sh

Here is the output of the GitHub Actions run for this pipeline. After fixing the vulnerabilities in my code, the workflow successfully passed.

Here’s the full C# source code for both versions, starting with the PowerShell version:

using Nuke.Common;
using Nuke.Common.CI.GitHubActions;
using Nuke.Common.IO;
using Nuke.Common.ProjectModel;
using Nuke.Common.Tools.DotNet;
using Nuke.Common.Tools.PowerShell;

[GitHubActions(
    "continuous",
    GitHubActionsImage.UbuntuLatest,
    On = new[] { GitHubActionsTrigger.Push },
    ImportSecrets = new[] { nameof(SnykToken), nameof(SnykSeverityThreshold), nameof(SnykCodeSeverityThreshold) },
    InvokedTargets = new[] { nameof(BuildTestCode), nameof(SnykTest), nameof(SnykCodeTest), nameof(GenerateSbom) })]
class Build : NukeBuild
{
    public static int Main() => Execute<Build>(x => x.BuildTestCode, x => x.SnykTest, x => x.SnykCodeTest, x => x.GenerateSbom);

    [Parameter("Configuration to build - Default is 'Debug' (local) or 'Release' (server)")]
    readonly Configuration Configuration = IsLocalBuild ? Configuration.Debug : Configuration.Release;

    [Parameter("Snyk Token to interact with the API")][Secret] readonly string SnykToken;
    [Parameter("Snyk Severity Threshold (critical, high, medium or low)")] readonly string SnykSeverityThreshold;
    [Parameter("Snyk Code Severity Threshold (high, medium or low)")] readonly string SnykCodeSeverityThreshold;

    [Solution(GenerateProjects = true)] readonly Solution Solution;

    AbsolutePath SourceDirectory => RootDirectory / "src";
    AbsolutePath TestsDirectory => RootDirectory / "tests";
    AbsolutePath OutputDirectory => RootDirectory / "outputs";

    Target Clean => _ => _
        .Executes(() =>
        {
            SourceDirectory.GlobDirectories("*/bin", "*/obj").DeleteDirectories();
        });

    Target BuildTestCode => _ => _
        .DependsOn(Clean)
        .Executes(() =>
        {
            DotNetTasks.DotNetRestore(_ => _
                .SetProjectFile(Solution)
            );
            DotNetTasks.DotNetBuild(_ => _
                .EnableNoRestore()
                .SetProjectFile(Solution)
                .SetConfiguration(Configuration)
                .SetProperty("SourceLinkCreate", true)
            );
            DotNetTasks.DotNetTest(_ => _
                .EnableNoRestore()
                .EnableNoBuild()
                .SetConfiguration(Configuration)
                .SetTestAdapterPath(TestsDirectory / "*.Tests")
            );
        });
    Target SnykAuth => _ => _
     .DependsOn(BuildTestCode)
     .Executes(() =>
     {
         PowerShellTasks.PowerShell(_ => _
             .SetCommand("npm install snyk@latest -g")
         );
         PowerShellTasks.PowerShell(_ => _
             .SetCommand($"snyk auth {SnykToken}")
         );
     });
    Target SnykTest => _ => _
        .DependsOn(SnykAuth)
        .Requires(() => SnykSeverityThreshold)
        .Executes(() =>
        {
            PowerShellTasks.PowerShell(_ => _
                .SetCommand($"snyk test --all-projects --exclude=build --severity-threshold={SnykSeverityThreshold.ToLowerInvariant()}")
            );
        });
    Target SnykCodeTest => _ => _
        .DependsOn(SnykAuth)
        .Requires(() => SnykCodeSeverityThreshold)
        .Executes(() =>
        {
            PowerShellTasks.PowerShell(_ => _
                .SetCommand($"snyk code test --all-projects --exclude=build --severity-threshold={SnykCodeSeverityThreshold.ToLowerInvariant()}")
            );
        });
    Target GenerateSbom => _ => _
        .DependsOn(SnykAuth)
        .Produces(OutputDirectory / "*.json")
        .Executes(() =>
        {
            OutputDirectory.CreateOrCleanDirectory();
            PowerShellTasks.PowerShell(_ => _
                .SetCommand($"snyk sbom --all-projects --format spdx2.3+json --json-file-output={OutputDirectory / "sbom.json"}")
            );
        });
}

And the Docker version:
using Nuke.Common;
using Nuke.Common.CI.GitHubActions;
using Nuke.Common.IO;
using Nuke.Common.ProjectModel;
using Nuke.Common.Tools.Docker;
using Nuke.Common.Tools.DotNet;

[GitHubActions(
    "continuous",
    GitHubActionsImage.UbuntuLatest,
    On = new[] { GitHubActionsTrigger.Push },
    ImportSecrets = new[] { nameof(SnykToken), nameof(SnykSeverityThreshold), nameof(SnykCodeSeverityThreshold) },
    InvokedTargets = new[] { nameof(BuildTestCode), nameof(SnykTest), nameof(SnykCodeTest), nameof(GenerateSbom) })]
class Build : NukeBuild
{
    public static int Main() => Execute<Build>(x => x.BuildTestCode, x => x.SnykTest, x => x.SnykCodeTest, x => x.GenerateSbom);

    [Parameter("Configuration to build - Default is 'Debug' (local) or 'Release' (server)")]
    readonly Configuration Configuration = IsLocalBuild ? Configuration.Debug : Configuration.Release;

    [Parameter("Snyk Token to interact with the API")][Secret] readonly string SnykToken;
    [Parameter("Snyk Severity Threshold (critical, high, medium or low)")] readonly string SnykSeverityThreshold;
    [Parameter("Snyk Code Severity Threshold (high, medium or low)")] readonly string SnykCodeSeverityThreshold;

    [Solution(GenerateProjects = true)] readonly Solution Solution;

    AbsolutePath SourceDirectory => RootDirectory / "src";
    AbsolutePath TestsDirectory => RootDirectory / "tests";
    AbsolutePath OutputDirectory => RootDirectory / "outputs";

    Target Clean => _ => _
        .Executes(() =>
        {
            SourceDirectory.GlobDirectories("*/bin", "*/obj").DeleteDirectories();
        });

    Target BuildTestCode => _ => _
        .DependsOn(Clean)
        .Executes(() =>
        {
            DotNetTasks.DotNetRestore(_ => _
                .SetProjectFile(Solution)
            );
            DotNetTasks.DotNetBuild(_ => _
                .EnableNoRestore()
                .SetProjectFile(Solution)
                .SetConfiguration(Configuration)
                .SetProperty("SourceLinkCreate", true)
            );
            DotNetTasks.DotNetTest(_ => _
                .EnableNoRestore()
                .EnableNoBuild()
                .SetConfiguration(Configuration)
                .SetTestAdapterPath(TestsDirectory / "*.Tests")
            );
        });
    Target SnykTest => _ => _
        .DependsOn(BuildTestCode)
        .Requires(() => SnykToken, () => SnykSeverityThreshold)
        .Executes(() =>
        {
            // Snyk Test
            DockerTasks.DockerRun(_ => _
                .EnableRm()
                .SetVolume($"{RootDirectory}:/app")
                .SetEnv($"SNYK_TOKEN={SnykToken}")
                .SetImage("snyk/snyk:dotnet")
                .SetCommand($"snyk test --all-projects --exclude=build --severity-threshold={SnykSeverityThreshold.ToLowerInvariant()}")
            );
        });
    Target SnykCodeTest => _ => _
        .DependsOn(BuildTestCode)
        .Requires(() => SnykToken, () => SnykCodeSeverityThreshold)
        .Executes(() =>
        {
            DockerTasks.DockerRun(_ => _
                .EnableRm()
                .SetVolume($"{RootDirectory}:/app")
                .SetEnv($"SNYK_TOKEN={SnykToken}")
                .SetImage("snyk/snyk:dotnet")
                .SetCommand($"snyk code test --all-projects --exclude=build --severity-threshold={SnykCodeSeverityThreshold.ToLowerInvariant()}")
            );
        });
    Target GenerateSbom => _ => _
        .DependsOn(BuildTestCode)
        .Produces(OutputDirectory / "*.json")
        .Requires(() => SnykToken)
        .Executes(() =>
        {
            OutputDirectory.CreateOrCleanDirectory();
            DockerTasks.DockerRun(_ => _
                .EnableRm()
                .SetVolume($"{RootDirectory}:/app")
                .SetEnv($"SNYK_TOKEN={SnykToken}")
                .SetImage("snyk/snyk:dotnet")
                .SetCommand($"snyk sbom --all-projects --format spdx2.3+json --json-file-output={OutputDirectory.Name}/sbom.json")
            );
        });
}

Final Thoughts

Nuke.Build is a great concept for build pipelines, and being able to run and debug a pipeline locally, making sure paths and everything are correct, really helps. Couple that with the ability to generate GitHub Actions and other supported CI pipeline configurations to run the code, and you have a big benefit.

Adding security scans to catch issues early is another plus, and I am glad that it’s possible to run those scans in multiple ways in Nuke. Hopefully the Snyk CLI can be supported in Nuke directly in the future.

If you haven’t checked out Nuke yet, I would definitely recommend giving it a try and seeing the benefits for yourself.

Azure, IaC, Pulumi, Security

Securing Shared Pulumi State in Azure

Introduction

Welcome to this blog post on securing Pulumi state in Azure. In the fast-evolving world of cloud infrastructure, ensuring the security of your code and sensitive information is paramount. Pulumi, a powerful Infrastructure as Code (IaC) platform, allows developers to express their infrastructure needs using familiar programming languages. As teams embrace Pulumi to manage their Azure resources, safeguarding the state files containing critical information becomes crucial.

In this blog post, we will explore an approach to securing Pulumi state files using Azure Storage Accounts, Azure Key Vault and Azure AD.

Before we begin configuring Pulumi state files, ensure you have the following prerequisites in place:

  • Azure CLI: Make sure you have the latest Azure CLI installed (2.53.1 at the time of writing). The Azure CLI provides a powerful and user-friendly interface to interact with Azure resources.
  • Pulumi: Make sure you have the latest Pulumi installed (3.90.1 at the time of writing) to leverage the most recent features and improvements provided by the Pulumi platform.
  • Azure AD Permissions: To configure the Azure AD Groups, you need sufficient permissions within Azure Active Directory. Ensure that you have appropriate permissions, such as being assigned the “User Access Administrator” role on the Azure subscription where you’ll be deploying resources, as well as permissions to deploy into the subscription, e.g. “Contributor”.
  • PowerShell: Make sure you have PowerShell installed (7.3.9 at the time of writing)

NOTE: All the commands used were run from a PowerShell 7 terminal on Windows


Setup Azure Infrastructure

In this section, we will walk you through the process of creating an Azure Storage Account and an Azure Key Vault along with configuring access roles and permissions with Azure AD.

First of all, log in to Azure with the Azure CLI and set the subscription that you are going to deploy to.

az login
az account set -s "<name of subscription>"

We will start by declaring some variables needed for the configuration, such as storage name, key vault name, resource group, etc. (Don’t forget that storage account and key vault names need to be globally unique.)

$location="westeurope"
$stateResourceGroup="rg-pulumi-state-dev-euw"
$storageName="stpulumistoredevweu"
$storageContainerName="iacstate"
$storageSku="Standard_ZRS"
$storageDeleteRetention=7
$keyVaultName="kv-pulumisecrets-dev-euw"
$keyName="pulumisecret"

To organize and manage our Azure resources effectively, we’ll create a dedicated resource group. This container will house all the resources related to our Pulumi deployments, allowing for easy management.

az group create --location $location --name $stateResourceGroup

The next step is setting up an Azure Storage Account to store your Pulumi state files securely. The Storage Account will ensure durability and redundancy for your infrastructure state, while following good practices for access control and encryption.

az storage account create --name $storageName --resource-group $stateResourceGroup --location $location --sku $storageSku --min-tls-version TLS1_2 --https-only true --allow-shared-key-access false --allow-blob-public-access false --require-infrastructure-encryption

This may seem like quite a long command, but it is configuring the account along with additional security properties:

  • min-tls-version: Specifies the minimum Transport Layer Security (TLS) version required for secure communication with the storage account
  • https-only: true enables HTTPS-only access for the storage account, ensuring that all data access and management operations are encrypted over HTTPS
  • allow-shared-key-access: false disables the use of the account key (shared key) for access to the storage account in favour of Azure AD, which is recommended for improved security
  • allow-blob-public-access: false disables public access to containers and blobs in the storage account, ensuring that they cannot be accessed anonymously
  • require-infrastructure-encryption: Requires the storage service to encrypt all data at rest using Azure Storage Service Encryption (SSE). This is good practice for data security, and as we will no doubt be storing secrets in our state, it’s a good option to turn on
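To double-check that the hardening flags took effect, you can query the account back. This is a sketch in a POSIX shell (adapt the variable syntax for PowerShell); the property names are the camelCase keys the Azure CLI returns, and the account and group names are the values declared earlier:

```shell
storageName="stpulumistoredevweu"
stateResourceGroup="rg-pulumi-state-dev-euw"

# Guarded so the sketch is a no-op where the Azure CLI is unavailable
if command -v az >/dev/null 2>&1; then
  az storage account show -n "$storageName" -g "$stateResourceGroup" \
    --query "{tls:minimumTlsVersion, httpsOnly:enableHttpsTrafficOnly, sharedKey:allowSharedKeyAccess, publicBlobs:allowBlobPublicAccess}" \
    -o table
fi
```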

To prevent accidental deletions of your Pulumi state files, it’s a good idea to update the retention period for both containers and blobs. Enabling delete retention ensures data remains accessible for a specified duration, acting as a safety net against data loss.

Let’s update the retention period to reinforce the security of your Pulumi state files and maintain the integrity of your cloud infrastructure.

az storage account blob-service-properties update -n $storageName -g $stateResourceGroup --enable-delete-retention true --delete-retention-days $storageDeleteRetention --enable-container-delete-retention true --container-delete-retention-days $storageDeleteRetention

  • enable-delete-retention: Enables blob delete retention
  • delete-retention-days: Indicates the number of days that a deleted blob should be retained
  • enable-container-delete-retention: Enables container delete retention
  • container-delete-retention-days: Indicates the number of days that a deleted container should be retained

With our Azure Storage Account settings optimised for enhanced protection, it’s time to create the container where we’ll store our Pulumi state files. To further bolster security, we’ll configure the container with Azure AD authentication.

az storage container create --name $storageContainerName --account-name $storageName --auth-mode login

Having set up our Azure Storage Account, the next critical step is to create the Azure Key Vault to use as Pulumi’s secrets provider.

az keyvault create --name $keyVaultName --resource-group $stateResourceGroup --location $location

To encrypt values within our Pulumi state file securely, we’ll generate an encryption key in Azure Key Vault.

az keyvault key create --name $keyName --kty RSA --size 4096 --vault-name $keyVaultName

Next we will enable RBAC so that we can grant permissions to the same group of users given access to the Storage Account.

az keyvault update --name $keyVaultName --enable-rbac-authorization true

Here is the full PowerShell script with comments:

$location="westeurope"
$stateResourceGroup="rg-pulumi-state-dev-euw"
$storageName="stpulumistoredevweu"
$storageContainerName="iacstate"
$storageSku="Standard_ZRS"
$storageDeleteRetention=7
$keyVaultName="kv-pulumisecrets-dev-euw"
$keyName="pulumisecret"

# Create Resource Group
az group create --location $location --name $stateResourceGroup

# Create Secure Storage Account
az storage account create --name $storageName --resource-group $stateResourceGroup --location $location --sku $storageSku --min-tls-version TLS1_2 --https-only true --allow-shared-key-access false --allow-blob-public-access false --require-infrastructure-encryption
az storage account blob-service-properties update -n $storageName -g $stateResourceGroup --enable-delete-retention true --delete-retention-days $storageDeleteRetention --enable-container-delete-retention true --container-delete-retention-days $storageDeleteRetention
az storage container create --name $storageContainerName --account-name $storageName --auth-mode login

# Create Key Vault
az keyvault create --name $keyVaultName --resource-group $stateResourceGroup --location $location

# Create Key
az keyvault key create --name $keyName --kty RSA --size 4096 --vault-name $keyVaultName

# Enable RBAC
az keyvault update --name $keyVaultName --enable-rbac-authorization true

Azure AD Group and Role Assignment

With our resources properly configured, it’s time to grant access by adding an Azure AD group. We have two options to achieve this:

  1. Azure Portal: You can create an AD group directly in the Azure Portal. Refer to the “How to manage groups” tutorial on Microsoft Learn for step-by-step guidance.
  2. Azure CLI: Alternatively, you can use the Azure CLI to create the AD group programmatically. This approach provides automation and consistency in managing groups. Use the Azure CLI commands to create the AD group and assign it to the necessary resources.

For this guide, we’ll be using the Azure CLI.

$adGroup="PulumiUsers"
$adGroupId=$(az ad group create --display-name $adGroup --mail-nickname $adGroup --query id -o tsv)

or, if the AD group already exists:

$adGroup="PulumiUsers"
$adGroupId=$(az ad group show -g $adGroup --query id -o tsv)

To grant the necessary permissions on the Storage Account, we will assign the “Storage Blob Data Contributor” role to the Azure AD group. This role provides the group members with the required access to manage blobs within the Storage Account.

$storageId=$(az storage account show -n $storageName --query id -o tsv)
az role assignment create --role 'Storage Blob Data Contributor' --assignee-object-id $adGroupId --assignee-principal-type Group --scope $storageId

To grant the necessary permissions on the Key Vault, we will assign the “Key Vault Crypto User” role to the Azure AD group. This role allows group members to perform cryptographic operations using keys within the Key Vault.

$keyVaultId=$(az keyvault show --name $keyVaultName --query id -o tsv)
az role assignment create --role 'Key Vault Crypto User' --assignee-object-id $adGroupId --assignee-principal-type Group --scope $keyVaultId

You will now need to add users to the Azure AD group to grant them access to the Key Vault and Storage Account, so they can securely manage secrets and Pulumi state files.
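As a quick illustration, the Azure CLI can add a member to the group as well; the example below adds the currently signed-in user (this is a sketch, assuming the $adGroup variable from the earlier steps is still set):

```shell
# Hypothetical example: add the currently signed-in user to the Azure AD group
# ($adGroup was defined earlier as "PulumiUsers")
az ad group member add --group $adGroup --member-id $(az ad signed-in-user show --query id -o tsv)
```

Note that Azure AD group membership changes can take a few minutes to propagate to role assignments.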

Configuring Local Development

To configure your local development environment, log in to Azure using the Azure CLI (if you are not already logged in) and, if needed, set the subscription where the Storage Account and Key Vault were created:

az login
az account set -s "<name of subscription>"

Now we need to configure a couple of environment variables so that Pulumi can use the Azure Key Vault and Storage Account.

$storageContainerName="iacstate"
$keyVaultName="kv-pulumisecrets-dev-euw"
$env:AZURE_KEYVAULT_AUTH_VIA_CLI=$true
$env:AZURE_STORAGE_ACCOUNT="stpulumistoredevweu"

pulumi login --cloud-url azblob://$storageContainerName

After logging in to Pulumi, it’s time to create a new stack. As I am a C# developer, I am going to use azure-csharp.

pulumi new azure-csharp --secrets-provider azurekeyvault://$keyVaultName.vault.azure.net/keys/pulumisecret

Other Pulumi languages are also supported, e.g. azure-typescript, azure-python, etc.

When you run the command pulumi new, the Pulumi CLI will prompt you to provide the following information:

  • Project Name e.g. pulumi-demo
  • Project Description e.g. Simple App in C#
  • Stack Name e.g. Dev
  • Azure Native Location e.g. WestEurope

TIP: if you have an existing stack, it can be updated to use the Azure Key Vault as its secrets provider, e.g.

pulumi stack change-secrets-provider azurekeyvault://$keyVaultName.vault.azure.net/keys/pulumisecret
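Either way, the stack settings file records the chosen secrets provider. The resulting file looks roughly like this (the values below are illustrative, based on the names used earlier in this guide):

```yaml
# Pulumi.dev.yaml (illustrative values)
secretsprovider: azurekeyvault://kv-pulumisecrets-dev-euw.vault.azure.net/keys/pulumisecret
encryptedkey: <base64-encoded data key, wrapped by the Key Vault key>
```

The encryptedkey is a per-stack data key that Pulumi wraps with your Key Vault key, so losing access to the Key Vault key means losing the ability to decrypt the stack’s secrets.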

With your secure shared state store in place, you’re all set to begin building your infrastructure. By adding this configuration to source control, your team members can collaborate seamlessly. They’ll be able to build and run the infrastructure effortlessly, provided they have the necessary prerequisites installed and are part of the designated Azure AD group.

Running pulumi up with the basic generated code builds a Resource Group and an Azure Storage Account, with random suffixes appended to make the names unique.

The state file is also populated, with secrets such as primaryStorageKey stored encrypted.
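You can see the Key Vault secrets provider in action by setting and reading back a stack secret yourself (dbPassword here is a hypothetical config key, not part of the generated project):

```shell
# Hypothetical config key: the value is encrypted with the Key Vault key
# before being written to Pulumi.dev.yaml
pulumi config set --secret dbPassword "S3cretValue!"
# Reading it back decrypts transparently via Azure Key Vault
pulumi config get dbPassword
```

If you open Pulumi.dev.yaml afterwards, the value appears only in its encrypted form.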

Note: The generated code for the Azure Storage Account uses the defaults and as such lacks some essential security options: for example, TLS 1.0 is allowed, blobs are publicly accessible, and insecure (HTTP) transfer is enabled.

To bolster the security of your Azure Storage Account (if you are keeping it), add the following properties to the generated StorageAccountArgs:

var storageAccount = new StorageAccount("sa", new StorageAccountArgs
{
    // ...existing generated arguments...
    AllowBlobPublicAccess = false,
    EnableHttpsTrafficOnly = true,
    MinimumTlsVersion = "TLS1_2",
});

Deployment

With our infrastructure fully defined and tested, it’s time to deploy it in a repeatable and automated manner. Let’s explore how to set up the deployment process using Azure Pipelines.

To get started, we’ll need the Pulumi extension, the “Pulumi Azure Pipelines Task,” which can be found on the Visual Studio Marketplace.

To grant the necessary access to the shared state, we must add the build agent service principal to the designated Azure AD group.

$buildAgentSPName="<build agent service principal name>"
$buildAgentSPId=$(az ad sp list --display-name $buildAgentSPName --query [0].id -o tsv)
az ad group member add -g $adGroup --member-id $buildAgentSPId
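Before running the pipeline, it can be worth confirming the membership actually took effect; this optional check reuses the $adGroup and $buildAgentSPId variables from the commands above:

```shell
# Optional sanity check: prints "true" once the service principal
# is a member of the group
az ad group member check --group $adGroup --member-id $buildAgentSPId --query value -o tsv
```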

Now let’s see what an Azure Pipeline looks like using the Pulumi task:

Azure Pipelines Example

trigger: none
pr: none
pool:
  vmImage: 'ubuntu-latest'
variables:
  subscription: '<Subscription Name>'
  storageName: 'stpulumistoredevweu'
  storageContainerName: 'iacstate'
steps:
- task: Pulumi@1
  displayName: 'Pulumi Preview'
  inputs:
    loginArgs: --cloud-url azblob://$(storageContainerName)
    azureSubscription: $(subscription)
    command: 'preview'
    cwd: '$(Build.SourcesDirectory)'
    createStack: true
    stack: 'organization/pulumi-demo/dev'
  env:
    AZURE_STORAGE_ACCOUNT: $(storageName)
- task: Pulumi@1
  displayName: 'Pulumi Up'
  inputs:
    loginArgs: --cloud-url azblob://$(storageContainerName)
    args: --yes
    azureSubscription: $(subscription)
    command: 'up'
    cwd: '$(Build.SourcesDirectory)'
    createStack: true
    stack: 'organization/pulumi-demo/dev'
  env:
    AZURE_STORAGE_ACCOUNT: $(storageName)

Conclusion

Throughout this process, we’ve demonstrated a straightforward and effective approach to secure your Pulumi state files in Azure, ensuring the safe deployment of your infrastructure.

By leveraging Azure Storage Accounts and Key Vault, and integrating Azure AD for access control, we’ve fortified the confidentiality and integrity of your critical resources.

Now, you’re well-equipped to embark on your Infrastructure as Code (IaC) journey with confidence and security.

Happy IaCing! 🚀😊