DevOps, GitHub Actions, Security

Building Securely with Nuke.Build: Integrating Snyk Scans for .NET Projects

Introduction

As a .NET developer familiar with crafting CI/CD pipelines in YAML using platforms like Azure Pipelines or GitHub Actions, my recent discovery of Nuke.Build sparked considerable interest. This alternative offers the ability to build pipelines in the familiar C# language, complete with IntelliSense for auto-completion, and the flexibility to run and debug pipelines locally—a notable departure from the constraints of YAML pipelines, which often rely on remote build agents.

While exploring Nuke.Build’s developer-centric features, it became apparent that this tool not only enhances the developer experience but also provides an opportunity to seamlessly integrate security practices into the development workflow. As someone deeply invested in promoting developer-first application security, the prospect of incorporating security scans directly into the development lifecycle resonated strongly with me, aligning perfectly with my desire for rapid feedback on application security.

Given my role as a Snyk Ambassador, it was only natural to explore how I could leverage Snyk’s robust security scanning capabilities within the Nuke.Build pipeline to bolster the security posture of .NET projects.

In this blog post, I’ll demonstrate the creation of a pipeline using Nuke.Build and showcase the addition of Snyk scan capability, along with Software Bill of Materials (SBOM) generation. Through this proactive approach, we’ll highlight the ease of integrating a layer of security within the development lifecycle.

Getting Started

Following the Getting Started Guide on the Nuke.Build website, I swiftly integrated a build project into my solution. For the sake of simplicity, I opted to consolidate the .NET Restore, Build, and Test actions into a single target, along with adding a Clean target.

In Nuke.Build, a “target” refers to individual build steps that can be executed independently or in sequence. By combining multiple actions into a single target, I aimed to streamline the build process and eliminate unnecessary complexity.

The resulting build code looks like this:

using Nuke.Common;
using Nuke.Common.IO;
using Nuke.Common.ProjectModel;
using Nuke.Common.Tools.DotNet;

class Build : NukeBuild
{
    public static int Main() => Execute<Build>(x => x.BuildTestCode);

    [Parameter("Configuration to build - Default is 'Debug' (local) or 'Release' (server)")]
    readonly Configuration Configuration = IsLocalBuild ? Configuration.Debug : Configuration.Release;

    [Solution(GenerateProjects = true)] readonly Solution Solution;

    AbsolutePath SourceDirectory => RootDirectory / "src";
    AbsolutePath TestsDirectory => RootDirectory / "tests";

    Target Clean => _ => _
        .Executes(() =>
        {
            SourceDirectory.GlobDirectories("*/bin", "*/obj").DeleteDirectories();
        });

    Target BuildTestCode => _ => _
        .DependsOn(Clean)
        .Executes(() =>
        {
            DotNetTasks.DotNetRestore(_ => _
                .SetProjectFile(Solution)
            );
            DotNetTasks.DotNetBuild(_ => _
                .EnableNoRestore()
                .SetProjectFile(Solution)
                .SetConfiguration(Configuration)
                .SetProperty("SourceLinkCreate", true)
            );
            DotNetTasks.DotNetTest(_ => _
                .EnableNoRestore()
                .EnableNoBuild()
                .SetConfiguration(Configuration)
                .SetTestAdapterPath(TestsDirectory / "*.Tests")
            );
        });
}

Running nuke from my Windows terminal built the solution and ran the tests.

Adding Security Scans

With the basic pipeline set up to build the code and run the tests, the next step is to integrate the Snyk scan into the pipeline. While Nuke supports a variety of CLI tools, unfortunately, Snyk is not among them.

To begin, you’ll need to create a free account on the Snyk platform if you haven’t already done so. Once registered, you can install the Snyk CLI using npm. If you have Node.js installed locally, run:

npm install snyk@latest -g

Given that the Snyk CLI isn’t directly supported by Nuke, I turned to the Nuke documentation to explore possible solutions for running the CLI. Two options caught my attention: PowerShellTasks and DockerTasks.

To execute the necessary tasks for the Snyk scan, a few steps are required. These include authorizing a connection to Snyk, performing an open-source scan, potentially conducting a code scan, and generating a Software Bill of Materials (SBOM).

Let’s delve into each of these tasks using PowerShellTasks in Nuke. Firstly, let’s tackle authorization. The CLI command for authorization is:

snyk auth

Running this command typically opens a web browser to the Snyk platform, allowing you to authorize access. However, this method isn’t suitable for automated builds on a remote agent. Instead, we need to provide credentials. If you’re using a free account, your user will have an API Token available, which you can find on your account settings page under “API Token.” For enterprise accounts, you can create a service account specifically for this purpose.
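For local experimentation, either of these non-interactive forms works (the token value is a placeholder). The SNYK_TOKEN environment variable route is also what the Docker-based approach later in this post relies on:

```shell
# Non-interactive authentication (token value is a placeholder):
snyk auth "<your api token>"
# or export SNYK_TOKEN, which the CLI reads without a separate auth step
export SNYK_TOKEN="<your api token>"
```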

To incorporate the Snyk Token into our application, let’s add a parameter to the code:

[Parameter("Snyk Token to interact with the API")] readonly string SnykToken;

Next, we’ll create a new target to execute the authorization command using PowerShellTasks and pass in the Snyk Token:

Target SnykAuth => _ => _
    .DependsOn(BuildTestCode)
    .Executes(() =>
    {          
        PowerShellTasks.PowerShell(_ => _
            .SetCommand("npm install snyk@latest -g")
        );
        PowerShellTasks.PowerShell(_ => _
            .SetCommand($"snyk auth {SnykToken}")
        );
    });

NOTE: This assumes that the build agent does not already have the Snyk CLI installed.

With authorization complete, our next task is to add a target for the Snyk Open Source scan, ensuring it depends on the Snyk Auth target:

 Target SnykTest => _ => _
    .DependsOn(SnykAuth)
    .Executes(() =>
    {
        // Snyk Test
        PowerShellTasks.PowerShell(_ => _
          .SetCommand("snyk test --all-projects --exclude=build")
        );
    });

Including the --all-projects flag ensures that all projects are scanned, which is good practice for .NET projects. Additionally, I’ve added an exclusion for the build project to focus the scan on application issues. I typically rely on Snyk Monitor attached to my GitHub Repo to detect issues in the entire repository, leaving this scan to concentrate solely on the application being deployed.

Finally, we need to update the Execute method to include the Snyk Test:

public static int Main() => Execute<Build>(x => x.BuildTestCode, x => x.SnykTest);

Running nuke again from the Windows terminal now prompts for Snyk authentication in the browser before the scan can continue.

To prevent this interactive prompt, we need to pass the API token value to nuke. It’s a good idea to set an environment variable for your API token, e.g. with PowerShell:

$env:snykApiToken = "<your api token>"
# or using the Snyk CLI
$env:snykApiToken = snyk config get api
# Run nuke passing in the parameter
nuke --snykToken $snykApiToken

Upon executing the scan, it promptly identified several issues, and the SnykTest target failed, flagging vulnerabilities in the project’s dependencies.

To control whether the build fails based on the severity of vulnerabilities found, we can add another parameter:

 [Parameter("Snyk Severity Threshold (critical, high, medium or low)")] readonly string SnykSeverityThreshold = "high";

The Requires call below ensures the value has been set before the target executes. Note that the Snyk CLI expects the threshold in lowercase, hence the ToLowerInvariant call.

Target SnykTest => _ => _
    .DependsOn(SnykAuth)
    .Requires(() => SnykSeverityThreshold)
    .Executes(() =>
    {
        // Snyk Test
        PowerShellTasks.PowerShell(_ => _
          .SetCommand($"snyk test --all-projects --exclude=build --severity-threshold={SnykSeverityThreshold.ToLowerInvariant()}")
        );
    });

Now, let’s address running Snyk Code for a SAST scan, which will also need a parameter to control the severity threshold:

[Parameter("Snyk Code Severity Threshold (high, medium or low)")] readonly string SnykCodeSeverityThreshold = "high";

We’ll create another target for the Code test:

Target SnykCodeTest => _ => _
    .DependsOn(SnykAuth)
    .Requires(() => SnykCodeSeverityThreshold)
    .Executes(() =>
    {
        PowerShellTasks.PowerShell(_ => _
            .SetCommand($"snyk code test --all-projects --exclude=build --severity-threshold={SnykCodeSeverityThreshold.ToLowerInvariant()}")
        );
    });

Update the Execute method to include the code test:

public static int Main() => Execute<Build>(x => x.BuildTestCode, x => x.SnykTest, x => x.SnykCodeTest);

With the severity set for both scans, SnykTest continues to find high-severity vulnerabilities, while SnykCodeTest passes.

To generate an SBOM (Software Bill of Materials) using Snyk and publish it as a build artifact, let’s add an output path:

AbsolutePath OutputDirectory => RootDirectory / "outputs";

And include a Produces entry to ensure the artifact is generated and stored in the specified directory:

Target GenerateSbom => _ => _
   .DependsOn(SnykAuth)
   .Produces(OutputDirectory / "*.json")
   .Executes(() =>
   {
       OutputDirectory.CreateOrCleanDirectory();
       PowerShellTasks.PowerShell(_ => _
           .SetCommand($"snyk sbom --all-projects --format spdx2.3+json --json-file-output={OutputDirectory / "sbom.json"}")
       );
   });

Lastly, update the Execute method to include the generation of the SBOM:

public static int Main() => Execute<Build>(x => x.BuildTestCode, x => x.SnykTest, x => x.SnykCodeTest, x => x.GenerateSbom);

Now, when executing Nuke, the SBOM will be generated and stored in the specified directory, ready to be published as a build artifact.

Earlier, I mentioned that PowerShellTasks and DockerTasks were both viable options for integrating the Snyk CLI into the Nuke build. Here’s how you can achieve the same tasks using DockerTasks:

using Nuke.Common.Tools.Docker;
...
  Target SnykTest => _ => _
     .DependsOn(BuildTestCode)
     .Requires(() => SnykToken, () => SnykSeverityThreshold)
     .Executes(() =>
     {
         // Snyk Test
         DockerTasks.DockerRun(_ => _
             .EnableRm()
             .SetVolume($"{RootDirectory}:/app")
             .SetEnv($"SNYK_TOKEN={SnykToken}")
             .SetImage("snyk/snyk:dotnet")
             .SetCommand($"snyk test --all-projects --exclude=build --severity-threshold={SnykSeverityThreshold.ToLowerInvariant()}")
         );
     });
 Target SnykCodeTest => _ => _
     .DependsOn(BuildTestCode)
     .Requires(() => SnykToken, () => SnykCodeSeverityThreshold)
     .Executes(() =>
     {
         DockerTasks.DockerRun(_ => _
             .EnableRm()
             .SetVolume($"{RootDirectory}:/app")
             .SetEnv($"SNYK_TOKEN={SnykToken}")
             .SetImage("snyk/snyk:dotnet")
             .SetCommand($"snyk code test --all-projects --exclude=build --severity-threshold={SnykCodeSeverityThreshold.ToLowerInvariant()}")
         );
     });
 Target GenerateSbom => _ => _
     .DependsOn(BuildTestCode)
     .Produces(OutputDirectory / "*.json")
     .Requires(() => SnykToken)
     .Executes(() =>
     {
         OutputDirectory.CreateOrCleanDirectory();
         DockerTasks.DockerRun(_ => _
             .EnableRm()
             .SetVolume($"{RootDirectory}:/app")
             .SetEnv($"SNYK_TOKEN={SnykToken}")
             .SetImage("snyk/snyk:dotnet")
             .SetCommand($"snyk sbom --all-projects --format spdx2.3+json --json-file-output={OutputDirectory.Name}/sbom.json")
         );
     });

NOTE: Snyk auth is not required as a separate target because authentication is handled inside the Snyk container via the SNYK_TOKEN environment variable
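For reference, the DockerTasks.DockerRun call for the open source scan is roughly equivalent to the following direct invocation (the token and repository path are placeholders):

```shell
# Roughly what DockerTasks.DockerRun executes for the SnykTest target
docker run --rm \
  -e SNYK_TOKEN="<your api token>" \
  -v "/path/to/repo:/app" \
  snyk/snyk:dotnet \
  snyk test --all-projects --exclude=build --severity-threshold=high
```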

Automating Nuke.Build with GitHub Actions: Generating YAML

Nuke comes with another useful feature: the ability to see a plan, which shows which targets are executed and in what order. Simply running nuke --plan produces an HTML output of the plan.

With everything configured for local execution, it’s time to think about running this in a pipeline. Nuke supports various CI platforms, but for this demonstration, I’ll be using GitHub Actions. Nuke provides attributes to automatically generate the file to run the code:

using Nuke.Common.CI.GitHubActions;

[GitHubActions(
    "continuous",
    GitHubActionsImage.UbuntuLatest,
    On = new[] { GitHubActionsTrigger.Push },
    ImportSecrets = new[] { nameof(SnykToken), nameof(SnykSeverityThreshold), nameof(SnykCodeSeverityThreshold) },
    InvokedTargets = new[] { nameof(BuildTestCode), nameof(SnykTest), nameof(SnykCodeTest), nameof(GenerateSbom) })]
class Build : NukeBuild
...

To pass in the parameters for GitHub Actions, we’ll need to designate the token as a Secret:

[Parameter("Snyk Token to interact with the API")][Secret] readonly string SnykToken;

Next, let’s remove the default values for the threshold parameters:

[Parameter("Snyk Severity Threshold (critical, high, medium or low)")] readonly string SnykSeverityThreshold;
[Parameter("Snyk Code Severity Threshold (high, medium or low)")] readonly string SnykCodeSeverityThreshold;

We’ll then add the values from the .nuke/parameters.json file:

{
  "$schema": "./build.schema.json",
  "Solution": "Useful.Extensions.sln",
  "SnykSeverityThreshold": "high",
  "SnykCodeSeverityThreshold": "high"
}

Running Nuke again produces the following auto-generated output for GitHub Actions YAML in the folder .github/workflows/continuous.yml:

# ------------------------------------------------------------------------------
# <auto-generated>
#
#     This code was generated.
#
#     - To turn off auto-generation set:
#
#         [GitHubActions (AutoGenerate = false)]
#
#     - To trigger manual generation invoke:
#
#         nuke --generate-configuration GitHubActions_continuous --host GitHubActions
#
# </auto-generated>
# ------------------------------------------------------------------------------

name: continuous

on: [push]

jobs:
  ubuntu-latest:
    name: ubuntu-latest
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: 'Cache: .nuke/temp, ~/.nuget/packages'
        uses: actions/cache@v3
        with:
          path: |
            .nuke/temp
            ~/.nuget/packages
          key: ${{ runner.os }}-${{ hashFiles('**/global.json', '**/*.csproj', '**/Directory.Packages.props') }}
      - name: 'Run: BuildTestCode, SnykTest, SnykCodeTest, GenerateSbom'
        run: ./build.cmd BuildTestCode SnykTest SnykCodeTest GenerateSbom
        env:
          SnykToken: ${{ secrets.SNYK_TOKEN }}
          SnykSeverityThreshold: ${{ secrets.SNYK_SEVERITY_THRESHOLD }}
          SnykCodeSeverityThreshold: ${{ secrets.SNYK_CODE_SEVERITY_THRESHOLD }}
      - name: 'Publish: outputs'
        uses: actions/upload-artifact@v3
        with:
          name: outputs
          path: outputs

This YAML file is automatically generated by Nuke and is ready to be used in your GitHub Actions workflow. It sets up the necessary steps to run your build, including caching dependencies, executing targets, and publishing artifacts.

NOTE: When I first committed the Nuke build files, GitHub Actions gave me a permission denied error when running build.cmd. Running these commands and committing the result got past that problem:

git update-index --chmod=+x .\build.cmd
git update-index --chmod=+x .\build.sh

The GitHub Actions run for this pipeline initially failed on the Snyk scan, just as it had locally. After fixing the vulnerabilities in my code, the workflow successfully passed.

Here’s the full C# source code for both PowerShell and Docker versions:

using Nuke.Common;
using Nuke.Common.CI.GitHubActions;
using Nuke.Common.IO;
using Nuke.Common.ProjectModel;
using Nuke.Common.Tools.DotNet;
using Nuke.Common.Tools.PowerShell;

[GitHubActions(
    "continuous",
    GitHubActionsImage.UbuntuLatest,
    On = new[] { GitHubActionsTrigger.Push },
    ImportSecrets = new[] { nameof(SnykToken), nameof(SnykSeverityThreshold), nameof(SnykCodeSeverityThreshold) },
    InvokedTargets = new[] { nameof(BuildTestCode), nameof(SnykTest), nameof(SnykCodeTest), nameof(GenerateSbom) })]
class Build : NukeBuild
{
    public static int Main() => Execute<Build>(x => x.BuildTestCode, x => x.SnykTest, x => x.SnykCodeTest, x => x.GenerateSbom);

    [Parameter("Configuration to build - Default is 'Debug' (local) or 'Release' (server)")]
    readonly Configuration Configuration = IsLocalBuild ? Configuration.Debug : Configuration.Release;

    [Parameter("Snyk Token to interact with the API")][Secret] readonly string SnykToken;
    [Parameter("Snyk Severity Threshold (critical, high, medium or low)")] readonly string SnykSeverityThreshold;
    [Parameter("Snyk Code Severity Threshold (high, medium or low)")] readonly string SnykCodeSeverityThreshold;

    [Solution(GenerateProjects = true)] readonly Solution Solution;

    AbsolutePath SourceDirectory => RootDirectory / "src";
    AbsolutePath TestsDirectory => RootDirectory / "tests";
    AbsolutePath OutputDirectory => RootDirectory / "outputs";

    Target Clean => _ => _
        .Executes(() =>
        {
            SourceDirectory.GlobDirectories("*/bin", "*/obj").DeleteDirectories();
        });

    Target BuildTestCode => _ => _
        .DependsOn(Clean)
        .Executes(() =>
        {
            DotNetTasks.DotNetRestore(_ => _
                .SetProjectFile(Solution)
            );
            DotNetTasks.DotNetBuild(_ => _
                .EnableNoRestore()
                .SetProjectFile(Solution)
                .SetConfiguration(Configuration)
                .SetProperty("SourceLinkCreate", true)
            );
            DotNetTasks.DotNetTest(_ => _
                .EnableNoRestore()
                .EnableNoBuild()
                .SetConfiguration(Configuration)
                .SetTestAdapterPath(TestsDirectory / "*.Tests")
            );
        });
    Target SnykAuth => _ => _
     .DependsOn(BuildTestCode)
     .Executes(() =>
     {
         PowerShellTasks.PowerShell(_ => _
             .SetCommand("npm install snyk@latest -g")
         );
         PowerShellTasks.PowerShell(_ => _
             .SetCommand($"snyk auth {SnykToken}")
         );
     });
    Target SnykTest => _ => _
        .DependsOn(SnykAuth)
        .Requires(() => SnykSeverityThreshold)
        .Executes(() =>
        {
            PowerShellTasks.PowerShell(_ => _
                .SetCommand($"snyk test --all-projects --exclude=build --severity-threshold={SnykSeverityThreshold.ToLowerInvariant()}")
            );
        });
    Target SnykCodeTest => _ => _
        .DependsOn(SnykAuth)
        .Requires(() => SnykCodeSeverityThreshold)
        .Executes(() =>
        {
            PowerShellTasks.PowerShell(_ => _
                .SetCommand($"snyk code test --all-projects --exclude=build --severity-threshold={SnykCodeSeverityThreshold.ToLowerInvariant()}")
            );
        });
    Target GenerateSbom => _ => _
        .DependsOn(SnykAuth)
        .Produces(OutputDirectory / "*.json")
        .Executes(() =>
        {
            OutputDirectory.CreateOrCleanDirectory();
            PowerShellTasks.PowerShell(_ => _
                .SetCommand($"snyk sbom --all-projects --format spdx2.3+json --json-file-output={OutputDirectory / "sbom.json"}")
            );
        });
}
using Nuke.Common;
using Nuke.Common.CI.GitHubActions;
using Nuke.Common.IO;
using Nuke.Common.ProjectModel;
using Nuke.Common.Tools.Docker;
using Nuke.Common.Tools.DotNet;

[GitHubActions(
    "continuous",
    GitHubActionsImage.UbuntuLatest,
    On = new[] { GitHubActionsTrigger.Push },
    ImportSecrets = new[] { nameof(SnykToken), nameof(SnykSeverityThreshold), nameof(SnykCodeSeverityThreshold) },
    InvokedTargets = new[] { nameof(BuildTestCode), nameof(SnykTest), nameof(SnykCodeTest), nameof(GenerateSbom) })]
class Build : NukeBuild
{
    public static int Main() => Execute<Build>(x => x.BuildTestCode, x => x.SnykTest, x => x.SnykCodeTest, x => x.GenerateSbom);

    [Parameter("Configuration to build - Default is 'Debug' (local) or 'Release' (server)")]
    readonly Configuration Configuration = IsLocalBuild ? Configuration.Debug : Configuration.Release;

    [Parameter("Snyk Token to interact with the API")][Secret] readonly string SnykToken;
    [Parameter("Snyk Severity Threshold (critical, high, medium or low)")] readonly string SnykSeverityThreshold;
    [Parameter("Snyk Code Severity Threshold (high, medium or low)")] readonly string SnykCodeSeverityThreshold;

    [Solution(GenerateProjects = true)] readonly Solution Solution;

    AbsolutePath SourceDirectory => RootDirectory / "src";
    AbsolutePath TestsDirectory => RootDirectory / "tests";
    AbsolutePath OutputDirectory => RootDirectory / "outputs";

    Target Clean => _ => _
        .Executes(() =>
        {
            SourceDirectory.GlobDirectories("*/bin", "*/obj").DeleteDirectories();
        });

    Target BuildTestCode => _ => _
        .DependsOn(Clean)
        .Executes(() =>
        {
            DotNetTasks.DotNetRestore(_ => _
                .SetProjectFile(Solution)
            );
            DotNetTasks.DotNetBuild(_ => _
                .EnableNoRestore()
                .SetProjectFile(Solution)
                .SetConfiguration(Configuration)
                .SetProperty("SourceLinkCreate", true)
            );
            DotNetTasks.DotNetTest(_ => _
                .EnableNoRestore()
                .EnableNoBuild()
                .SetConfiguration(Configuration)
                .SetTestAdapterPath(TestsDirectory / "*.Tests")
            );
        });
    Target SnykTest => _ => _
        .DependsOn(BuildTestCode)
        .Requires(() => SnykToken, () => SnykSeverityThreshold)
        .Executes(() =>
        {
            // Snyk Test
            DockerTasks.DockerRun(_ => _
                .EnableRm()
                .SetVolume($"{RootDirectory}:/app")
                .SetEnv($"SNYK_TOKEN={SnykToken}")
                .SetImage("snyk/snyk:dotnet")
                .SetCommand($"snyk test --all-projects --exclude=build --severity-threshold={SnykSeverityThreshold.ToLowerInvariant()}")
            );
        });
    Target SnykCodeTest => _ => _
        .DependsOn(BuildTestCode)
        .Requires(() => SnykToken, () => SnykCodeSeverityThreshold)
        .Executes(() =>
        {
            DockerTasks.DockerRun(_ => _
                .EnableRm()
                .SetVolume($"{RootDirectory}:/app")
                .SetEnv($"SNYK_TOKEN={SnykToken}")
                .SetImage("snyk/snyk:dotnet")
                .SetCommand($"snyk code test --all-projects --exclude=build --severity-threshold={SnykCodeSeverityThreshold.ToLowerInvariant()}")
            );
        });
    Target GenerateSbom => _ => _
        .DependsOn(BuildTestCode)
        .Produces(OutputDirectory / "*.json")
        .Requires(() => SnykToken)
        .Executes(() =>
        {
            OutputDirectory.CreateOrCleanDirectory();
            DockerTasks.DockerRun(_ => _
                .EnableRm()
                .SetVolume($"{RootDirectory}:/app")
                .SetEnv($"SNYK_TOKEN={SnykToken}")
                .SetImage("snyk/snyk:dotnet")
                .SetCommand($"snyk sbom --all-projects --format spdx2.3+json --json-file-output={OutputDirectory.Name}/sbom.json")
            );
        });
}

Final Thoughts

Nuke.Build is a great concept for building pipelines, and it really helps to be able to run the pipeline locally to test it out and make sure paths and configuration are correct. Couple that with the capability to generate GitHub Actions workflows (and configurations for other supported CI platforms) and the benefits add up quickly.

Adding security scans to catch issues early is another plus, and I am glad that it’s possible to run those scans in multiple ways in Nuke. Hopefully the Snyk CLI can be supported in Nuke directly in the future.

If you haven’t checked out Nuke yet, I would definitely recommend giving it a try and seeing the benefits for yourself.

Architecture, Azure Pipelines, Diagrams, GitHub Actions

Git to Structurizr Cloud

In a previous post I looked at Architecture Diagrams as Code with Structurizr DSL, using one workspace and Puppeteer to generate images via an Azure Pipeline.

Since writing that post I’ve been using multiple workspaces and have become more familiar with the structurizr-cli, as well as using the Docker images for structurizr-cli and structurizr-lite.

So, in this post I am going to look at:

  • Using source control to store your diagrams in DSL format
  • Editing diagrams locally
  • Automation of pushing changes to one or more workspaces using Azure Pipelines or GitHub Actions
  • Optionally generating SVG files for the workspaces that have changed as pipeline artifacts

Setup

Structurizr Workspaces

In Structurizr, create the workspaces that you are going to use.

The main reason to do this first is that the ID, API key, and API secret are generated and can be copied from each workspace’s settings into the pipeline variables. The ID can also be used for naming the folders if you choose to.

GitHub/Azure DevOps

Now, in GitHub or Azure DevOps, create a new repository for your diagrams. Once it has been created, clone the repository and create a folder for each workspace you have in Structurizr, using either the workspace ID (from the workspace settings) or a name that has no spaces, starts with a letter, and contains only alphanumeric characters ([a-z], [A-Z], [0-9]) or underscores (_).

Note: The folder name is used by the pipeline to find the correct secrets to publish the workspace to Structurizr.
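As a quick sanity check, a candidate folder name can be validated against those rules (a letter first, then only letters, digits, or underscores) with a small bash snippet; the names here are just examples:

```shell
# Validate a workspace folder name against the naming rules above
is_valid_workspace_folder() {
  [[ "$1" =~ ^[A-Za-z][A-Za-z0-9_]*$ ]]
}

is_valid_workspace_folder "MyProject"  && echo "MyProject: valid"
is_valid_workspace_folder "9 bad name" || echo "9 bad name: invalid"
```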

Edit Diagrams

To get started, create a workspace.dsl file in the folder you want to create a diagram in.

Note: You will do this for each workspace

To edit the diagrams locally you can use any text editor, but I recommend Visual Studio Code with the extension by Ciaran Treanor for syntax highlighting, and Structurizr Lite to view the diagram. Simon Brown has a great post on getting started with Structurizr Lite; don’t forget to install Docker if you haven’t already.

In your editor, create the diagram.
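For example, a minimal workspace.dsl (the person and system names here are placeholders) might look like:

```
workspace {
    model {
        user = person "User"
        system = softwareSystem "Software System"
        user -> system "Uses"
    }
    views {
        systemContext system {
            include *
            autoLayout
        }
    }
}
```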

Tip: The DSL reference is very handy when creating diagrams

Using Structurizr Lite, the diagram can be viewed without having to upload it to the cloud. Run the following Docker command, replacing PATH with the path to the folder containing the diagram you want to show:

docker run -it --rm -p 8080:8080 -v PATH:/usr/local/structurizr structurizr/lite

In your browser, navigate to localhost:8080 and you should see the diagram rendered.

Tip: As you make changes to the diagram you can refresh the browser to see the changes

Note: Structurizr Lite only shows one workspace at a time. If you have more and want to see them at the same time, run the Docker command as before but change the host port from 8080 to something else, e.g. 8089, and change PATH to the other diagram:

docker run -it --rm -p 8089:8080 -v PATH:/usr/local/structurizr structurizr/lite

Once you are happy with the diagram changes they can be committed and pushed into the repository to share with others.

Pipelines

Now that the diagrams are in source control and we can track changes, we still want to push those changes to Structurizr to share with others who may want to review the diagrams in Structurizr or see them in another application, e.g. Atlassian Confluence.

We can automate this process by creating a pipeline that publishes diagrams to Structurizr when changes are made.

Our pipeline has some requirements:

  • Only run on the main branch
  • Do not run on a Pull Request
  • Only publish the diagrams that have been changed
  • Optionally output an SVG of the changed diagram as artifacts
  • Not include any Workspace Keys and Secrets in the source controlled pipeline files

Note: Secrets need to be in the format of WORKSPACE_<Type>_<Folder Name> e.g.

  • WORKSPACE_ID_MYPROJECT
  • WORKSPACE_KEY_MYPROJECT
  • WORKSPACE_SECRET_MYPROJECT
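Assuming the publish script upper-cases the folder name, as the examples suggest, the names it looks up can be sketched in bash (MyProject is a hypothetical folder):

```shell
# Given a workspace folder, print the secret names the pipeline will look up
folder="MyProject"
upper=$(echo "$folder" | tr '[:lower:]' '[:upper:]')
for type in ID KEY SECRET; do
  echo "WORKSPACE_${type}_${upper}"
done
```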

Azure Pipelines

If you are using Azure Pipelines then read on; otherwise you can skip to the GitHub Actions section.

So, let’s create the YAML file for the pipeline. In the root folder of your repository, create an azure-pipelines.yml file, open it in an editor, and add the following YAML:

trigger:
  - main

pr: none
 
pool:
    vmImage: ubuntu-latest

parameters:
  - name: folderAsId
    type: boolean
    default: false
  - name: createImages
    type: boolean
    default: false

variables:
  downloadFolder: 'downloads'
  
steps:

The first step is to get the changes since the last push; this helps meet the requirement of publishing only the diagrams that have changed. Unlike GitHub Actions, there is no pre-defined variable for this, so the following PowerShell script uses the Azure DevOps REST API to obtain the git commit ID from before the changes and stores it in a variable for later use:

- pwsh: |
    $devops_event_before = $env:BUILD_BUILDSOURCEVERSION
    $uri = "$env:SYSTEM_TEAMFOUNDATIONSERVERURI$env:SYSTEM_TEAMPROJECT/_apis/build/builds/$($env:BUILD_BUILDID)/changes?api-version=6.1"    
    $changes = Invoke-RestMethod -Method Get -Headers @{ Authorization = "Bearer $env:SYSTEM_ACCESSTOKEN" } -Uri $uri -Verbose    
    if ($changes.count -gt 0) {
      $firstCommit = $changes.value[$changes.count-1].id
      # Go back to the commit before the first change
      $devops_event_before = git rev-parse $firstCommit^           
    }
    Write-Host $devops_event_before 
    Write-Host "##vso[task.setvariable variable=DEVOPS_EVENT_BEFORE]$devops_event_before"
  displayName: 'Get Start Commit Id'
  env:
    SYSTEM_ACCESSTOKEN: $(System.AccessToken)

The next step is to optionally install Graphviz, which is used to create the SVG files:

- bash: |
    sudo apt-get install graphviz
  displayName: 'Install dependencies'
  condition: and(succeeded(), eq(${{ parameters.createImages }}, 'true'))

Now we can call a PowerShell script that performs the publish action, passing in the workspace secrets as environment variables.

Note: Azure Pipelines automatically creates an environment variable for each pipeline variable that is not set as a secret (e.g. WORKSPACE_ID_MYPROJECT is not a secret). Secret variables must be explicitly added in the env property to be available inside the script as environment variables.

- task: PowerShell@2
  displayName: 'Publish Workspace'
  inputs:
    targetType: 'filePath'
    filePath: ./publish.ps1
    arguments: -StartCommitId $(DEVOPS_EVENT_BEFORE) -CommitId $(Build.SourceVersion) -DownloadFolder $(downloadFolder) -FolderAsId $${{ parameters.folderAsId }} -CreateImages $${{ parameters.createImages }}
  env: 
    WORKSPACE_KEY_MYPROJECT: $(WORKSPACE_KEY_MYPROJECT)
    WORKSPACE_SECRET_MYPROJECT: $(WORKSPACE_SECRET_MYPROJECT)
    WORKSPACE_KEY_OTHERPROJECT: $(WORKSPACE_KEY_OTHERPROJECT)
    WORKSPACE_SECRET_OTHERPROJECT: $(WORKSPACE_SECRET_OTHERPROJECT)

Note: You may have noticed there is an additional $ on the parameters and might think this is a typo, but it’s actually a little hack. Parameters of type boolean are really strings, so when passing them to PowerShell you get an error saying it cannot convert a System.String to a System.Boolean. Adding a $ results in $true or $false, which PowerShell then reads correctly.

And the last step is to optionally upload the SVGs as artifacts, if they were requested:

- publish: $(downloadFolder)
  displayName: Publish Diagrams
  artifact: 'architecture'
  condition: and(succeeded(), eq(${{ parameters.createImages }}, 'true'))

With the pipeline configured, the next thing to do is add the secrets for each workspace using pipeline variables.

You can also use variable groups, but you need to update the variables section to load in the group; e.g. if I had a group called structurizr_workspaces:

variables:
  - name: downloadFolder
    value: 'downloads'
  - group: structurizr_workspaces

The final pipeline then is:

trigger:
  - main

pr: none
 
pool:
  vmImage: ubuntu-latest

parameters:
  - name: folderAsId
    type: boolean
    default: false
  - name: createImages
    type: boolean
    default: false

variables:
  downloadFolder: 'downloads'
  
steps:
- pwsh: |
    $devops_event_before = $env:BUILD_SOURCEVERSION
    $uri = "$env:SYSTEM_TEAMFOUNDATIONSERVERURI$env:SYSTEM_TEAMPROJECT/_apis/build/builds/$($env:BUILD_BUILDID)/changes?api-version=6.1"    
    $changes = Invoke-RestMethod -Method Get -Headers @{ Authorization = "Bearer $env:SYSTEM_ACCESSTOKEN" } -Uri $uri -Verbose    
    if ($changes.count -gt 0) {
      $firstCommit = $changes.value[$changes.count-1].id
      # Go back to the commit before the first change
      $devops_event_before = git rev-parse $firstCommit^           
    }
    Write-Host $devops_event_before 
    Write-Host "##vso[task.setvariable variable=DEVOPS_EVENT_BEFORE]$devops_event_before"
  displayName: 'Get Start Commit Id'
  env:
    SYSTEM_ACCESSTOKEN: $(System.AccessToken)
- bash: |
    sudo apt-get install graphviz
  displayName: 'Install dependencies'
  condition: and(succeeded(), eq(${{ parameters.createImages }}, 'true'))
- task: PowerShell@2
  displayName: 'Publish Workspace'
  inputs:
    targetType: 'filePath'
    filePath: ./publish.ps1
    arguments: -StartCommitId $(DEVOPS_EVENT_BEFORE) -CommitId $(Build.SourceVersion) -DownloadFolder $(downloadFolder) -FolderAsId $${{ parameters.folderAsId }} -CreateImages $${{ parameters.createImages }}
  env: 
    WORKSPACE_KEY_MYPROJECT: $(WORKSPACE_KEY_MYPROJECT)
    WORKSPACE_SECRET_MYPROJECT: $(WORKSPACE_SECRET_MYPROJECT)
    WORKSPACE_KEY_OTHERPROJECT: $(WORKSPACE_KEY_OTHERPROJECT)
    WORKSPACE_SECRET_OTHERPROJECT: $(WORKSPACE_SECRET_OTHERPROJECT)
- publish: $(downloadFolder)
  displayName: Publish Diagrams
  artifact: 'architecture'
  condition: and(succeeded(), eq(${{ parameters.createImages }}, 'true'))

GitHub Actions

If you prefer to use GitHub Actions instead of Azure Pipelines, let’s create the YAML file for that pipeline. Create a folder in your repository called .github/workflows, create a ci.yml file in that folder, then open ci.yml in an editor and add the following YAML:

name: Structurizr Workspace Pipeline
on: 
  push: 
    branches: [ main ]
env:
  CREATE_IMAGES: ${{ false }}
  FOLDER_AS_ID: ${{ false }}

jobs:
  structurizr_workspace_pipeline:
    runs-on: ubuntu-latest
    steps:

Unlike Azure Pipelines the code is not automatically checked out so the first step is to checkout the code.

Note: although the fetch-depth normally defaults to 1, it needs to be 0 here so the full history is fetched and we can see all the changes in a push

- name: Checkout
  uses: actions/checkout@v3
  with:
    fetch-depth: 0

The next step is to optionally install Graphviz, which is used to create the SVG files:

- name: Install dependencies
  if: ${{ env.CREATE_IMAGES == 'true' }}
  run: sudo apt-get install graphviz

Now we need to get our secret variables into environment variables. Unlike Azure Pipelines, secrets can be added to environment variables directly and do not have to be explicitly added to each script step in order to be used:

- name: Create Secrets as Envs
  run: |
    while IFS="=" read -r key value
    do
      echo "$key=$value" >> $GITHUB_ENV
    done < <(jq -r "to_entries|map(\"\(.key)=\(.value)\")|.[]" <<< "$SECRETS_CONTEXT")
  env:
    SECRETS_CONTEXT: ${{ toJson(secrets) }}
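To make the jq expression in that step concrete, here is what it produces for a hypothetical secrets payload (the names and values below are made up, shaped like the output of toJson(secrets)):

```shell
# Hypothetical secrets context, as GitHub would serialize it with toJson(secrets).
SECRETS_CONTEXT='{"WORKSPACE_KEY_MYPROJECT":"key123","WORKSPACE_SECRET_MYPROJECT":"s3cret"}'
# Same jq expression as the workflow step: flatten the JSON object into KEY=value lines,
# which the while loop then appends to $GITHUB_ENV one per line.
lines=$(jq -r 'to_entries|map("\(.key)=\(.value)")|.[]' <<< "$SECRETS_CONTEXT")
echo "$lines"
```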

As with Azure Pipelines, we can call a PowerShell script that will perform the publish action.

Note: the github.event.before pre-defined variable contains the start commit id, so the extra lookup step used in the Azure Pipeline is not needed here

- name: Publish Workspace
  run: |
    ./publish.ps1 -StartCommitId ${{ github.event.before }} -CommitId ${{ github.sha }} -DownloadFolder 'downloads' -FolderAsId $${{ env.FOLDER_AS_ID }} -CreateImages $${{ env.CREATE_IMAGES }}
  shell: pwsh

Note: as with the Azure Pipeline, the double $ hack is used to get a string into PowerShell as a boolean value.

And the last step is to optionally upload the SVGs as artifacts, if they were requested to be created:

- name: Publish Diagrams
  if: ${{ env.CREATE_IMAGES == 'true' }}
  uses: actions/upload-artifact@v2
  with:
    name: architecture
    path: downloads

Now that we have the GitHub Action defined, we need to add the secrets for each workspace using Secrets in GitHub.

The final pipeline then is:

name: Structurizr Workspace Pipeline
on: 
  push: 
    branches: [ main ]
env:
  CREATE_IMAGES: ${{ false }}
  FOLDER_AS_ID: ${{ false }}

jobs:
  structurizr_workspace_pipeline:
    runs-on: ubuntu-latest
    steps:
    - name: Checkout
      uses: actions/checkout@v3
      with:
        fetch-depth: 0
    - name: Install dependencies
      if: ${{ env.CREATE_IMAGES == 'true' }}
      run: sudo apt-get install graphviz
    - name: Create Secrets as Envs
      run: |
        while IFS="=" read -r key value
        do
          echo "$key=$value" >> $GITHUB_ENV
        done < <(jq -r "to_entries|map(\"\(.key)=\(.value)\")|.[]" <<< "$SECRETS_CONTEXT")
      env:
        SECRETS_CONTEXT: ${{ toJson(secrets) }}
    - name: Publish Workspace
      run: |
        ./publish.ps1 -StartCommitId ${{ github.event.before }} -CommitId ${{ github.sha }} -DownloadFolder 'downloads' -FolderAsId $${{ env.FOLDER_AS_ID }} -CreateImages $${{ env.CREATE_IMAGES }}
      shell: pwsh
    - name: Publish Diagrams
      if: ${{ env.CREATE_IMAGES == 'true' }}
      uses: actions/upload-artifact@v2
      with:  
        name: architecture
        path: downloads

The PowerShell

As you may have noticed, both pipelines run the same PowerShell script to publish the workspaces. The script detects the changes in each of the folders, pushes each changed workspace to Structurizr using the CLI, and then optionally exports an SVG file of the diagrams.

Note: The part of the script that looks for the changes in the Git commit is:

git diff-tree --no-commit-id --name-only --diff-filter=cd -r <commit id>

The --diff-filter option is used to reduce which files are included in the diff. Uppercase filters (e.g. AD) would only include Added and Deleted files, whereas the same letters in lowercase exclude them. In this instance, any Copied or Deleted statuses are excluded.
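The lowercase filter can be seen in action in a throwaway repository: a commit that modifies one file and deletes another reports only the modified file when --diff-filter=cd is applied (the file names below are illustrative):

```shell
# Throwaway repo to show how --diff-filter=cd drops Copied/Deleted files.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email ci@example.com
git config user.name ci
echo 'workspace' > kept.dsl
echo 'old' > gone.txt
git add .
git commit -q -m 'first'
echo 'workspace v2' > kept.dsl   # modified: status M, still included
git rm -q gone.txt               # deleted: status D, excluded by lowercase d
git add kept.dsl
git commit -q -m 'second'
changed=$(git diff-tree --no-commit-id --name-only --diff-filter=cd -r HEAD)
echo "$changed"   # only kept.dsl is listed
```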

The full script that is used looks like this:

<#
.SYNOPSIS
    PowerShell script to upload diagram changes to Structurizr Cloud

.DESCRIPTION
    This PowerShell script works out the changes between git commits where the files are of extension .dsl and upload to Structurizr Cloud
    and optionally creates SVG files of the changes

.PARAMETER StartCommitId
    The commit hash of the starting commit to look for changes

.PARAMETER CommitId
    The commit hash of the end commit to look for changes

.PARAMETER DownloadFolder
    The folder to use as the download folder

.PARAMETER FolderAsId
    A boolean flag to denote if the Structurizr workspace ID is the folder name

.PARAMETER CreateImages
    A boolean flag to denote if SVG files should be created ready for upload

.EXAMPLE
    Example syntax for running the script or function
    PS C:\> ./publish.ps1 -StartCommitId $startCommitHash -CommitId $commitHash -DownloadFolder 'downloads' -FolderAsId $false -CreateImages $false
#>

param (
    [Parameter(Mandatory)]
    [string]$StartCommitId,
    [Parameter(Mandatory)]
    [string]$CommitId,
    [string]$DownloadFolder = 'downloads',
    [bool]$FolderAsId = $false,
    [bool]$CreateImages = $false
)

git diff-tree --no-commit-id --name-only --diff-filter=cd -r "$StartCommitId..$CommitId" | Where-Object { $_.EndsWith('.dsl') } | Foreach-Object {

    $filePath = ($_ | Resolve-Path -Relative) -replace '^\./'
    $workspaceFolder = Split-Path -Path $filePath -Parent 
    $workspaceFile = $filePath 
    Write-Host "folder: $workspaceFolder"
    Write-Host "file: $workspaceFile"

    if ( $FolderAsId -eq $true ) {
        $workspaceIdValue = $workspaceFolder
    }
    else {
        $workspaceId = "WORKSPACE_ID_$($workspaceFolder)".ToUpper()
        $workspaceIdValue = (Get-item env:$workspaceId).Value
    }
    
    $workspaceKey = "WORKSPACE_KEY_$($workspaceFolder)".ToUpper()
    $workspaceKeyValue = (Get-item env:$workspaceKey).Value

    $workspaceSecret = "WORKSPACE_SECRET_$($workspaceFolder)".ToUpper()
    $workspaceSecretValue = (Get-item env:$workspaceSecret).Value

    docker run -i --rm -v ${pwd}:/usr/local/structurizr structurizr/cli push -id $workspaceIdValue -key $workspaceKeyValue -secret $workspaceSecretValue -workspace $workspaceFile
    $outputPath = "$DownloadFolder/$workspaceIdValue"
    if ( $CreateImages -eq $true ) {
        docker run --rm -v ${pwd}:/usr/local/structurizr structurizr/cli export -workspace $workspaceFile -format dot -output $outputPath
        sudo chown ${env:USER}:${env:USER} $outputPath

        Write-Host 'Convert exported files to svg'
        Get-ChildItem -Path $outputPath | Foreach-Object {
            $exportPath = ($_ | Resolve-Path -Relative)
            $folder = Split-Path -Path $exportPath -Parent
            $name = Split-Path -Path $exportPath -LeafBase

            Write-Host "Writing file: $folder/$name.svg"
            dot -Tsvg $exportPath > $folder/$name.svg
            rm $exportPath
        }
    }
}
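The script’s naming convention is worth calling out: the workspace folder name, uppercased, selects the matching WORKSPACE_* environment variable. The same lookup can be sketched in bash (the folder name and value below are hypothetical):

```shell
# Hypothetical secret, as it would arrive from the pipeline/workflow env block.
export WORKSPACE_KEY_MYPROJECT=key123
workspaceFolder=myproject
# Uppercase the folder name and build the variable name, as the script does.
var="WORKSPACE_KEY_$(echo "$workspaceFolder" | tr '[:lower:]' '[:upper:]')"
# Bash indirect expansion, akin to (Get-Item env:$workspaceKey).Value in PowerShell.
workspaceKeyValue="${!var}"
echo "$workspaceKeyValue"   # key123
```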

Final Thoughts

I am a big fan of using the C4 model and Structurizr, and I hope sharing this idea of using a monorepo with multiple diagrams and automatically updating Structurizr via a pipeline has been a useful post.

Happy C4 diagramming 🙂

As always, the example files, pipelines and script can be found in GitHub.