Azure Pipelines, Bicep, DevOps, IaC

Azure Pipelines – Deploy AKS with Bicep

In this post we are going to look at deploying an AKS cluster using Azure Pipelines YAML and Bicep.

If you are new to AKS then take a look at the video series AKS Zero to Hero from Richard Hooper (aka PixelRobots) and Gregor Suttie as well as the learning path from Brendan Burns.

If you are new to Pipelines and Bicep then check out this Microsoft Learn course for an introduction.

So, on to creating the AKS cluster using Bicep.

The resources we are going to deploy are:

  • Virtual Network
  • Log Analytics Workspace
  • AKS Cluster
  • Container Registry

We are also going to add Azure AD groups to lock down cluster administration, and connect the container registry so that AKS can pull images from it.

Bicep

Let's start by creating a module for the virtual network. We need names for the network and subnet, as well as some address prefixes and tags:

@description('The virtual network name')
param vnetName string
@description('The name of the subnet')
param subnetName string
@description('The virtual network address prefixes')
param vnetAddressPrefixes array
@description('The subnet address prefix')
param subnetAddressPrefix string
@description('Tags for the resources')
param tags object

resource vnet 'Microsoft.Network/virtualNetworks@2019-11-01' = {
  name: vnetName
  location: resourceGroup().location
  properties: {
    addressSpace: {
      addressPrefixes: vnetAddressPrefixes
    }
    subnets: [
      {
        name: subnetName
        properties: {
          addressPrefix: subnetAddressPrefix
        }
      }      
    ]
  }
  tags: tags
}

output subnetId string = '${vnet.id}/subnets/${subnetName}'
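
As a quick sanity check, you can compile the module to ARM JSON with the Bicep CLI before wiring it into a deployment:

# Compile the module to catch syntax errors early
az bicep build --file vnet.bicep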

The next module is the AKS cluster itself. There are a lot of settings you might want to control, but I've added defaults for some of them. This module also includes the creation of a Log Analytics workspace, and renames the AKS node resource group, which is normally prefixed with MC_, to something in line with the naming convention used.

@description('The environment prefix of the Managed Cluster resource e.g. dev, prod, etc.')
param prefix string
@description('The name of the Managed Cluster resource')
param clusterName string
@description('Resource location')
param location string = resourceGroup().location
@description('Kubernetes version to use')
param kubernetesVersion string = '1.20.7'
@description('The VM Size to use for each node')
param nodeVmSize string
@minValue(1)
@maxValue(50)
@description('The number of nodes for the cluster.')
param nodeCount int
@maxValue(100)
@description('Max number of nodes to scale up to')
param maxNodeCount int
@description('The node pool name')
param nodePoolName string = 'linux1'
@minValue(0)
@maxValue(1023)
@description('Disk size (in GB) to provision for each of the agent pool nodes. This value ranges from 0 to 1023. Specifying 0 will apply the default disk size for that agentVMSize')
param osDiskSizeGB int
@description('The admin username for the cluster nodes')
param nodeAdminUsername string
@description('Availability zones to use for the cluster nodes')
param availabilityZones array = [
  '1'
  '2'
  '3'
]
@description('Allow the cluster to auto scale to the max node count')
param enableAutoScaling bool = true
@description('SSH RSA public key for all the nodes')
@secure()
param sshPublicKey string
@description('Tags for the resources')
param tags object
@description('Log Analytics Workspace Tier')
@allowed([
  'Free'
  'Standalone'
  'PerNode'
  'PerGB2018'
  'Premium'
])
param workspaceTier string
@allowed([
  'azure'  
])
@description('Network plugin used for building Kubernetes network')
param networkPlugin string = 'azure'
@description('Subnet id to use for the cluster')
param subnetId string
@description('Cluster services IP range')
param serviceCidr string = '10.0.0.0/16'
@description('DNS Service IP address')
param dnsServiceIP string = '10.0.0.10'
@description('Docker Bridge IP range')
param dockerBridgeCidr string = '172.17.0.1/16'
@description('An array of AAD group object ids for administration')
param adminGroupObjectIDs array = []

resource logAnalyticsWorkspace 'Microsoft.OperationalInsights/workspaces@2020-10-01' = {
  name: '${prefix}-oms-${clusterName}-${location}'
  location: location
  properties: {
    sku: {
      name: workspaceTier
    }
  }
  tags: tags
}

resource aksCluster 'Microsoft.ContainerService/managedClusters@2021-03-01' = {
  name: '${prefix}-aks-${clusterName}-${location}'
  location: location
  identity: {
    type: 'SystemAssigned'
  }
  tags: tags  
  properties: {
    nodeResourceGroup: 'rg-${prefix}-aks-nodes-${clusterName}-${location}'
    kubernetesVersion: kubernetesVersion
    dnsPrefix: '${clusterName}-dns'
    enableRBAC: true    
    agentPoolProfiles: [
      {        
        name: nodePoolName
        osDiskSizeGB: osDiskSizeGB
        osDiskType: 'Ephemeral'        
        count: nodeCount
        enableAutoScaling: enableAutoScaling
        minCount: nodeCount
        maxCount: maxNodeCount
        vmSize: nodeVmSize        
        osType: 'Linux'
        type: 'VirtualMachineScaleSets'
        mode: 'System'
        availabilityZones: availabilityZones
        enableEncryptionAtHost: true
        vnetSubnetID: subnetId
      }
    ]
    networkProfile: {      
      loadBalancerSku: 'standard'
      networkPlugin: networkPlugin
      serviceCidr: serviceCidr
      dnsServiceIP: dnsServiceIP
      dockerBridgeCidr: dockerBridgeCidr
    }
    aadProfile: !empty(adminGroupObjectIDs) ? {
      managed: true
      adminGroupObjectIDs: adminGroupObjectIDs
    } : null
    addonProfiles: {
      azurepolicy: {
        enabled: false
      }
      omsAgent: {
        enabled: true
        config: {
          logAnalyticsWorkspaceResourceID: logAnalyticsWorkspace.id
        }
      }   
    }
    linuxProfile: {      
      adminUsername: nodeAdminUsername
      ssh: {
        publicKeys: [
          {
            keyData: sshPublicKey
          }
        ]
      }      
    }
  }

}

output controlPlaneFQDN string = aksCluster.properties.fqdn
output clusterPrincipalID string = aksCluster.properties.identityProfile.kubeletidentity.objectId

The final module builds an Azure Container Registry and assigns the AcrPull role to the cluster so that AKS can pull images from it:

@description('The name of the container registry')
param registryName string
@description('The principal ID of the AKS cluster')
param aksPrincipalId string
@description('Tags for the resources')
param tags object

@description('The id of the AcrPull role definition')
param roleAcrPull string = '7f951dda-4ed3-4680-a7ca-43fe172d538d' // AcrPull

resource containerRegistry 'Microsoft.ContainerRegistry/registries@2019-05-01' = {
  name: registryName
  location: resourceGroup().location
  sku: {
    name: 'Standard'
  }
  properties: {
    adminUserEnabled: true
  }
  tags: tags
}

resource assignAcrPullToAks 'Microsoft.Authorization/roleAssignments@2020-04-01-preview' = {
  name: guid(resourceGroup().id, registryName, aksPrincipalId, 'AssignAcrPullToAks')
  scope: containerRegistry
  properties: {
    description: 'Assign AcrPull role to AKS'
    principalId: aksPrincipalId
    principalType: 'ServicePrincipal'
    roleDefinitionId: '/subscriptions/${subscription().subscriptionId}/providers/Microsoft.Authorization/roleDefinitions/${roleAcrPull}'
  }
}

output name string = containerRegistry.name
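
After a deployment you can confirm the role assignment landed; a quick Azure CLI check (the registry name here assumes the acr${clusterName} convention used later in the main template):

# List the roles assigned on the registry - AcrPull should appear
az role assignment list \
  --scope $(az acr show --name acrmjdemo --query id -o tsv) \
  --query "[].roleDefinitionName" -o tsv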

Now that we have all the modules, let's set up the main Bicep file to put it all together:

@description('Naming prefix for the resources e.g. dev, test, prod')
param prefix string
@description('The public SSH key')
@secure()
param publicsshKey string
@description('The name of the cluster')
param clusterName string
@description('The location of the resources')
param location string = resourceGroup().location
@description('The admin username for the nodes in the cluster')
param nodeAdminUsername string
@description('An array of AAD group object ids to give administrative access.')
param adminGroupObjectIDs array = []
@description('The VM size to use in the cluster')
param nodeVmSize string
@minValue(1)
@maxValue(50)
@description('The number of nodes for the cluster.')
param nodeCount int = 1
@maxValue(100)
@description('Max number of nodes to scale up to')
param maxNodeCount int = 3
@description('Disk size (in GB) to provision for each of the agent pool nodes. This value ranges from 0 to 1023. Specifying 0 will apply the default disk size for that agentVMSize')
param osDiskSizeGB int
@description('Log Analytics Workspace Tier')
@allowed([
  'Free'
  'Standalone'
  'PerNode'
  'PerGB2018'
  'Premium'
])
param workspaceTier string
@description('The virtual network address prefixes')
param vnetAddressPrefixes array
@description('The subnet address prefix')
param subnetAddressPrefix string
@description('Tags for the resources')
param tags object

module vnet 'vnet.bicep' = {
  name: 'vnetDeploy'
  params: {
    vnetName: '${prefix}-vnet-${clusterName}-${location}'
    subnetName: '${prefix}-snet-${clusterName}-${location}'
    vnetAddressPrefixes: vnetAddressPrefixes
    subnetAddressPrefix: subnetAddressPrefix
    tags: tags
  }
}

module aks 'aks.bicep' = {
  name: 'aksDeploy'
  params: {
    prefix: prefix
    clusterName: clusterName    
    subnetId: vnet.outputs.subnetId
    nodeAdminUsername: nodeAdminUsername
    adminGroupObjectIDs: adminGroupObjectIDs
    nodeVmSize: nodeVmSize
    nodeCount: nodeCount
    maxNodeCount: maxNodeCount
    osDiskSizeGB: osDiskSizeGB
    sshPublicKey: publicsshKey
    workspaceTier: workspaceTier
    tags: tags
  }

}

module registry 'registry.bicep' = {
  name: 'registryDeploy'
  params: {
    registryName: 'acr${clusterName}'
    aksPrincipalId: aks.outputs.clusterPrincipalID
    tags: tags
  }

}

As with ARM templates, you can use a JSON file to supply the parameters for a Bicep deployment, so I've added one for this:

{
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "tags": {
            "value": {
                "project": "mjdemo",
                "resource": "AKS"
            }
        },
        "prefix": {
            "value": "dev"
        },
        "clusterName": {
            "value": "mjdemo"
        },
        "nodeVmSize": {
            "value": "Standard_D2s_V3"
        },
        "osDiskSizeGB": {
            "value": 50
        },
        "nodeCount": {
            "value": 1
        },
        "maxNodeCount": {
            "value": 3
        },
        "nodeAdminUsername": {
            "value": "aksAdminUser"
        },
        "adminGroupObjectIDs": {
            "value": []
        },
        "publicsshKey": {
            "value": ""
        },
        "workspaceTier": {
            "value": "PerGB2018"
        },
        "vnetAddressPrefixes": {
            "value": []
        },
        "subnetAddressPrefix": {
            "value": ""
        }
    }
}
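
Before automating anything, you can preview what these files will do with a what-if deployment. A sketch, assuming the resource group already exists and the empty values above have been filled in (the resource group name is illustrative):

# Preview the deployment without making any changes
az deployment group what-if \
  --resource-group rg-dev-mjdemo-uksouth \
  --template-file deploy.bicep \
  --parameters deploy.parameters.json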

Pipeline

Now that we have all the Bicep files and a parameters file, we can create an Azure Pipeline. First, though, we are going to need an SSH key uploaded to Azure Pipelines. One way to generate an SSH key is with the ssh-keygen command in Bash (I used Ubuntu in WSL), e.g.

ssh-keygen -q -t rsa -b 4096 -N '' -f aksKey

This will generate a private and public key pair. You can then upload the public key file, e.g. aksKey.pub, to Secure Files in Azure DevOps Pipelines (Pipelines->Library->Secure files).

We are going to add Azure AD groups to this deployment, and each group will need the 'Azure Kubernetes Service Cluster User Role' assigned; the Microsoft Docs detail how to do this.
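
If you prefer to script that assignment, something like the following Azure CLI sketch should work; the group name and resource names are placeholders matching the pipeline defaults used later:

# Assign the cluster user role to an AAD group (illustrative names)
# Note: on newer, Microsoft Graph based CLI versions the field is 'id', not 'objectId'
GROUP_ID=$(az ad group show --group "demo-group" --query objectId -o tsv)
AKS_ID=$(az aks show --resource-group "rg-prod-demo-uksouth" --name "prod-aks-demo-uksouth" --query id -o tsv)
az role assignment create \
  --assignee-object-id "$GROUP_ID" \
  --assignee-principal-type Group \
  --role "Azure Kubernetes Service Cluster User Role" \
  --scope "$AKS_ID"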

Now we have the SSH key uploaded we can configure the parameters we want to set for our AKS cluster and network.

trigger: none
pr: none

pool:
  vmImage: ubuntu-latest

parameters:
  - name: azureSubscription
    type: string
    default: 'Sandbox'
  - name: location
    displayName: 'Resource Location'
    type: string
    default: 'uksouth'
  - name: prefix
    displayName: 'Environment Prefix'
    type: string
    default: 'prod'
  - name: clusterName
    displayName: 'Name of the AKS Cluster'
    type: string
    default: 'demo'
  - name: nodeVmSize
    displayName: 'VM Size for the Nodes'
    type: string
    default: 'Standard_D2s_V3'
    values:
      - 'Standard_D2s_V3'
      - 'Standard_DS2_v2'
      - 'Standard_D4s_V3'
      - 'Standard_DS3_v2'
      - 'Standard_DS4_v2'
      - 'Standard_D8s_v3'
  - name: osDiskSizeGB
    displayName: 'Size of OS disk (0 means use the default for the VM size)'
    type: number
    default: 50
  - name: nodeCount
    displayName: 'The number of nodes'
    type: number
    default: 3
  - name: maxNodeCount
    displayName: 'Max node to scale out to'
    type: number
    default: 10
  - name: workspaceTier
    displayName: Log Analytics Workspace Tier
    type: string
    default: 'PerGB2018'
    values:
      - 'Free'
      - 'Standalone'
      - 'PerNode'
      - 'PerGB2018'
      - 'Premium'
  - name: tags
    displayName: 'Tags'
    type: object
    default:
     Environment: "prod"
     Resource: "AKS"
     Project: "Demo"
  - name: nodeAdminUsername
    displayName: 'Admin username for the nodes'
    type: string
    default: 'adminUserName'
  - name: vnetAddressPrefixes
    displayName: 'Virtual Network Address Prefixes'
    type: object
    default: 
      - '10.240.0.0/16'
  - name: subnetAddressPrefix
    displayName: 'Subnet Address Prefix'
    type: string
    default: '10.240.0.0/20'
  - name: adGroupNames
    type: object
    default: 
      - 'demo-group'

variables:
  resourceGroupName: 'rg-${{ parameters.prefix }}-${{ parameters.clusterName }}-${{ parameters.location }}'

With the parameters set, the next part is to build up the steps, starting with downloading the SSH key from Secure Files using the DownloadSecureFile task:

steps:
- task: DownloadSecureFile@1
  displayName: 'Download Public SSH Key'
  name: SSHfile
  inputs:
    secureFile: 'aksKey.pub'
- bash: |
    value=`cat $(SSHfile.secureFilePath)`
    echo '##vso[task.setvariable variable=publicsshKey;issecret=true]'$value
  displayName: Obtain SSH key value

Next we can get the object IDs for the AAD groups:

- task: AzureCLI@2
  displayName: 'Get AD Group Object Ids'
  inputs:
    azureSubscription: ${{ parameters.azureSubscription }}
    scriptType: pscore
    scriptLocation: inlineScript
    inlineScript: |    
      $objectIds = '${{ join(',',parameters.adGroupNames) }}'.Split(',') | ForEach { 
        "$(az ad group list --query "[?displayName == '$_'].{objectId:objectId}" -o tsv)" 
      }

      $output = ConvertTo-Json -Compress @($objectIds)
      Write-Host '##vso[task.setvariable variable=groupIds]'$output
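
You can test the underlying lookup on its own before putting it in the pipeline. Note that newer Azure CLI versions (2.37+), which are backed by Microsoft Graph, expose the field as id rather than objectId:

# Resolve a group's object id by display name (older CLI versions)
az ad group list --query "[?displayName == 'demo-group'].{objectId:objectId}" -o tsv
# On Microsoft Graph based CLI versions, use 'id' instead:
az ad group list --query "[?displayName == 'demo-group'].id" -o tsv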

This next section takes those parameters and turns them into pipeline variables, whose values are then substituted into the parameters JSON file:

- template: objectparameters.yml
  parameters:
    tags: ${{ parameters.tags }}
    vnetAddressPrefixes: ${{ parameters.vnetAddressPrefixes }}
- template: parameters.yml
  parameters:
    prefix: ${{ parameters.prefix }}
    clusterName: ${{ parameters.clusterName }}
    nodeVmSize: ${{ parameters.nodeVmSize }}
    osDiskSizeGB: ${{ parameters.osDiskSizeGB }}
    nodeCount: ${{ parameters.nodeCount }}
    maxNodeCount: ${{ parameters.maxNodeCount }}
    nodeAdminUsername: ${{ parameters.nodeAdminUsername }}
    publicsshKey: $(publicsshKey)
    workspaceTier: ${{ parameters.workspaceTier }}    
    subnetAddressPrefix: ${{ parameters.subnetAddressPrefix }}
    adminGroupObjectIDs: $(groupIds)
- task: FileTransform@2
  displayName: "Transform Parameters"
  inputs:
    folderPath: '$(System.DefaultWorkingDirectory)'
    xmlTransformationRules: ''
    jsonTargetFiles: 'deploy.parameters.json'

If you need to debug the transform, you can add another step to output the file contents. I find this a useful technique to make sure the transform worked as expected:

- bash: |
    cat deploy.parameters.json
  displayName: "Debug show parameters file"

If we put all that together then the final pipeline looks like this:

trigger: none
pr: none

pool:
  vmImage: ubuntu-latest

parameters:
  - name: azureSubscription
    type: string
    default: 'Sandbox'
  - name: location
    displayName: 'Resource Location'
    type: string
    default: 'uksouth'
  - name: prefix
    displayName: 'Environment Prefix'
    type: string
    default: 'prod'
  - name: clusterName
    displayName: 'Name of the AKS Cluster'
    type: string
    default: 'demo'
  - name: nodeVmSize
    displayName: 'VM Size for the Nodes'
    type: string
    default: 'Standard_D2s_V3'
    values:
      - 'Standard_D2s_V3'
      - 'Standard_DS2_v2'
      - 'Standard_D4s_V3'
      - 'Standard_DS3_v2'
      - 'Standard_DS4_v2'
      - 'Standard_D8s_v3'
  - name: osDiskSizeGB
    displayName: 'Size of OS disk (0 means use the default for the VM size)'
    type: number
    default: 50
  - name: nodeCount
    displayName: 'The number of nodes'
    type: number
    default: 3
  - name: maxNodeCount
    displayName: 'Max node to scale out to'
    type: number
    default: 10
  - name: workspaceTier
    displayName: Log Analytics Workspace Tier
    type: string
    default: 'PerGB2018'
    values:
      - 'Free'
      - 'Standalone'
      - 'PerNode'
      - 'PerGB2018'
      - 'Premium'
  - name: tags
    displayName: 'Tags'
    type: object
    default:
     Environment: "prod"
     Resource: "AKS"
     Project: "Demo"
  - name: nodeAdminUsername
    displayName: 'Admin username for the nodes'
    type: string
    default: 'adminUserName'
  - name: vnetAddressPrefixes
    displayName: 'Virtual Network Address Prefixes'
    type: object
    default: 
      - '10.240.0.0/16'
  - name: subnetAddressPrefix
    displayName: 'Subnet Address Prefix'
    type: string
    default: '10.240.0.0/20'
  - name: adGroupNames
    type: object
    default: 
      - 'demo-group'

variables:
  resourceGroupName: 'rg-${{ parameters.prefix }}-${{ parameters.clusterName }}-${{ parameters.location }}'

steps:
- task: DownloadSecureFile@1
  displayName: 'Download Public SSH Key'
  name: SSHfile
  inputs:
    secureFile: 'aksKey.pub'
- bash: |
    value=`cat $(SSHfile.secureFilePath)`
    echo '##vso[task.setvariable variable=publicsshKey;issecret=true]'$value
  displayName: Obtain SSH key value  
- task: AzureCLI@2
  displayName: 'Get AD Group Object Ids'
  inputs:
    azureSubscription: ${{ parameters.azureSubscription }}
    scriptType: pscore
    scriptLocation: inlineScript
    inlineScript: |    
      $objectIds = '${{ join(',',parameters.adGroupNames) }}'.Split(',') | ForEach { 
        "$(az ad group list --query "[?displayName == '$_'].{objectId:objectId}" -o tsv)" 
      }

      $output = ConvertTo-Json -Compress @($objectIds)
      Write-Host '##vso[task.setvariable variable=groupIds]'$output
- template: objectparameters.yml
  parameters:
    tags: ${{ parameters.tags }}
    vnetAddressPrefixes: ${{ parameters.vnetAddressPrefixes }}
- template: parameters.yml
  parameters:
    prefix: ${{ parameters.prefix }}
    clusterName: ${{ parameters.clusterName }}
    nodeVmSize: ${{ parameters.nodeVmSize }}
    osDiskSizeGB: ${{ parameters.osDiskSizeGB }}
    nodeCount: ${{ parameters.nodeCount }}
    maxNodeCount: ${{ parameters.maxNodeCount }}
    nodeAdminUsername: ${{ parameters.nodeAdminUsername }}
    publicsshKey: $(publicsshKey)
    workspaceTier: ${{ parameters.workspaceTier }}    
    subnetAddressPrefix: ${{ parameters.subnetAddressPrefix }}
    adminGroupObjectIDs: $(groupIds)
- task: FileTransform@2
  displayName: "Transform Parameters"
  inputs:
    folderPath: '$(System.DefaultWorkingDirectory)'
    xmlTransformationRules: ''
    jsonTargetFiles: 'deploy.parameters.json'
- task: AzureCLI@2
  displayName: 'Deploy AKS Cluster'
  inputs:
    azureSubscription: ${{ parameters.azureSubscription }}
    scriptType: bash
    scriptLocation: inlineScript
    inlineScript: |
      az group create --name "$(resourceGroupName)" --location ${{ parameters.location }} 
      az deployment group create --name "${{ parameters.clusterName }}-deploy" --resource-group "$(resourceGroupName)" --template-file deploy.bicep --parameters deploy.parameters.json

The template file objectparameters.yml looks like this:

parameters: 
  - name: tags
    type: object
  - name: vnetAddressPrefixes
    type: object

steps:
- ${{ each item in parameters }}: 
  - bash: |
      value='${{ convertToJson(item.value) }}'
      echo '##vso[task.setvariable variable=parameters.${{ item.key }}.value]'$value
    displayName: "Create Variable ${{ item.key }}"

And the template file parameters.yml looks like this:

parameters: 
  - name: prefix
    type: string
  - name: clusterName
    type: string
  - name: nodeVmSize
    type: string
  - name: osDiskSizeGB
    type: number
  - name: nodeCount
    type: number
  - name: maxNodeCount
    type: number
  - name: nodeAdminUsername
    type: string
  - name: publicsshKey
    type: string
  - name: workspaceTier
    type: string
  - name: subnetAddressPrefix
    type: string
  - name: adminGroupObjectIDs
    type: string

steps:
- ${{ each item in parameters }}:  
    - bash: |
        echo '##vso[task.setvariable variable=parameters.${{ item.key }}.value]${{ item.value }}'
      displayName: "Create Variable ${{ item.key }}"

Now that we have an AKS cluster set up, we might want to deploy some applications to it. CoderDave has a great video tutorial on doing this with Azure Pipelines.
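
Before deploying anything, you can confirm the cluster is reachable from your own machine; a quick check, assuming the default names from the pipeline parameters above:

# Fetch credentials for the new cluster and list its nodes
az aks get-credentials --resource-group rg-prod-demo-uksouth --name prod-aks-demo-uksouth
kubectl get nodes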

All the files shown above can be found on my GitHub.

Azure, Azure Pipelines, DevOps, Security

Create Issues in Azure DevOps via Snyk API

Snyk is a great tool for scanning your code and containers for vulnerabilities. Snyk is constantly evolving and adding new features and integrations, so if you haven't checked out the Snyk website, I highly recommend you do so. There is also a free tier for you to get started.

One of those features is Jira integration, which allows you to create a Jira issue from within Snyk. If you use Jira then I can see the benefit, but what if you use Azure DevOps, or you want to automate the issue creation?

This post goes through using an Azure Logic App to create an issue in Azure DevOps when a new issue is discovered (Note: the process works for Jira too, just use the Azure Logic App Jira connector).

To use the Snyk API you will need to be on the Business plan or above (at the time of writing), which gives you the ability to add a webhook to receive events.


So the flow of the Logic App is something like this

When the Logic App is enabled it will register a webhook with your Snyk account, and when disabled it will unregister it.

Let's build the Logic App. Step one: create a new Logic App in Azure.

Once that has been created, select a blank logic app and find the HTTP Webhook trigger

As detailed in the Snyk API documentation, set the subscribe method to POST and the URI to the webhooks endpoint with your organization Id (this can be found on your Snyk account under Org Settings -> General).

Add the subscribe body that includes a secret defined by you and the url of the logic app (for the URL select the Callback Url in dynamic content)
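
For reference, the equivalent subscribe call outside the designer looks roughly like this (org id, token, secret, and callback URL are all placeholders):

# Register the Logic App as a Snyk webhook
curl -X POST "https://snyk.io/api/v1/org/<your org id>/webhooks" \
  -H "Authorization: token <your API token>" \
  -H "Content-Type: application/json" \
  -d '{"url": "<logic app callback url>", "secret": "<your secret>"}'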

Now set the unsubscribe method to DELETE, use an expression for the URI, and leave the Body blank:

concat('https://snyk.io/api/v1/org/<your org id>/webhooks/',triggerOutputs().subscribe.body.id)
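
Again for reference, the unsubscribe call is roughly the following, using the same API token as the headers described below:

# Remove the webhook registration (the id comes from the subscribe response)
curl -X DELETE "https://snyk.io/api/v1/org/<your org id>/webhooks/<webhook id>" \
  -H "Authorization: token <your API token>"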

Now add new parameters for Subscribe – Headers and Unsubscribe – Headers

Authorization in both headers should be set to your API token (this can be found on your Snyk account Account Settings -> API Token)

When registering the application, Snyk sends a Ping event, identified by the X-Snyk-Event header. We really don't want to run the rest of the workflow when this happens, so we can add a condition to terminate.

Select New Step then find Control and select it

Then select Condition

For the value use an expression to get the X-Snyk-Event header

@triggerOutputs()?['headers']?['X-Snyk-Event']

and then make the condition check that it contains the word ping (the actual value is ping plus a version, e.g. ping/v0)

Now, on the True side, add a Terminate action and set its status to Cancelled.

If the message is anything other than a ping, we want to continue processing the response. We should validate that the incoming message is from Snyk and intended for us: Snyk creates a signature over the payload using our custom secret and adds it to the X-Hub-Signature header. To perform this validation we can use an Azure Function.

You can create an Azure Function via the Portal; I suggest you use Linux as the OS.

Using Visual Studio Code with the Functions runtime installed, you can create and deploy the following function. If you are not sure how to do this, take a look at the Microsoft Docs; they are really helpful.

I named the function ValidateRequest and used some code from the Snyk API documentation to perform the validation and return either OK (200) or Bad Request (400):

const crypto = require('crypto');

module.exports = async function (context, req) {
    context.log('JavaScript HTTP trigger function processed a request.');

    // The shared secret is forwarded by the Logic App in a custom header
    const secret = req.headers['x-logicapp-secret'];
    // The signature Snyk computed over the payload with that same secret
    const hubsignature = req.headers['x-hub-signature'];

    // Recompute the HMAC-SHA256 of the payload and compare the two
    // (a timing-safe comparison such as crypto.timingSafeEqual would be more robust)
    const hmac = crypto.createHmac('sha256', secret);
    const buffer = JSON.stringify(req.body);
    hmac.update(buffer, 'utf8');
    const signature = `sha256=${hmac.digest('hex')}`;
    const isValid = signature === hubsignature;

    context.res = {
        status: isValid ? 200 : 400,
        body: isValid
    };
}
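
You can exercise the deployed function with a hand-rolled signature to check it behaves as expected. A sketch, where the function URL is a placeholder (append a function key if the function isn't anonymous) and the body must be compact JSON so that JSON.stringify produces the same bytes:

# Compute the HMAC the way Snyk does and POST a test payload
BODY='{"newIssues":[],"removedIssues":[]}'
SECRET='<your secret>'
SIG="sha256=$(printf '%s' "$BODY" | openssl dgst -sha256 -hmac "$SECRET" | sed 's/^.* //')"
curl -X POST "https://<your function app>.azurewebsites.net/api/ValidateRequest" \
  -H "Content-Type: application/json" \
  -H "X-Hub-Signature: $SIG" \
  -H "X-LogicApp-Secret: $SECRET" \
  -d "$BODY"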

Now that the Function is deployed, we can add the next step to the Logic App.

Select the function app we created previously

And select the function we deployed previously

Now we need to pass the payload from the webhook into our ValidateRequest function

Add additional parameters for method and header

Set the method to POST and switch the headers to text mode

Then add the following expression to add the request headers and one with your secret

addProperty(triggerOutputs()['headers'], 'X-LogicApp-Secret', '<your secret>')

If the check is successful, the next step is to parse the JSON and loop through the new issues.

For the Content, add the payload as you did previously for the validate function, and for the Schema add the following:

{
    "properties": {
        "group": {
            "properties": {},
            "type": "object"
        },
        "newIssues": {
            "type": "array"
        },
        "org": {
            "properties": {},
            "type": "object"
        },
        "project": {
            "properties": {},
            "type": "object"
        },
        "removedIssues": {
            "type": "array"
        }
    },
    "type": "object"
}

Next we need to loop through the new issues: add a new For each control action and set it to iterate over newIssues.

For this I am only interested in the high severity issues, so we need to add another condition that uses an expression for the value and checks it is equal to high (Note: I renamed the condition to Severity):

items('For_each')?['issueData']?['severity']

Now, if the severity is high, create a work item using the built-in Azure DevOps connector.

This will ask you to sign in

Once signed in you can set the details of the Azure DevOps organization, project and work item type

To add the details from the issue as the title and description use the following expressions

items('For_each')?['issueData']?['title']
items('For_each')?['issueData']?['description']

And that is the Logic App complete. A created work item, with the title and description fields taken from the payload, looks like this:

Perhaps the formatting could do with some work but the information is there and the workflow works.

Although I used Azure DevOps as the output, there is a Jira connector that will allow creation of an issue in Jira.

Once you are happy everything is running, there is one last step: securing the secrets inside the Logic App so they are not exposed in the run history.

For the webhook select the settings and turn on Secure Inputs and Secure Outputs

and for the ValidateRequest function turn on at least Secure Inputs.

I find Azure Logic Apps a great way to connect systems together for these types of workflows because it has so many connectors.

NOTE: If you run the Logic App via Run Trigger it will fail when looking for the X-Snyk-Event header. Disable and enable the Logic App to register the connection with the API.

I hope it helps others integrate Snyk with their workflows and can’t wait to see what other features the API will provide in the future.