Azure

Recap: Microsoft Build 2020

“3 time zones, hundreds of thousands of attendees, hundreds of sessions and 48 hours”.

That just about sums up the MSBuild conference this year, and what an achievement it was. Many conferences were cancelled due to COVID-19, but Microsoft rose to the challenge of a 100% virtual conference and absolutely delivered.

The conference felt very connected: the community was involved, speakers showed demos from their homes, and there was a sense that we are all in this together. There were some great innovations in the Imagine Cup; all of them deserved to win, but only one could.

Satya Nadella inspired us with his vision, highlighting how technology and developers are helping in the current crisis. Well worth watching.

Scott Hanselman and guests gave a very different type of keynote: it was fun, interactive, and open, and it still showed off some cool tech. If you haven’t seen it, watch it here.

Beyond the keynotes there were so many announcements of general availability, new tools, new services and even new previews that it was impossible to keep up with everything going on or attend all the sessions. So here are some of the things that stood out from the sessions I did attend.

Windows Terminal 1.0 Released

I’ve been using the new terminal for a while now and it’s become my go-to command line tool, so I’m really glad it has reached 1.0. Find out more and give it a go.

Winget

Other operating systems like Linux have had native package managers for what seems like forever, but Windows has only come close with tools like Chocolatey. To be honest, I didn’t think we would ever get a native package manager for Windows, but now we have one with the introduction of Winget. While this is a preview release, it looks good and is well worth checking out.

Codespaces

Codespaces look amazing, the ability to run a dev environment with all the dependencies without downloading everything to your local machine is very cool.

Just to add some confusion, there are both Visual Studio Codespaces and GitHub Codespaces.

GitHub Codespaces

GitHub Codespaces provides the ability to set up a development environment, including all of its dependencies, and start editing the code without leaving the browser. This means devices that would normally not be able to build or run the code now can, for example an iPad.

Editing the code in GitHub Codespaces is done using a web-based version of Visual Studio Code. If you are running VS Code on a supported operating system (Windows, Linux or Mac), you can sync your settings to the web version and share your extensions, colour schemes, etc.

Visual Studio Codespaces

Visual Studio Codespaces is designed to be used from a browser (just as with GitHub Codespaces) but also from within Visual Studio Code or Visual Studio 2019.

Dapr

I had heard about Dapr but not in any detail, so attending the session to learn more was well worth it. The description on the website says “Dapr is a portable, event-driven runtime that makes it easy for developers to build resilient, microservice stateless and stateful applications that run on the cloud and edge and embraces the diversity of languages and developer frameworks.”

A diagram on the Dapr website helps visualize it. In short:

Without Dapr we might have a Service A that uses an SDK to communicate with Redis to push data and Service B that also uses the SDK to read data.

Now suppose we wanted to swap Redis for Cosmos DB: we would have to change both services. Using Dapr, each service talks to the Dapr API instead, so a change to the Dapr sidecar configuration is all that would be needed.
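To make that concrete, here is a minimal sketch of a service talking to its Dapr sidecar over HTTP instead of a storage SDK. The port 3500 is the sidecar default and "statestore" is the name of whatever state store component is configured (Redis, Cosmos DB, ...); the class and method names are my own, not from the session.

```csharp
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

public static class DaprState
{
    private static readonly HttpClient Http = new HttpClient();

    // The sidecar's state endpoint for a named state store component.
    public static string StateUrl(string store) =>
        $"http://localhost:3500/v1.0/state/{store}";

    // Dapr's save-state API takes a JSON array of key/value pairs.
    public static string SaveBody(string key, string value) =>
        $"[{{\"key\":\"{key}\",\"value\":\"{value}\"}}]";

    // POST the pair to the sidecar; Dapr writes it to whichever
    // backing store the "statestore" component is bound to.
    public static Task SaveAsync(string key, string value) =>
        Http.PostAsync(StateUrl("statestore"),
            new StringContent(SaveBody(key, value), Encoding.UTF8, "application/json"));

    // GET returns the stored value for the key.
    public static Task<string> GetAsync(string key) =>
        Http.GetStringAsync($"{StateUrl("statestore")}/{key}");
}
```

Swapping Redis for Cosmos DB is then only a change to the component configuration the sidecar loads; this service code is untouched.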

During the session’s Q&A there were some interesting points:

  • You can run Dapr using the CLI
  • Dapr can be run locally
  • You can host Dapr in Kubernetes, Service Fabric and Azure Functions (coming soon)
  • Dapr works alongside service meshes

Blazor WebAssembly 3.2 Released

I’ve not had a chance to look into this yet, but it looks very promising; writing C# instead of JavaScript for the UI is a really nice option. Take a look at the Release Blog.

Static Web Apps

Welcome to a new member of the Web App family, Static Web Apps: globally available static frontends and dynamic backends powered by serverless APIs.

Despite this being very new there is already an example of deploying a Blazor WebAssembly app in Static Web Apps https://dev.to/azure/deploy-blazor-webassembly-apps-to-azure-static-web-apps-6bp.

Summary

There were so many sessions to attend, and I’ll definitely be catching up on the others when the recordings are available; check out myBuild, the Channel9 page and/or the YouTube channel.

I’m sure there will be more updates in the coming weeks and months on some of these. Happy coding 🙂

Azure

Working with Azure Table Storage

I’ve been working with Azure Table Storage for a few years and find it really useful for storing logs or static data or even as a data recovery store. Table Storage is an incredibly cheap way to store data.

The first time I used Table Storage I thought it was great, but there were times it was slow and I had no idea why, and I couldn’t isolate it to perform unit testing.

Research

Why is it slow?

  • First off you need to design your data to be accessed quickly. A good place to start is the Storage Design Guide
  • Nagle’s Algorithm – I really had no idea about this or how much it mattered, fortunately there is a great article to explain (despite being from 2010 it’s still useful)
  • The default connection limit in ServicePointManager is 2
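On the .NET Framework HTTP stack, both of these knobs live on ServicePointManager. A minimal sketch of the usual tuning (the values are illustrative, and they must be set before the first request to the storage endpoint, since an existing ServicePoint keeps the settings it was created with):

```csharp
using System.Net;

// Nagle's algorithm batches small writes and adds latency to the
// small payloads typical of Table Storage operations.
ServicePointManager.UseNagleAlgorithm = false;

// Skip the Expect: 100-Continue round trip on each request.
ServicePointManager.Expect100Continue = false;

// Raise the per-host connection limit from the default of 2.
ServicePointManager.DefaultConnectionLimit = 100;
```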

How Do I Unit Test?

  • I could use the Azure Storage Emulator to perform tests, but it feels wrong having an external process for my tests, and my build server would need to run the emulator too. On top of that, it is good practice not to rely on external entities in tests
  • I could write a wrapper around the Table Storage API and use an interface into my code
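The interface-based approach can be sketched as follows. This is an illustrative slice only, loosely matching the ITableStore<T> used later in this post; the in-memory fake lets tests run with no emulator and no network:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

// A hypothetical slice of the wrapper interface (names are illustrative).
public interface ITableStore<T>
{
    Task InsertAsync(IEnumerable<T> records);
    IEnumerable<T> GetRecordsByFilter(Func<T, bool> filter);
}

// An in-memory fake for unit tests: same contract, no external process.
public class InMemoryTableStore<T> : ITableStore<T>
{
    private readonly List<T> _records = new List<T>();

    public Task InsertAsync(IEnumerable<T> records)
    {
        _records.AddRange(records);
        return Task.CompletedTask;
    }

    public IEnumerable<T> GetRecordsByFilter(Func<T, bool> filter) =>
        _records.Where(filter);
}
```

Production code depends only on ITableStore<T>, so tests can inject the fake while the real wrapper talks to Table Storage.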

Batching?

  • The Table Storage API provides the ability to batch inserts, but a batch requires the Partition Key to be the same for every entry. I have found this to be a problem when there are multiple partitions to insert at once

Solution

I decided to build a generic wrapper that encapsulated isolating the storage and configuring the settings, e.g. Nagle’s algorithm.

The Wrapper

The wrapper has a TableStoreFactory method that creates the table store connection, or it allows you to create a TableStore directly.

The code below shows a very small example of injecting the TableStoreFactory and changing the options from the defaults.

public class TableStorageClient
{
    private ITableStore<MyDto> _store;

    public TableStorageClient(ITableStoreFactory factory)
    {
        var options = new TableStorageOptions
        {
            UseNagleAlgorithm = true, // leave Nagle enabled for this store
            ConnectionLimit = 100,    // raise the limit from the default of 2
            EnsureTableExists = false // assume the table has already been created
        };

        _store = factory.CreateTableStore<MyDto>("MyTable", "UseDevelopmentStorage=true", options);
    }
}

You could also inject the TableStore directly:

public class TableStorageClient
{
    private ITableStore<MyDto> _store;

    public TableStorageClient(ITableStore<MyDto> store)
    {
        _store = store;
    }
}

Or simply create the store in code:

var store = new TableStore<MyDto>("MyTable", "UseDevelopmentStorage=true", new TableStorageOptions());

Batching

To handle batch inserts with multiple partition keys, I added the ability to automatically split the batch by Partition Key and then insert each group in batches of up to the maximum of 100 records per batch. Now I can just create my list of entries and call insert without having to worry about it.
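The splitting the wrapper does internally can be sketched like this (a simplified stand-alone version; the method name and the key selector are illustrative, not the wrapper's actual API):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class Batching
{
    // Group entries by partition key, then chunk each group into
    // batches of at most 100 entries (the Table Storage batch limit).
    public static IEnumerable<List<T>> SplitIntoBatches<T>(
        IEnumerable<T> entries, Func<T, string> partitionKey, int batchSize = 100)
    {
        foreach (var group in entries.GroupBy(partitionKey))
        {
            var batch = new List<T>(batchSize);
            foreach (var entry in group)
            {
                batch.Add(entry);
                if (batch.Count == batchSize)
                {
                    yield return batch;
                    batch = new List<T>(batchSize);
                }
            }
            if (batch.Count > 0) yield return batch; // final partial batch
        }
    }
}
```

Each resulting batch contains a single partition key and at most 100 entries, so it can be submitted as one Table Storage batch operation.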

var entries = new List<MyDto>
{
    new MyDto("John", "Smith") {Age = 21, Email = "john.smith@something.com"},
    new MyDto("Jane", "Smith") {Age = 28, Email = "jane.smith@something.com"},
    new MyDto("Bill", "Smith") { Age = 38, Email = "bill.smith@another.com"},
    new MyDto("Fred", "Jones") {Age = 32, Email = "fred.jones@somewhere.com"},
    new MyDto("Bill", "Jones") {Age = 45, Email = "bill.jones@somewhere.com"},
    new MyDto("Bill", "King") {Age = 45, Email = "bill.king@email.com"},
    new MyDto("Fred", "Bloggs") { Age = 32, Email = "fred.bloggs@email.com" }
};

await _store.InsertAsync(entries);

Filtering

Another noteworthy feature is GetRecordsByFilter, which allows data to be filtered before the result is returned (the filtering is done by passing in a predicate). The downside is that all records have to be retrieved before the filter is applied: testing with 10,000 records showed ~1.3 seconds without paging, but ~0.03 seconds when using paging and returning only the first 100.

// Without paging
_store.GetRecordsByFilter(x => x.Age > 21 && x.Age < 25);

// With paging
_store.GetRecordsByFilter(x => x.Age > 21 && x.Age < 25, 0, 100);

If there is a need to perform an action as the data is read from the table, there is also support for Observables:

var theObserver = _store.GetAllRecordsObservable();
theObserver.Where(entry => entry.Age > 21 && entry.Age < 25).Take(100).Subscribe(item =>
{
   // Do something with the table entry
});

The end result can be found on GitHub and is available on NuGet. Others have built additions to it, which can be found here and here.

Cosmos DB

Table Storage does not support secondary indexes or global distribution. There was a Premium tier of Table Storage, but that offering is now part of Cosmos DB.

The wrapper shown here will work with Cosmos DB, but it does not support everything. For more details, take a look at the FAQs and the new Table API.