Architecture, Dapr Series, Platform Engineering

Part 5 – Integrating External Systems with Dapr Bindings and Storage

In Part 4, we explored how Dapr simplifies event‑driven communication through its pub/sub building block. Now we’ll look at another common integration point in distributed systems: external services such as object storage, queues, SaaS APIs, and cloud‑native capabilities.

Traditionally, integrating with these systems means importing vendor‑specific SDKs, managing credentials, handling retries, and writing environment‑specific configuration. Dapr’s bindings building block provides a consistent way to interact with external systems using simple HTTP or gRPC calls, no vendor-specific SDKs required.

This post focuses on how bindings work, how to use them for storage, and why they’re a powerful tool for keeping infrastructure concerns out of your application code.

What Are Dapr Bindings?

Bindings allow applications to:

  • Invoke external systems (output bindings)
  • Be triggered by external systems (input bindings)

From the application’s perspective, bindings are just HTTP or gRPC calls. Dapr handles the integration with the external service.
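
For example, invoking an output binding named storage is nothing more than a POST to the local sidecar (3500 is Dapr's default HTTP port; the payload and blob name here are illustrative):

POST http://localhost:3500/v1.0/bindings/storage

{
  "operation": "create",
  "data": "Order receipt for order-123",
  "metadata": {
    "blobName": "order-123.txt"
  }
}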

Bindings are commonly used for:

  • Object storage (S3, Azure Blob Storage)
  • Queues and streams
  • Webhooks and SaaS APIs
  • File systems and cloud services

They’re especially useful when you want to avoid pulling cloud‑specific SDKs into your codebase.

Note: Bindings intentionally expose only a subset of provider operations, typically create, get, delete, or provider‑specific equivalents. They are not full SDK replacements.

Storage as a Binding

Object storage is a great example of where bindings shine. Many applications need to:

  • Upload a file
  • Download a file
  • Delete a file

They don’t need to know:

  • Which cloud provider is being used
  • How authentication works
  • How retries or timeouts are handled
  • How provider‑specific APIs differ

Dapr lets you treat storage as an external capability, configured outside your application.

Architecture Overview

At runtime, storage access looks like this:

Application → Dapr Binding API → Storage Provider (S3 / Azure Blob / local)

The application talks only to the local Dapr sidecar. Dapr handles the provider‑specific integration, authentication, retries, and serialisation.

Configuring a Storage Binding

Bindings are defined using Dapr components.

Example: Azure Blob Storage binding

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: storage
spec:
  type: bindings.azure.blobstorage
  version: v1
  metadata:
    - name: accountName
      value: myaccount
    - name: accountKey
      value: mykey
    - name: containerName
      value: uploads

A few important notes:

  • The component name (storage) is what applications reference
  • Secrets should be stored in a secret store component, not inline (they appear inline here only for illustration)
  • You can restrict bindings to specific apps using scopes (see the sketch after this list)
  • In local mode, components load at startup (no hot‑reload)
  • In Kubernetes, components can be updated dynamically
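
As a sketch of those first points, the same component could pull the account key from a secret store and be scoped to a single app. The app ID order-service, the secret store name my-secret-store, and the secret name are hypothetical:

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: storage
spec:
  type: bindings.azure.blobstorage
  version: v1
  metadata:
    - name: accountName
      value: myaccount
    - name: accountKey
      secretKeyRef:
        name: storage-account-key   # hypothetical secret name
        key: storage-account-key
auth:
  secretStore: my-secret-store      # hypothetical secret store component
scopes:
  - order-service                   # only this app ID can invoke the binding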

Writing to Storage Using Dapr

Go example: uploading a file

package main

import (
	"context"
	"log"

	"github.com/dapr/go-sdk/client"
)

type Order struct {
	Id     string `json:"id"`
	Amount int    `json:"amount"`
}

func main() {
	ctx := context.Background()

	daprClient, err := client.NewClient()
	if err != nil {
		log.Fatalf("failed to create Dapr client: %v", err)
	}
	defer daprClient.Close()

	order := Order{Id: "order-123", Amount: 100}

	if err := storeReceipt(ctx, daprClient, order); err != nil {
		log.Printf("storeReceipt error: %v", err)
	}
}

func storeReceipt(ctx context.Context, daprClient client.Client, order Order) error {
	data := []byte("Order receipt for " + order.Id)
	req := &client.InvokeBindingRequest{
		Name:      "storage", // the component name, not the provider
		Operation: "create",
		Data:      data,
		// The object name is passed under the keys different providers expect,
		// so the same call works for Azure Blob Storage, S3, or a local binding.
		Metadata: map[string]string{
			"blobName": order.Id + ".txt",
			"key":      order.Id + ".txt",
			"fileName": order.Id + ".txt",
		},
	}

	_, err := daprClient.InvokeBinding(ctx, req)
	return err
}

There is no cloud‑specific SDK, no credentials in code, and no storage‑specific logic.

.NET example: uploading a file

using System.Text;
using Dapr.Client;

var client = new DaprClientBuilder().Build();

var order = new Order("order-123", 100);

// Provide the object name under the keys different providers expect,
// so the same call works for Azure Blob Storage or S3.
var metadata = new Dictionary<string, string>
{
    ["blobName"] = $"{order.Id}.txt",
    ["key"] = $"{order.Id}.txt",
    ["fileName"] = $"{order.Id}.txt"
};

await client.InvokeBindingAsync(
    "storage",
    "create",
    Encoding.UTF8.GetBytes($"Order receipt for {order.Id}"),
    metadata
);

public record Order(string Id, int Amount);

The same code works regardless of whether the backing store is Azure Blob Storage or S3.
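
To try either example locally, run the app next to a Dapr sidecar and point Dapr at the folder containing the component YAML. A minimal sketch, assuming the component file lives in ./components and a hypothetical app ID of order-service:

dapr run --app-id order-service --resources-path ./components -- go run main.go

Older Dapr CLI versions use --components-path instead of --resources-path.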

Switching to S3 Without Code Changes

To switch from Azure Blob Storage to S3, only the component configuration changes.

Example: AWS S3 binding

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: storage
spec:
  type: bindings.aws.s3
  version: v1
  metadata:
    - name: bucket
      value: my-bucket
    - name: region
      value: eu-west-1
    - name: endpoint
      value: s3.eu-west-1.amazonaws.com
    - name: accessKey
      value: mykey
    - name: secretKey
      value: <add secret key>

Note: Secrets should be stored in a secret store component, not inline (they appear inline here only for illustration).

The application code remains unchanged.

This makes it easy to:

  • Use local or cloud‑specific storage per environment
  • Migrate between providers
  • Keep application logic provider‑agnostic

Input Bindings: Triggering Applications

Bindings can also trigger applications.

For example:

  • A file uploaded to storage
  • A message arriving on a queue
  • An event from a SaaS system

Dapr delivers these events to your application as HTTP requests, allowing you to react without polling or SDKs.

A few nuances:

  • Input bindings follow at‑least‑once delivery semantics
  • Your app acknowledges processing by returning a 200‑level response
  • Dapr handles retries and backoff
  • Observability is automatic
  • Unlike pub/sub, binding payloads are delivered as‑is, not wrapped in CloudEvent envelopes

This is particularly useful for building event‑driven workflows around storage and integration points.
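
As a rough sketch, here is what handling an input binding looks like with the Dapr Go SDK. The input binding component name incoming-files and the port are assumptions for illustration:

package main

import (
	"context"
	"log"

	"github.com/dapr/go-sdk/service/common"
	daprhttp "github.com/dapr/go-sdk/service/http"
)

func main() {
	s := daprhttp.NewService(":8082")

	// Register a handler for a (hypothetical) input binding component named "incoming-files".
	// Dapr invokes the route matching the component name whenever the external system emits an event.
	if err := s.AddBindingInvocationHandler("incoming-files", fileHandler); err != nil {
		log.Fatalf("error adding binding handler: %v", err)
	}

	if err := s.Start(); err != nil {
		log.Fatalf("error starting service: %v", err)
	}
}

func fileHandler(ctx context.Context, in *common.BindingEvent) ([]byte, error) {
	// Returning a nil error sends a 200-level response, which acknowledges the event;
	// returning an error lets Dapr apply its retry behaviour.
	log.Printf("received %d bytes from input binding", len(in.Data))
	return nil, nil
}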

Why This Matters in Practice

Using Dapr bindings for storage and integrations enables:

  • SDK‑free integrations – No cloud‑specific libraries in application code.
  • Infrastructure portability – Switch providers via configuration.
  • Cleaner security boundaries – Credentials live in components, not code.
  • Simpler local development – Use local or mocked services without code changes.
  • Consistent patterns across languages – The same binding API from Go, .NET, or any other language.

Bindings help teams avoid the “SDK sprawl” that often creeps into distributed systems.

Trade‑offs to Consider

Bindings are intentionally generic. They work best for:

  • Simple operations (create, read, delete)
  • Event‑driven workflows
  • Integration points at system boundaries

They are not ideal for:

  • Complex provider‑specific features
  • High‑performance bulk operations
  • Advanced querying or filtering
  • Deep control over provider‑specific behaviour

In practice, many teams use bindings for integration and SDKs only where deeper control is required.

What’s Next

At this point, we’ve covered the core Dapr building blocks most teams adopt first:

  • State management
  • Pub/Sub
  • Bindings and storage

In the next post, we’ll explore Observability with Dapr, including:

  • What signals Dapr emits by default
  • How tracing and metrics work across building blocks
  • Why observability is already present, even if you haven’t configured anything explicitly

By the time you reach production, these signals are often the difference between guessing and knowing.

Architecture, Dapr Series, Platform Engineering

Part 4 – Event‑Driven Systems with Dapr Pub/Sub

In Part 3, we used Dapr’s state management building block to store and retrieve data without database‑specific SDKs. Now we’ll shift to another core pattern in distributed systems: event‑driven communication.

As systems grow, synchronous service‑to‑service calls become brittle. Tight coupling, cascading failures, and deployment coordination all get harder. Event‑driven architectures help, but they often introduce new complexity: broker‑specific SDKs, consumer groups, retry logic, and environment‑specific configuration.

Dapr’s Pub/Sub building block simplifies this by providing a consistent eventing API, regardless of the underlying message broker.

The Problem with Traditional Pub/Sub Integration

Most applications integrate directly with a message broker such as Kafka, RabbitMQ, or Azure Service Bus. That usually means:

  • Importing broker‑specific client libraries
  • Writing custom retry and error‑handling logic
  • Managing subscriptions and consumer groups in code
  • Rewriting integrations when switching brokers
  • Handling CloudEvent envelopes manually
  • Dealing with different semantics across environments

Over time, messaging logic becomes tightly coupled to infrastructure choices and difficult to evolve.

Dapr’s Pub/Sub Model

Dapr introduces a simple abstraction over pub/sub systems.

Your application:

  • Publishes events to a topic
  • Subscribes to topics via HTTP endpoints

It does not know:

  • Which broker is being used
  • How messages are delivered or retried
  • How subscriptions are managed
  • How dead‑lettering works
  • How CloudEvents are formatted

Those concerns are handled by Dapr and defined via configuration.

Architecture Overview

At runtime, pub/sub looks like this:

Publisher → Dapr Pub/Sub API → Message Broker → Dapr → Subscriber

Applications communicate only with their local Dapr sidecar. Dapr handles broker integration, retries, delivery semantics, and CloudEvent formatting.

This keeps messaging logic simple and portable.
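
Concretely, publishing is just a POST to the sidecar's pub/sub endpoint (again assuming Dapr's default HTTP port of 3500; the component and topic names match the examples below):

POST http://localhost:3500/v1.0/publish/pubsub/orders

{
  "id": "order-123",
  "amount": 100
}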

Configuring a Pub/Sub Component

Before writing any code, we define a pub/sub component.

Example: Redis Pub/Sub component

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: pubsub
spec:
  type: pubsub.redis
  version: v1
  metadata:
    - name: redisHost
      value: localhost:6379

The component name (pubsub) is what applications reference, not the underlying broker.

Switching to Kafka, RabbitMQ, or Azure Service Bus only requires changing this configuration, no code changes.

Publishing Events with Dapr

Publishing an event is a simple SDK call.

Go example: publishing an event

package main

import (
	"context"
	"log"

	"github.com/dapr/go-sdk/client"
)

type Order struct {
	Id     string `json:"id"`
	Amount int    `json:"amount"`
}

func main() {
	ctx := context.Background()

	daprClient, err := client.NewClient()
	if err != nil {
		log.Fatalf("failed to create Dapr client: %v", err)
	}
	defer daprClient.Close()

	order := Order{Id: "order-123", Amount: 100}
	if err := publishOrder(ctx, daprClient, order); err != nil {
		log.Printf("publishOrder error: %v", err)
	}
}

func publishOrder(ctx context.Context, daprClient client.Client, order Order) error {
	// "pubsub" is the component name and "orders" is the topic; Dapr serialises
	// the order and wraps it in a CloudEvent before handing it to the broker.
	return daprClient.PublishEvent(ctx, "pubsub", "orders", order)
}

There is no Redis, Kafka, or Service Bus client in the application code, only a call to Dapr.

.NET example: publishing an event

using Dapr.Client;
var client = new DaprClientBuilder().Build();

var order = new Order("order-123",100);

await client.PublishEventAsync(
    "pubsub",
    "orders",
    order
);

public record Order(string Id, int Amount);

The same API works regardless of the underlying broker.

Subscribing to Events

Subscriptions in Dapr are defined by HTTP endpoints.

Dapr delivers events to these endpoints automatically.

Go example: subscribing via HTTP

package main

import (
	"context"
	"encoding/json"
	"log"

	"github.com/dapr/go-sdk/service/common"
	daprhttp "github.com/dapr/go-sdk/service/http"
)

type Order struct {
	Id     string `json:"id"`
	Amount int    `json:"amount"`
}

func main() {
	s := daprhttp.NewService(":8081")

	// Declare the topic subscription; Dapr delivers matching events to the route.
	subscription := &common.Subscription{
		PubsubName: "pubsub",
		Topic:      "orders",
		Route:      "/orders",
	}

	if err := s.AddTopicEventHandler(subscription, ordersHandler); err != nil {
		log.Fatalf("error adding topic subscription: %v", err)
	}

	if err := s.Start(); err != nil {
		log.Fatalf("error starting service: %v", err)
	}
}

func ordersHandler(ctx context.Context, e *common.TopicEvent) (retry bool, err error) {
	var order Order
	if err := json.Unmarshal(e.RawData, &order); err != nil {
		return false, err
	}

	log.Printf("Received order: %s (amount: $%d)", order.Id, order.Amount)
	return false, nil
}

.NET example: subscribing with ASP.NET Core

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// Unwrap CloudEvent envelopes so handlers receive the event payload directly
// (provided by the Dapr.AspNetCore package)
app.UseCloudEvents();

// Subscription endpoint for Dapr discovery
app.MapGet("/dapr/subscribe", () =>
{
    return new[] {
        new {
            pubsubname = "pubsub",
            topic = "orders",
            route = "/orders"
        }
    };
});

// Dapr delivers matching events to the route declared above
app.MapPost("/orders", (Order order) =>
{
    Console.WriteLine($"Received order: {order.Id} (amount: ${order.Amount})");
    return Results.Ok();
});

app.Run();

public record Order(string Id, int Amount);

Dapr handles:

  • CloudEvent envelopes
  • Delivery semantics
  • Retries
  • Consumer groups (where supported)

Your application only handles the business logic.
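
For reference, what actually arrives at the /orders route is a CloudEvents envelope along these lines (the event ID and source value are illustrative):

{
  "specversion": "1.0",
  "type": "com.dapr.event.sent",
  "source": "order-service",
  "id": "5929aaac-a5e2-4ca1-859c-edfe73f11565",
  "datacontenttype": "application/json",
  "pubsubname": "pubsub",
  "topic": "orders",
  "data": {
    "id": "order-123",
    "amount": 100
  }
}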

Switching Brokers Without Code Changes

To switch from Redis to Kafka, RabbitMQ, or Azure Service Bus, only the component configuration changes.

Example: Kafka pub/sub component

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: pubsub
spec:
  type: pubsub.kafka
  version: v1
  metadata:
    - name: brokers
      value: localhost:9092

The application code remains unchanged.

This makes it easy to:

  • Use lightweight brokers locally
  • Use managed services in production
  • Migrate between brokers incrementally

Delivery, Retries, and Reliability

Dapr handles:

  • At‑least‑once delivery
  • Automatic retries
  • Dead‑letter topics (where supported)
  • Backoff and resiliency policies
  • CloudEvent formatting
  • Consumer group coordination (for Kafka, Service Bus, etc.)

Applications simply acknowledge messages by returning a successful HTTP response.

This keeps messaging logic simple and consistent across services.

Why This Matters in Practice

Using Dapr for pub/sub enables:

  • Loose coupling between producers and consumers
  • Polyglot messaging across languages
  • Infrastructure portability
  • Simpler failure handling
  • Cleaner codebases
  • Faster onboarding

It also encourages event‑driven designs without forcing teams to become messaging experts.

Trade‑offs to Consider

Dapr’s pub/sub abstraction is intentionally opinionated.

It works best when:

  • Events are immutable
  • Consumers are idempotent
  • Message ordering is not critical

It may not be suitable for:

  • Highly broker‑specific features
  • Strict ordering guarantees
  • Low‑level stream processing
  • Partition‑aware workloads

In practice, many teams use Dapr for most messaging and fall back to native SDKs only when necessary.

What’s Next

With state management and pub/sub in place, we can start integrating external systems without SDKs.

In the next post, we’ll explore Bindings and Storage, including:

  • Writing to S3 and Azure Blob Storage
  • Triggering services from queues or blobs
  • Keeping infrastructure concerns out of application code

This is where Dapr starts to feel like a platform, not just a library.