Architecture, Dapr Series, Platform Engineering

Part 4 – Event‑Driven Systems with Dapr Pub/Sub

In Part 3, we used Dapr’s state management building block to store and retrieve data without database‑specific SDKs. Now we’ll shift to another core pattern in distributed systems: event‑driven communication.

As systems grow, synchronous service‑to‑service calls become brittle. Tight coupling, cascading failures, and deployment coordination all get harder. Event‑driven architectures help, but they often introduce new complexity: broker‑specific SDKs, consumer groups, retry logic, and environment‑specific configuration.

Dapr’s Pub/Sub building block simplifies this by providing a consistent eventing API, regardless of the underlying message broker.

The Problem with Traditional Pub/Sub Integration

Most applications integrate directly with a message broker such as Kafka, RabbitMQ, or Azure Service Bus. That usually means:

  • Importing broker‑specific client libraries
  • Writing custom retry and error‑handling logic
  • Managing subscriptions and consumer groups in code
  • Rewriting integrations when switching brokers
  • Handling CloudEvent envelopes manually
  • Dealing with different semantics across environments

Over time, messaging logic becomes tightly coupled to infrastructure choices and difficult to evolve.

Dapr’s Pub/Sub Model

Dapr introduces a simple abstraction over pub/sub systems.

Your application:

  • Publishes events to a topic
  • Subscribes to topics via HTTP endpoints

It does not know:

  • Which broker is being used
  • How messages are delivered or retried
  • How subscriptions are managed
  • How dead‑lettering works
  • How CloudEvents are formatted

Those concerns are handled by Dapr and defined via configuration.

Architecture Overview

At runtime, pub/sub looks like this:

Publisher → Dapr Pub/Sub API → Message Broker → Dapr → Subscriber

Applications communicate only with their local Dapr sidecar. Dapr handles broker integration, retries, delivery semantics, and CloudEvent formatting.

This keeps messaging logic simple and portable.

Configuring a Pub/Sub Component

Before writing any code, we define a pub/sub component.

Example: Redis Pub/Sub component

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: pubsub
spec:
  type: pubsub.redis
  version: v1
  metadata:
    - name: redisHost
      value: localhost:6379

The component name (pubsub) is what applications reference, not the underlying broker.

Switching to Kafka, RabbitMQ, or Azure Service Bus only requires changing this configuration; no code changes are needed.

Publishing Events with Dapr

Publishing an event is a simple SDK call.

Go example: publishing an event

package main

import (
	"context"
	"log"

	"github.com/dapr/go-sdk/client"
)

type Order struct {
	Id     string `json:"id"`
	Amount int    `json:"amount"`
}

func main() {
	ctx := context.Background()

	daprClient, err := client.NewClient()
	if err != nil {
		log.Fatalf("error creating Dapr client: %v", err)
	}
	defer daprClient.Close()

	order := Order{Id: "order-123", Amount: 100}
	if err := publishOrder(ctx, daprClient, order); err != nil {
		log.Printf("publishOrder error: %v", err)
	}
}

func publishOrder(ctx context.Context, daprClient client.Client, order Order) error {
	// PublishEvent serializes the order and publishes it to the "orders"
	// topic on the component named "pubsub".
	return daprClient.PublishEvent(ctx, "pubsub", "orders", order)
}

There is no Redis, Kafka, or Service Bus client in the application code, only a call to Dapr.
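Before handing the event to the broker, Dapr wraps the payload in a CloudEvents envelope. A roughly representative on‑the‑wire message (the field values here are illustrative, and brokers may carry additional fields such as trace metadata) looks like:

```json
{
  "specversion": "1.0",
  "type": "com.dapr.event.sent",
  "source": "orderservice",
  "id": "e1f2a3b4-0000-4c5d-8e9f-1a2b3c4d5e6f",
  "datacontenttype": "application/json",
  "pubsubname": "pubsub",
  "topic": "orders",
  "data": {
    "id": "order-123",
    "amount": 100
  }
}
```

Subscribers never have to parse this envelope themselves: Dapr unwraps it before invoking the handler.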

.NET example: publishing an event

using Dapr.Client;
var client = new DaprClientBuilder().Build();

var order = new Order("order-123", 100);

await client.PublishEventAsync(
    "pubsub",
    "orders",
    order
);

public record Order(string Id, int Amount);

The same API works regardless of the underlying broker.

Subscribing to Events

Subscriptions in Dapr are defined by HTTP endpoints.

Dapr delivers events to these endpoints automatically.

Go example: subscribing via HTTP

package main

import (
	"context"
	"encoding/json"
	"log"

	"github.com/dapr/go-sdk/service/common"
	daprhttp "github.com/dapr/go-sdk/service/http"
)

type Order struct {
	Id     string `json:"id"`
	Amount int    `json:"amount"`
}

func main() {
	s := daprhttp.NewService(":8081")

	// Describe the subscription: which component, which topic, and the
	// local route Dapr should deliver events to.
	subscription := &common.Subscription{
		PubsubName: "pubsub",
		Topic:      "orders",
		Route:      "/orders",
	}

	if err := s.AddTopicEventHandler(subscription, ordersHandler); err != nil {
		log.Fatalf("error adding topic subscription: %v", err)
	}

	if err := s.Start(); err != nil {
		log.Fatalf("error starting service: %v", err)
	}
}

func ordersHandler(ctx context.Context, e *common.TopicEvent) (retry bool, err error) {
	var order Order
	if err := json.Unmarshal(e.RawData, &order); err != nil {
		// The payload is malformed; retrying will not help.
		return false, err
	}
	log.Printf("Received order: %s (amount: $%d)", order.Id, order.Amount)
	return false, nil
}

.NET example: subscribing with ASP.NET Core

// Unwrap incoming CloudEvent envelopes so the handler binds to the Order
// payload (UseCloudEvents comes from the Dapr.AspNetCore package)
app.UseCloudEvents();

// Subscription endpoint Dapr calls at startup to discover topic routes
app.MapGet("/dapr/subscribe", () =>
{
    return new[] {
        new {
            pubsubname = "pubsub",
            topic = "orders",
            route = "/orders"
        }
    };
});

app.MapPost("/orders", (Order order) =>
{
    Console.WriteLine($"Received order: {order.Id} (amount: ${order.Amount})");
    return Results.Ok();
});

Dapr handles:

  • CloudEvent envelopes
  • Delivery semantics
  • Retries
  • Consumer groups (where supported)

Your application only handles the business logic.

Switching Brokers Without Code Changes

To switch from Redis to Kafka, RabbitMQ, or Azure Service Bus, only the component configuration changes.

Example: Kafka pub/sub component

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: pubsub
spec:
  type: pubsub.kafka
  version: v1
  metadata:
    - name: brokers
      value: localhost:9092

The application code remains unchanged.

This makes it easy to:

  • Use lightweight brokers locally
  • Use managed services in production
  • Migrate between brokers incrementally

Delivery, Retries, and Reliability

Dapr handles:

  • At‑least‑once delivery
  • Automatic retries
  • Dead‑letter topics (where supported)
  • Backoff and resiliency policies
  • CloudEvent formatting
  • Consumer group coordination (for Kafka, Service Bus, etc.)

Applications simply acknowledge messages by returning a successful HTTP response.

This keeps messaging logic simple and consistent across services.
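Retry behavior itself can be tuned declaratively through Dapr's resiliency spec rather than in code. A sketch (the policy name is illustrative) that applies exponential backoff to inbound deliveries on the pubsub component:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Resiliency
metadata:
  name: pubsub-resiliency
spec:
  policies:
    retries:
      pubsubRetry:
        policy: exponential
        maxInterval: 15s
        maxRetries: 5
  targets:
    components:
      pubsub:
        inbound:
          retry: pubsubRetry
```

Like pub/sub components, this lives in configuration, so retry tuning never leaks into application code.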

Why This Matters in Practice

Using Dapr for pub/sub enables:

  • Loose coupling between producers and consumers
  • Polyglot messaging across languages
  • Infrastructure portability
  • Simpler failure handling
  • Cleaner codebases
  • Faster onboarding

It also encourages event‑driven designs without forcing teams to become messaging experts.

Trade‑offs to Consider

Dapr’s pub/sub abstraction is intentionally opinionated.

It works best when:

  • Events are immutable
  • Consumers are idempotent
  • Message ordering is not critical
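Because delivery is at least once, idempotent consumers matter in practice: the same event may arrive twice, and processing it twice must be harmless. One common approach is to deduplicate on the CloudEvent ID. A minimal in‑memory sketch in Go (type and function names are illustrative; a real service would keep seen IDs in a shared store, such as Dapr state, so redeliveries across replicas are also caught):

```go
package main

import (
	"fmt"
	"sync"
)

// seenEvents is a minimal in-memory dedupe store keyed by event ID.
type seenEvents struct {
	mu  sync.Mutex
	ids map[string]bool
}

func newSeenEvents() *seenEvents {
	return &seenEvents{ids: make(map[string]bool)}
}

// ProcessOnce runs fn only the first time an event ID is seen and reports
// whether fn ran. A redelivered event becomes a harmless no-op.
func (s *seenEvents) ProcessOnce(eventID string, fn func()) bool {
	s.mu.Lock()
	defer s.mu.Unlock()
	if s.ids[eventID] {
		return false
	}
	s.ids[eventID] = true
	fn()
	return true
}

func main() {
	store := newSeenEvents()
	handled := 0
	// "evt-1" is delivered twice, simulating an at-least-once redelivery.
	for _, id := range []string{"evt-1", "evt-1", "evt-2"} {
		store.ProcessOnce(id, func() { handled++ })
	}
	fmt.Println(handled)
}
```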

It may not be suitable for:

  • Highly broker‑specific features
  • Strict ordering guarantees
  • Low‑level stream processing
  • Partition‑aware workloads

In practice, many teams use Dapr for most messaging and fall back to native SDKs only when necessary.

What’s Next

With state management and pub/sub in place, we can start integrating external systems without SDKs.

In the next post, we’ll explore Bindings and Storage, including:

  • Writing to S3 and Azure Blob Storage
  • Triggering services from queues or blobs
  • Keeping infrastructure concerns out of application code

This is where Dapr starts to feel like a platform, not just a library.