Architecture, Dapr Series, Observability, Platform Engineering

Part 6 – Observability with Dapr: Tracing, Metrics, and Debugging Without the Boilerplate

In Part 5, we explored how Dapr bindings let you integrate with external systems like storage and SaaS APIs without pulling cloud‑specific SDKs into your code. At this point, your service can store state, publish events, and interact with external systems, which means it’s time to address one of the hardest parts of distributed systems: observability.

Logs alone aren’t enough once requests cross service boundaries. Tracing is difficult to retrofit. Metrics often depend on vendor‑specific SDKs. And in polyglot systems, consistency becomes almost impossible.

One of the most under‑appreciated aspects of Dapr is that it provides consistent, automatic observability across all building blocks, without requiring instrumentation in your application code.

This post explains what Dapr gives you out of the box, how tracing and metrics work, and why these signals matter long before you reach production.

The Observability Problem in Distributed Systems

In a typical microservice architecture:

  • Requests flow through multiple services
  • State is stored externally
  • Events are published asynchronously
  • Failures can occur at many layers

Without good observability, answering simple questions becomes difficult:

  • Where did this request fail?
  • Was it a timeout or a logic error?
  • Which dependency is slow?
  • Did the message get retried?

Traditionally, each service and SDK needs to be instrumented manually. Over time, this leads to inconsistent signals and duplicated effort.

What Dapr Does Automatically

Dapr is instrumented internally using OpenTelemetry. This means that as soon as you start using Dapr building blocks, you get:

  • Distributed tracing across services
  • Metrics for requests, latency, and errors
  • Context propagation across service boundaries
  • Consistent instrumentation across languages
  • Spans for both inbound and outbound calls

Dapr emits:

  • OTLP‑compatible traces
  • Prometheus‑scrapable metrics
  • Structured logs (JSON in Kubernetes)

Crucially, this happens without adding observability code to your application.

Your application focuses on business logic. Dapr emits the infrastructure signals.
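Enabling this is configuration, not code. A minimal sketch of a Dapr Configuration resource that turns on tracing (the endpoint address assumes a local Zipkin instance):

```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: tracing
spec:
  tracing:
    samplingRate: "1"   # sample every request; lower this in production
    zipkin:
      endpointAddress: "http://localhost:9411/api/v2/spans"
```

Reference it with `dapr run --config` locally, or apply it as a Configuration resource in Kubernetes, and every sidecar using it starts exporting spans.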

Tracing a Request End‑to‑End

Consider a simple workflow:

  1. An HTTP request hits a service
  2. State is written using Dapr
  3. An event is published
  4. A storage binding is invoked

From Dapr’s perspective, this is a single trace with multiple spans:

  • Application request
  • State store interaction
  • Pub/Sub publish
  • Binding invocation

Each span is clearly attributed to either:

  • Your application
  • The Dapr sidecar
  • The external dependency

This separation makes it much easier to understand where time is being spent and where failures occur.

Dapr also records:

  • retries
  • transient failures
  • backoff behaviour

…as part of the trace, details that most SDKs would require you to instrument manually.

Note: Dapr uses CloudEvents for pub/sub and input bindings, and automatically propagates trace context across these boundaries.
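As an illustration, a published event wrapped in Dapr's CloudEvents envelope carries the W3C trace context alongside the payload. The field values below are invented, and the traceparent string is the canonical example from the W3C Trace Context spec:

```
{
  "specversion": "1.0",
  "type": "com.dapr.event.sent",
  "source": "orderapp",
  "id": "3a56c2a0-7f4a-4c9e-9d2a-1f0b6c1e2d3f",
  "datacontenttype": "application/json",
  "traceparent": "00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01",
  "data": { "id": "order-123", "amount": 100 }
}
```

Because the trace context travels with the event, the subscriber's spans join the same trace as the publisher's.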

Viewing Traces in Zipkin (Local Mode)

When running Dapr locally, Zipkin is available automatically at:

http://localhost:9411

As soon as you send a request through your service, Zipkin will show a trace containing:

  • the incoming HTTP request
  • the state store write
  • the pub/sub publish
  • the pub/sub delivery
  • the storage binding invocation

Zipkin running locally with Dapr. This trace shows the entire order workflow flowing through the sidecar, making it easy to spot latency, retries, and failures before you ever deploy to Kubernetes.

This gives you immediate visibility into latency, retries, and failures, without adding a single line of tracing code.

Using Jaeger v2 with Dapr (Production)

Zipkin works well for local debugging, but some teams choose to use OpenTelemetry collectors and Jaeger v2 in production for deeper analysis, scalable retention, and more flexible sampling. Because Dapr emits OTLP‑compatible traces, Jaeger v2 can be added without modifying your services.
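Switching exporters is again configuration, not application code. A sketch of the tracing section pointing at an OpenTelemetry Collector or Jaeger v2 OTLP endpoint (the address, sampling rate, and protocol are assumptions for a typical local setup):

```yaml
spec:
  tracing:
    samplingRate: "0.1"   # sample 10% of requests
    otel:
      endpointAddress: "localhost:4317"   # OTLP gRPC endpoint
      isSecure: false
      protocol: grpc
```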

A Jaeger v2 trace for the same workflow typically looks like this:

A Jaeger v2 trace of the same workflow. Dapr emits OTLP‑compatible spans, so the exact same application code used in local development can feed a production‑grade OpenTelemetry pipeline.

For a deeper look at Jaeger v2 and how it fits into modern OpenTelemetry pipelines, see my OpenTelemetry blog series, which walks through the architecture, configuration, and end‑to‑end workflows in detail.

This gives you a clear path from “local debugging” → “production‑grade observability”.

Observability in Local Development

Observability isn’t just a production concern.

Running Dapr locally gives you immediate insight into:

  • Failed state operations
  • Pub/Sub delivery issues
  • Retry behaviour
  • Misconfigured components

Because Dapr runs as a separate process, you can:

  • Debug your application normally
  • Inspect Dapr logs independently
  • See exactly which calls succeeded or failed
  • View traces and metrics without adding instrumentation

This makes it much easier to answer:

“Is this a bug in my code, or a configuration issue?”

Note: in local mode, Dapr emits the same observability signals as in Kubernetes, but exporters may differ depending on your configuration.

Metrics That Matter

Dapr emits metrics for:

  • Request counts
  • Latency
  • Error rates
  • Component‑level interactions
  • Sidecar health and runtime behaviour

These metrics are:

  • Consistent across languages
  • Independent of application frameworks
  • Aligned with Dapr building blocks
  • Exported in Prometheus format by default

For platform teams, this provides a common baseline.

For application teams, it removes the need to reinvent instrumentation.
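Each sidecar exposes its metrics over HTTP (port 9090 by default), so a plain Prometheus scrape job is enough to collect them. A minimal sketch for a single locally running sidecar:

```yaml
scrape_configs:
  - job_name: "dapr"
    static_configs:
      - targets: ["localhost:9090"]   # default Dapr metrics port
```

From there you can graph counters and histograms with names like dapr_http_server_request_count, broken down per app and per building block.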

Why This Changes How You Build Systems

With Dapr, observability is no longer something you bolt on later.

Instead:

  • Tracing is present from day one
  • Metrics are emitted automatically
  • Context flows across services without manual wiring

This encourages better system design:

  • Clear service boundaries
  • Explicit ownership of state
  • Event‑driven workflows that are observable by default

It also reduces the cognitive load on developers, who no longer need to think about observability at every integration point.

What Dapr Doesn’t Do for You

Dapr provides signals, not answers.

It does not:

  • Design dashboards
  • Define alert thresholds
  • Replace domain‑specific logging
  • Eliminate the need to understand your system

Observability still requires thought and intent; Dapr simply removes much of the boilerplate.

What’s Next

In the final post, we’ll put everything together:

  • A service that stores state
  • Publishes events
  • Writes to external storage
  • Emits observability signals

All without infrastructure‑specific code.

This is where Dapr stops being a set of features and starts looking like a platform.

Architecture, Dapr Series, Platform Engineering

Part 5 – Integrating External Systems with Dapr Bindings and Storage

In Part 4, we explored how Dapr simplifies event‑driven communication through its pub/sub building block. Now we’ll look at another common integration point in distributed systems: external services such as object storage, queues, SaaS APIs, and cloud‑native capabilities.

Traditionally, integrating with these systems means importing vendor‑specific SDKs, managing credentials, handling retries, and writing environment‑specific configuration. Dapr’s bindings building block provides a consistent way to interact with external systems using simple HTTP or gRPC calls, no vendor-specific SDKs required.

This post focuses on how bindings work, how to use them for storage, and why they’re a powerful tool for keeping infrastructure concerns out of your application code.

What Are Dapr Bindings?

Bindings allow applications to:

  • Invoke external systems (output bindings)
  • Be triggered by external systems (input bindings)

From the application’s perspective, bindings are just HTTP or gRPC calls. Dapr handles the integration with the external service.
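For example, invoking an output binding named storage without any SDK is a single POST to the sidecar's HTTP API (port 3500 is the default sidecar port; the component name and payload are illustrative):

```
POST http://localhost:3500/v1.0/bindings/storage
Content-Type: application/json

{
  "operation": "create",
  "data": "Order receipt for order-123",
  "metadata": { "blobName": "order-123.txt" }
}
```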

Bindings are commonly used for:

  • Object storage (S3, Azure Blob Storage)
  • Queues and streams
  • Webhooks and SaaS APIs
  • File systems and cloud services

They’re especially useful when you want to avoid pulling cloud‑specific SDKs into your codebase.

Note: Bindings intentionally expose only a subset of provider operations, typically create, get, delete, or provider‑specific equivalents. They are not full SDK replacements.

Storage as a Binding

Object storage is a great example of where bindings shine. Many applications need to:

  • Upload a file
  • Download a file
  • Delete a file

They don’t need to know:

  • Which cloud provider is being used
  • How authentication works
  • How retries or timeouts are handled
  • How provider‑specific APIs differ

Dapr lets you treat storage as an external capability, configured outside your application.

Architecture Overview

At runtime, storage access looks like this:

Application → Dapr Binding API → Storage Provider (S3 / Azure Blob / local)

The application talks only to the local Dapr sidecar. Dapr handles the provider‑specific integration, authentication, retries, and serialisation.

Configuring a Storage Binding

Bindings are defined using Dapr components.

Example: Azure Blob Storage binding

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: storage
spec:
  type: bindings.azure.blobstorage
  version: v1
  metadata:
    - name: accountName
      value: myaccount
    - name: accountKey
      value: mykey
    - name: containerName
      value: uploads

A few important notes:

  • The component name (storage) is what applications reference
  • Secrets should be stored in a secret store component, not inline (shown inline here for illustration only)
  • You can restrict bindings to specific apps using scopes
  • In local mode, components load at startup (no hot‑reload)
  • In Kubernetes, components can be updated dynamically
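Scoping is declared on the component manifest itself. A sketch restricting the storage component to a single app (orderapp is an assumed app id):

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: storage
spec:
  type: bindings.azure.blobstorage
  version: v1
scopes:
  - orderapp   # only this app id can use the component
```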

Writing to Storage Using Dapr

Go example: uploading a file

package main

import (
	"context"
	"encoding/base64"
	"log"

	"github.com/dapr/go-sdk/client"
)

type Order struct {
	Id     string `json:"id"`
	Amount int    `json:"amount"`
}

func main() {
	ctx := context.Background()
	daprClient, err := client.NewClient()
	if err != nil {
		log.Fatal(err)
	}
	defer daprClient.Close()

	order := Order{Id: "order-123", Amount: 100}
	if err := storeReceipt(ctx, daprClient, order); err != nil {
		log.Fatal(err)
	}
}

func storeReceipt(ctx context.Context, daprClient client.Client, order Order) error {
	data := []byte("Order receipt for " + order.Id)
	req := &client.InvokeBindingRequest{
		Name:      "storage",
		Operation: "create",
		// Base64-encode the payload so providers configured with
		// decodeBase64 store the raw bytes.
		Data: []byte(base64.StdEncoding.EncodeToString(data)),
		// Set the object name for each supported provider:
		// blobName (Azure Blob), key (S3), fileName (local).
		Metadata: map[string]string{
			"blobName": order.Id + ".txt",
			"key":      order.Id + ".txt",
			"fileName": order.Id + ".txt",
		},
	}

	_, err := daprClient.InvokeBinding(ctx, req)
	return err
}

There is no cloud‑specific SDK, no credentials in code, and no storage‑specific logic.

.NET example: uploading a file

using System.Text;
using Dapr.Client;

var client = new DaprClientBuilder().Build();

var order = new Order("order-123", 100);

var metadata = new Dictionary<string, string>
{
    ["blobName"] = $"{order.Id}.txt",
    ["key"] = $"{order.Id}.txt",
    ["fileName"] = $"{order.Id}.txt"
};

await client.InvokeBindingAsync(
    "storage",
    "create",
    Encoding.UTF8.GetBytes($"Order receipt for {order.Id}"),
    metadata
);

record Order(string Id, int Amount);

The same code works regardless of whether the backing store is Azure Blob Storage or S3.

Switching to S3 Without Code Changes

To switch from Azure Blob Storage to S3, only the component configuration changes.

Example: AWS S3 binding

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: storage
spec:
  type: bindings.aws.s3
  version: v1
  metadata:
    - name: bucket
      value: my-bucket
    - name: region
      value: eu-west-1
    - name: endpoint
      value: s3.eu-west-1.amazonaws.com
    - name: accessKey
      value: mykey
    - name: secretKey
      value: <add secret key>
    - name: decodeBase64
      value: "true"

Note: Secrets should be stored in a secret store component, not inline (shown inline here for illustration only)

The application code remains unchanged.

This makes it easy to:

  • Use local or cloud‑specific storage per environment
  • Migrate between providers
  • Keep application logic provider‑agnostic

Input Bindings: Triggering Applications

Bindings can also trigger applications.

For example:

  • A file uploaded to storage
  • A message arriving on a queue
  • An event from a SaaS system

Dapr delivers these events to your application as HTTP requests, allowing you to react without polling or SDKs.

A few nuances:

  • Input bindings follow at‑least‑once delivery semantics
  • Your app acknowledges processing by returning a 200‑level response
  • Dapr handles retries and backoff
  • Observability is automatic
  • Events are delivered as CloudEvents

This is particularly useful for building event‑driven workflows around storage and integration points.

Why This Matters in Practice

Using Dapr bindings for storage and integrations enables:

  • SDK‑free integrations – No cloud‑specific libraries in application code.
  • Infrastructure portability – Switch providers via configuration.
  • Cleaner security boundaries – Credentials live in components, not code.
  • Simpler local development – Use local or mocked services without code changes.
  • Consistent patterns across languages

Bindings help teams avoid the “SDK sprawl” that often creeps into distributed systems.

Trade‑offs to Consider

Bindings are intentionally generic. They work best for:

  • Simple operations (create, read, delete)
  • Event‑driven workflows
  • Integration points at system boundaries

They are not ideal for:

  • Complex provider‑specific features
  • High‑performance bulk operations
  • Advanced querying or filtering
  • Deep control over provider‑specific behaviour

In practice, many teams use bindings for integration and SDKs only where deeper control is required.

What’s Next

At this point, we’ve covered the core Dapr building blocks most teams adopt first:

  • State management
  • Pub/Sub
  • Bindings and storage

In the next post, we’ll explore Observability with Dapr, including:

  • What signals Dapr emits by default
  • How tracing and metrics work across building blocks
  • Why observability is already present, even if you haven’t configured anything explicitly

By the time you reach production, these signals are often the difference between guessing and knowing.