
Part 7 – Putting It All Together: A Real‑World Service with Dapr

Over the past six posts, we’ve explored Dapr’s core building blocks one by one.

Individually, each building block simplifies a specific problem. Together, they form a powerful pattern for building distributed systems where application code stays focused on business logic, not infrastructure.

In this final post, we’ll put everything together by walking through a simple but realistic service that uses multiple Dapr capabilities in a single workflow.

The Scenario

We’ll build a service that:

  1. Accepts an order
  2. Stores order state
  3. Publishes an event
  4. Writes an order receipt to object storage

The service will use:

  • State management for persistence
  • Pub/Sub for eventing
  • Bindings for storage
  • Observability for tracing and metrics

And importantly:

There will be no Redis, Kafka, or cloud storage SDKs in the application code.

Companion Repository

The full implementation of this scenario, including the Go and .NET services, Dapr components, and local‑first development setup, is available in the companion GitHub repository: dapr-by-example.

You can clone it and follow along as you read, or use it as a reference architecture for your own Dapr‑enabled services.

High‑Level Architecture

At runtime, the flow looks like this:

Client
  ↓
Order Service
  ↓
Dapr Sidecar
  ├─ State Store (Redis / Postgres)
  ├─ Pub/Sub (Kafka / RabbitMQ / Service Bus)
  └─ Storage (S3 / Azure Blob)

Your application talks only to Dapr.
Dapr talks to the infrastructure.

This separation is what makes the system portable, testable, and easy to evolve.

The Order Model

We’ll use a simple order model shared across examples.

{
  "orderId": "order-123",
  "amount": 100
}

Step 1: Accepting an Order

Go example

type Order struct {
    OrderID string `json:"orderId"`
    Amount  int    `json:"amount"`
}

func createOrder(w http.ResponseWriter, r *http.Request) {
    var order Order
    if err := json.NewDecoder(r.Body).Decode(&order); err != nil {
        http.Error(w, "invalid order payload", http.StatusBadRequest)
        return
    }

    // Each step goes through the Dapr sidecar, never straight to infrastructure.
    for _, step := range []func(Order) error{saveOrder, publishOrder, storeReceipt} {
        if err := step(order); err != nil {
            http.Error(w, err.Error(), http.StatusInternalServerError)
            return
        }
    }

    w.WriteHeader(http.StatusAccepted)
}

The handler coordinates the workflow.

All infrastructure interactions happen through Dapr.

.NET example

app.MapPost("/orders", async (Order order, DaprClient dapr) =>
{
    await dapr.SaveStateAsync("statestore", order.OrderId, order);
    await dapr.PublishEventAsync("pubsub", "orders", order);
    await dapr.InvokeBindingAsync(
        "storage",
        "create",
        Encoding.UTF8.GetBytes($"Order {order.OrderId}")
    );

    return Results.Accepted();
});

Step 2: Storing Order State

Go

func saveOrder(order Order) error {
    // Dapr's state API accepts a batch of key/value pairs.
    state := []map[string]interface{}{
        {
            "key":   order.OrderID,
            "value": order,
        },
    }

    body, err := json.Marshal(state)
    if err != nil {
        return err
    }

    resp, err := http.Post(
        "http://localhost:3500/v1.0/state/statestore",
        "application/json",
        bytes.NewBuffer(body),
    )
    if err != nil {
        return err
    }
    defer resp.Body.Close()

    if resp.StatusCode >= 300 {
        return fmt.Errorf("state store returned %s", resp.Status)
    }
    return nil
}

.NET

await dapr.SaveStateAsync("statestore", order.OrderId, order);

The backing store can be Redis, Postgres, or something else; the code doesn’t care.

Step 3: Publishing an Event

Go

func publishOrder(order Order) error {
    body, err := json.Marshal(order)
    if err != nil {
        return err
    }

    resp, err := http.Post(
        "http://localhost:3500/v1.0/publish/pubsub/orders",
        "application/json",
        bytes.NewBuffer(body),
    )
    if err != nil {
        return err
    }
    defer resp.Body.Close()

    if resp.StatusCode >= 300 {
        return fmt.Errorf("publish returned %s", resp.Status)
    }
    return nil
}

.NET

await dapr.PublishEventAsync("pubsub", "orders", order);

Consumers can subscribe to this event without the order service knowing who they are.

Dapr handles CloudEvents, retries, and delivery semantics.

Step 4: Writing to Object Storage

Go

func storeReceipt(order Order) error {
    payload := map[string]interface{}{
        "operation": "create",
        // Use a string here: a []byte would be base64-encoded by json.Marshal.
        "data": "Order receipt",
        "metadata": map[string]string{
            "blobName": order.OrderID + ".txt",
        },
    }

    body, err := json.Marshal(payload)
    if err != nil {
        return err
    }

    resp, err := http.Post(
        "http://localhost:3500/v1.0/bindings/storage",
        "application/json",
        bytes.NewBuffer(body),
    )
    if err != nil {
        return err
    }
    defer resp.Body.Close()

    if resp.StatusCode >= 300 {
        return fmt.Errorf("storage binding returned %s", resp.Status)
    }
    return nil
}

.NET

await dapr.InvokeBindingAsync(
    "storage",
    "create",
    Encoding.UTF8.GetBytes("Order receipt"),
    new Dictionary<string, string>
    {
        ["blobName"] = $"{order.OrderId}.txt"
    }
);

This works with S3, Azure Blob Storage, or any supported provider.

What Changed Compared to a Traditional Approach?

Notice what’s missing from the application code:

  • No Redis client
  • No Kafka or Service Bus SDK
  • No cloud storage SDK
  • No connection strings
  • No retry or backoff logic

All of that lives in Dapr components and configuration.
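For example, the `statestore` name used throughout the code maps to a component file like this one. A sketch for local development, assuming a Redis instance on the default port:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore     # the name the application code refers to
spec:
  type: state.redis
  version: v1
  metadata:
    - name: redisHost
      value: localhost:6379
    - name: redisPassword
      value: ""
```

The application never sees this file; the sidecar loads it and owns the connection.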

Your application code stays focused on the workflow, not the plumbing.

Why This Matters in Real Systems

This approach enables:

Infrastructure portability

Swap Redis for Postgres, Kafka for Service Bus, or S3 for Azure Blob without code changes.
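Concretely, moving the state store from Redis to Postgres is a component change only; the application keeps calling the same `statestore` name. A sketch (the connection string is a placeholder):

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore     # unchanged, so no code changes
spec:
  type: state.postgresql
  version: v1
  metadata:
    - name: connectionString
      value: "host=localhost user=postgres password=example port=5432 database=dapr"
```

Deploy the new component, restart the sidecar, and the service is running on Postgres.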

Polyglot services

Go and .NET services use the same APIs and patterns.

Cleaner boundaries

Application code focuses on business logic.

Incremental evolution

Infrastructure decisions can change independently of services.

Consistent observability

Tracing and metrics flow through Dapr automatically.

When This Pattern Works Best

This style of architecture works particularly well when:

  • Services own their own state
  • Systems are event‑driven
  • Teams want to avoid vendor lock‑in
  • Multiple languages are in use

It’s less useful for:

  • Query‑heavy, relational workloads
  • Highly specialised broker features
  • Single‑service applications

Final Thoughts

Dapr is not a silver bullet, and it doesn’t remove the need to understand your infrastructure. What it does provide is a consistent, portable abstraction over the parts of distributed systems that are otherwise repetitive and error‑prone.

Used thoughtfully, Dapr can significantly reduce the amount of glue code in your systems and make them easier to evolve over time.

If you want to see how this looks in Kubernetes, the Appendix covers some real‑world manifests.

Series Recap

You now have everything you need to build a real‑world Dapr‑enabled service from scratch.


Part 6 – Observability with Dapr: Tracing, Metrics, and Debugging Without the Boilerplate

In Part 5, we explored how Dapr bindings let you integrate with external systems like storage and SaaS APIs without pulling cloud‑specific SDKs into your code. At this point, your service can store state, publish events, and interact with external systems, which means it’s time to address one of the hardest parts of distributed systems: observability.

Logs alone aren’t enough once requests cross service boundaries. Tracing is difficult to retrofit. Metrics often depend on vendor‑specific SDKs. And in polyglot systems, consistency becomes almost impossible.

One of the most under‑appreciated aspects of Dapr is that it provides consistent, automatic observability across all building blocks, without requiring instrumentation in your application code.

This post explains what Dapr gives you out of the box, how tracing and metrics work, and why these signals matter long before you reach production.

The Observability Problem in Distributed Systems

In a typical microservice architecture:

  • Requests flow through multiple services
  • State is stored externally
  • Events are published asynchronously
  • Failures can occur at many layers

Without good observability, answering simple questions becomes difficult:

  • Where did this request fail?
  • Was it a timeout or a logic error?
  • Which dependency is slow?
  • Did the message get retried?

Traditionally, each service and SDK needs to be instrumented manually. Over time, this leads to inconsistent signals and duplicated effort.

What Dapr Does Automatically

Dapr is instrumented internally using OpenTelemetry. This means that as soon as you start using Dapr building blocks, you get:

  • Distributed tracing across services
  • Metrics for requests, latency, and errors
  • Context propagation across service boundaries
  • Consistent instrumentation across languages
  • Spans for both inbound and outbound calls

Dapr emits:

  • OTLP‑compatible traces
  • Prometheus‑scrapable metrics
  • Structured logs (JSON in Kubernetes)

Crucially, this happens without adding observability code to your application.

Your application emits business logic. Dapr emits infrastructure signals.

Tracing a Request End‑to‑End

Consider a simple workflow:

  1. An HTTP request hits a service
  2. State is written using Dapr
  3. An event is published
  4. A storage binding is invoked

From Dapr’s perspective, this is a single trace with multiple spans:

  • Application request
  • State store interaction
  • Pub/Sub publish
  • Binding invocation

Each span is clearly attributed to either:

  • Your application
  • The Dapr sidecar
  • The external dependency

This separation makes it much easier to understand where time is being spent and where failures occur.

Dapr also records:

  • retries
  • transient failures
  • backoff behaviour

…as part of the trace; most SDKs require manual instrumentation to capture any of this.

Note: Dapr uses CloudEvents for pub/sub and input bindings, and automatically propagates trace context across these boundaries.
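A delivered event therefore looks roughly like this on the wire. The field values are illustrative; the `traceparent` follows the W3C Trace Context format:

```json
{
  "specversion": "1.0",
  "type": "com.dapr.event.sent",
  "source": "order-service",
  "id": "5929aaac-a5e2-4ca1-859c-edfe73f11565",
  "datacontenttype": "application/json",
  "pubsubname": "pubsub",
  "topic": "orders",
  "traceparent": "00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01",
  "data": {
    "orderId": "order-123",
    "amount": 100
  }
}
```

Because the trace context rides inside the envelope, a subscriber’s spans join the publisher’s trace automatically.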

Viewing Traces in Zipkin (Local Mode)

When running Dapr locally, Zipkin is available automatically at:

http://localhost:9411

As soon as you send a request through your service, Zipkin will show a trace containing:

  • the incoming HTTP request
  • the state store write
  • the pub/sub publish
  • the pub/sub delivery
  • the storage binding invocation

Zipkin running locally with Dapr. This trace shows the entire order workflow flowing through the sidecar, making it easy to spot latency, retries, and failures before you ever deploy to Kubernetes.

This gives you immediate visibility into latency, retries, and failures, without adding a single line of tracing code.

Using Jaeger v2 with Dapr (Production)

Zipkin works well for local debugging, but some teams choose to use OpenTelemetry collectors and Jaeger v2 in production for deeper analysis, scalable retention, and more flexible sampling. Because Dapr emits OTLP‑compatible traces, Jaeger v2 can be added without modifying your services.
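Pointing Dapr at an OpenTelemetry collector (which then forwards to Jaeger v2) is again configuration, not code. A sketch of a Dapr Configuration resource, assuming a collector reachable at `otel-collector:4317`:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: tracing
spec:
  tracing:
    samplingRate: "1"       # trace every request; lower this in production
    otel:
      endpointAddress: "otel-collector:4317"
      isSecure: false
      protocol: grpc
```

Sidecars referencing this configuration ship their spans over OTLP with no application changes.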

A Jaeger v2 trace for the same workflow typically looks like this:

A Jaeger v2 trace of the same workflow. Dapr emits OTLP‑compatible spans, so the exact same application code used in local development can feed a production‑grade OpenTelemetry pipeline.

For a deeper look at Jaeger v2 and how it fits into modern OpenTelemetry pipelines, see my OpenTelemetry blog series, which walks through the architecture, configuration, and end‑to‑end workflows in detail.

This gives you a clear path from “local debugging” → “production‑grade observability”.

Observability in Local Development

Observability isn’t just a production concern.

Running Dapr locally gives you immediate insight into:

  • Failed state operations
  • Pub/Sub delivery issues
  • Retry behaviour
  • Misconfigured components

Because Dapr runs as a separate process, you can:

  • Debug your application normally
  • Inspect Dapr logs independently
  • See exactly which calls succeeded or failed
  • View traces and metrics without adding instrumentation

This makes it much easier to answer:

“Is this a bug in my code, or a configuration issue?”

Note: in local mode, Dapr emits the same observability signals as in Kubernetes, but exporters may differ depending on your configuration.

Metrics That Matter

Dapr emits metrics for:

  • Request counts
  • Latency
  • Error rates
  • Component‑level interactions
  • Sidecar health and runtime behaviour

These metrics are:

  • Consistent across languages
  • Independent of application frameworks
  • Aligned with Dapr building blocks
  • Exported in Prometheus format by default

For platform teams, this provides a common baseline.

For application teams, it removes the need to reinvent instrumentation.

Why This Changes How You Build Systems

With Dapr, observability is no longer something you bolt on later.

Instead:

  • Tracing is present from day one
  • Metrics are emitted automatically
  • Context flows across services without manual wiring

This encourages better system design:

  • Clear service boundaries
  • Explicit ownership of state
  • Event‑driven workflows that are observable by default

It also reduces the cognitive load on developers, who no longer need to think about observability at every integration point.

What Dapr Doesn’t Do for You

Dapr provides signals, not answers.

It does not:

  • Design dashboards
  • Define alert thresholds
  • Replace domain‑specific logging
  • Eliminate the need to understand your system

Observability still requires thought and intent; Dapr simply removes much of the boilerplate.

What’s Next

In the final post, we’ll put everything together:

  • A service that stores state
  • Publishes events
  • Writes to external storage
  • Emits observability signals

All without infrastructure‑specific code.

This is where Dapr stops being a set of features and starts looking like a platform.