Architecture, Cloud Native, Dapr Series, Platform Engineering

Part 7 – Putting It All Together: A Real‑World Service with Dapr

Over the past six posts, we’ve explored Dapr’s core building blocks one by one.

Individually, each building block simplifies a specific problem. Together, they form a powerful pattern for building distributed systems where application code stays focused on business logic, not infrastructure.

In this final post, we’ll put everything together by walking through a simple but realistic service that uses multiple Dapr capabilities in a single workflow.

The Scenario

We’ll build a service that:

  1. Accepts an order
  2. Stores order state
  3. Publishes an event
  4. Writes an order receipt to object storage

The service will use:

  • State management for persistence
  • Pub/Sub for eventing
  • Bindings for storage
  • Observability for tracing and metrics

And importantly:

There will be no Redis, Kafka, or cloud storage SDKs in the application code.

Companion Repository

The full implementation of this scenario, including the Go and .NET services, Dapr components, and local‑first development setup, is available in the companion GitHub repository: dapr-by-example.

You can clone it and follow along as you read, or use it as a reference architecture for your own Dapr‑enabled services.

High‑Level Architecture

At runtime, the flow looks like this:

Client
  ↓
Order Service
  ↓
Dapr Sidecar
  ├─ State Store (Redis / Postgres)
  ├─ Pub/Sub (Kafka / RabbitMQ / Service Bus)
  └─ Storage (S3 / Azure Blob)

Your application talks only to Dapr.
Dapr talks to the infrastructure.

This separation is what makes the system portable, testable, and easy to evolve.

The Order Model

We’ll use a simple order model shared across examples.

{
  "orderId": "order-123",
  "amount": 100
}

Step 1: Accepting an Order

Go example

type Order struct {
    OrderID string `json:"orderId"`
    Amount  int    `json:"amount"`
}

func createOrder(w http.ResponseWriter, r *http.Request) {
    var order Order
    if err := json.NewDecoder(r.Body).Decode(&order); err != nil {
        http.Error(w, "invalid order payload", http.StatusBadRequest)
        return
    }

    // Each step returns an error; a real service would handle failures
    // here (retry, compensate, or surface them to the caller).
    saveOrder(order)
    publishOrder(order)
    storeReceipt(order)

    w.WriteHeader(http.StatusAccepted)
}

The handler coordinates the workflow.

All infrastructure interactions happen through Dapr.

.NET example

app.MapPost("/orders", async (Order order, DaprClient dapr) =>
{
    await dapr.SaveStateAsync("statestore", order.OrderId, order);
    await dapr.PublishEventAsync("pubsub", "orders", order);
    await dapr.InvokeBindingAsync(
        "storage",
        "create",
        Encoding.UTF8.GetBytes($"Order {order.OrderId}")
    );

    return Results.Accepted();
});

Step 2: Storing Order State

Go

func saveOrder(order Order) error {
    state := []map[string]interface{}{
        {
            "key":   order.OrderID,
            "value": order,
        },
    }

    body, err := json.Marshal(state)
    if err != nil {
        return err
    }

    resp, err := http.Post(
        "http://localhost:3500/v1.0/state/statestore",
        "application/json",
        bytes.NewBuffer(body),
    )
    if err != nil {
        return err
    }
    defer resp.Body.Close()

    return nil
}

.NET

await dapr.SaveStateAsync("statestore", order.OrderId, order);

The backing store can be Redis, Postgres, or something else; the code doesn’t care.

Step 3: Publishing an Event

Go

func publishOrder(order Order) error {
    body, err := json.Marshal(order)
    if err != nil {
        return err
    }

    resp, err := http.Post(
        "http://localhost:3500/v1.0/publish/pubsub/orders",
        "application/json",
        bytes.NewBuffer(body),
    )
    if err != nil {
        return err
    }
    defer resp.Body.Close()

    return nil
}

.NET

await dapr.PublishEventAsync("pubsub", "orders", order);

Consumers can subscribe to this event without the order service knowing who they are.

Dapr handles CloudEvents, retries, and delivery semantics.
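On the consuming side, a subscriber receives the message wrapped in a CloudEvents envelope, with the original payload under the data field. A minimal sketch of unwrapping it (the envelope below is illustrative and trimmed to the fields that matter here):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Dapr wraps published messages in a CloudEvents envelope; the original
// payload sits under the "data" field.
type orderEvent struct {
	Data struct {
		OrderID string `json:"orderId"`
		Amount  int    `json:"amount"`
	} `json:"data"`
}

// decodeOrderEvent extracts the order from a CloudEvents envelope.
func decodeOrderEvent(raw []byte) (string, int, error) {
	var event orderEvent
	if err := json.Unmarshal(raw, &event); err != nil {
		return "", 0, err
	}
	return event.Data.OrderID, event.Data.Amount, nil
}

func main() {
	// An illustrative (trimmed) envelope as a subscriber would receive it.
	envelope := []byte(`{"specversion":"1.0","data":{"orderId":"order-123","amount":100}}`)

	id, amount, err := decodeOrderEvent(envelope)
	if err != nil {
		panic(err)
	}
	fmt.Println(id, amount) // prints: order-123 100
}
```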

Step 4: Writing to Object Storage

Go

func storeReceipt(order Order) error {
    payload := map[string]interface{}{
        "operation": "create",
        // Use a string here: json.Marshal encodes []byte as base64,
        // which would be stored literally.
        "data": "Order receipt",
        "metadata": map[string]string{
            "blobName": order.OrderID + ".txt",
        },
    }

    body, err := json.Marshal(payload)
    if err != nil {
        return err
    }

    resp, err := http.Post(
        "http://localhost:3500/v1.0/bindings/storage",
        "application/json",
        bytes.NewBuffer(body),
    )
    if err != nil {
        return err
    }
    defer resp.Body.Close()

    return nil
}

.NET

await dapr.InvokeBindingAsync(
    "storage",
    "create",
    Encoding.UTF8.GetBytes("Order receipt"),
    new Dictionary<string, string>
    {
        ["blobName"] = $"{order.OrderId}.txt"
    }
);

This works with S3, Azure Blob Storage, or any supported provider.

What Changed Compared to a Traditional Approach?

Notice what’s missing from the application code:

  • No Redis client
  • No Kafka or Service Bus SDK
  • No cloud storage SDK
  • No connection strings
  • No retry or backoff logic

All of that lives in Dapr components and configuration.

Your application code stays focused on the workflow, not the plumbing.

Why This Matters in Real Systems

This approach enables:

Infrastructure portability

Swap Redis for Postgres, Kafka for Service Bus, or S3 for Azure Blob without code changes.

Polyglot services

Go and .NET services use the same APIs and patterns.

Cleaner boundaries

Application code focuses on business logic.

Incremental evolution

Infrastructure decisions can change independently of services.

Consistent observability

Tracing and metrics flow through Dapr automatically.
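As a concrete example, tracing is enabled through a Dapr Configuration resource rather than application code. A minimal sketch, assuming a local Zipkin collector on its default port:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: tracing
spec:
  tracing:
    samplingRate: "1"
    zipkin:
      endpointAddress: http://localhost:9411/api/v2/spans
```

The services themselves emit nothing Zipkin‑specific; the sidecars do the work.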

When This Pattern Works Best

This style of architecture works particularly well when:

  • Services own their own state
  • Systems are event‑driven
  • Teams want to avoid vendor lock‑in
  • Multiple languages are in use

It’s less useful for:

  • Query‑heavy, relational workloads
  • Highly specialised broker features
  • Single‑service applications

Final Thoughts

Dapr is not a silver bullet, and it doesn’t remove the need to understand your infrastructure. What it does provide is a consistent, portable abstraction over the parts of distributed systems that are otherwise repetitive and error‑prone.

Used thoughtfully, Dapr can significantly reduce the amount of glue code in your systems and make them easier to evolve over time.

If you want to see how this looks in Kubernetes, the Appendix covers some real‑world manifests.

Series Recap

You now have everything you need to build a real‑world Dapr‑enabled service from scratch.


Part 2 – Running Dapr Locally: Setup, Run, and Debug Your First Service

In Part 1, we explored what Dapr is and why it exists. Now it’s time to make it real. Before you can use state management, pub/sub, or any other building block, you need a smooth local development workflow, one that feels natural, fast, and familiar.

Dapr is often associated with Kubernetes and cloud deployments, but most development happens on a laptop. If Dapr doesn’t fit cleanly into your inner loop, it won’t be adopted at all. This post focuses on exactly that: running and debugging Dapr locally, using the same workflow you’d expect for any other service.

What “Running Dapr Locally” Actually Means

Running Dapr locally does not mean:

  • Running Kubernetes
  • Deploying to the cloud
  • Learning a new development model

It means:

  • Running your application as a normal process
  • Running Dapr as a sidecar alongside it
  • Using local infrastructure (or containers) for dependencies

Dapr was designed for fast, iterative development, and that’s what we’ll focus on here.

Installing Dapr Locally

Dapr consists of two main parts:

  • The Dapr CLI
  • The Dapr runtime

Once the CLI is installed, initialising Dapr locally is a one‑time step:

dapr init

This sets up:

  • The Dapr runtime
  • A local Redis instance (used by default for state and pub/sub)
  • The placement service (used only for actors)

You don’t need to understand all of these yet. The important part is: Dapr now has everything it needs to run locally.

Note: In local mode, Dapr loads components at startup and does not hot‑reload them. In Kubernetes, components can be updated dynamically.

Your First Local Dapr App

At its simplest, running an app with Dapr looks like this:

.NET example

dapr run \
  --app-id myapp \
  --app-port 8080 \
  --dapr-http-port 3500 \
  -- dotnet run

Or for Go:

Go example

dapr run \
  --app-id myapp \
  --app-port 8080 \
  --dapr-http-port 3500 \
  -- go run main.go

What’s happening here:

  • Your application runs exactly as it normally would
  • Dapr starts a sidecar process alongside it
  • Dapr listens on port 3500
  • Your app listens on its own port (e.g. 8080)

From your application’s point of view, nothing special is happening, and that’s the point.

Understanding the Local Architecture

Locally, the architecture looks like this:

Your App (8080)
      ↓
Dapr Sidecar (3500)
      ↓
Local Infrastructure (Redis, etc.)

Your application:

  • Receives HTTP requests as usual
  • Calls Dapr via HTTP or gRPC when it needs state, pub/sub, or bindings

Dapr:

  • Handles communication with infrastructure
  • Manages retries, timeouts, and serialisation
  • Emits logs and metrics independently

This separation is key to understanding how Dapr fits into your workflow.

Adding Components Locally

Dapr integrations are configured using components, which are simple YAML files.

Locally, components are usually placed in a components/ directory:

components/
└── statestore.yaml
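A minimal statestore.yaml for the local Redis instance that dapr init provides might look like this (the host and empty password are the local defaults):

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore
spec:
  type: state.redis
  version: v1
  metadata:
  - name: redisHost
    value: localhost:6379
  - name: redisPassword
    value: ""
```

The metadata.name here is what application code refers to; swapping the spec.type later doesn’t touch the application.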

When you run Dapr, you point it at this directory:

dapr run \
  --app-id myapp \
  --app-port 8080 \
  --components-path ./components \
  -- dotnet run

This mirrors how Dapr is configured in production: the same components, the same structure, just running locally.

Note: If you don’t specify a components path, Dapr uses the default directory at ~/.dapr/components.

Debugging with Dapr

This is where Dapr fits surprisingly well into normal development workflows.

Debugging the application

Your application runs as a normal process:

  • Attach a debugger
  • Set breakpoints
  • Step through code
  • Inspect variables

Nothing about Dapr changes this.

Debugging Dapr itself

Dapr runs as a separate process, with its own logs.

Useful commands include:

dapr list
dapr logs --app-id myapp

This separation makes it easier to answer an important question:

“Is this a bug in my application, or a configuration/infrastructure issue?”

Common Local Pitfalls

A few things that commonly trip people up:

Port conflicts

Dapr needs its own HTTP and gRPC ports.

Forgetting to restart Dapr

Component changes require restarting the sidecar.

Confusing app logs with Dapr logs

They are separate processes, check both.

Missing components path

If Dapr can’t find your components, integrations won’t work.

Once you understand these, local development becomes predictable and fast.

Why This Matters for the Rest of the Series

Everything else in this series builds on this local setup:

  • State management
  • Pub/Sub
  • Bindings and storage
  • End‑to‑end workflows

The same dapr run workflow applies everywhere. Once you’re comfortable running and debugging Dapr locally, the rest of the building blocks feel much less intimidating.

What’s Next

Now that we can run and debug Dapr locally, we can start using it for real work.

In the next post, we’ll look at State Management with Dapr, using Redis and Postgres, all running locally, using the setup described here.