.NET, Architecture, Cloud Native, Dapr Series, Platform Engineering

Bonus: Using Dapr with .NET Aspire for a Modern Local Development Experience

Dapr keeps infrastructure concerns out of your application code. Aspire keeps local orchestration out of your head. Put them together and you get one of the smoothest .NET development loops available today.

This bonus post is optional and .NET‑specific, but if you’re a .NET engineer, it’s worth your time. It shows how Aspire and Dapr complement each other, how they fit into a clean local development workflow, and how to wire them together in a real project.

This bonus post uses .NET Aspire 13.1 and Dapr 1.16, the current stable versions at the time of writing. Aspire 13.1 provides the new distributed application model, automatic dashboard integration, and first‑class Dapr sidecar orchestration. Dapr 1.16 provides the building blocks (state, pub/sub, bindings, secrets, observability) used throughout the examples in this repository, including the code to demonstrate running with Aspire.

How Aspire and Dapr Work Together

What Each Provides

Dapr provides infrastructure building blocks:

  • State management
  • Pub/sub
  • Bindings
  • Secrets
  • Observability
  • Service invocation

Aspire provides local application orchestration:

  • Service discovery
  • Configuration
  • Containerized dependencies
  • A unified dashboard
  • A single command to run your entire app

Once you understand what each tool provides individually, the next step is seeing how they work together in a real application.

How They Fit Together

Aspire defines your application topology.
Dapr injects infrastructure capabilities into each service.

The workflow looks like this:

  1. Aspire AppHost defines your services.
  2. Dapr sidecars run next to each service.
  3. Dapr components are loaded automatically by each sidecar when ResourcesPaths is configured.
  4. Your services use the Dapr client without caring about ports or infrastructure.

This results in a clean, local environment without the usual orchestration overhead.
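
From inside a service, step 4 looks like the minimal sketch below. It assumes the Dapr.Client NuGet package and a state store component named "statestore" in the components folder — both assumptions, not part of the AppHost example later in this post.

```csharp
// Program.cs in a service project — a minimal sketch, not the full example app.
using Dapr.Client;

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddDaprClient();

var app = builder.Build();

app.MapPost("/orders/{id}", async (string id, DaprClient dapr) =>
{
    // The client resolves the sidecar address from the DAPR_HTTP_PORT /
    // DAPR_GRPC_PORT environment variables that the orchestrator sets,
    // so there are no hard-coded ports or hostnames in application code.
    await dapr.SaveStateAsync("statestore", id, new { Status = "Received" });
    return Results.Ok();
});

app.Run();
```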

Together, Aspire and Dapr give you a unified way to run multiple services, sidecars, and infrastructure components with minimal configuration.

Now that we’ve covered how Aspire and Dapr complement each other conceptually, let’s look at a minimal example that wires them together in a real project.

A Minimal Aspire + Dapr Example

Folder structure

src/
  orderservice-dotnet/
  inventoryservice-dotnet/
  aspirehost/
components/

The components/ folder contains standard Dapr component YAML files (state store, pub/sub, etc.). Aspire mounts this folder automatically so the sidecars can load them. These YAML files match the Dapr building block examples used throughout the repository, so the same components work whether you run via Aspire or the Dapr CLI.
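
As a concrete illustration, a typical local state store component in that folder might look like this (the Redis host and password values are assumptions for a local Redis container, not prescribed by Aspire):

```yaml
# components/statestore.yaml — a standard Dapr state store component.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore
spec:
  type: state.redis
  version: v1
  metadata:
    - name: redisHost
      value: localhost:6379
    - name: redisPassword
      value: ""
```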

Step 1: Add the Dapr hosting package

In your AppHost project:

dotnet add package CommunityToolkit.Aspire.Hosting.Dapr

Step 2: Define your services with Dapr sidecars

Aspire generates strongly typed project references under the Projects namespace (e.g., Projects.orderservice_dotnet). These types are created automatically when the AppHost and services are part of the same solution.

With the project structure in place, the AppHost simply wires each service to a Dapr sidecar and mounts the shared components folder.

AppHost.cs:

using CommunityToolkit.Aspire.Hosting.Dapr;

var builder = DistributedApplication.CreateBuilder(args);

builder.AddProject<Projects.orderservice_dotnet>("orderservice")

    .WithDaprSidecar(new DaprSidecarOptions
    {
        AppId = "orderservice",
        ResourcesPaths = [Path.Combine("../..", "components")]
    });

builder.AddProject<Projects.inventoryservice_dotnet>("inventoryservice")
    .WithDaprSidecar(new DaprSidecarOptions
    {
        AppId = "inventoryservice",
        ResourcesPaths = [Path.Combine("../..", "components")]
    });


builder.Build().Run();

What this gives you:

  • Aspire launches all services defined in the AppHost
  • Aspire launches the Dapr sidecars that are configured using .WithDaprSidecar()
  • Dapr components are loaded automatically by each sidecar based on the configured ResourcesPaths
  • Everything appears in the Aspire dashboard

No manual dapr run commands.
No launch profiles.
No need to manually assign ports.
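
For contrast, this is roughly what running a single service by hand looks like without Aspire — one terminal and one command per service (the app port here is an illustrative assumption):

```shell
# One terminal per service, repeated for every service in the app:
dapr run \
  --app-id orderservice \
  --app-port 5001 \
  --resources-path ./components \
  -- dotnet run --project src/orderservice-dotnet
```

Aspire collapses all of this into a single `dotnet run` against the AppHost.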

Observability Across Both Worlds

Aspire 13.1 automatically configures OTLP endpoints and dashboard settings, so no manual environment variables are required.

The Aspire dashboard surfaces both application‑level and infrastructure‑level signals, giving you a unified view of the entire local environment.

You get two layers of visibility:

Aspire dashboard

  • Running services
  • Sidecars
  • Logs
  • Environment variables
  • Health checks

Dapr observability

  • Traces
  • Metrics
  • Component logs

Together, they give you a complete picture of both application and infrastructure behaviour.

When Aspire + Dapr Is a Great Fit

  • .NET microservices
  • Local-first development workflows
  • Teams who want a clean, discoverable environment
  • Developers who want to avoid docker-compose sprawl
  • Systems where infrastructure concerns must stay out of app code

When it’s not ideal

  • Polyglot systems
  • Environments where Aspire isn’t available
  • Teams needing full control over orchestration

Aspire can orchestrate non‑.NET services, but they won’t appear in the dashboard with the same depth of metadata, and Aspire cannot generate strongly typed project references for them.

Common Pitfalls (and How to Avoid Them)

  • Don’t mix Aspire service discovery with Dapr service invocation.
    They solve different problems.
  • Keep ports consistent when needed.
    Both Aspire and Dapr assign dynamic ports by default. If your services rely on fixed ports, you’ll need to specify them explicitly.
  • Most Dapr components don’t hot-reload.
    Restart the AppHost after changing component YAML so the sidecars pick up your changes.
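
If your services do rely on fixed ports, pinning them is a small change to the sidecar options. This is a sketch; the port values are arbitrary, and the property names reflect the CommunityToolkit DaprSidecarOptions at the time of writing:

```csharp
// Sketch: pinning sidecar ports when a service depends on fixed values.
builder.AddProject<Projects.orderservice_dotnet>("orderservice")
    .WithDaprSidecar(new DaprSidecarOptions
    {
        AppId = "orderservice",
        DaprHttpPort = 3500,   // fixed sidecar HTTP port (arbitrary example)
        DaprGrpcPort = 50001,  // fixed sidecar gRPC port (arbitrary example)
        ResourcesPaths = [Path.Combine("../..", "components")]
    });
```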

Final Thoughts

Aspire 13.1 and Dapr 1.16 work together cleanly: Aspire handles orchestration and developer experience, while Dapr provides infrastructure building blocks. Used together, they give you a fast, modern, production‑aligned development loop with almost no ceremony.

.NET, Observability, OpenTelemetry

Part 3 – Auto‑Instrumenting .NET with OpenTelemetry

In Part 2, we deployed Jaeger v2 using the OpenTelemetry Collector and exposed the Jaeger UI. Now it’s time to generate real traces without modifying application code or rebuilding container images.

This part shows how to use the OpenTelemetry Operator to inject the .NET auto‑instrumentation agent automatically. This approach is fully declarative, GitOps‑friendly, and ideal for platform teams who want consistent instrumentation across many services.

All manifests, ApplicationSets, code, and configuration used in this series are available in the companion GitHub repository.

🧠 How Operator‑Managed .NET Auto‑Instrumentation Works

The OpenTelemetry Operator can automatically:

  • Inject the .NET auto‑instrumentation agent into your pod
  • Mount the agent files
  • Set all required environment variables
  • Configure OTLP exporters
  • Apply propagators
  • Ensure consistent agent versions across workloads

This means:

  • No Dockerfile changes
  • No manual environment variables
  • No code changes
  • No per‑service configuration drift

Instrumentation becomes a cluster‑level concern, not an application‑level burden.

📦 Defining the .NET Instrumentation Resource

To enable .NET auto‑instrumentation, create an Instrumentation custom resource (CR):

apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation
metadata:
  name: auto-dotnet
  namespace: apps
spec:
  dotnet:
    image: ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-dotnet:latest

This tells the Operator:

  • Manage the lifecycle of the agent declaratively
  • Use the official .NET auto‑instrumentation agent
  • Inject it into workloads in this namespace (or those that opt‑in)

Commit this file to Git and let ArgoCD sync it.

🏗️ Instrumenting a .NET Application (No Image Changes Required)

To instrument a .NET application, you simply annotate the Deployment:

metadata:
  annotations:
    instrumentation.opentelemetry.io/inject-dotnet: "true"

That’s it.

The Operator will:

  • Inject the agent
  • Mount the instrumentation files
  • Set all required environment variables
  • Configure the OTLP exporter
  • Enrich traces with Kubernetes metadata

Your Deployment YAML stays clean and simple.

📁 Example .NET Deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  name: dev-demo-dotnet
  namespace: apps
  annotations:
    instrumentation.opentelemetry.io/inject-dotnet: "true"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dev-demo-dotnet
  template:
    metadata:
      labels:
        app: dev-demo-dotnet
    spec:
      containers:
        - name: dev-demo-dotnet
          image: demo-dotnet:latest
          ports:
            - containerPort: 8080

Notice what’s missing:

  • No agent download
  • No Dockerfile changes
  • No environment variables
  • No profiler configuration

The Operator handles everything.

🔬 What the Operator Injects (Real Example)

Here is a simplified version of the mutated pod spec you’ll see in a cluster running this setup. It shows exactly what the Operator adds:

initContainers:
  - name: opentelemetry-auto-instrumentation-dotnet
    image: ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-dotnet:latest
    command: ["cp", "-r", "/autoinstrumentation/.", "/otel-auto-instrumentation-dotnet"]

Injected environment variables

env:
  - name: CORECLR_ENABLE_PROFILING
    value: "1"
  - name: CORECLR_PROFILER
    value: "{918728DD-259F-4A6A-AC2B-B85E1B658318}"
  - name: CORECLR_PROFILER_PATH
    value: /otel-auto-instrumentation-dotnet/linux-x64/OpenTelemetry.AutoInstrumentation.Native.so
  - name: DOTNET_STARTUP_HOOKS
    value: /otel-auto-instrumentation-dotnet/net/OpenTelemetry.AutoInstrumentation.StartupHook.dll
  - name: DOTNET_ADDITIONAL_DEPS
    value: /otel-auto-instrumentation-dotnet/AdditionalDeps
  - name: DOTNET_SHARED_STORE
    value: /otel-auto-instrumentation-dotnet/store
  - name: OTEL_DOTNET_AUTO_HOME
    value: /otel-auto-instrumentation-dotnet
  - name: OTEL_SERVICE_NAME
    value: dev-demo-dotnet
  - name: OTEL_EXPORTER_OTLP_ENDPOINT
    value: http://jaeger-inmemory-instance-collector.monitoring.svc.cluster.local:4318

Kubernetes metadata enrichment

- name: OTEL_RESOURCE_ATTRIBUTES
  value: k8s.container.name=dev-demo-dotnet,...

Volume for instrumentation files

volumes:
  - name: opentelemetry-auto-instrumentation-dotnet
    emptyDir:
      sizeLimit: 200Mi

This is the Operator doing exactly what it was designed to do:
injecting a complete, production‑grade instrumentation layer without touching your application code.

🚀 Deploying the Instrumented App

Once the Instrumentation CR and Deployment are committed:

  1. ArgoCD syncs the changes
  2. The Operator mutates the pod
  3. The .NET agent is injected
  4. The app begins emitting OTLP traces

Check the pod:

kubectl get pods -n apps

You’ll see:

  • An init container
  • A mounted instrumentation volume
  • Injected environment variables
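
To confirm the mutation actually happened, you can inspect the pod spec directly. The label selector assumes the example Deployment above:

```shell
# Print the injected init container's name — it should match
# opentelemetry-auto-instrumentation-dotnet from the example earlier.
kubectl -n apps get pod -l app=dev-demo-dotnet \
  -o jsonpath='{.items[0].spec.initContainers[*].name}'

# Spot-check the injected profiler environment variables.
kubectl -n apps describe pod -l app=dev-demo-dotnet | grep CORECLR
```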

🔍 Verifying That Traces Are Flowing

1. Port‑forward the Jaeger UI

kubectl -n monitoring port-forward svc/jaeger-inmemory-instance-collector 16686:16686

Open:

http://localhost:16686

2. Generate traffic

kubectl -n apps port-forward svc/dev-demo-dotnet 8080:8080
curl http://localhost:8080/

3. Check the Jaeger UI

You should now see:

  • Service: dev-demo-dotnet
  • HTTP server spans
  • Outgoing calls (if any)
  • Full trace graphs

If you see traces, the Operator‑managed pipeline is working end‑to‑end.

🧪 Troubleshooting Common Issues

No traces appear

  • Ensure the Deployment has the annotation
  • Ensure the Instrumentation CR is in the same namespace
  • Check Operator logs for mutation errors
  • Verify the Collector’s OTLP ports (4317/4318)

App restarts repeatedly

  • The Operator may be injecting into a non‑.NET container
  • Ensure your image is .NET 8+

Traces appear but missing context

  • The Operator sets the tracecontext and baggage propagators automatically
  • Ensure no middleware strips headers
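
When checking Operator logs for mutation errors, something like the following works. The namespace and deployment name depend on how the Operator was installed; the values here assume the default manifest install:

```shell
# Assumes the default install namespace and deployment name —
# adjust both if you installed the Operator via Helm with custom values.
kubectl -n opentelemetry-operator-system logs \
  deploy/opentelemetry-operator-controller-manager | grep -i error
```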

🧭 What’s Next

With Jaeger v2 deployed and .NET auto‑instrumentation fully automated, you now have a working observability pipeline that requires:

  • No code changes
  • No image modifications
  • No per‑service configuration

In Part 4, we’ll take this setup and make it fully declarative using ArgoCD:

  • Repo structure
  • ArgoCD Applications
  • Sync strategies
  • Drift correction
  • Multi‑component GitOps workflows

This is where the system becomes operationally robust.