.NET, Observability, OpenTelemetry

Part 3 – Auto‑Instrumenting .NET with OpenTelemetry

In Part 2, we deployed Jaeger v2 using the OpenTelemetry Collector and exposed the Jaeger UI. Now it’s time to generate real traces without modifying application code or rebuilding container images.

This part shows how to use the OpenTelemetry Operator to inject the .NET auto‑instrumentation agent automatically. This approach is fully declarative, GitOps‑friendly, and ideal for platform teams who want consistent instrumentation across many services.

All manifests, ApplicationSets, code, and configuration used in this series are available in the companion GitHub repository.

🧠 How Operator‑Managed .NET Auto‑Instrumentation Works

The OpenTelemetry Operator can automatically:

  • Inject the .NET auto‑instrumentation agent into your pod
  • Mount the agent files
  • Set all required environment variables
  • Configure OTLP exporters
  • Apply propagators
  • Ensure consistent agent versions across workloads

This means:

  • No Dockerfile changes
  • No manual environment variables
  • No code changes
  • No per‑service configuration drift

Instrumentation becomes a cluster‑level concern, not an application‑level burden.

📦 Defining the .NET Instrumentation Resource

To enable .NET auto‑instrumentation, create an Instrumentation CR:

apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation
metadata:
  name: auto-dotnet
  namespace: apps
spec:
  dotnet:
    image: ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-dotnet:latest

This tells the Operator:

  • Manage the lifecycle of the agent declaratively
  • Use the official .NET auto‑instrumentation agent
  • Inject it into workloads in this namespace (or those that opt‑in)

Commit this file to Git and let ArgoCD sync it.
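
Opting in can happen per workload (shown in the next section) or for a whole namespace at once. Here is a minimal sketch of the namespace‑level option, assuming you want every pod in apps instrumented by default:

apiVersion: v1
kind: Namespace
metadata:
  name: apps
  annotations:
    # Injects the .NET agent into every pod created in this namespace
    instrumentation.opentelemetry.io/inject-dotnet: "true"

The per‑Deployment annotation shown in the next section is the more selective option and is what the rest of this post uses.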

🏗️ Instrumenting a .NET Application (No Image Changes Required)

To instrument a .NET application, you simply add one annotation to the Deployment’s pod template (the Operator’s admission webhook acts on pods, so the annotation belongs under spec.template, not on the Deployment’s own metadata):

spec:
  template:
    metadata:
      annotations:
        instrumentation.opentelemetry.io/inject-dotnet: "true"

That’s it.

The Operator will:

  • Inject the agent
  • Mount the instrumentation files
  • Set all required environment variables
  • Configure the OTLP exporter
  • Enrich traces with Kubernetes metadata

Your Deployment YAML stays clean and simple.

📁 Example .NET Deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  name: dev-demo-dotnet
  namespace: apps
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dev-demo-dotnet
  template:
    metadata:
      labels:
        app: dev-demo-dotnet
      annotations:
        instrumentation.opentelemetry.io/inject-dotnet: "true"
    spec:
      containers:
        - name: dev-demo-dotnet
          image: demo-dotnet:latest
          ports:
            - containerPort: 8080

Notice what’s missing:

  • No agent download
  • No Dockerfile changes
  • No environment variables
  • No profiler configuration

The Operator handles everything.
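
If you want to see the mutation for yourself, dump the running pod’s spec once it is up (assuming the app=dev-demo-dotnet label from the example above):

kubectl get pod -n apps -l app=dev-demo-dotnet -o yaml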

🔬 What the Operator Injects (Real Example)

Here is a simplified version of the actual mutated pod from the cluster, showing exactly what the Operator adds:

initContainers:
  - name: opentelemetry-auto-instrumentation-dotnet
    image: ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-dotnet:latest
    command: ["cp", "-r", "/autoinstrumentation/.", "/otel-auto-instrumentation-dotnet"]

Injected environment variables

env:
  - name: CORECLR_ENABLE_PROFILING
    value: "1"
  - name: CORECLR_PROFILER
    value: "{918728DD-259F-4A6A-AC2B-B85E1B658318}"
  - name: CORECLR_PROFILER_PATH
    value: /otel-auto-instrumentation-dotnet/linux-x64/OpenTelemetry.AutoInstrumentation.Native.so
  - name: DOTNET_STARTUP_HOOKS
    value: /otel-auto-instrumentation-dotnet/net/OpenTelemetry.AutoInstrumentation.StartupHook.dll
  - name: DOTNET_ADDITIONAL_DEPS
    value: /otel-auto-instrumentation-dotnet/AdditionalDeps
  - name: DOTNET_SHARED_STORE
    value: /otel-auto-instrumentation-dotnet/store
  - name: OTEL_DOTNET_AUTO_HOME
    value: /otel-auto-instrumentation-dotnet
  - name: OTEL_SERVICE_NAME
    value: dev-demo-dotnet
  - name: OTEL_EXPORTER_OTLP_ENDPOINT
    value: http://jaeger-inmemory-instance-collector.monitoring.svc.cluster.local:4318

Kubernetes metadata enrichment

- name: OTEL_RESOURCE_ATTRIBUTES
  value: k8s.container.name=dev-demo-dotnet,...

Volume for instrumentation files

volumes:
  - name: opentelemetry-auto-instrumentation-dotnet
    emptyDir:
      sizeLimit: 200Mi
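
The Operator also mounts that emptyDir into the application container, which is what makes the profiler and startup‑hook paths above resolve. Simplified, and exact fields may differ by Operator version:

containers:
  - name: dev-demo-dotnet
    volumeMounts:
      - name: opentelemetry-auto-instrumentation-dotnet
        mountPath: /otel-auto-instrumentation-dotnet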

This is the Operator doing exactly what it was designed to do:
injecting a complete, production‑grade instrumentation layer without touching your application code.

🚀 Deploying the Instrumented App

Once the Instrumentation CR and Deployment are committed:

  1. ArgoCD syncs the changes
  2. The Operator mutates the pod
  3. The .NET agent is injected
  4. The app begins emitting OTLP traces

Check the pod:

kubectl get pods -n apps

You’ll see:

  • An init container
  • A mounted instrumentation volume
  • Injected environment variables
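
A quick way to confirm the injection is to print the init container name from the mutated pod; it should come back as opentelemetry-auto-instrumentation-dotnet (the jsonpath is just a convenience, the field names match the mutated pod shown earlier):

kubectl get pod -n apps -l app=dev-demo-dotnet \
  -o jsonpath='{.items[0].spec.initContainers[*].name}'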

🔍 Verifying That Traces Are Flowing

1. Port‑forward the Jaeger UI

kubectl -n monitoring port-forward svc/jaeger-inmemory-instance-collector 16686:16686

Open:

http://localhost:16686

2. Generate traffic

kubectl -n apps port-forward svc/dev-demo-dotnet 8080:8080
curl http://localhost:8080/
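
A handful of requests makes the trace list easier to spot; a plain shell loop is enough:

for i in $(seq 1 20); do curl -s http://localhost:8080/ > /dev/null; done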

3. Check the Jaeger UI

You should now see:

  • Service: dev-demo-dotnet
  • HTTP server spans
  • Outgoing calls (if any)
  • Full trace graphs

If you see traces, the Operator‑managed pipeline is working end‑to‑end.

🧪 Troubleshooting Common Issues

No traces appear

  • Ensure the Deployment has the annotation
  • Ensure the Instrumentation CR is in the same namespace
  • Check the Operator logs for mutation errors (see the command sketch below)
  • Verify the Collector’s OTLP ports (4317/4318)
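
For the Operator logs, something like the following usually works; the namespace and Deployment name depend on how the Operator was installed (these match the default manifest install):

kubectl logs -n opentelemetry-operator-system \
  deploy/opentelemetry-operator-controller-manager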

App restarts repeatedly

  • The Operator may be injecting into a non‑.NET container
  • Ensure your image is .NET 8+

Traces appear but missing context

  • The Operator configures the tracecontext and baggage propagators automatically (see the sketch below to add more)
  • Ensure no proxy or middleware strips the traceparent or baggage headers
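
If you need additional propagation formats (for example B3 alongside W3C Trace Context), the Instrumentation CR exposes a propagators field; a sketch extending the CR from earlier:

apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation
metadata:
  name: auto-dotnet
  namespace: apps
spec:
  propagators:
    - tracecontext
    - baggage
    - b3
  dotnet:
    image: ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-dotnet:latest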

🧭 What’s Next

With Jaeger v2 deployed and .NET auto‑instrumentation fully automated, you now have a working observability pipeline that requires:

  • No code changes
  • No image modifications
  • No per‑service configuration

In Part 4, we’ll take this setup and make it fully declarative using ArgoCD:

  • Repo structure
  • ArgoCD Applications
  • Sync strategies
  • Drift correction
  • Multi‑component GitOps workflows

This is where the system becomes operationally robust.

Kubernetes, Observability, OpenTelemetry

Part 2 – Deploying Jaeger v2 with the OpenTelemetry Collector

In Part 1, we explored why Jaeger v2, OpenTelemetry, and GitOps form a natural, modern observability stack. Now we get hands‑on. This part walks through deploying Jaeger v2 using the OpenTelemetry Collector, using a minimal but production‑aligned configuration.

The goal is simple:
Get a working Jaeger v2 deployment running in Kubernetes, backed by the OpenTelemetry Collector, with the Jaeger UI exposed and ready to receive traces.

This is the foundation the rest of the series builds on.

All manifests and configuration used in this post are available in the companion GitHub repository.

🧩 What Jaeger v2 Actually Is

Jaeger v2 is not a standalone set of components anymore. It is a distribution of the OpenTelemetry Collector that bundles:

  • OTLP receivers (gRPC/HTTP)
  • Jaeger query service
  • Jaeger UI
  • A storage backend (memstore for demos, pluggable for production)

Instead of deploying multiple Jaeger components, you deploy one Collector with the right extensions enabled.

This dramatically simplifies operations and aligns Jaeger with the broader OpenTelemetry ecosystem.

📦 The Deployment Model

In this walkthrough, Jaeger v2 is deployed as:

  • An OpenTelemetryCollector resource (managed by the OpenTelemetry Operator)
  • A single Kubernetes Deployment created by the Operator
  • A Service exposing:
    • OTLP gRPC on port 4317
    • OTLP HTTP on port 4318
    • Jaeger UI on port 16686

This keeps the setup simple, reproducible, and GitOps‑friendly.

📁 The Collector (Jaeger v2)

Here is the core of the Collector configuration used in this series:

apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: jaeger-inmemory-instance
spec:
  mode: deployment
  image: jaegertracing/jaeger:latest
  ports:
    - name: jaeger-ui
      port: 16686
      protocol: TCP
      targetPort: 0
    - name: otlp-grpc
      port: 4317
      protocol: TCP
      targetPort: 0
    - name: otlp-http
      port: 4318
      protocol: TCP
      targetPort: 0
  config:
    service:
      extensions: [jaeger_storage, jaeger_query]
      pipelines:
        traces:
          receivers: [otlp]
          processors: [batch]
          exporters: [jaeger_storage_exporter]
    extensions:
      jaeger_query:
        storage:
          traces: memstore
      jaeger_storage:
        backends:
          memstore:
            memory:
              max_traces: 100000
    receivers:
      otlp:
        protocols:
          grpc:
            endpoint: 0.0.0.0:4317
          http:
            endpoint: 0.0.0.0:4318
    processors:
      batch:
        timeout: 1s
        send_batch_size: 8192
    exporters:
      jaeger_storage_exporter:
        trace_storage: memstore

Why this matters

  • OTLP gRPC (4317) is the default for .NET auto‑instrumentation
  • OTLP HTTP (4318) supports other languages and tools
  • Jaeger UI (16686) is built into the Collector
  • memstore keeps the demo simple and dependency‑free
  • The Operator handles lifecycle, rollout, and health checks

This is Jaeger v2 in its simplest, cleanest form.

🖥️ Exposing the Jaeger UI

Once deployed, the Jaeger UI is available on port 16686.

For local access:

kubectl port-forward svc/jaeger-inmemory-instance-collector 16686:16686 -n monitoring

Then open:

http://localhost:16686

This UI is where you’ll verify traces in Part 3.

🚀 Deploying the Collector with Argo CD

Although you could apply the YAML manually:

kubectl apply -f platform/collector/collector.yaml

This series uses GitOps as the primary deployment method.

Argo CD applies the Collector automatically through the applicationset-platform.yaml ApplicationSet, which syncs the contents of:

platform/collector/

into the monitoring namespace.
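
For reference, the Application that the ApplicationSet effectively generates for this path looks roughly like the sketch below; the repoURL is a placeholder for the companion repository, and the sync policy is one reasonable choice rather than the only one:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: platform-collector
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/<your-org>/<companion-repo>.git
    targetRevision: main
    path: platform/collector
  destination:
    server: https://kubernetes.default.svc
    namespace: monitoring
  syncPolicy:
    automated:
      prune: true
      selfHeal: true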

Once Argo CD syncs, you should see:

kubectl get pods -n monitoring
NAME                                        READY   STATUS
jaeger-inmemory-instance-collector-xxxxx    1/1     Running

🔍 Verifying the Deployment

Before moving on to instrumentation, confirm the Collector is healthy.

1. Check pod readiness

kubectl get pods -n monitoring

2. Check logs

kubectl logs deploy/jaeger-inmemory-instance-collector -n monitoring

You should see logs indicating:

  • OTLP receiver started
  • Jaeger query service started
  • Jaeger UI listening on 16686

3. Check service ports

kubectl get svc jaeger-inmemory-instance-collector -n monitoring

You should see:

  • 4317 (gRPC)
  • 4318 (HTTP)
  • 16686 (UI)
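
As an optional smoke test, you can POST an empty OTLP payload to the HTTP receiver from inside the cluster; a healthy receiver answers with a small JSON body. The Service name matches the one above, and the curl image is just a convenient throwaway:

kubectl run otlp-smoke -n monitoring -i --rm --restart=Never \
  --image=curlimages/curl --command -- \
  curl -s -X POST -H "Content-Type: application/json" -d '{}' \
  http://jaeger-inmemory-instance-collector:4318/v1/traces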

🧭 What’s Next

With Jaeger v2 deployed and ready to receive traces, the next step is to instrument an application. In Part 3, we’ll walk through:

  • How .NET auto‑instrumentation works
  • How the OpenTelemetry Operator injects the .NET agent automatically
  • How to enable instrumentation using an Instrumentation resource
  • How to annotate a Deployment to opt‑in
  • How to deploy the app and see traces appear in Jaeger

This is where the system comes alive.