
Part 3 – Auto‑Instrumenting .NET with OpenTelemetry

In Part 2, we deployed Jaeger v2 using the OpenTelemetry Collector and exposed the Jaeger UI. Now it’s time to generate real traces without modifying application code or rebuilding container images.

This part shows how to use the OpenTelemetry Operator to inject the .NET auto‑instrumentation agent automatically. This approach is fully declarative, GitOps‑friendly, and ideal for platform teams who want consistent instrumentation across many services.

All manifests, ApplicationSets, code, and configuration used in this series are available in the companion GitHub repository.

🧠 How Operator‑Managed .NET Auto‑Instrumentation Works

The OpenTelemetry Operator can automatically:

  • Inject the .NET auto‑instrumentation agent into your pod
  • Mount the agent files
  • Set all required environment variables
  • Configure OTLP exporters
  • Apply propagators
  • Ensure consistent agent versions across workloads

This means:

  • No Dockerfile changes
  • No manual environment variables
  • No code changes
  • No per‑service configuration drift

Instrumentation becomes a cluster‑level concern, not an application‑level burden.

📦 Defining the .NET Instrumentation Resource

To enable .NET auto‑instrumentation, create an Instrumentation custom resource (CR):

apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation
metadata:
  name: auto-dotnet
  namespace: apps
spec:
  dotnet:
    image: ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-dotnet:latest

This tells the Operator:

  • Manage the lifecycle of the agent declaratively
  • Use the official .NET auto‑instrumentation agent
  • Inject it into workloads in this namespace that opt in via an annotation (shown in the next section)

Commit this file to Git and let ArgoCD sync it.
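
The minimal CR above relies on the Operator's defaults for the exporter endpoint, propagators, and sampling. If you prefer to pin those in Git as well, the Instrumentation CRD exposes fields for them. Here is a sketch, assuming the Collector Service from Part 2 (jaeger-inmemory-instance-collector in the monitoring namespace); adjust the endpoint to your own setup:

apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation
metadata:
  name: auto-dotnet
  namespace: apps
spec:
  exporter:
    # OTLP/HTTP endpoint of the Jaeger v2 Collector (value assumed from Part 2)
    endpoint: http://jaeger-inmemory-instance-collector.monitoring.svc.cluster.local:4318
  propagators:
    - tracecontext
    - baggage
  sampler:
    # Sample everything while testing; tune this down for production traffic
    type: parentbased_traceidratio
    argument: "1"
  dotnet:
    image: ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-dotnet:latest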

🏗️ Instrumenting a .NET Application (No Image Changes Required)

To instrument a .NET application, you simply annotate the Deployment's pod template. The annotation must sit on the pod template, not on the Deployment's own metadata, because the Operator's mutating webhook acts on Pods:

spec:
  template:
    metadata:
      annotations:
        instrumentation.opentelemetry.io/inject-dotnet: "true"

That’s it.

The Operator will:

  • Inject the agent
  • Mount the instrumentation files
  • Set all required environment variables
  • Configure the OTLP exporter
  • Enrich traces with Kubernetes metadata

Your Deployment YAML stays clean and simple.

📁 Example .NET Deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  name: dev-demo-dotnet
  namespace: apps
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dev-demo-dotnet
  template:
    metadata:
      labels:
        app: dev-demo-dotnet
      annotations:
        instrumentation.opentelemetry.io/inject-dotnet: "true"
    spec:
      containers:
        - name: dev-demo-dotnet
          image: demo-dotnet:latest
          ports:
            - containerPort: 8080

Notice what’s missing:

  • No agent download
  • No Dockerfile changes
  • No environment variables
  • No profiler configuration

The Operator handles everything.
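
If you would rather opt in an entire namespace than annotate each workload, the Operator also honours the same annotation on the Namespace object, making every pod created in it a candidate for injection. A sketch, assuming the apps namespace used throughout this part:

apiVersion: v1
kind: Namespace
metadata:
  name: apps
  annotations:
    # Every pod created in this namespace is considered for .NET injection
    instrumentation.opentelemetry.io/inject-dotnet: "true"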

🔬 What the Operator Injects (Real Example)

Here is a simplified version of the mutated pod spec taken from a live cluster. It shows exactly what the Operator adds:

initContainers:
  - name: opentelemetry-auto-instrumentation-dotnet
    image: ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-dotnet:latest
    command: ["cp", "-r", "/autoinstrumentation/.", "/otel-auto-instrumentation-dotnet"]

Injected environment variables

env:
  - name: CORECLR_ENABLE_PROFILING
    value: "1"
  - name: CORECLR_PROFILER
    value: "{918728DD-259F-4A6A-AC2B-B85E1B658318}"
  - name: CORECLR_PROFILER_PATH
    value: /otel-auto-instrumentation-dotnet/linux-x64/OpenTelemetry.AutoInstrumentation.Native.so
  - name: DOTNET_STARTUP_HOOKS
    value: /otel-auto-instrumentation-dotnet/net/OpenTelemetry.AutoInstrumentation.StartupHook.dll
  - name: DOTNET_ADDITIONAL_DEPS
    value: /otel-auto-instrumentation-dotnet/AdditionalDeps
  - name: DOTNET_SHARED_STORE
    value: /otel-auto-instrumentation-dotnet/store
  - name: OTEL_DOTNET_AUTO_HOME
    value: /otel-auto-instrumentation-dotnet
  - name: OTEL_SERVICE_NAME
    value: dev-demo-dotnet
  - name: OTEL_EXPORTER_OTLP_ENDPOINT
    value: http://jaeger-inmemory-instance-collector.monitoring.svc.cluster.local:4318

Kubernetes metadata enrichment

- name: OTEL_RESOURCE_ATTRIBUTES
  value: k8s.container.name=dev-demo-dotnet,...

Volume for instrumentation files

volumes:
  - name: opentelemetry-auto-instrumentation-dotnet
    emptyDir:
      sizeLimit: 200Mi
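
The Operator also mounts that volume into both the init container and the application container, which is how the copied agent files end up at the paths referenced by the environment variables above (simplified; exact fields may vary slightly between Operator versions):

volumeMounts:
  - name: opentelemetry-auto-instrumentation-dotnet
    mountPath: /otel-auto-instrumentation-dotnet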

This is the Operator doing exactly what it was designed to do:
injecting a complete, production‑grade instrumentation layer without touching your application code.

🚀 Deploying the Instrumented App

Once the Instrumentation CR and Deployment are committed:

  1. ArgoCD syncs the changes
  2. The Operator mutates the pod
  3. The .NET agent is injected
  4. The app begins emitting OTLP traces

Check the pod:

kubectl get pods -n apps

You’ll see:

  • An init container
  • A mounted instrumentation volume
  • Injected environment variables
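
To confirm the mutation actually happened, you can inspect the pod spec directly (assuming the app=dev-demo-dotnet label from the Deployment above):

kubectl -n apps get pod -l app=dev-demo-dotnet \
  -o jsonpath='{.items[*].spec.initContainers[*].name}{"\n"}'

The output should include opentelemetry-auto-instrumentation-dotnet.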

🔍 Verifying That Traces Are Flowing

1. Port‑forward the Jaeger UI

kubectl -n monitoring port-forward svc/jaeger-inmemory-instance-collector 16686:16686

Open:

http://localhost:16686

2. Generate traffic

kubectl -n apps port-forward svc/dev-demo-dotnet 8080:8080
curl http://localhost:8080/
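
A single request is enough to produce a trace, but a short loop makes the service easier to spot in the UI:

for i in $(seq 1 20); do curl -s http://localhost:8080/ > /dev/null; done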

3. Check the Jaeger UI

You should now see:

  • Service: dev-demo-dotnet
  • HTTP server spans
  • Outgoing calls (if any)
  • Full trace graphs

If you see traces, the Operator‑managed pipeline is working end‑to‑end.

🧪 Troubleshooting Common Issues

No traces appear

  • Ensure the annotation is on the pod template (not only the Deployment metadata)
  • Ensure the Instrumentation CR is in the same namespace as the workload
  • Check the Operator logs for mutation errors (see the command sketch below)
  • Verify the Collector exposes the OTLP ports (4317 for gRPC, 4318 for HTTP)
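
A quick way to check the Operator logs, assuming the default install which places it in the opentelemetry-operator-system namespace under the deployment name opentelemetry-operator-controller-manager (adjust both if your install differs):

kubectl -n opentelemetry-operator-system logs \
  deploy/opentelemetry-operator-controller-manager --all-containers --tail=100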

App restarts repeatedly

  • The Operator may be injecting into a non‑.NET container
  • Ensure your image is .NET 8+

Traces appear but missing context

  • The Operator configures the tracecontext and baggage propagators automatically
  • Ensure no middleware strips headers

🧭 What’s Next

With Jaeger v2 deployed and .NET auto‑instrumentation fully automated, you now have a working observability pipeline that requires:

  • No code changes
  • No image modifications
  • No per‑service configuration

In Part 4, we’ll take this setup and make it fully declarative using ArgoCD:

  • Repo structure
  • ArgoCD Applications
  • Sync strategies
  • Drift correction
  • Multi‑component GitOps workflows

This is where the system becomes operationally robust.


Part 1 – Why Jaeger v2, OpenTelemetry, and GitOps Belong Together

Modern distributed systems generate a staggering amount of telemetry. Logs, metrics, and traces flow from dozens or hundreds of independently deployed services. Teams want deep visibility without drowning in operational overhead. They want consistency without slowing down delivery. And they want observability that scales with the system, not against it.

This is where Jaeger v2, OpenTelemetry, and GitOps converge into a clean, modern, future‑proof model.

This series walks through a complete, working setup that combines:

  • Jaeger v2, built on the OpenTelemetry Collector
  • OpenTelemetry auto‑instrumentation, with a focus on .NET
  • ArgoCD, managing everything declaratively through GitOps
  • A multi‑environment architecture, with dev/staging/prod deployed through ApplicationSets

Before we dive into YAML, pipelines, and instrumentation, it’s worth understanding why these technologies fit together so naturally and why they represent the future of platform‑level observability.

All manifests, ApplicationSets, and configuration used in this series are available in the companion GitHub repository.

🧭 The Shift to Jaeger v2: Collector‑First Observability

Jaeger v1 was built around a bespoke architecture: agents, collectors, query services, and storage backends. It worked well for its time, but it wasn’t aligned with the industry’s move toward OpenTelemetry as the standard for telemetry data.

Jaeger v2 changes that.

What’s new in Jaeger v2

  • Built on the OpenTelemetry Collector
  • Accepts OTLP as the ingestion protocol
  • Consolidates components into a simpler deployment
  • Integrates Jaeger’s query and UI directly into the Collector
  • Aligns with the OpenTelemetry ecosystem instead of maintaining parallel infrastructure

In practice, Jaeger v2 is no longer a standalone tracing pipeline.
It is a distribution of the OpenTelemetry Collector, with Jaeger’s query and UI components integrated into the same deployment.

This reduces operational complexity and brings Jaeger into the same ecosystem as metrics, logs, and traces, all flowing through the same Collector pipeline.

🌐 OpenTelemetry: The Universal Instrumentation Layer

OpenTelemetry has become the de facto standard for collecting telemetry across languages and platforms. Instead of maintaining language‑specific SDKs, exporters, and agents, teams can rely on a unified model:

  • One protocol (OTLP)
  • One collector pipeline
  • One set of instrumentation libraries
  • One ecosystem of processors, exporters, and extensions

For application teams, this means:

  • Less vendor lock‑in
  • Less custom instrumentation
  • More consistency across services

For platform teams, it means:

  • A single collector pipeline to operate
  • A single place to apply sampling, filtering, and routing
  • A consistent deployment model across environments

And with the OpenTelemetry Operator, you can enable auto‑instrumentation, especially for languages like .NET, without touching application code. The Operator injects the right environment variables, startup hooks, and exporters automatically.

🚀 Why GitOps (ArgoCD) Completes the Picture

Observability components are critical infrastructure. They need to be:

  • Versioned
  • Auditable
  • Reproducible
  • Consistent across environments

GitOps provides exactly that.

With ArgoCD:

  • The Collector configuration lives in Git
  • The Instrumentation settings live in Git
  • The Jaeger UI and supporting components live in Git
  • The applications live in Git
  • The environment‑specific overrides live in Git

ArgoCD continuously ensures that the cluster matches what’s declared in the repository. If someone changes a Collector config manually, ArgoCD corrects it. If a deployment drifts, ArgoCD heals it. If you want to roll out a new sampling policy, you commit a change and let ArgoCD sync it.

Git becomes the single source of truth for your entire observability stack.

🏗️ How These Pieces Fit Together

Here’s the high‑level architecture this series will build:

  • OpenTelemetry Collector (Jaeger v2)
    • Receives OTLP traffic
    • Processes and exports traces
    • Hosts the Jaeger v2 query and UI components
  • Applications
    • Auto‑instrumented using OpenTelemetry agents
    • Emit traces to the Collector via OTLP
  • ArgoCD
    • Watches the Git repository
    • Applies Collector, Instrumentation, and app manifests
    • Uses ApplicationSets to generate per‑environment deployments
    • Enforces ordering with sync waves (see the snippet below)
    • Ensures everything stays in sync
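
As a concrete taste of the sync-wave ordering mentioned above (covered properly in Part 4), ordering is expressed as a plain annotation on the manifests ArgoCD applies; lower waves sync first. The wave split shown in the comment is an illustrative assumption, not a fixed rule:

metadata:
  annotations:
    # ArgoCD syncs wave "0" resources (e.g. the Operator and Collector)
    # before wave "1" resources (e.g. the Instrumentation CR and apps)
    argocd.argoproj.io/sync-wave: "1"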

This architecture is intentionally simple. It’s designed to be:

  • Easy to deploy
  • Easy to understand
  • Easy to extend into production patterns
  • Easy to scale across environments and clusters

🎯 What You’ll Learn in This Series

Over the next four parts, we’ll walk through:

Part 2 – Deploying Jaeger v2 with the OpenTelemetry Collector

A working Collector configuration, including receivers, processors, exporters, and the Jaeger UI.

Part 3 – Auto‑instrumenting .NET with OpenTelemetry

How to enable tracing in a .NET application without modifying code, using the OpenTelemetry .NET auto‑instrumentation agent.

Part 4 – Managing Everything with ArgoCD (GitOps)

How to structure your repo, define ArgoCD Applications, and sync the entire observability stack declaratively.

Part 5 – Troubleshooting, Scaling, and Production Hardening

Sampling strategies, storage backends, multi‑cluster patterns, and common pitfalls.

🧩 Why This Matters

Observability is no longer optional. It’s foundational. But the tooling landscape has been fragmented for years. Jaeger v2, OpenTelemetry, and GitOps represent a convergence toward:

  • Standardisation
  • Operational simplicity
  • Developer autonomy
  • Platform consistency

This series is designed to give you a practical, reproducible path to adopting that model, starting with the simplest working setup and building toward production‑ready patterns.

You can find the full configuration for this part — including the Collector manifests and ArgoCD setup — in the GitHub repository.