In Part 2, we deployed Jaeger v2 using the OpenTelemetry Collector and exposed the Jaeger UI. Now it’s time to generate real traces without modifying application code or rebuilding container images.
This part shows how to use the OpenTelemetry Operator to inject the .NET auto‑instrumentation agent automatically. This approach is fully declarative, GitOps‑friendly, and ideal for platform teams who want consistent instrumentation across many services.
All manifests, ApplicationSets, code, and configuration used in this series are available in the companion GitHub repository.
🧠 How Operator‑Managed .NET Auto‑Instrumentation Works
The OpenTelemetry Operator can automatically:
- Inject the .NET auto‑instrumentation agent into your pod
- Mount the agent files
- Set all required environment variables
- Configure OTLP exporters
- Apply propagators
- Ensure consistent agent versions across workloads
This means:
- No Dockerfile changes
- No manual environment variables
- No code changes
- No per‑service configuration drift
Instrumentation becomes a cluster‑level concern, not an application‑level burden.
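For example, the same inject annotation used later in this part can be placed on a whole namespace, so every new pod created there is instrumented without touching individual workloads. A minimal sketch, assuming the apps namespace used throughout this series:

```yaml
# Namespace-wide opt-in (sketch): every new pod scheduled into "apps"
# gets the .NET agent injected, with no per-Deployment annotation.
apiVersion: v1
kind: Namespace
metadata:
  name: apps
  annotations:
    instrumentation.opentelemetry.io/inject-dotnet: "true"
```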
📦 Defining the .NET Instrumentation Resource
To enable .NET auto‑instrumentation, create an Instrumentation CR:
apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation
metadata:
  name: auto-dotnet
  namespace: apps
spec:
  dotnet:
    image: ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-dotnet:latest
This tells the Operator to:
- Manage the agent's lifecycle declaratively
- Use the official .NET auto‑instrumentation agent
- Inject it into workloads in this namespace (or those that opt in)
Commit this file to Git and let ArgoCD sync it.
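The same CR can also pin the export endpoint, propagators, and sampling behaviour instead of relying on Operator defaults. The sketch below is illustrative: the endpoint assumes the Jaeger Collector service from Part 2, and the sampler is just a sensible starting point, not a requirement:

```yaml
apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation
metadata:
  name: auto-dotnet
  namespace: apps
spec:
  # Where injected agents send OTLP data (service name from Part 2; adjust to your cluster)
  exporter:
    endpoint: http://jaeger-inmemory-instance-collector.monitoring.svc.cluster.local:4318
  # W3C context propagation across service boundaries
  propagators:
    - tracecontext
    - baggage
  # Sample everything while testing; tune this for production traffic
  sampler:
    type: parentbased_always_on
  dotnet:
    image: ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-dotnet:latest
```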
🏗️ Instrumenting a .NET Application (No Image Changes Required)
To instrument a .NET application, you simply add one annotation to the Deployment's pod template (the Operator's webhook inspects pod metadata, so the annotation belongs under spec.template):
spec:
  template:
    metadata:
      annotations:
        instrumentation.opentelemetry.io/inject-dotnet: "true"
That’s it.
The Operator will:
- Inject the agent
- Mount the instrumentation files
- Set all required environment variables
- Configure the OTLP exporter
- Enrich traces with Kubernetes metadata
Your Deployment YAML stays clean and simple.
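If you want to try the annotation against an existing Deployment before committing it to Git, a quick imperative patch also works. This is only a sketch for experiments: ArgoCD will flag it as drift and revert it on the next sync:

```bash
# Hypothetical quick test: add the inject annotation to the pod template in place
kubectl -n apps patch deployment dev-demo-dotnet --type merge \
  -p '{"spec":{"template":{"metadata":{"annotations":{"instrumentation.opentelemetry.io/inject-dotnet":"true"}}}}}'
```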
📁 Example .NET Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dev-demo-dotnet
  namespace: apps
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dev-demo-dotnet
  template:
    metadata:
      labels:
        app: dev-demo-dotnet
      annotations:
        instrumentation.opentelemetry.io/inject-dotnet: "true"
    spec:
      containers:
        - name: dev-demo-dotnet
          image: demo-dotnet:latest
          ports:
            - containerPort: 8080
Notice what’s missing:
- No agent download
- No Dockerfile changes
- No environment variables
- No profiler configuration
The Operator handles everything.
🔬 What the Operator Injects (Real Example)
Here is a simplified version of the actual mutated pod spec from the cluster. This shows exactly what the Operator adds:
initContainers:
  - name: opentelemetry-auto-instrumentation-dotnet
    image: ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-dotnet:latest
    command: ["cp", "-r", "/autoinstrumentation/.", "/otel-auto-instrumentation-dotnet"]
Injected environment variables:
env:
  - name: CORECLR_ENABLE_PROFILING
    value: "1"
  - name: CORECLR_PROFILER
    value: "{918728DD-259F-4A6A-AC2B-B85E1B658318}"
  - name: CORECLR_PROFILER_PATH
    value: /otel-auto-instrumentation-dotnet/linux-x64/OpenTelemetry.AutoInstrumentation.Native.so
  - name: DOTNET_STARTUP_HOOKS
    value: /otel-auto-instrumentation-dotnet/net/OpenTelemetry.AutoInstrumentation.StartupHook.dll
  - name: DOTNET_ADDITIONAL_DEPS
    value: /otel-auto-instrumentation-dotnet/AdditionalDeps
  - name: DOTNET_SHARED_STORE
    value: /otel-auto-instrumentation-dotnet/store
  - name: OTEL_DOTNET_AUTO_HOME
    value: /otel-auto-instrumentation-dotnet
  - name: OTEL_SERVICE_NAME
    value: dev-demo-dotnet
  - name: OTEL_EXPORTER_OTLP_ENDPOINT
    value: http://jaeger-inmemory-instance-collector.monitoring.svc.cluster.local:4318
Kubernetes metadata enrichment:
  - name: OTEL_RESOURCE_ATTRIBUTES
    value: k8s.container.name=dev-demo-dotnet,...
Volume for instrumentation files:
volumes:
  - name: opentelemetry-auto-instrumentation-dotnet
    emptyDir:
      sizeLimit: 200Mi
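The Operator also mounts that emptyDir into the application container, which is what makes the profiler and startup-hook paths above resolve. A simplified view of the mutated container spec:

```yaml
containers:
  - name: dev-demo-dotnet
    volumeMounts:
      - name: opentelemetry-auto-instrumentation-dotnet
        mountPath: /otel-auto-instrumentation-dotnet
```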
This is the Operator doing exactly what it was designed to do:
injecting a complete, production‑grade instrumentation layer without touching your application code.
🚀 Deploying the Instrumented App
Once the Instrumentation CR and Deployment are committed:
- ArgoCD syncs the changes
- The Operator mutates the pod
- The .NET agent is injected
- The app begins emitting OTLP traces
Check the pod:
kubectl get pods -n apps
You’ll see:
- An init container
- A mounted instrumentation volume
- Injected environment variables
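A quick way to confirm the mutation without reading the whole pod YAML is to query the spec directly. This sketch assumes the app label from the example Deployment:

```bash
# Init container injected by the Operator
kubectl -n apps get pod -l app=dev-demo-dotnet \
  -o jsonpath='{.items[0].spec.initContainers[*].name}'

# Environment variables added to the app container
kubectl -n apps get pod -l app=dev-demo-dotnet \
  -o jsonpath='{.items[0].spec.containers[0].env[*].name}'
```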
🔍 Verifying That Traces Are Flowing
1. Port‑forward the Jaeger UI
kubectl -n monitoring port-forward svc/jaeger-inmemory-instance-collector 16686:16686
Open:
http://localhost:16686
2. Generate traffic
kubectl -n apps port-forward svc/dev-demo-dotnet 8080:8080
curl http://localhost:8080/
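A single request produces a single trace, so a short loop gives Jaeger more to show. A sketch using the same port-forward as above:

```bash
# Fire a handful of requests so several traces appear in the UI
for i in $(seq 1 20); do curl -s http://localhost:8080/ > /dev/null; done
```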
3. Check the Jaeger UI
You should now see:
- Service: dev-demo-dotnet
- HTTP server spans
- Outgoing calls (if any)
- Full trace graphs
If you see traces, the Operator‑managed pipeline is working end‑to‑end.
🧪 Troubleshooting Common Issues
No traces appear
- Ensure the annotation is on the Deployment's pod template
- Ensure the Instrumentation CR is in the same namespace
- Check Operator logs for mutation errors (see the command below)
- Verify the Collector’s OTLP ports (4317/4318)
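To check the Operator logs mentioned above, the namespace and Deployment name depend on how the Operator was installed; with the default manifest install they typically look like this (adjust for a Helm install):

```bash
# Injection and mutating-webhook errors surface in the manager logs
kubectl -n opentelemetry-operator-system logs \
  deploy/opentelemetry-operator-controller-manager --tail=100
```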
App restarts repeatedly
- The Operator may be injecting into a non‑.NET container
- Ensure your image is .NET 8+
Traces appear but missing context
- The Operator sets the tracecontext and baggage propagators automatically (you can verify with the command below)
- Ensure no middleware or proxy strips those headers
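To see exactly which propagators and endpoint were injected into a running pod, you can print its OTEL_* environment. A sketch assuming the example Deployment name and a base image that ships printenv:

```bash
kubectl -n apps exec deploy/dev-demo-dotnet -- printenv | grep '^OTEL_'
```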
🧭 What’s Next
With Jaeger v2 deployed and .NET auto‑instrumentation fully automated, you now have a working observability pipeline that requires:
- No code changes
- No image modifications
- No per‑service configuration
In Part 4, we’ll take this setup and make it fully declarative using ArgoCD:
- Repo structure
- ArgoCD Applications
- Sync strategies
- Drift correction
- Multi‑component GitOps workflows
This is where the system becomes operationally robust.