In Part 1, we explored why Jaeger v2, OpenTelemetry, and GitOps form a natural, modern observability stack. Now we get hands‑on. This part walks through deploying Jaeger v2 using the OpenTelemetry Collector, using a minimal but production‑aligned configuration.
The goal is simple:
Get a working Jaeger v2 deployment running in Kubernetes, backed by the OpenTelemetry Collector, with the Jaeger UI exposed and ready to receive traces.
This is the foundation the rest of the series builds on.
All manifests and configuration used in this post are available in the companion GitHub repository.
🧩 What Jaeger v2 Actually Is
Jaeger v2 is not a standalone set of components anymore. It is a distribution of the OpenTelemetry Collector that bundles:
- OTLP receivers (gRPC/HTTP)
- Jaeger query service
- Jaeger UI
- A storage backend (memstore for demos, pluggable for production)
Instead of deploying multiple Jaeger components, you deploy one Collector with the right extensions enabled.
This dramatically simplifies operations and aligns Jaeger with the broader OpenTelemetry ecosystem.
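If you want to poke at this distribution before involving Kubernetes at all, you can run the same image locally. This is purely an optional sanity check, assuming the image's default all-in-one, in-memory configuration (the manifest below overrides that configuration for the cluster):

# Optional local sanity check of the Jaeger v2 distribution.
# Assumes the image's default all-in-one, in-memory configuration.
# 16686 = Jaeger UI, 4317 = OTLP gRPC, 4318 = OTLP HTTP
docker run --rm --name jaeger \
  -p 16686:16686 \
  -p 4317:4317 \
  -p 4318:4318 \
  jaegertracing/jaeger:latest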
📦 The Deployment Model
In this walkthrough, Jaeger v2 is deployed as:
- An OpenTelemetryCollector resource (managed by the OpenTelemetry Operator)
- A single Kubernetes Deployment created by the Operator
- A Service exposing:
- OTLP gRPC on port 4317
- OTLP HTTP on port 4318
- Jaeger UI on port 16686
This keeps the setup simple, reproducible, and GitOps‑friendly.
📁 The Collector (Jaeger v2)
Here is the core of the Collector configuration used in this series:
apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: jaeger-inmemory-instance
spec:
  mode: deployment
  image: jaegertracing/jaeger:latest   # consider pinning a specific version outside of demos
  ports:                               # exposed on the Service the Operator creates
    - name: jaeger-ui
      port: 16686
      protocol: TCP
    - name: otlp-grpc
      port: 4317
      protocol: TCP
    - name: otlp-http
      port: 4318
      protocol: TCP
  config:
    service:
      extensions: [jaeger_storage, jaeger_query]
      pipelines:
        traces:
          receivers: [otlp]
          processors: [batch]
          exporters: [jaeger_storage_exporter]
    extensions:
      jaeger_query:
        storage:
          traces: memstore
      jaeger_storage:
        backends:
          memstore:
            memory:                    # in-memory storage: simple for demos, not durable
              max_traces: 100000
    receivers:
      otlp:
        protocols:
          grpc:
            endpoint: 0.0.0.0:4317
          http:
            endpoint: 0.0.0.0:4318
    processors:
      batch:
        timeout: 1s
        send_batch_size: 8192
    exporters:
      jaeger_storage_exporter:
        trace_storage: memstore
Why this matters
- OTLP gRPC (4317) is the default for .NET auto‑instrumentation
- OTLP HTTP (4318) supports other languages and tools
- Jaeger UI (16686) is built into the Collector
- memstore keeps the demo simple and dependency‑free
- The Operator handles lifecycle, rollout, and health checks
This is Jaeger v2 in its simplest, cleanest form.
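As a preview of how workloads will reach it: in-cluster applications point their OTLP exporters at the Service the Operator creates. The sketch below assumes the <name>-collector Service naming used later in this post and the monitoring namespace; Part 3 wires this up properly via auto-instrumentation, so treat it as orientation rather than something to apply now.

# Sketch only: OTLP gRPC endpoint for in-cluster workloads.
# Assumes the Operator-created Service name and the monitoring namespace
# used throughout this series.
env:
  - name: OTEL_EXPORTER_OTLP_ENDPOINT
    value: "http://jaeger-inmemory-instance-collector.monitoring.svc.cluster.local:4317"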
🖥️ Exposing the Jaeger UI
Once deployed, the Jaeger UI is available on port 16686.
For local access:
kubectl port-forward svc/jaeger-inmemory-instance-collector 16686:16686 -n monitoring
Then open:
http://localhost:16686
This UI is where you’ll verify traces in Part 3.
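If you prefer a terminal check, the Jaeger query API answers on the same port. With the port-forward above still running, this should return a JSON document whose data list stays empty until an application starts sending traces in Part 3:

# Quick API check (assumes the port-forward above is still running)
curl -s http://localhost:16686/api/services
# Expect a JSON response with an empty "data" list until traces arrive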
🚀 Deploying the Collector with Argo CD
You could apply the YAML manually:
kubectl apply -f platform/collector/collector.yaml
However, this series uses GitOps as the primary deployment method.
Argo CD applies the Collector automatically through the applicationset-platform.yaml ApplicationSet, which syncs the contents of:
platform/collector/
into the monitoring namespace.
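The real definition lives in applicationset-platform.yaml in the companion repository. Purely as an illustration of the shape of such a resource (the repository URL, project, and generator details below are placeholders, not the series' actual values), an ApplicationSet that syncs platform/ directories can look like this:

# Illustrative sketch only; see applicationset-platform.yaml in the companion
# repository for the real definition. repoURL, project, and generator details
# are placeholders.
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: platform
  namespace: argocd
spec:
  generators:
    - git:
        repoURL: https://github.com/<your-org>/<your-repo>.git   # placeholder
        revision: main
        directories:
          - path: platform/*
  template:
    metadata:
      name: '{{path.basename}}'
    spec:
      project: default
      source:
        repoURL: https://github.com/<your-org>/<your-repo>.git   # placeholder
        targetRevision: main
        path: '{{path}}'
      destination:
        server: https://kubernetes.default.svc
        namespace: monitoring   # where platform/collector lands in this series
      syncPolicy:
        automated:
          prune: true
          selfHeal: true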
Once Argo CD syncs, you should see:
kubectl get pods -n monitoring
NAME READY STATUS
jaeger-inmemory-instance-collector-xxxxx 1/1 Running
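You can also confirm that the Operator accepted the custom resource itself; this lists the instance along with its basic status columns:

# Confirm the Operator accepted the OpenTelemetryCollector resource
kubectl get opentelemetrycollectors -n monitoring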
🔍 Verifying the Deployment
Before moving on to instrumentation, confirm the Collector is healthy.
1. Check pod readiness
kubectl get pods -n monitoring
2. Check logs
kubectl logs deploy/jaeger-inmemory-instance-collector -n monitoring
You should see logs indicating:
- OTLP receiver started
- Jaeger query service started
- Jaeger UI listening on 16686
3. Check service ports
kubectl get svc jaeger-inmemory-instance-collector -n monitoring
You should see:
- 4317 (gRPC)
- 4318 (HTTP)
- 16686 (UI)
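As an optional end-to-end smoke test before Part 3, you can hand-deliver a single span over OTLP/HTTP and look for it in the UI. This is a sketch: the service name, trace and span IDs, and timestamps below are arbitrary test values.

# Optional smoke test: send one hand-written span over OTLP/HTTP.
# Run the port-forward in a second terminal, or background it as here.
kubectl port-forward svc/jaeger-inmemory-instance-collector 4318:4318 -n monitoring &

curl -X POST http://localhost:4318/v1/traces \
  -H "Content-Type: application/json" \
  -d '{
    "resourceSpans": [{
      "resource": {
        "attributes": [{
          "key": "service.name",
          "value": { "stringValue": "smoke-test" }
        }]
      },
      "scopeSpans": [{
        "spans": [{
          "traceId": "5b8efff798038103d269b633813fc60c",
          "spanId": "eee19b7ec3c1b174",
          "name": "smoke-test-span",
          "kind": 2,
          "startTimeUnixNano": "1700000000000000000",
          "endTimeUnixNano": "1700000001000000000"
        }]
      }]
    }]
  }'

# A "smoke-test" service should then show up in the Jaeger UI on port 16686.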
🧭 What’s Next
With Jaeger v2 deployed and ready to receive traces, the next step is to instrument an application. In Part 3, we’ll walk through:
- How .NET auto‑instrumentation works
- How the OpenTelemetry Operator injects the .NET agent automatically
- How to enable instrumentation using an Instrumentation resource
- How to annotate a Deployment to opt in
- How to deploy the app and see traces appear in Jaeger
This is where the system comes alive.