# OpenTelemetry

Bridge Keystone tracing to your OTel backend (Honeycomb, Tempo, Jaeger) and accept OTLP into Keystone.
Keystone implements both sides of the OpenTelemetry handshake:
- **OTel-shaped events.** Every wrapped LLM call and every `traced()` span includes a `metadata` block following the GenAI semantic conventions: `gen_ai.system`, `gen_ai.request.model`, `gen_ai.usage.input_tokens`, etc.
- **OTLP ingest endpoint.** Point any OTel SDK's exporter at `https://keystone.polarity.so/otel/v1/traces` and the spans land in the same trace store as native Keystone events.
This means you don't have to choose: use Keystone's native SDKs for the rich `wrap()` / `traced()` ergonomics, hand spans off to your existing OTel backend, or keep your existing OTel-native code and just point the exporter at Keystone.
## Forwarding to an existing OTel backend
If you already run Honeycomb / Tempo / Jaeger / Datadog and want Keystone spans there too:
```typescript
import { trace as otelTrace } from "@opentelemetry/api";
import { Keystone } from "@polarityinc/polarity-keystone";

const tracer = otelTrace.getTracer("my-app");
const ks = new Keystone();
ks.initTracing(undefined, { otelTracer: tracer });
// Now every traced() span is also an OTel span with gen_ai.* attributes.
```

After this, every `traced()` span is also an OTel span. Your existing OTel SDK handles export to wherever you've configured (Honeycomb, Tempo, the OTel Collector, etc.).
The Go SDK doesn't take the OTel tracer as a constructor arg — instead, register a flush callback so OTel spans are guaranteed to drain when your process shuts down:
```go
import (
	"context"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/sdk/trace"

	keystone "github.com/Polarityinc/keystone-sdk-go"
)

tp := trace.NewTracerProvider(/* ... */)
otel.SetTracerProvider(tp)

keystone.RegisterOtelFlush(func(ctx context.Context) error {
	return tp.Shutdown(ctx)
})

// Later, flush at shutdown:
keystone.FlushOtel(ctx)
```

## OTLP ingest endpoint
Already have an OTel-instrumented application? Point your exporter at Keystone:
```shell
export OTEL_EXPORTER_OTLP_ENDPOINT=https://keystone.polarity.so
export OTEL_EXPORTER_OTLP_HEADERS="authorization=Bearer ks_live_xxxxx"
```

Or programmatically:
```typescript
import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-proto";

const exporter = new OTLPTraceExporter({
  url: "https://keystone.polarity.so/otel/v1/traces",
  headers: { authorization: `Bearer ${process.env.KEYSTONE_API_KEY}` },
});
```

Spans posted via OTLP are converted to Keystone trace events at ingest:
| OTel attribute | Keystone field |
|---|---|
| `gen_ai.system` | `metadata.gen_ai.system` |
| `gen_ai.request.model` | `cost.model` |
| `gen_ai.usage.input_tokens` | `cost.input_tokens` |
| `gen_ai.usage.output_tokens` | `cost.output_tokens` |
| span name | `tool` |
| span duration | `duration_ms` |
| span status | `status` |
| span kind | `event_type` |
The mapping is loss-free for GenAI-conforming spans. Non-GenAI spans (e.g., HTTP server spans, DB query spans) are stored as `tool_call` events with the OTel attributes preserved in `metadata`.
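To make the mapping concrete, here is a sketch of the Keystone event a GenAI-conforming OTLP span would produce at ingest. The field placements follow the table above; the specific values and the surrounding event shape are illustrative, not captured from a real ingest:

```json
{
  "event_type": "llm_call",
  "tool": "chat claude-sonnet-4-5",
  "duration_ms": 1840,
  "status": "ok",
  "cost": {
    "model": "claude-sonnet-4-5",
    "input_tokens": 4200,
    "output_tokens": 1800
  },
  "metadata": {
    "gen_ai.system": "anthropic"
  }
}
```

Here the span name became `tool`, the duration and status came over directly, and the `gen_ai.*` usage attributes populated the `cost` block.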
## Round-tripping
Combining the two directions:
1. Your application uses Keystone's `wrap()` for native ergonomics.
2. Keystone emits each span with both native fields and OTel-conforming `metadata`.
3. The OTel tracer (passed to `initTracing`) duplicates the span to your OTel pipeline.
4. The OTel pipeline exports to your backend and also to Keystone via OTLP.
5. Keystone deduplicates by `span_id`, so you don't see double events.
For most teams, just one direction matters — pick whichever fits your existing infrastructure.
## What gets the gen_ai.* treatment

`wrap()`-emitted events for LLM calls include the full GenAI metadata block:
```json
{
  "ts": "...",
  "event_type": "llm_call",
  "tool": "anthropic.create",
  "metadata": {
    "gen_ai.system": "anthropic",
    "gen_ai.request.model": "claude-sonnet-4-5",
    "gen_ai.response.model": "claude-sonnet-4-5",
    "gen_ai.usage.input_tokens": 4200,
    "gen_ai.usage.output_tokens": 1800,
    "gen_ai.operation.name": "chat"
  }
}
```

`traced()`-emitted custom spans don't get GenAI attributes by default, since they're not LLM calls. If you want to add them, set them on the OTel span manually:
```typescript
import { trace } from "@opentelemetry/api";

const span = trace.getActiveSpan();
span?.setAttribute("gen_ai.system", "my-rag-pipeline");
```

## Honeycomb, Tempo, Jaeger, Datadog
| Backend | OTLP endpoint pattern |
|---|---|
| Honeycomb | `https://api.honeycomb.io/v1/traces` (with `x-honeycomb-team` header) |
| Tempo (self-hosted) | `https://your-tempo.example.com/v1/traces` |
| Jaeger (self-hosted) | `https://your-jaeger.example.com/v1/traces` |
| Datadog | `https://trace.agent.datadoghq.com/v0.7/traces` (with DD agent intermediary) |
| New Relic | `https://otlp.nr-data.net:4318/v1/traces` (with `api-key` header) |
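As a worked example, the Honeycomb row translates to the standard OTel exporter environment variables like this. `HONEYCOMB_API_KEY` is a placeholder for your own key:

```shell
# Send OTLP traces straight to Honeycomb, per the table above.
export OTEL_EXPORTER_OTLP_TRACES_ENDPOINT=https://api.honeycomb.io/v1/traces
export OTEL_EXPORTER_OTLP_HEADERS="x-honeycomb-team=${HONEYCOMB_API_KEY}"
```

The other rows follow the same pattern: set the traces endpoint, plus whatever auth header the backend expects.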
Use the OTel Collector if you want to multiplex to multiple backends including Keystone:
# otel-collector.yaml
receivers:
otlp:
protocols:
grpc: { endpoint: 0.0.0.0:4317 }
http: { endpoint: 0.0.0.0:4318 }
exporters:
otlp/keystone:
endpoint: https://keystone.polarity.so/otel/v1/traces
headers: { authorization: "Bearer ${KEYSTONE_API_KEY}" }
otlp/honeycomb:
endpoint: api.honeycomb.io:443
headers: { x-honeycomb-team: "${HONEYCOMB_API_KEY}" }
service:
pipelines:
traces:
receivers: [otlp]
exporters: [otlp/keystone, otlp/honeycomb]Your application points at the local Collector; the Collector fans out to both Keystone and Honeycomb.
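With the Collector deployed, the application-side change is just the exporter target. A sketch using the standard OTel environment variables, assuming the Collector runs locally on its default HTTP port:

```shell
# The app exports to the local Collector; no backend credentials needed here.
export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318
export OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf
```

All backend API keys then live only in the Collector config, not in the application environment.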
## Flush callbacks
Both SDKs support a flush hook so you can guarantee OTel spans drain before your process exits:
```typescript
import { registerOtelFlush, flushOtel } from "@polarityinc/polarity-keystone";

registerOtelFlush(async () => {
  await tracerProvider.forceFlush();
  await tracerProvider.shutdown();
});

// At shutdown:
process.on("SIGTERM", async () => {
  await flushOtel();
  process.exit(0);
});
```

## What about logs and metrics?
Keystone is a tracing-first product. For logs, use whatever logging stack you already have. For metrics:
- Aggregate trace metrics are computed and exposed via the `experiments.metrics(id)` API and the Traces dashboard tab.
- Custom metrics can be derived from your OTel spans in your existing OTel pipeline.
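One common way to derive custom metrics in an existing OTel pipeline is the Collector's `spanmetrics` connector, which turns span streams into rate/duration metrics. A sketch, assuming a Collector distribution that bundles the connector (e.g., the contrib build); the Prometheus port is illustrative:

```yaml
receivers:
  otlp:
    protocols:
      http: { endpoint: 0.0.0.0:4318 }

connectors:
  spanmetrics: {}

exporters:
  prometheus:
    endpoint: 0.0.0.0:8889

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [spanmetrics]
    metrics:
      receivers: [spanmetrics]
      exporters: [prometheus]
```

This runs entirely in your OTel pipeline; Keystone sees the same trace stream either way.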
## Self-hosted considerations

For self-hosted Keystone, configure the OTLP receiver in `keystone.yaml`:
```yaml
otel:
  enabled: true
  endpoint: 0.0.0.0:4318        # HTTP/JSON
  grpc_endpoint: 0.0.0.0:4317   # gRPC (optional)
  cors_allowed_origins:
    - "*"
```

Or run the OTel Collector in front of Keystone for protocol translation, sampling, batching, etc.
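A minimal sketch of that front-Collector setup with batching and head sampling. The Keystone address (`keystone.internal`) and the 25% sampling rate are placeholders, not defaults:

```yaml
# Collector in front of self-hosted Keystone: sample and batch before ingest.
receivers:
  otlp:
    protocols:
      grpc: { endpoint: 0.0.0.0:4317 }

processors:
  probabilistic_sampler:
    sampling_percentage: 25
  batch: {}

exporters:
  otlphttp/keystone:
    endpoint: http://keystone.internal:4318

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [probabilistic_sampler, batch]
      exporters: [otlphttp/keystone]
```

This also gives you protocol translation for free: apps can speak gRPC to the Collector even if you only enable Keystone's HTTP receiver.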