Prerequisites
- Create an account on Qualifire
- Get your API key from the Qualifire dashboard
Quick Start
Use just 2 lines of code to send OpenTelemetry traces across all providers to Qualifire.

Using with LiteLLM Proxy
1. Set up config.yaml

Configure the LiteLLM proxy with the OpenTelemetry callback:
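A minimal sketch of such a config; the model entry is an assumed placeholder (swap in your own models), and the two `litellm_settings` lines are what enable the OpenTelemetry callback:

```yaml
model_list:
  - model_name: gpt-4o                    # assumed example model; use your own
    litellm_params:
      model: openai/gpt-4o
      api_key: os.environ/OPENAI_API_KEY  # read from environment

litellm_settings:
  callbacks: ["otel"]                     # enable LiteLLM's OpenTelemetry callback
```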
2. Start the proxy
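With the Qualifire exporter settings in the environment (see the Environment Variables table below), start the proxy. A sketch, assuming the file above is saved as `config.yaml`:

```bash
# Point LiteLLM's OpenTelemetry exporter at Qualifire
export OTEL_EXPORTER="otlp_http"
export OTEL_ENDPOINT="https://proxy.qualifire.ai/api/telemetry"
export OTEL_HEADERS="X-Qualifire-API-Key=<your-api-key>"  # replace with your key

litellm --config config.yaml
```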
3. Test it!
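Send a request through the proxy and the resulting trace should appear in Qualifire. A sketch using curl, assuming the proxy is running on its default port 4000 and the example model name from the config above; drop the Authorization header if you haven't set a master key:

```bash
curl http://0.0.0.0:4000/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-1234" \
  -d '{
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hello, world!"}]
  }'
```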
Environment Variables
| Variable | Description |
|---|---|
| `OTEL_EXPORTER` | The exporter type. Use `otlp_http` for Qualifire. |
| `OTEL_ENDPOINT` | Qualifire telemetry endpoint: `https://proxy.qualifire.ai/api/telemetry` |
| `OTEL_HEADERS` | Authentication header: `X-Qualifire-API-Key=<your-api-key>` |
What Gets Traced?
OpenTelemetry traces capture detailed information about each LLM call:
- Span data - Start time, end time, and duration
- Request attributes - Model, messages, parameters
- Response attributes - Generated content, finish reason
- Token usage - Prompt tokens, completion tokens, total tokens
- Error information - Exception details if the call fails
- Custom attributes - Any metadata you add to your requests
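For example, custom attributes can ride along in the request's `metadata` field, which the LiteLLM proxy forwards to its logging callbacks. A sketch; the metadata keys here are made-up examples, and exactly how they surface on the span depends on the callback configuration:

```bash
curl http://0.0.0.0:4000/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hi"}],
    "metadata": {"session_id": "abc-123", "feature": "onboarding"}
  }'
```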
With traces flowing into Qualifire, you can:
- View end-to-end traces across your AI pipeline
- Analyze latency and performance metrics
- Debug issues with detailed span information
- Correlate traces with evaluations and guardrail results