Send OpenTelemetry (OTEL) traces from LiteLLM to Qualifire for complete observability across your LLM calls.
Looking for real-time guardrails? Check out the Qualifire Guardrails Integration for content moderation, prompt injection detection, and more.

Pre-Requisites

  1. Create an account on Qualifire
  2. Get your API key from the Qualifire dashboard
  3. Install LiteLLM:

pip install litellm

Quick Start

Use just 2 lines of code to send OpenTelemetry traces across all providers to Qualifire.
import litellm
import os

# Set OpenTelemetry configuration for Qualifire
os.environ["OTEL_EXPORTER"] = "otlp_http"
os.environ["OTEL_ENDPOINT"] = "https://proxy.qualifire.ai/api/telemetry"
os.environ["OTEL_HEADERS"] = "X-Qualifire-API-Key=your-qualifire-api-key"

# LLM API Keys
os.environ["OPENAI_API_KEY"] = "your-openai-api-key"

# Set otel as a callback & LiteLLM will send traces to Qualifire
litellm.callbacks = ["otel"]

# OpenAI call - the trace is exported to Qualifire automatically
response = litellm.completion(
  model="gpt-4o",
  messages=[
    {"role": "user", "content": "Hi 👋 - I'm OpenAI"}
  ]
)
print(response.choices[0].message.content)
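
The "otel" callback instruments async calls as well. Below is a minimal sketch using litellm.acompletion, assuming the same OTEL_* environment variables from the snippet above are already set:

import asyncio
import litellm

litellm.callbacks = ["otel"]

async def main():
    # Async completion; the resulting span is exported to Qualifire
    # through the same OTEL configuration as the sync example.
    response = await litellm.acompletion(
        model="gpt-4o",
        messages=[{"role": "user", "content": "Hello from an async call"}]
    )
    print(response.choices[0].message.content)

asyncio.run(main())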

Using with LiteLLM Proxy

1. Setup config.yaml

Configure the LiteLLM proxy with the OpenTelemetry callback:
model_list:
  - model_name: gpt-4o
    litellm_params:
      model: openai/gpt-4o
      api_key: os.environ/OPENAI_API_KEY

litellm_settings:
  callbacks: ["otel"]

general_settings:
  master_key: "sk-1234"

environment_variables:
  OTEL_EXPORTER: "otlp_http"
  OTEL_ENDPOINT: "https://proxy.qualifire.ai/api/telemetry"
  OTEL_HEADERS: "X-Qualifire-API-Key=your-qualifire-api-key"
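
If you prefer not to keep the API key in config.yaml, the same variables can be exported in the shell before starting the proxy instead; a sketch (substitute your actual Qualifire key):

export OTEL_EXPORTER="otlp_http"
export OTEL_ENDPOINT="https://proxy.qualifire.ai/api/telemetry"
export OTEL_HEADERS="X-Qualifire-API-Key=your-qualifire-api-key"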
2. Start the proxy

litellm --config config.yaml
3. Test it!

curl -X POST 'http://0.0.0.0:4000/chat/completions' \
-H 'Content-Type: application/json' \
-H 'Authorization: Bearer sk-1234' \
-d '{ "model": "gpt-4o", "messages": [{"role": "user", "content": "Hello!"}]}'

Environment Variables

Variable        Description
OTEL_EXPORTER   The exporter type; use otlp_http for Qualifire
OTEL_ENDPOINT   The Qualifire telemetry endpoint: https://proxy.qualifire.ai/api/telemetry
OTEL_HEADERS    The authentication header: X-Qualifire-API-Key=<your-api-key>

What Gets Traced?

OpenTelemetry traces capture detailed information about each LLM call:
  • Span data - Start time, end time, and duration
  • Request attributes - Model, messages, parameters
  • Response attributes - Generated content, finish reason
  • Token usage - Prompt tokens, completion tokens, total tokens
  • Error information - Exception details if the call fails
  • Custom attributes - Any metadata you add to your requests
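
For instance, custom attributes can be attached through the metadata parameter on a completion call; a minimal sketch (the key names here are illustrative, not required):

import litellm

litellm.callbacks = ["otel"]

# Illustrative metadata keys; values passed here are forwarded to
# logging callbacks, including the OTEL exporter.
response = litellm.completion(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}],
    metadata={"session_id": "abc-123", "environment": "staging"},
)
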
Once data is in Qualifire, you can:
  • View end-to-end traces across your AI pipeline
  • Analyze latency and performance metrics
  • Debug issues with detailed span information
  • Correlate traces with evaluations and guardrail results

Additional Resources