What is Agent Tracing?

Agent Tracing in Qualifire provides a detailed, end-to-end view of your AI agent’s operations. By leveraging OpenTelemetry’s open wire protocol (OTLP), you can track every step of a complex workflow, from the initial prompt to the final output, including all intermediate LLM calls, tool usage, and decision-making. This observability feature allows you to:
  • Debug complex agent behaviors: Pinpoint the exact source of errors, latency, or unexpected outputs.
  • Analyze performance: Identify bottlenecks and optimize the performance of your agents.
  • Monitor costs: Track token usage and cost for each step in a workflow.
  • Ensure reliability: Understand how your agent chains and tool integrations are functioning in production.

How It Works

Qualifire’s tracing is built on OpenTelemetry, an open-source observability framework, and uses its core concepts:
  • Trace: Represents an entire end-to-end workflow or transaction. For example, a single user request to your chatbot would constitute one trace.
  • Span: Represents a single operation or unit of work within a trace. A trace is composed of one or more spans. For example, an LLM call, a database query, or a tool execution would each be a span.
  • Span Events: These are timestamped events that occur within a span, providing additional context.
Qualifire exposes an OTLP-compatible endpoint at /telemetry/traces. This means you can use any OpenTelemetry-compliant client or SDK to send trace data directly to our platform, allowing for seamless integration with your existing observability setup.
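For illustration, here is a minimal sketch that exports a hand-made span (with a span event) to that endpoint using the vanilla OpenTelemetry Python SDK. It assumes the /telemetry/traces path is served from the same gateway host used elsewhere in this guide, and the X-Qualifire-Api-Key header name is a placeholder; check your Qualifire settings for the exact authentication scheme.

# A minimal sketch using the vanilla OpenTelemetry Python SDK
# (pip install opentelemetry-sdk opentelemetry-exporter-otlp-proto-http).
# The "X-Qualifire-Api-Key" header name below is an assumption, not a documented value.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

exporter = OTLPSpanExporter(
    endpoint="https://proxy.qualifire.ai/telemetry/traces",  # OTLP/HTTP traces endpoint
    headers={"X-Qualifire-Api-Key": "YOUR_QUALIFIRE_API_KEY"},  # placeholder auth header
)
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(exporter))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("my-agent")
with tracer.start_as_current_span("handle_user_request") as span:  # one span in the trace
    span.add_event("tool_call_started", {"tool.name": "search"})   # a timestamped span event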

Getting Started

1. Install the Qualifire SDK

Install the Qualifire client SDK:
pip install qualifire
2. Configure tracing in your application

Configure the SDK in your application’s entrypoint. This will automatically instrument popular libraries like OpenAI and LangChain to send traces to Qualifire.
from qualifire_tracing import configure_qualifire_tracing

# Configure once at your application's entrypoint, before any LLM clients
# are created, so auto-instrumentation can hook the supported libraries.
configure_qualifire_tracing(
    gateway_url="https://proxy.qualifire.ai",
    api_key="YOUR_QUALIFIRE_API_KEY",
)

# Your application code here...
# For example, an OpenAI client call:
from openai import OpenAI
client = OpenAI() # This client is now automatically traced

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Explain OpenTelemetry in one sentence."}]
)
The qualifire package automatically detects and instruments supported libraries like OpenAI, LangChain, and Anthropic upon configuration.
3. Verify traces in the dashboard

Open the Qualifire dashboard to confirm your traces are arriving; see Visualizing Traces below for what the UI shows.
If traces aren’t appearing, verify that your API key is correct and that your application can reach https://proxy.qualifire.ai. Check your application logs for any OTLP export errors.
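One way to surface those export errors is to raise the log level for the OpenTelemetry loggers, as in this debugging sketch:

# Debugging sketch: surface OTLP export errors in your application logs.
import logging

logging.basicConfig(level=logging.INFO)
# The OpenTelemetry SDK logs under the "opentelemetry" logger hierarchy;
# DEBUG shows exporter failures such as connection or authentication errors.
logging.getLogger("opentelemetry").setLevel(logging.DEBUG)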

Visualizing Traces

Once your application is instrumented, traces will appear in the Qualifire dashboard. Our UI provides a rich visualization of your traces, including:
  • A hierarchical tree view of all spans within a trace.
  • Detailed summaries of performance, cost, and governance metrics.
  • In-depth analytics for each span, including attributes, events, and linked model invocations.