Basic
Qualifire provides an SDK to help you integrate our services into your application. The SDK is available for the following languages:
- Node.js
- Python
Node.js
To use the Node.js SDK, install it using npm:
npm install qualifire
Then import and initialize the client:
import { Qualifire } from "qualifire";
const qualifire = new Qualifire({
apiKey: "YOUR_API_KEY", // Optional: defaults to QUALIFIRE_API_KEY env var
baseUrl: "https://proxy.qualifire.ai", // Optional: custom base URL
});
If the apiKey argument is not provided, the SDK will look for a value in the environment variable QUALIFIRE_API_KEY.
Types
The SDK exports the following types for use with evaluations:
import type {
EvaluationProxyAPIRequest,
EvaluationRequestV2,
EvaluationResponse,
Framework,
LLMMessage,
ModelMode,
PolicyTarget,
} from "qualifire";
// Framework - supported LLM frameworks
type Framework = "openai" | "vercelai" | "gemini" | "claude";
// ModelMode - controls quality/speed tradeoff for checks
type ModelMode = "speed" | "balanced" | "quality";
// PolicyTarget - specifies what to check
type PolicyTarget = "input" | "output" | "both";
// LLMMessage - message format for evaluations
interface LLMMessage {
role: string;
content?: string;
tool_calls?: LLMToolCall[];
}
Request-Response Mode (Recommended)
The SDK supports direct integration with popular LLM frameworks. Simply pass the original request and response objects along with the framework name. Supported frameworks: openai, vercelai, gemini, claude.
import { Qualifire } from "qualifire";
import OpenAI from "openai";
const qualifire = new Qualifire({ apiKey: "YOUR_QUALIFIRE_API_KEY" });
const openai = new OpenAI({ apiKey: "YOUR_OPENAI_API_KEY" });
// Make your OpenAI request
const openAiRequest = {
model: "gpt-4o",
messages: [
{
role: "system",
content: "You are a helpful assistant that can answer questions.",
},
{
role: "user",
content: [
{
type: "text",
text: "Is the sky blue?",
},
],
},
],
};
const openAiResponse = await openai.chat.completions.create(openAiRequest);
// Evaluate with Qualifire
const qualifireResponse = await qualifire.evaluate({
framework: "openai",
request: openAiRequest,
response: openAiResponse,
contentModerationCheck: true,
groundingCheck: true,
hallucinationsCheck: true,
instructionsFollowingCheck: true,
piiCheck: true,
promptInjections: true,
toolSelectionQualityCheck: false,
});
console.log(qualifireResponse);
Streaming Mode
For streaming responses, collect the chunks and pass them as an array:
import { Qualifire } from "qualifire";
import OpenAI from "openai";
const qualifire = new Qualifire({ apiKey: "YOUR_QUALIFIRE_API_KEY" });
const openai = new OpenAI({ apiKey: "YOUR_OPENAI_API_KEY" });
const openAiRequest = {
stream: true,
model: "gpt-4o",
messages: [
{
role: "system",
content: "You are a helpful assistant that can answer questions.",
},
{
role: "user",
content: [
{
type: "text",
text: "Is the sky blue?",
},
],
},
],
};
const openAiResponseStream = await openai.chat.completions.create(
openAiRequest
);
// Collect all chunks
const responseChunks: any[] = [];
for await (const chunk of openAiResponseStream) {
responseChunks.push(chunk);
}
// Evaluate with collected chunks
const qualifireResponse = await qualifire.evaluate({
framework: "openai",
request: openAiRequest,
response: responseChunks,
groundingCheck: true,
promptInjections: true,
});
Fine-grained Messages Mode
You can also send parsed messages directly for evaluation:
import { Qualifire } from "qualifire";
const qualifire = new Qualifire({ apiKey: "YOUR_QUALIFIRE_API_KEY" });
const response = await qualifire.evaluate({
messages: [
{ role: "user", content: "What is the capital of France?" },
{ role: "assistant", content: "Paris" },
],
contentModerationCheck: true,
hallucinationsCheck: true,
groundingCheck: true,
piiCheck: true,
promptInjections: true,
assertions: ["don't give medical advice"],
});
Alternatively, you can pass input and output strings directly:
const response = await qualifire.evaluate({
input: "What is the capital of France?",
output: "Paris",
contentModerationCheck: true,
hallucinationsCheck: true,
});
Evaluation with Mode Configuration
You can fine-tune the quality/speed tradeoff for each check type:
const response = await qualifire.evaluate({
messages: [
{ role: "user", content: "What is the capital of France?" },
{ role: "assistant", content: "Paris" },
],
hallucinationsCheck: true,
groundingCheck: true,
assertions: ["don't give medical advice"],
// Mode configuration
hallucinationsMode: "quality",
groundingMode: "balanced",
assertionsMode: "speed",
consistencyMode: "balanced",
// Multi-turn options
groundingMultiTurnMode: true,
policyMultiTurnMode: true,
// Policy target
policyTarget: "output",
});
Invoke Evaluation by ID
You can invoke a pre-configured evaluation by its ID:
const qualifire = new Qualifire({ apiKey: "YOUR_QUALIFIRE_API_KEY" });
const response = await qualifire.invokeEvaluation({
input: "What is the capital of France?",
output: "Paris",
evaluationId: "g2r8puzojwb8q6yi2f6x162a", // Get this from the evaluations page
});
Evaluation Response
The evaluate method returns an EvaluationResponse object:
const response = await qualifire.evaluate({
input: "Hello",
output: "World",
piiCheck: true,
});
console.log(response?.status); // "passed" or "failed"
console.log(response?.score); // Overall score (0-100)
response?.evaluationResults.forEach((item) => {
console.log(`Type: ${item.type}`);
item.results.forEach((result) => {
console.log(` - ${result.name}: ${result.label} (score: ${result.score})`);
console.log(` Reason: ${result.reason}`);
});
});
// Example output:
// {
// "status": "failed",
// "score": 75,
// "evaluationResults": [
// {
// "type": "grounding",
// "results": [
// {
// "name": "grounding",
// "score": 75,
// "label": "INFERABLE",
// "confidence_score": 100,
// "reason": "The AI's output provides a detailed explanation..."
// }
// ]
// }
// ]
// }
Instrumentation (Tracing)
The SDK provides automatic tracing with the init() method:
import { Qualifire } from "qualifire";
import OpenAI from "openai";
const qualifire = new Qualifire({ apiKey: "YOUR_QUALIFIRE_API_KEY" });
// Initialize tracing
qualifire.init();
const openai = new OpenAI({
apiKey: "YOUR_OPENAI_API_KEY",
  baseURL: "https://proxy.qualifire.ai/api/providers/openai",
defaultHeaders: {
"X-Qualifire-API-Key": "YOUR_QUALIFIRE_API_KEY",
},
});
const response = await openai.chat.completions.create({
model: "gpt-4o",
messages: [
{
role: "user",
content: "Tell me a joke",
},
],
});
Deprecated Parameters
The following parameters are deprecated and will automatically enable contentModerationCheck:

| Deprecated | Use Instead |
|---|---|
| dangerousContentCheck | contentModerationCheck |
| harassmentCheck | contentModerationCheck |
| hateSpeechCheck | contentModerationCheck |
| sexualContentCheck | contentModerationCheck |
The following snake_case parameters are deprecated in favor of their camelCase equivalents:

| Deprecated | Use Instead |
|---|---|
| grounding_check | groundingCheck |
| hallucinations_check | hallucinationsCheck |
| pii_check | piiCheck |
| prompt_injections | promptInjections |
| tool_selection_quality_check | toolSelectionQualityCheck |
| syntax_checks | syntaxChecks |
| instructions_following_check | instructionsFollowingCheck |
Python
To use the Python SDK, install it using pip:
pip install qualifire
Then create a client and invoke an evaluation:
import qualifire
client = qualifire.client.Client(
api_key="YOUR API KEY",
)
res = client.invoke_evaluation(
input="what is the capital of France",
output="Paris",
evaluation_id='g2r8puzojwb8q6yi2f6x162a' # get this from the evaluations page
)
If api_key is not provided, the SDK will look for a value in the environment variable QUALIFIRE_API_KEY.
Client Configuration
The Client constructor accepts the following parameters:
client = qualifire.client.Client(
api_key="YOUR API KEY", # Optional: defaults to QUALIFIRE_API_KEY env var
base_url="https://...", # Optional: custom base URL
version="v1", # Optional: API version
debug=False, # Optional: enable debug mode
verify=True, # Optional: SSL verification
)
Types
The SDK provides the following types for use with evaluations:
from qualifire.types import (
LLMMessage,
LLMToolCall,
LLMToolDefinition,
ModelMode,
PolicyTarget,
SyntaxCheckArgs,
)
# ModelMode - controls quality/speed tradeoff for checks
ModelMode.SPEED # Fastest, lower accuracy
ModelMode.BALANCED # Default balance
ModelMode.QUALITY # Highest accuracy, slower
# PolicyTarget - specifies what to check
PolicyTarget.INPUT # Check only input
PolicyTarget.OUTPUT # Check only output
PolicyTarget.BOTH # Check both (default)
# Message types
message = LLMMessage(
role="user",
content="Hello, world!",
tool_calls=None, # Optional list of LLMToolCall
)
tool_call = LLMToolCall(
name="get_weather",
arguments={"location": "New York"},
id="call_123", # Optional
)
tool_definition = LLMToolDefinition(
name="get_weather",
description="Get weather for a location",
parameters={
"type": "object",
"properties": {
"location": {"type": "string"}
},
"required": ["location"]
},
)
# Syntax check arguments
syntax_check = SyntaxCheckArgs(args="strict")
Basic Evaluation
from qualifire.types import ModelMode, PolicyTarget
res = client.evaluate(
input="what is the capital of France",
output="Paris",
prompt_injections=True,
pii_check=True,
hallucinations_check=True,
grounding_check=True,
content_moderation_check=True, # Replaces deprecated content checks
assertions=["don't give medical advice"],
)
Evaluation with Mode Configuration
You can fine-tune the quality/speed tradeoff for each check type:
from qualifire.types import ModelMode, PolicyTarget
res = client.evaluate(
input="what is the capital of France",
output="Paris",
hallucinations_check=True,
grounding_check=True,
assertions=["don't give medical advice"],
# Mode configuration
hallucinations_mode=ModelMode.QUALITY,
grounding_mode=ModelMode.BALANCED,
assertions_mode=ModelMode.SPEED,
consistency_mode=ModelMode.BALANCED,
# Multi-turn options
grounding_multi_turn_mode=True,
policy_multi_turn_mode=True,
# Policy target
policy_target=PolicyTarget.OUTPUT,
)
Evaluation with Tool Calls
You can evaluate tool selection quality by passing the conversation messages together with the available tool definitions:
from qualifire.types import LLMMessage, LLMToolCall, LLMToolDefinition, ModelMode
res = client.evaluate(
messages=[
LLMMessage(
role="user",
content="What is the weather tomorrow in New York?",
),
LLMMessage(
role="assistant",
content="please run the following tool",
tool_calls=[
LLMToolCall(
id="tool_call_id",
name="get_weather_forecast",
arguments={
"location": "New York, NY",
"date": "tomorrow",
},
),
],
),
],
available_tools=[
LLMToolDefinition(
name="get_weather_forecast",
description="Provides the weather forecast for a given location and date.",
parameters={
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The city and state, e.g., San Francisco, CA",
},
"date": {
"type": "string",
"description": "The date for the forecast, e.g., tomorrow, or YYYY-MM-DD",
},
},
"required": [
"location",
"date",
],
},
),
],
tool_selection_quality_check=True,
tsq_mode=ModelMode.BALANCED,
)
Syntax Checks
You can validate the syntax of outputs (e.g., JSON, SQL):
from qualifire.types import SyntaxCheckArgs
res = client.evaluate(
input="Generate a JSON object with a name field",
output='{"name": "John"}',
syntax_checks={
"json": SyntaxCheckArgs(args="strict"),
},
)
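The syntax check itself runs on Qualifire's side, but as a local illustration of what a strict JSON check verifies (this sketch is not the SDK's implementation), a strict check simply requires the output to parse as valid JSON:

```python
import json


def is_valid_json(output: str) -> bool:
    """Return True if the model output parses as strict JSON."""
    try:
        json.loads(output)
        return True
    except json.JSONDecodeError:
        return False


print(is_valid_json('{"name": "John"}'))  # True
print(is_valid_json('{name: John}'))      # False: unquoted keys are not valid JSON
```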
Evaluation Response
The evaluate method returns an EvaluationResponse object:
res = client.evaluate(
input="Hello",
output="World",
pii_check=True,
)
print(res.status) # Status of the evaluation
print(res.score) # Overall score
for item in res.evaluationResults:
print(f"Type: {item.type}")
for result in item.results:
print(f" - {result.name}: {result.label} (score: {result.score})")
print(f" Flagged: {result.flagged}")
print(f" Reason: {result.reason}")
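To act on the results, you can collect the flagged items and decide whether to block or retry the response. The helper below is a hypothetical sketch operating on plain dicts that mirror the response shape shown above; the SDK itself returns typed result objects:

```python
def collect_flagged(evaluation_results: list[dict]) -> list[str]:
    """Return 'type/name' identifiers for every flagged result."""
    flagged = []
    for item in evaluation_results:
        for result in item.get("results", []):
            if result.get("flagged"):
                flagged.append(f'{item["type"]}/{result["name"]}')
    return flagged


# Example using the response shape from the section above (values illustrative):
sample = [
    {"type": "pii", "results": [{"name": "email", "flagged": True, "score": 10}]},
    {"type": "grounding", "results": [{"name": "grounding", "flagged": False, "score": 90}]},
]
print(collect_flagged(sample))  # ['pii/email']
```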
Deprecated Parameters
The following parameters are deprecated and will automatically enable content_moderation_check:
- dangerous_content_check → use content_moderation_check
- harassment_check → use content_moderation_check
- hate_speech_check → use content_moderation_check
- sexual_content_check → use content_moderation_check
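If you are migrating older code, a small helper can rewrite the deprecated keyword arguments before calling evaluate. This is a hypothetical convenience for illustration, not part of the SDK:

```python
# Deprecated flags that are all replaced by content_moderation_check
DEPRECATED_CONTENT_CHECKS = {
    "dangerous_content_check",
    "harassment_check",
    "hate_speech_check",
    "sexual_content_check",
}


def migrate_kwargs(kwargs: dict) -> dict:
    """Replace deprecated content-check flags with content_moderation_check."""
    migrated = dict(kwargs)
    enabled = False
    for key in DEPRECATED_CONTENT_CHECKS:
        if migrated.pop(key, False):
            enabled = True
    if enabled:
        migrated["content_moderation_check"] = True
    return migrated


print(migrate_kwargs({"harassment_check": True, "pii_check": True}))
# {'pii_check': True, 'content_moderation_check': True}
```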
Instrumentation
The SDK provides a way to instrument your code with the init function.
import qualifire
from openai import OpenAI
# Configure Qualifire tracing
qualifire.init(
gateway_url="https://proxy.qualifire.ai",
api_key="YOUR_QUALIFIRE_API_KEY"
)
client = OpenAI(
api_key="OpenAI API Key",
base_url="https://proxy.qualifire.ai/api/providers/openai",
default_headers={
"X-Qualifire-API-Key": "YOUR API KEY",
}
)
res = client.chat.completions.create(
model="gpt-3.5-turbo",
messages=[
{
"role": "user",
"content": "tell me a joke",
},
],
)
You can also trace LangChain and LangGraph applications:
import qualifire
from langchain.chat_models import init_chat_model
from langgraph.prebuilt import create_react_agent
qualifire.init(
api_key="YOUR API KEY",
)
tools = ...
llm = init_chat_model(
"openai:gpt-4.1",
api_key="Your OpenAI Api Key",
base_url="https://proxy.qualifire.ai/api/providers/openai/",
default_headers={
"X-Qualifire-API-Key": "Your Qualifire Api Key"
},
)
agent = create_react_agent(
llm,
tools,
prompt="system prompt...",
)
question = "Tell me a joke"
for step in agent.stream(
{"messages": [{"role": "user", "content": question}]},
stream_mode="values",
):
step["messages"][-1].pretty_print()
API Reference documentation is here.