Basic

Qualifire provides an SDK to help you integrate our services into your application. The SDK is available for the following languages:

Node.js

To use the Node.js SDK, you need to install it using npm:
npm install qualifire
Then, you can use the SDK in your application:
import { Qualifire } from "qualifire";

const qualifire = new Qualifire({
  apiKey: "YOUR_API_KEY", // Optional: defaults to QUALIFIRE_API_KEY env var
  baseUrl: "https://proxy.qualifire.ai", // Optional: custom base URL
});
If the apiKey argument is not provided, the SDK will look for a value in the environment variable QUALIFIRE_API_KEY.
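The fallback behavior can be illustrated with a small sketch. This is not the SDK's actual source, just a hypothetical function mirroring the key-resolution rule described above:

```typescript
// Sketch of the key-resolution behavior described above
// (illustrative only; the real SDK handles this inside its constructor).
function resolveApiKey(explicitKey?: string): string {
  const key = explicitKey ?? process.env.QUALIFIRE_API_KEY;
  if (!key) {
    throw new Error(
      "No API key: pass apiKey or set the QUALIFIRE_API_KEY environment variable"
    );
  }
  return key;
}

// With QUALIFIRE_API_KEY set, `new Qualifire({})` works without an explicit key:
process.env.QUALIFIRE_API_KEY = "env-key";
console.log(resolveApiKey()); // "env-key"
console.log(resolveApiKey("explicit")); // "explicit"
```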

Types

The SDK exports the following types for use with evaluations:
import type {
  EvaluationProxyAPIRequest,
  EvaluationRequestV2,
  EvaluationResponse,
  Framework,
  LLMMessage,
  ModelMode,
  PolicyTarget,
} from "qualifire";

// Framework - supported LLM frameworks
type Framework = "openai" | "vercelai" | "gemini" | "claude";

// ModelMode - controls quality/speed tradeoff for checks
type ModelMode = "speed" | "balanced" | "quality";

// PolicyTarget - specifies what to check
type PolicyTarget = "input" | "output" | "both";

// LLMMessage - message format for evaluations
interface LLMMessage {
  role: string;
  content?: string;
  tool_calls?: LLMToolCall[];
}
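As an illustration, here is a multi-turn conversation expressed with these types. Note that LLMToolCall is not shown above; the shape used below is an assumption modeled on the OpenAI tool-call format, so check the package's exported types for the exact definition:

```typescript
// LLMToolCall's shape is an assumption (mirrors the OpenAI tool-call format);
// the SDK's exported type may differ.
interface LLMToolCall {
  id: string;
  type: "function";
  function: { name: string; arguments: string };
}

interface LLMMessage {
  role: string;
  content?: string;
  tool_calls?: LLMToolCall[];
}

const conversation: LLMMessage[] = [
  { role: "user", content: "What's the weather in Paris?" },
  {
    role: "assistant",
    tool_calls: [
      {
        id: "call_1",
        type: "function",
        function: { name: "get_weather", arguments: '{"city":"Paris"}' },
      },
    ],
  },
  { role: "tool", content: '{"tempC": 18}' },
  { role: "assistant", content: "It is 18°C in Paris." },
];

console.log(conversation.length); // 4
```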
The SDK supports direct integration with popular LLM frameworks (openai, vercelai, gemini, claude). Simply pass the original request and response objects along with the framework name:
import { Qualifire } from "qualifire";
import OpenAI from "openai";

const qualifire = new Qualifire({ apiKey: "YOUR_QUALIFIRE_API_KEY" });
const openai = new OpenAI({ apiKey: "YOUR_OPENAI_API_KEY" });

// Make your OpenAI request
const openAiRequest = {
  model: "gpt-4o",
  messages: [
    {
      role: "system",
      content: "You are a helpful assistant that can answer questions.",
    },
    {
      role: "user",
      content: [
        {
          type: "text",
          text: "Is the sky blue?",
        },
      ],
    },
  ],
};

const openAiResponse = await openai.chat.completions.create(openAiRequest);

// Evaluate with Qualifire
const qualifireResponse = await qualifire.evaluate({
  framework: "openai",
  request: openAiRequest,
  response: openAiResponse,
  contentModerationCheck: true,
  groundingCheck: true,
  hallucinationsCheck: true,
  instructionsFollowingCheck: true,
  piiCheck: true,
  promptInjections: true,
  toolSelectionQualityCheck: false,
});

console.log(qualifireResponse);

Streaming Mode

For streaming responses, collect the chunks and pass them as an array:
import { Qualifire } from "qualifire";
import OpenAI from "openai";

const qualifire = new Qualifire({ apiKey: "YOUR_QUALIFIRE_API_KEY" });
const openai = new OpenAI({ apiKey: "YOUR_OPENAI_API_KEY" });

const openAiRequest = {
  stream: true,
  model: "gpt-4o",
  messages: [
    {
      role: "system",
      content: "You are a helpful assistant that can answer questions.",
    },
    {
      role: "user",
      content: [
        {
          type: "text",
          text: "Is the sky blue?",
        },
      ],
    },
  ],
};

const openAiResponseStream = await openai.chat.completions.create(
  openAiRequest
);

// Collect all chunks
const responseChunks: any[] = [];
for await (const chunk of openAiResponseStream) {
  responseChunks.push(chunk);
}

// Evaluate with collected chunks
const qualifireResponse = await qualifire.evaluate({
  framework: "openai",
  request: openAiRequest,
  response: responseChunks,
  groundingCheck: true,
  promptInjections: true,
});

Fine-grained Messages Mode

You can also send parsed messages directly for evaluation:
import { Qualifire } from "qualifire";

const qualifire = new Qualifire({ apiKey: "YOUR_QUALIFIRE_API_KEY" });

const response = await qualifire.evaluate({
  messages: [
    { role: "user", content: "What is the capital of France?" },
    { role: "assistant", content: "Paris" },
  ],
  contentModerationCheck: true,
  hallucinationsCheck: true,
  groundingCheck: true,
  piiCheck: true,
  promptInjections: true,
  assertions: ["don't give medical advice"],
});
Or with simple input/output:
const response = await qualifire.evaluate({
  input: "What is the capital of France?",
  output: "Paris",
  contentModerationCheck: true,
  hallucinationsCheck: true,
});

Evaluation with Mode Configuration

You can fine-tune the quality/speed tradeoff for each check type:
const response = await qualifire.evaluate({
  messages: [
    { role: "user", content: "What is the capital of France?" },
    { role: "assistant", content: "Paris" },
  ],
  hallucinationsCheck: true,
  groundingCheck: true,
  assertions: ["don't give medical advice"],
  // Mode configuration
  hallucinationsMode: "quality",
  groundingMode: "balanced",
  assertionsMode: "speed",
  consistencyMode: "balanced",
  // Multi-turn options
  groundingMultiTurnMode: true,
  policyMultiTurnMode: true,
  // Policy target
  policyTarget: "output",
});

Invoke Evaluation by ID

You can invoke a pre-configured evaluation by its ID:
const qualifire = new Qualifire({ apiKey: "YOUR_QUALIFIRE_API_KEY" });

const response = await qualifire.invokeEvaluation({
  input: "What is the capital of France?",
  output: "Paris",
  evaluationId: "g2r8puzojwb8q6yi2f6x162a", // Get this from the evaluations page
});

Evaluation Response

The evaluate method returns an EvaluationResponse object:
const response = await qualifire.evaluate({
  input: "Hello",
  output: "World",
  piiCheck: true,
});

console.log(response?.status); // "passed" or "failed"
console.log(response?.score); // Overall score (0-100)

response?.evaluationResults.forEach((item) => {
  console.log(`Type: ${item.type}`);
  item.results.forEach((result) => {
    console.log(`  - ${result.name}: ${result.label} (score: ${result.score})`);
    console.log(`    Reason: ${result.reason}`);
  });
});

// Example output:
// {
//   "status": "failed",
//   "score": 75,
//   "evaluationResults": [
//     {
//       "type": "grounding",
//       "results": [
//         {
//           "name": "grounding",
//           "score": 75,
//           "label": "INFERABLE",
//           "confidence_score": 100,
//           "reason": "The AI's output provides a detailed explanation..."
//         }
//       ]
//     }
//   ]
// }

Instrumentation (Tracing)

The SDK provides automatic tracing with the init() method:
import { Qualifire } from "qualifire";
import OpenAI from "openai";

const qualifire = new Qualifire({ apiKey: "YOUR_QUALIFIRE_API_KEY" });

// Initialize tracing
qualifire.init();

const openai = new OpenAI({
  apiKey: "YOUR_OPENAI_API_KEY",
  baseURL: "https://proxy.qualifire.ai/api/providers/openai",
  defaultHeaders: {
    "X-Qualifire-API-Key": "YOUR_QUALIFIRE_API_KEY",
  },
});

const response = await openai.chat.completions.create({
  model: "gpt-4o",
  messages: [
    {
      role: "user",
      content: "Tell me a joke",
    },
  ],
});
Evaluations and traces will then appear in the Qualifire web UI for review.

Deprecated Parameters

The following parameters are deprecated and will automatically enable contentModerationCheck:
Deprecated                        Use Instead
dangerousContentCheck             contentModerationCheck
harassmentCheck                   contentModerationCheck
hateSpeechCheck                   contentModerationCheck
sexualContentCheck                contentModerationCheck
Snake_case variants are also deprecated in favor of camelCase:
Deprecated                        Use Instead
grounding_check                   groundingCheck
hallucinations_check              hallucinationsCheck
pii_check                         piiCheck
prompt_injections                 promptInjections
tool_selection_quality_check      toolSelectionQualityCheck
syntax_checks                     syntaxChecks
instructions_following_check      instructionsFollowingCheck
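The deprecated flags map one-to-one onto the replacements above, so older call sites can be rewritten mechanically. A small hypothetical migration helper (not part of the SDK) that renames keys in a legacy options object:

```typescript
// Hypothetical migration helper -- NOT part of the qualifire SDK.
// Maps the deprecated flags listed above onto their replacements.
const DEPRECATED_TO_CURRENT: Record<string, string> = {
  dangerousContentCheck: "contentModerationCheck",
  harassmentCheck: "contentModerationCheck",
  hateSpeechCheck: "contentModerationCheck",
  sexualContentCheck: "contentModerationCheck",
  grounding_check: "groundingCheck",
  hallucinations_check: "hallucinationsCheck",
  pii_check: "piiCheck",
  prompt_injections: "promptInjections",
  tool_selection_quality_check: "toolSelectionQualityCheck",
  syntax_checks: "syntaxChecks",
  instructions_following_check: "instructionsFollowingCheck",
};

function migrateOptions(
  opts: Record<string, unknown>
): Record<string, unknown> {
  const migrated: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(opts)) {
    // Rename deprecated keys; pass everything else through unchanged.
    migrated[DEPRECATED_TO_CURRENT[key] ?? key] = value;
  }
  return migrated;
}

console.log(migrateOptions({ pii_check: true, dangerousContentCheck: true }));
// { piiCheck: true, contentModerationCheck: true }
```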
Full details are available in the API Reference documentation.