The Vercel AI SDK is a powerful toolkit for building AI-powered applications. Qualifire integrates seamlessly with the Vercel AI SDK by acting as a proxy layer, enabling you to add guardrails, evaluations, and observability to your AI applications.
1. Create an account and generate an API key

Log in to Qualifire, or create an account if you don't have one. Once you're signed in, generate an API key.
2. Set environment variables

Set the following environment variables in your project:
QUALIFIRE_API_KEY=<your Qualifire API key>
QUALIFIRE_BASE_URL=https://proxy.qualifire.ai/api/providers/openai
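To catch a missing key before the first request fails at the proxy, you can read these variables through a small helper. This is a sketch of our own (`loadQualifireConfig` is not part of the Qualifire or AI SDK APIs) that fails fast when `QUALIFIRE_API_KEY` is unset and falls back to the default OpenAI proxy URL:

```typescript
// Sketch: read Qualifire settings from the environment, failing fast
// when the API key is missing. Not part of any SDK.
function loadQualifireConfig(
  env: Record<string, string | undefined> = process.env
): { apiKey: string; baseURL: string } {
  const apiKey = env.QUALIFIRE_API_KEY;
  if (!apiKey) {
    throw new Error("QUALIFIRE_API_KEY is not set");
  }
  return {
    apiKey,
    // Fall back to the default OpenAI proxy endpoint when unset.
    baseURL:
      env.QUALIFIRE_BASE_URL ??
      "https://proxy.qualifire.ai/api/providers/openai",
  };
}
```

The returned `apiKey` and `baseURL` can then be passed straight into the provider configuration shown in the next steps.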
3. Install the Vercel AI SDK

Install the Vercel AI SDK and your preferred provider SDK:
npm install ai @ai-sdk/openai zod
4. Configure the provider client with Qualifire

Configure your AI provider to route requests through Qualifire by setting the baseURL and adding the Qualifire API key header:
import { createOpenAI } from "@ai-sdk/openai";

const openaiClient = createOpenAI({
  baseURL:
    process.env.QUALIFIRE_BASE_URL ||
    "https://proxy.qualifire.ai/api/providers/openai",
  headers: {
    "X-Qualifire-API-Key": process.env.QUALIFIRE_API_KEY || "",
  },
});

// Use with any model
const model = openaiClient("gpt-4o");

Usage Examples

Basic Text Generation

import { generateText } from "ai";
import { createOpenAI } from "@ai-sdk/openai";

const openaiClient = createOpenAI({
  baseURL: "https://proxy.qualifire.ai/api/providers/openai",
  headers: {
    "X-Qualifire-API-Key": process.env.QUALIFIRE_API_KEY || "",
  },
});

const { text } = await generateText({
  model: openaiClient("gpt-4o"),
  prompt: "What is the capital of France?",
});

console.log(text);

Streaming Responses

import { streamText } from "ai";
import { createOpenAI } from "@ai-sdk/openai";

const openaiClient = createOpenAI({
  baseURL: "https://proxy.qualifire.ai/api/providers/openai",
  headers: {
    "X-Qualifire-API-Key": process.env.QUALIFIRE_API_KEY || "",
  },
});

const result = streamText({
  model: openaiClient("gpt-4o"),
  prompt: "Write a short poem about coding.",
});

for await (const chunk of result.textStream) {
  process.stdout.write(chunk);
}

AI Agents with Tools

Here's a complete example of a Next.js route handler that runs an AI agent with tools using the Vercel AI SDK and Qualifire:
import { tool, UIMessage, stepCountIs, createAgentUIStreamResponse } from "ai";
import { Experimental_Agent as Agent } from "ai";
import { createOpenAI } from "@ai-sdk/openai";
import { z } from "zod";

export const maxDuration = 30;

export async function POST(req: Request) {
  const { messages }: { messages: UIMessage[] } = await req.json();

  const openaiClient = createOpenAI({
    baseURL:
      process.env.QUALIFIRE_BASE_URL ||
      "https://proxy.qualifire.ai/api/providers/openai",
    headers: {
      "X-Qualifire-API-Key": process.env.QUALIFIRE_API_KEY || "",
    },
  });

  const myAgent = new Agent({
    model: openaiClient("gpt-4o"),
    instructions: `You are a helpful assistant that can look up weather information.`,
    stopWhen: stepCountIs(5),
    tools: {
      getWeather: tool({
        description: "Get the current weather for a location",
        inputSchema: z.object({
          city: z.string().describe("The city to get weather for"),
        }),
        execute: async ({ city }) => {
          // Simulated weather data
          return { temperature: 72, conditions: "sunny", city };
        },
      }),
    },
    toolChoice: "auto",
  });

  return await createAgentUIStreamResponse({
    agent: myAgent,
    uiMessages: messages,
  });
}

Key Configuration

The key to integrating Qualifire with the Vercel AI SDK is configuring your provider client with:
  1. baseURL: Point to the Qualifire proxy endpoint for your provider
  2. headers: Include the X-Qualifire-API-Key header with your API key
Provider    Base URL
OpenAI      https://proxy.qualifire.ai/api/providers/openai
Anthropic   https://proxy.qualifire.ai/api/providers/anthropic
Google      https://proxy.qualifire.ai/api/providers/google
All your AI requests will now be routed through Qualifire, enabling guardrails, evaluations, tracing, and other observability features without any changes to your application logic.
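Since the per-provider endpoints only differ in the final path segment, they can be derived from a single helper. This is a convenience sketch of our own (`qualifireBaseURL` is not a Qualifire API); the path segments match the table above:

```typescript
// Sketch: derive a Qualifire proxy base URL from a provider name.
// The path segments ("openai", "anthropic", "google") match the
// provider table above.
const QUALIFIRE_PROXY = "https://proxy.qualifire.ai/api/providers";

type QualifireProvider = "openai" | "anthropic" | "google";

function qualifireBaseURL(provider: QualifireProvider): string {
  return `${QUALIFIRE_PROXY}/${provider}`;
}
```

For example, `qualifireBaseURL("openai")` produces the `baseURL` used in the `createOpenAI` configuration earlier in this guide.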

Next Steps