1
Install the Qualifire SDK
Install the Qualifire Python SDK:
2
Configure Guardrails in LiteLLM
Define your guardrails under the
guardrails section in your config.yaml:litellm config.yaml
Supported values for
mode:pre_call- Run before LLM call, on inputpost_call- Run after LLM call, on input & outputduring_call- Run during LLM call, on input. Same aspre_callbut runs in parallel as LLM call. Response not returned until guardrail check completes
3
Start LiteLLM Gateway
Start the LiteLLM gateway with your configuration:
4
Test Your Integration
Test your integration with a request. The guardrail will block requests that violate your policies.
Using Pre-configured Evaluations
You can use evaluations pre-configured in the Qualifire Dashboard by specifying theevaluation_id:
litellm config.yaml
When
evaluation_id is provided, LiteLLM will use invoke_evaluation()
instead of evaluate(), running the pre-configured evaluation from your
dashboard.Available Checks
Qualifire supports the following evaluation checks:| Check | Parameter | Description |
|---|---|---|
| Prompt Injections | prompt_injections: true | Identify prompt injection attempts |
| Hallucinations | hallucinations_check: true | Detect factual inaccuracies or hallucinations |
| Grounding | grounding_check: true | Verify output is grounded in provided context |
| PII Detection | pii_check: true | Detect personally identifiable information |
| Content Moderation | content_moderation_check: true | Check for harmful content (harassment, hate speech, etc.) |
| Tool Selection Quality | tool_selection_quality_check: true | Evaluate quality of tool/function calls |
| Custom Assertions | assertions: [...] | Custom assertions to validate against the output |
Example with Multiple Checks
Example with Custom Assertions
Parameter Reference
| Parameter | Type | Default | Description |
|---|---|---|---|
api_key | str | QUALIFIRE_API_KEY env var | Your Qualifire API key |
api_base | str | None | Custom API base URL (optional) |
evaluation_id | str | None | Pre-configured evaluation ID from Qualifire dashboard |
prompt_injections | bool | true (if no other checks) | Enable prompt injection detection |
hallucinations_check | bool | None | Enable hallucination detection |
grounding_check | bool | None | Enable grounding verification |
pii_check | bool | None | Enable PII detection |
content_moderation_check | bool | None | Enable content moderation |
tool_selection_quality_check | bool | None | Enable tool selection quality check |
assertions | List[str] | None | Custom assertions to validate |
on_flagged | str | "block" | Action when content is flagged: "block" or "monitor" |
Default Behavior
- If no
evaluation_idis provided and no checks are explicitly enabled,prompt_injectionsdefaults totrue - When
evaluation_idis provided, it takes precedence and individual check flags are ignored on_flagged: "block"raises an HTTP 400 exception when violations are detectedon_flagged: "monitor"logs violations but allows the request to proceed
Complete Configuration Example
litellm config.yaml
Tool Call Support
Qualifire supports evaluating tool/function calls. When usingtool_selection_quality_check, the guardrail will analyze tool calls in assistant messages:
Environment Variables
| Variable | Description |
|---|---|
QUALIFIRE_API_KEY | Your Qualifire API key |
QUALIFIRE_BASE_URL | Custom API base URL (optional) |