1. Add Qualifire Credentials to Portkey

Add your Qualifire API key to Portkey:
- Click the Admin Settings button in the sidebar
- Navigate to the Plugins tab under Organisation Settings
- Click the edit button for the Qualifire integration
- Add your Qualifire API key - obtain this from your Qualifire account at https://app.qualifire.ai/settings/api-keys/
2. Add Qualifire's Guardrail Checks

Create and configure guardrails:
- Navigate to the Guardrails page and click the Create button
- Search for any of the Qualifire guardrail checks and click Add
- Configure the specific parameters for your chosen guardrail
- Set any actions you want on your check, and create the Guardrail!

Guardrail Actions allow you to orchestrate your guardrail logic. You can learn more about them in the Portkey Guardrails documentation.
3. Add Guardrail ID to Config

Add the Guardrail ID to your Portkey Config:
- When you save a Guardrail, you'll get an associated Guardrail ID
- Add this ID to the input_guardrails or output_guardrails params in your Portkey Config
- Create these Configs in the Portkey UI, save them, and get an associated Config ID to attach to your requests
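For example, a Config that runs one Qualifire guardrail on the request and another on the response might look like the sketch below (the virtual key and guardrail IDs are placeholders for your own values):

```json
{
  "virtual_key": "<your-provider-virtual-key>",
  "input_guardrails": ["<input-guardrail-id>"],
  "output_guardrails": ["<output-guardrail-id>"]
}
```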
4. Make Your Request

Use your Config ID in your requests. Your requests are now protected by Qualifire's comprehensive guardrail system!
For more details on Configs, refer to the Portkey Config documentation.
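As a minimal sketch of what attaching a Config looks like over Portkey's REST API (standard library only; the API key, Config ID, and model name are placeholders - in practice you would more likely use the Portkey SDK, which accepts a config parameter directly):

```python
import json
import urllib.request

PORTKEY_API_KEY = "<your-portkey-api-key>"  # placeholder
CONFIG_ID = "pc-xxx"                        # placeholder Config ID from step 3

def build_guarded_request(messages, model="gpt-4o"):
    """Build (but do not send) a chat-completions request with the Config attached."""
    return urllib.request.Request(
        "https://api.portkey.ai/v1/chat/completions",
        data=json.dumps({"model": model, "messages": messages}).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "x-portkey-api-key": PORTKEY_API_KEY,
            # The x-portkey-config header attaches the guardrail-enabled Config
            "x-portkey-config": CONFIG_ID,
        },
        method="POST",
    )

req = build_guarded_request([{"role": "user", "content": "Hello!"}])
print(req.get_header("X-portkey-config"))  # -> pc-xxx
# Sending it is then one call: urllib.request.urlopen(req)
```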
Available Guardrail Checks
Qualifire provides a comprehensive set of guardrail checks organized into the following categories:

Security
| Check Name | Description | Parameters | Supported Hooks |
|---|---|---|---|
| PII Check | Checks that neither the user input nor the model output includes PII | None | beforeRequestHook, afterRequestHook |
| Prompt Injections Check | Checks that the prompt does not contain any injections to the model | None | beforeRequestHook |
Safety
| Check Name | Description | Parameters | Supported Hooks |
|---|---|---|---|
| Content Moderation Check | Checks for harmful content including sexual content, harassment, hate speech, and dangerous content in the user input or model output | None | beforeRequestHook, afterRequestHook |
Reliability
| Check Name | Description | Parameters | Supported Hooks |
|---|---|---|---|
| Instruction Following Check | Checks that the model followed the instructions provided in the prompt | None | afterRequestHook |
| Grounding Check | Checks that the model is grounded in the context provided | mode (optional) - See Mode Parameter | afterRequestHook |
| Hallucinations Check | Checks that the model did not hallucinate | mode (optional) - See Mode Parameter | afterRequestHook |
| Tool Use Quality Check | Checks the quality of the model's tool use, including correct tool selection, parameters, and values | mode (optional) - See Mode Parameter | afterRequestHook |
Policy
| Check Name | Description | Parameters | Supported Hooks |
|---|---|---|---|
| Policy Violations Check | Checks that the prompt and response didn't violate any given policies | policies (array of strings), mode (optional), policy_target (optional) - See Policy Violations Check and Mode Parameter | beforeRequestHook, afterRequestHook |
Configuration Examples
Mode Parameter
Several guardrail checks support a mode parameter that controls the trade-off between accuracy and speed:
- quality: Highest accuracy, slower processing
- balanced: Good balance between accuracy and speed (default)
- speed: Fastest processing, lower accuracy
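To illustrate, a check tuned for accuracy over latency would set mode to quality in its parameters (the JSON shape below is an illustrative sketch; you normally set this value in the Portkey UI when configuring the check):

```json
{
  "parameters": {
    "mode": "quality"
  }
}
```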
Policy Violations Check
For the Policy Violations Check, you can specify custom policies to enforce, the mode, and the target:

Parameters
- policies (required): Array of strings defining custom policies to enforce
- mode (optional): One of quality, balanced, or speed. Default: balanced
- policy_target (optional): One of input, output, or both. Specifies whether to run the policy check on the request, response, or both. This must match the configured hooks:
  - input: Only for beforeRequestHook
  - output: Only for afterRequestHook
  - both: For both beforeRequestHook and afterRequestHook
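Putting these together, a Policy Violations Check configured to screen both request and response might carry parameters like the sketch below (the policy strings are examples and the surrounding JSON shape is illustrative; configure the actual check in the Portkey UI):

```json
{
  "parameters": {
    "policies": [
      "Do not provide financial or legal advice",
      "Do not discuss competitor products"
    ],
    "mode": "balanced",
    "policy_target": "both"
  }
}
```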
Use Cases
Qualifire's guardrails are particularly useful for:
- Content Moderation: Filtering harmful or inappropriate content in user inputs and AI responses
- Compliance: Ensuring AI responses adhere to company policies and regulatory requirements
- Quality Assurance: Detecting hallucinations, instruction violations, and poor tool usage
- Data Protection: Preventing PII exposure and ensuring data privacy