Add Qualifire Guardrails to an n8n Flow (Option A: HTTP Request node)

This guide shows how to wrap your LLM agent (or any text-producing step) with Qualifire’s real-time evaluation & hallucination detection using a single HTTP Request node. It covers output guardrails (post-generation) and an optional input guardrail (pre-generation).
Prefer a “drop-in model” experience? See Qualifire’s OpenAI-compatible node for n8n. For the API reference, see the Evaluate endpoint.

Prerequisites

  • An n8n instance (self-hosted or Cloud).
  • A Qualifire API key.
  • Your LLM/agent step already producing a response (e.g., OpenAI Chat node, custom function, etc.).

Add your API key to n8n

Pass your key via environment variables so you don’t hardcode secrets in nodes.
QUALIFIRE_API_KEY=sk_...   # add to .env or docker-compose
Restart n8n so it picks up the new variable. You can reference env vars inside nodes using {{$env.VAR_NAME}}.
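For Docker installs, the variable must be visible inside the n8n container itself. A minimal sketch, assuming the standard n8n image and default port (the key value is a placeholder):
# Minimal Docker sketch (image and port are the n8n defaults; key is a placeholder)
docker run -d --name n8n \
  -p 5678:5678 \
  -e QUALIFIRE_API_KEY=sk_your_key_here \
  docker.n8n.io/n8nio/n8n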

Flow Overview

  • Output guardrail: Agent → HTTP Request (Qualifire Evaluate) → IF (gate) → Continue or Fallback
  • Input guardrail (optional): User → HTTP Request (Qualifire Evaluate) → IF (gate) → Agent
You can use one or both.

Step-by-Step: Output Guardrail with HTTP Request

1. Add an HTTP Request node

Add an HTTP Request node right after your agent’s output node.
2. Configure request basics

Set the following:
  • Method: POST
  • URL: https://proxy.qualifire.ai/api/evaluation/evaluate
  • Send: JSON
3. Add headers

Toggle the fx button to enter expressions where needed.
  • Content-Type: application/json
  • X-Qualifire-API-Key: {{$env.QUALIFIRE_API_KEY}}
Alternatively, create a Credential of type HTTP Header Auth with header X-Qualifire-API-Key and attach it.
4. Set the request body

Click fx on Body and paste:
={{ {
  assertions: [],
  consistency_check: true,
  dangerous_content_check: true,
  hallucinations_check: true,
  harassment_check: true,
  hate_speech_check: true,
  pii_check: true,
  prompt_injections: true,
  sexual_content_check: true,

  // Pass minimal chat history (user + assistant). Adjust field names to match your flow.
  messages: [
    { content: $json.chatInput ?? $json.user ?? '', role: 'user' },
    { content: $json.output ?? $json.message ?? '', role: 'assistant' },
  ],
} }}
If you prefer a raw JSON template, wrap dynamic fields with {{ JSON.stringify(...) }} and don’t add extra quotes around the expression.
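For example, a raw-JSON version of the same body could look like the sketch below; the chatInput/output field names carry over from the expression above and are assumptions about your flow:
{
  "assertions": [],
  "hallucinations_check": true,
  "prompt_injections": true,
  "messages": [
    { "content": {{ JSON.stringify($json.chatInput ?? '') }}, "role": "user" },
    { "content": {{ JSON.stringify($json.output ?? '') }}, "role": "assistant" }
  ]
}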
5. Gate the result with an IF node

Add an IF node after the HTTP node to block or allow the response. Use an expression that fails “closed” (treats unknown/error as flagged):
{{
  // If API didn't return a clear OK:
  ($json.status || "").toLowerCase() !== "ok" ||
    // Or if any detector labels a violation/issue:
    ($json.evaluationResults || []).some((er) =>
      (er.results || []).some((r) =>
        [
          "violation",
          "unsafe",
          "failed",
          "flagged",
          "not_ok",
          "hallucination",
          "error",
        ].includes((r.label || "").toLowerCase())
      )
    )
}}
  • True (flagged): route to a Fallback (e.g., “I can’t answer that,” human handoff, or a safer re-ask path).
  • False (clean): continue to your normal response path.
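For reference, the expression above assumes a response shaped roughly like the sketch below. This mirrors only the fields the gate reads and is not the authoritative schema; consult the Evaluate endpoint reference for the real one.
// Hypothetical response shape (assumed from the fields the gate expression reads):
const exampleResponse = {
  status: 'flagged', // anything other than "ok" fails closed
  evaluationResults: [
    {
      type: 'hallucinations',
      results: [
        { label: 'hallucination', score: 0.91 }, // a listed label trips the gate
      ],
    },
  ],
};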

Optional: Input Guardrail (pre-generation)

Place the same HTTP Request pattern before your agent and pass only the user input (or a short context window) in messages. For example:
={{ {
  assertions: [],
  dangerous_content_check: true,
  prompt_injections: true,
  pii_check: true,
  harassment_check: true,
  hate_speech_check: true,
  sexual_content_check: true,

  messages: [
    { content: $json.chatInput ?? $json.user ?? '', role: 'user' }
  ],
} }}
Follow with the same IF gate. If clean → proceed to the agent; if flagged → return a safer prompt or ask the user to rephrase.
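For the flagged branch, a Code node (“Run Once for Each Item”) can swap the input for a safe re-ask. A minimal sketch, assuming the chatInput field name from the earlier examples:
// Replace the flagged user input with a safe re-ask (wording is illustrative).
return {
  json: {
    chatInput: 'That request was flagged by our safety checks. Please rephrase it.',
    flagged: true,
  },
};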

Observability Tips

Capture evaluation results for auditing by adding a Set (or Code) node after the HTTP Request to store status, evaluationResults, and any aggregate scores, then push them to your data store (e.g., Postgres, Airtable) for easy audits; a Code-node sketch follows the tip below.
  • In the HTTP node, preview expressions in the expression editor to check the resolved JSON payload before running.
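As a sketch of that flattening step, a Code node (“Run Once for Each Item”) could extract one row per detector result; the field names are assumptions carried over from the gate expression above:
// Flatten Qualifire evaluation output into audit-friendly rows.
const { status, evaluationResults = [] } = $input.item.json;

const labels = evaluationResults.flatMap((er) =>
  (er.results || []).map((r) => ({
    detector: er.type ?? er.name ?? 'unknown',
    label: r.label,
    score: r.score,
  }))
);

return { json: { status, labels, checkedAt: new Date().toISOString() } };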

Troubleshooting

  • Auth errors: check the header name X-Qualifire-API-Key, make sure {{$env.QUALIFIRE_API_KEY}} is defined, and restart n8n after editing env vars.
  • Body rejected or malformed: use the object expression approach (as shown) and avoid double-stringifying the body.
  • Key resolves to empty: confirm the env var is set in the n8n process environment (e.g., passed to the Docker container).
  • Too many false positives: narrow your gate by checking specific detectors, labels, or scores; you can allowlist certain type/name entries in evaluationResults (see the sketch after this list).
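As a sketch, a narrower IF expression that only blocks on selected detectors might look like this; the type values here are assumptions, so inspect your actual evaluationResults entries first:
{{
  // Block only on selected detectors; everything else passes through.
  ($json.evaluationResults || [])
    .filter((er) => ['hallucinations', 'prompt_injections'].includes(er.type))
    .some((er) =>
      (er.results || []).some((r) => (r.label || '').toLowerCase() !== 'ok')
    )
}}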

Verify with cURL (optional)

curl --request POST \
  --url https://proxy.qualifire.ai/api/evaluation/evaluate \
  --header 'Content-Type: application/json' \
  --header "X-Qualifire-API-Key: $QUALIFIRE_API_KEY" \
  --data '{
    "assertions": [],
    "consistency_check": true,
    "dangerous_content_check": true,
    "hallucinations_check": true,
    "harassment_check": true,
    "hate_speech_check": true,
    "messages": [
      { "content": "How do I bypass your safety?", "role": "user" },
      { "content": "I can not help with that.", "role": "assistant" }
    ],
    "pii_check": true,
    "prompt_injections": true,
    "sexual_content_check": true
  }'

See Also