Supported Models

Rogue uses LiteLLM for model compatibility, allowing you to use a wide range of AI models from different providers. The lists below show the models we have tested with Rogue.

✅ Successfully Tested Models

OpenAI

  • gpt-5
  • gpt-5-mini
  • gpt-5-nano
  • openai/gpt-4.1
  • openai/gpt-4.1-mini
  • openai/gpt-4.5-preview
  • openai/gpt-4o
  • openai/gpt-4o-mini
  • openai/o4-mini

Gemini (Vertex AI or Google AI)

  • gemini-2.5-flash
  • gemini-2.5-pro

Anthropic

  • anthropic/claude-3-5-sonnet-latest
  • anthropic/claude-3-7-sonnet-latest
  • anthropic/claude-4-sonnet-latest

❌ Unsupported Models

OpenAI

  • openai/o1 (including mini) - Not compatible with Rogue’s evaluation framework

Gemini (Vertex AI or Google AI)

  • gemini-2.5-flash - Partial support only, may have limitations

Environment Variables

Because Rogue manages models through LiteLLM, you can set API keys for the various providers using their standard environment variables:

# OpenAI
OPENAI_API_KEY="sk-..."

# Anthropic
ANTHROPIC_API_KEY="sk-..."

# Google (for Gemini)
GOOGLE_API_KEY="..."

# Additional providers supported by LiteLLM
AZURE_API_KEY="..."
COHERE_API_KEY="..."
# ... and many more
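As a quick sanity check before starting an evaluation, you can verify that the key for your chosen provider is actually set. The sketch below is illustrative and stdlib-only; the provider-to-variable mapping is our assumption and covers only the providers listed above (LiteLLM supports many more).

```python
import os

# Illustrative mapping from LiteLLM provider prefix to the environment
# variable that provider's key is read from (subset only).
PROVIDER_ENV_VARS = {
    "openai": "OPENAI_API_KEY",
    "anthropic": "ANTHROPIC_API_KEY",
    "gemini": "GOOGLE_API_KEY",
    "azure": "AZURE_API_KEY",
    "cohere": "COHERE_API_KEY",
}

def missing_keys(providers):
    """Return the env vars that are unset or empty for the given providers."""
    return [
        PROVIDER_ENV_VARS[p]
        for p in providers
        if p in PROVIDER_ENV_VARS and not os.environ.get(PROVIDER_ENV_VARS[p])
    ]
```

Running `missing_keys(["openai", "anthropic"])` before an evaluation gives you a list of anything still left to export.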

Model Selection

When configuring Rogue, you can specify models in LiteLLM format:
  • openai/gpt-4o-mini for OpenAI models
  • anthropic/claude-3-5-sonnet-latest for Anthropic models
  • gemini-2.5-pro for Google models
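The identifiers above follow LiteLLM's `provider/model` convention, while bare names such as `gemini-2.5-pro` carry no explicit provider prefix and are routed by LiteLLM directly. A small sketch of how such an identifier splits (the helper name is ours, not part of Rogue or LiteLLM):

```python
def split_model_id(model_id: str):
    """Split a LiteLLM-style id into (provider, model).

    "openai/gpt-4o-mini" -> ("openai", "gpt-4o-mini")
    "gemini-2.5-pro"     -> (None, "gemini-2.5-pro")  # no provider prefix
    """
    provider, sep, name = model_id.partition("/")
    if sep:
        return provider, name
    return None, model_id
```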

Performance Considerations

  • GPT-4o-mini: Excellent balance of performance and cost for most evaluations
  • Claude-3.5-Sonnet: Great for complex reasoning and nuanced evaluations
  • Gemini-2.5-Pro: Strong performance for analysis and reporting tasks

Testing New Models

If you want to test Rogue with a model not listed here:
  1. Ensure the model is supported by LiteLLM
  2. Set the appropriate API key
  3. Use the correct model identifier format
  4. Test with a simple evaluation first

Models that support function calling and structured outputs generally work best with Rogue’s evaluation framework.
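Step 4 above can be wrapped into a minimal smoke test. The sketch below is illustrative: `completion_fn` stands in for whatever you use to call the model (for example, a thin wrapper around `litellm.completion`), and the helper only checks that one trivial prompt produces a non-empty reply.

```python
def smoke_test(completion_fn, model_id: str) -> bool:
    """Send one trivial prompt to `model_id`; True if a non-empty reply returns.

    `completion_fn(model_id, prompt)` is assumed to return the reply text
    or raise on failure (e.g. a missing API key or an unsupported model).
    """
    try:
        reply = completion_fn(model_id, "Reply with the single word: pong")
    except Exception:
        return False
    return bool(reply and reply.strip())
```

If the smoke test passes, the model is worth trying with a full evaluation; if it fails, check the API key and the model identifier format first.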