Supported Models
Rogue uses LiteLLM for model compatibility, allowing you to use a wide range of AI models from different providers. The following lists show the models we have tested with Rogue.
✅ Successfully Tested Models
OpenAI
- gpt-5
- gpt-5-mini
- gpt-5-nano
- openai/gpt-4.1
- openai/gpt-4.1-mini
- openai/gpt-4.5-preview
- openai/gpt-4o
- openai/gpt-4o-mini
- openai/o4-mini
Gemini (Vertex AI or Google AI)
- gemini-2.5-flash
- gemini-2.5-pro
Anthropic
- anthropic/claude-3-5-sonnet-latest
- anthropic/claude-3-7-sonnet-latest
- anthropic/claude-4-sonnet-latest
❌ Unsupported Models
OpenAI
- openai/o1 (including o1-mini) - Not compatible with Rogue's evaluation framework
Gemini (Vertex AI or Google AI)
- gemini-2.5-flash - Partial support only, may have limitations
Environment Variables
Rogue uses LiteLLM for model management, so you can set API keys for various providers using the standard environment variables.
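For example, a minimal sketch of setting the usual LiteLLM provider keys from Python; the exact variables you need depend on which providers you use, and the values shown are placeholders:

```python
import os

# Standard LiteLLM environment variables (placeholder values shown):
os.environ["OPENAI_API_KEY"] = "sk-..."         # OpenAI models
os.environ["ANTHROPIC_API_KEY"] = "sk-ant-..."  # Anthropic models
os.environ["GEMINI_API_KEY"] = "..."            # Gemini via Google AI
# Vertex AI deployments typically authenticate via Google application-default
# credentials rather than a single API key.
```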
Model Selection
When configuring Rogue, you can specify models in LiteLLM format (see the example after this list):
- openai/gpt-4o-mini for OpenAI models
- anthropic/claude-3-5-sonnet-latest for Anthropic models
- gemini-2.5-pro for Google models
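As an illustration of the identifier format (not Rogue's own configuration surface), any of these strings can be passed directly to LiteLLM:

```python
import litellm

# The same identifier string you give Rogue is what LiteLLM expects.
response = litellm.completion(
    model="openai/gpt-4o-mini",  # or "anthropic/claude-3-5-sonnet-latest", "gemini-2.5-pro"
    messages=[{"role": "user", "content": "Say hello."}],
)
print(response.choices[0].message.content)
```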
Performance Considerations
- GPT-4o-mini: Excellent balance of performance and cost for most evaluations
- Claude-3.5-Sonnet: Great for complex reasoning and nuanced evaluations
- Gemini-2.5-Pro: Strong performance for analysis and reporting tasks
Testing New Models
If you want to test Rogue with a model not listed here (a minimal sanity check follows these steps):
- Ensure the model is supported by LiteLLM
- Set the appropriate API key
- Use the correct model identifier format
- Test with a simple evaluation first
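A minimal sketch of steps 2–4 using LiteLLM directly; the model identifier and API key below are placeholders, so substitute the ones for the model you want to try:

```python
import os
import litellm

# Step 2: set the API key for the model's provider (placeholder value).
os.environ["OPENAI_API_KEY"] = "sk-..."

# Steps 3-4: use the correct LiteLLM identifier and run a trivial prompt
# before launching a full Rogue evaluation.
response = litellm.completion(
    model="openai/gpt-4.1-mini",  # replace with the model you want to test
    messages=[{"role": "user", "content": "Reply with the single word: ok"}],
)
print(response.choices[0].message.content)
```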