System Architecture
Rogue is built on a client-server architecture that separates concerns and provides flexible deployment options. This design allows for scalable evaluation workflows and multiple concurrent users.

Core Components
Rogue Server
The server is the heart of the Rogue system, containing all the core evaluation logic:
- Scenario Evaluation Service: Manages the execution of test scenarios
- LLM Service: Handles all AI model interactions (scenario generation, judging, reporting)
- EvaluatorAgent: The AI agent that conducts conversations with your target agent
- Configuration Management: Stores and manages evaluation settings
- Results Processing: Analyzes and formats evaluation results
By default, the server listens on:
- Host: 127.0.0.1 (configurable via --host or the HOST env var)
- Port: 8000 (configurable via --port or the PORT env var)
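As a sketch, the defaults above can be overridden either way; note that only the --host/--port flags and the HOST/PORT variables are documented here, while the `server` subcommand name is an assumption about the launcher:

```shell
# Bind the server to all interfaces on a custom port via the documented flags.
# (`uvx rogue-ai server` is an assumed launch command; adjust to your install.)
uvx rogue-ai server --host 0.0.0.0 --port 9000

# Equivalently, via the documented environment variables:
HOST=0.0.0.0 PORT=9000 uvx rogue-ai server
```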
Client Interfaces
Multiple client interfaces connect to the server, each optimized for different use cases:

1. TUI (Terminal User Interface)
- Technology: Built with Go and Bubble Tea
- Use Case: Interactive terminal-based evaluation
- Features: Real-time evaluation monitoring, live chat display
- Command: uvx rogue-ai or uvx rogue-ai tui
2. Web UI
- Technology: Gradio-based web interface
- Use Case: Browser-based interaction, team collaboration
- Features: Step-by-step guided workflow, visual scenario editing
- Command: uvx rogue-ai ui
- Default Port: 7860 (configurable)
3. CLI
- Technology: Non-interactive command-line interface
- Use Case: CI/CD pipelines, automated testing, batch processing
- Features: Configuration files, scriptable operations
- Command: uvx rogue-ai cli
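A sketch of wiring the CLI into a CI pipeline, assuming a server is already reachable. The `uvx rogue-ai cli` command and the --rogue-server-url/--workdir options are documented below; the server URL is a placeholder:

```shell
#!/usr/bin/env sh
# CI step sketch: run a non-interactive evaluation against a running server.
# http://rogue.internal:8000 is an example URL, not a real endpoint.
uvx rogue-ai cli \
  --rogue-server-url http://rogue.internal:8000 \
  --workdir ./rogue-workdir || {
    # A non-zero exit status from the CLI can gate the pipeline.
    echo "Rogue evaluation failed" >&2
    exit 1
  }
```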
Deployment Patterns
1. Single-User Development
For individual developers working locally, run the server and a client on the same machine.

2. Team Environment
For teams that want to share a Rogue instance, run the server on a shared host and point each client at its URL.

3. CI/CD Integration
For automated testing pipelines, run the non-interactive CLI against the server as a pipeline step.

Communication Protocol
- Client-Server: RESTful API over HTTP
- Agent Protocol: Google’s A2A (Agent-to-Agent) protocol
- Real-time Updates: WebSocket connections for live evaluation monitoring
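To illustrate the client-server split, any HTTP client could drive the REST API directly. The endpoint path and request body below are hypothetical (this document does not specify them); only the HTTP transport and the default port are documented:

```shell
# Hypothetical REST call to a locally running server (default port 8000).
# The /api/evaluations path and JSON fields are illustrative only;
# consult the server's actual API reference for real endpoints.
curl -s -X POST http://127.0.0.1:8000/api/evaluations \
  -H "Content-Type: application/json" \
  -d '{"agent_url": "http://localhost:3000"}'
```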
Data Flow
- Configuration: Client sends agent details and evaluation settings to server
- Scenario Generation: Server uses LLM Service to create test scenarios
- Evaluation Execution: Server’s EvaluatorAgent conducts conversations with target agent
- Live Monitoring: Real-time updates sent to connected clients via WebSocket
- Results Analysis: Server processes results and generates reports
- Report Delivery: Final reports sent back to clients
Security Considerations
- API Keys: Stored server-side, never transmitted to clients
- Agent Authentication: Configurable authentication methods (none, API key, bearer token, basic auth)
- Network Security: All client-server communication over HTTP/HTTPS
- Isolation: Each evaluation runs in isolation
Scalability
- Concurrent Evaluations: Server can handle multiple evaluations simultaneously
- Multiple Clients: Any number of clients can connect to a single server
- Resource Management: Server manages LLM API rate limits and request queuing
- Stateless Clients: Clients can disconnect and reconnect without losing evaluation state
Configuration Management
Server Configuration
- Environment variables: HOST, PORT
- Command-line arguments: --host, --port, --debug
Client Configuration
- Server URL: --rogue-server-url
- Working directory: --workdir
- Client-specific options for each interface type
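For instance, the options above let a client join a shared server (the team deployment pattern). The host name below is a placeholder, and passing these flags to the tui subcommand is an assumption based on the documented client options:

```shell
# Connect the TUI to a shared Rogue server instead of a local one.
# rogue.example.internal is a placeholder host; --workdir holds local state.
uvx rogue-ai tui \
  --rogue-server-url http://rogue.example.internal:8000 \
  --workdir ~/.rogue
```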