Model Selection

Pick the best model for research synthesis

The model parameter on POST /research lets you choose which LLM synthesizes the final answer. If you omit it, Caesar automatically selects a model based on your query.

Start by omitting model to let Caesar auto-select. Set model when you need consistent output, a specific latency profile, or a fixed provider.
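For example, the two request bodies below contrast the defaults: a minimal sketch in Python dictionaries, using only the query and model fields described on this page (the query text is illustrative).

# Caesar auto-selects a synthesis model when "model" is omitted.
auto_select_body = {
    "query": "Summarize recent advances in solid-state batteries",
}

# Pin the synthesis model when you need consistent output or a fixed provider.
pinned_body = {
    "query": "Summarize recent advances in solid-state batteries",
    "model": "claude-opus-4.5",
}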

Supported models

Model            Provider   Summary
gpt-5.2          OpenAI     Highest quality synthesis for complex, multi-step research tasks
gemini-3-pro     Google     Advanced reasoning with strong long-context performance
gemini-3-flash   Google     Best performance for high-volume, low-latency research
claude-opus-4.5  Anthropic  Top-tier analysis and long-form writing quality

The model parameter controls only the synthesis step. Retrieval and source gathering behave the same regardless of which model you choose.

When to use each model

gpt-5.2

Best for

  • Complex, multi-step reasoning
  • Cross-domain technical synthesis
  • High-stakes decisions that need the strongest accuracy

Trade-offs

  • Typically higher latency than Flash-tier models

gemini-3-pro

Best for

  • Deep analysis in code, math, or STEM topics
  • Long-context synthesis over large documents or datasets
  • Balanced quality for strong reasoning in technical research

Trade-offs

  • Slower than Flash for simple or high-throughput workloads

gemini-3-flash

Best for

  • Large scale processing and batch research
  • Low latency, high volume workloads
  • Agentic or iterative tasks that need fast turns

Trade-offs

  • Less depth than Pro or Opus on very complex analysis

claude-opus-4.5

Best for

  • Long form synthesis and narrative quality
  • Nuanced analysis and careful reasoning
  • Research outputs that need strong readability

Trade-offs

  • Higher latency than smaller or Flash-tier models

Example

{
  "query": "Compare major approaches to carbon capture and their performance",
  "model": "gemini-3-flash",
  "reasoning_loops": 2
}
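To send that body, here is a minimal sketch using Python's requests library. The base URL and the Authorization header are placeholders for your deployment, not values defined on this page.

import requests

# Placeholder host; substitute the Caesar API base URL for your deployment (assumption).
BASE_URL = "https://api.example.com"

payload = {
    "query": "Compare major approaches to carbon capture and their performance",
    "model": "gemini-3-flash",
    "reasoning_loops": 2,
}

# Authentication is deployment-specific; the bearer token header here is a placeholder.
response = requests.post(
    f"{BASE_URL}/research",
    json=payload,
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    timeout=300,
)
response.raise_for_status()

# The response schema is not covered on this page, so inspect the raw JSON.
print(response.json())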

Quick picker

Goal                                        Recommended model
Fast, high-volume research                  gemini-3-flash
Strong reasoning with long context          gemini-3-pro
Maximum accuracy on complex research        gpt-5.2
Long-form synthesis and narrative quality   claude-opus-4.5
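If your client chooses the model in code, the quick picker can be expressed as a simple lookup. The helper below is illustrative only; the goal keys are invented for this sketch and are not API values.

# Illustrative mapping from workload goal to recommended model (keys are not API values).
RECOMMENDED_MODEL = {
    "fast_high_volume": "gemini-3-flash",
    "long_context_reasoning": "gemini-3-pro",
    "maximum_accuracy": "gpt-5.2",
    "long_form_synthesis": "claude-opus-4.5",
}

def build_research_body(query: str, goal: str | None = None) -> dict:
    """Build a POST /research body; omit the goal to let Caesar auto-select the model."""
    body = {"query": query}
    if goal is not None:
        body["model"] = RECOMMENDED_MODEL[goal]
    return body

print(build_research_body("Compare major approaches to carbon capture", goal="fast_high_volume"))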
