Research Reports

Lesson 4 of 5

Generation & Review Process

Estimated time: 8 minutes

You've configured your sources and built your templates. Now it's time to run the pipeline and review the output. This lesson covers the generation workflow, how to refine results, and the review tools that keep report quality high.

    The Generation Pipeline

      Request ──> Parse Intent ──> Select Sources ──> Query (parallel)
                                                           │
                                                           v
      Final Report <── Format <── Synthesize <── Extract & Rank
           │
           v
      Review Queue (you check before sharing)
    

    Each stage feeds the next. The entire pipeline typically runs in 2-5 minutes depending on how many sources you've configured and the report length.

    Generate a Report via Chat

    The simplest way to kick off research is a chat message. OpenClaw parses your intent and fills in the gaps.

    Chat Message
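    A request like the one below is enough — OpenClaw infers the template and sources from context. (This example is illustrative, not captured output.)

        You: Research the current state of AI in healthcare — market size,
             key players, and emerging trends. Use my standard brief template.

    The more specific your focus areas, the less OpenClaw has to guess during the Parse stage.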

    Understand the Generation Stages

    While the report generates, OpenClaw shows progress through five stages.

    Stage       What Happens                                             Typical Duration
    Parse       Extracts topic, focus areas, constraints from request    < 1 sec
    Query       Hits all relevant sources in parallel                    15-30 sec
    Extract     Pulls key data, stats, and quotes from results           30-60 sec
    Synthesize  Cross-references and organizes into template sections    60-90 sec
    Format      Applies citation style, tables, and final polish         10-20 sec

    Source Deduplication

    During the Extract stage, OpenClaw identifies when multiple sources report the same fact (e.g., "the AI healthcare market is worth $X billion"). It keeps the most authoritative source and notes corroboration. This prevents your report from repeating the same stat five times.
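    The idea behind deduplication can be sketched in a few lines. This is a conceptual illustration only — the function names, the authority scores, and the data shape are assumptions, not OpenClaw's internals.

    ```python
    # Conceptual sketch of Extract-stage dedup: when several sources
    # report the same fact, keep the most authoritative one and note
    # how many others corroborate it. All names here are hypothetical.

    def dedupe_facts(facts):
        """facts: list of (fact_text, source, authority_score) tuples."""
        best = {}           # fact_text -> (source, score)
        corroboration = {}  # fact_text -> number of times reported
        for text, source, score in facts:
            corroboration[text] = corroboration.get(text, 0) + 1
            if text not in best or score > best[text][1]:
                best[text] = (source, score)
        return {
            text: {"source": source, "corroborated_by": corroboration[text] - 1}
            for text, (source, _score) in best.items()
        }

    facts = [
        ("AI healthcare market worth $16.8B", "AnalystFirm", 0.9),
        ("AI healthcare market worth $16.8B", "TechBlog", 0.3),
        ("MedAI raised $120M Series C", "FundingTracker", 0.8),
    ]
    print(dedupe_facts(facts))
    ```

    The report then cites the kept source once, rather than repeating the stat for every outlet that echoed it.
    
    
    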

    Review the Draft

    Every report lands in a review state before you share it. OpenClaw flags potential issues.

    Review Summary

    The quality metrics give you a quick read on reliability. Pay attention to flags — they highlight where the AI had thin evidence.
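    Flags call out sections like these (all values here are illustrative, not actual OpenClaw output):

        Quality: 87/100 — 42 citations across 6 sources
        Flags (2):
          - "Market Size": single source — consider corroborating
          - "Emerging Trends": key stat is more than 12 months old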

    Refine Specific Sections

    You don't have to regenerate the entire report if one section needs work. Target individual sections.

    Chat Refinement
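    A targeted follow-up in chat might look like this (illustrative exchange):

        You: Expand the Key Players section with funding details, and make
             the Trends section more concise.

        OpenClaw: Regenerating 2 sections — the rest of the report is unchanged.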

    You can also edit the report directly — any manual edits are preserved if you regenerate other sections.

    Regeneration Scope

    When you regenerate a single section, only that section's source queries re-run. The rest of the report stays untouched. If you want a full refresh (e.g., new sources have appeared), use --full to regenerate everything.
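    On the command line, the same scoping might look like this. The regenerate subcommand and --section flag are assumptions for illustration; --full is described above.

        openclaw research regenerate --section "Key Players"   # re-runs one section's queries
        openclaw research regenerate --full                    # full refresh of every section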

    Approve and Finalize

    Once you're satisfied, approve the report. This locks the content and generates the final formatted version.

    Terminal
    openclaw research approve --report latest
    Approval Output
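    Approval output confirms the lock and the export — something along these lines (message and path are illustrative):

        Report approved and locked.
        Final formatted version generated: reports/ai-healthcare-brief.pdf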

    For topics you track regularly, schedule automatic generation.

    openclaw.config.yaml
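    A schedule entry might look something like this. The field names are illustrative, not a documented schema — check your config reference for the exact keys.

        research:
          schedules:
            - topic: "AI in healthcare"
              template: market-brief
              cron: "0 9 * * 1"            # every Monday at 09:00
              notify: "#research-review"   # Slack channel pinged for review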

    Each Monday, OpenClaw generates a fresh report and pings your Slack channel for review. Useful for weekly market briefs or competitive intelligence.

    When you generate multiple reports on the same topic over time, OpenClaw can diff them.

    You: Compare this week's AI healthcare report with last month's
    
    OpenClaw: Key changes since Feb 10:
      - Market size estimate increased from $15.1B to $16.8B
      - New player: MedAI raised $120M Series C
      - Trend added: LLMs in clinical note summarization
      - Removed: Two companies from Key Players (acquired)
    
    Knowledge Check

    What should you do when OpenClaw flags a section with a 'single source' warning?