
# autoresearch

License: MIT · OpenClaw Compatible

Structured deep-research for OpenClaw agents. Multi-pass investigation with source quality scoring, confidence levels, and open questions. For when one web search isn't enough.


## The Problem

A single `web_search` call returns headlines. For complex, contested, or nuanced topics, you need more: multiple passes, source quality evaluation, contradiction handling, and an honest confidence score.

autoresearch gives your agent a structured investigative method:

- 4-pass research loop: broad sweep → source evaluation → deep dive → synthesis
- Source quality tiers (1/2/3) to distinguish primary evidence from speculation
- Confidence score (0–10) so you know how much to trust the output
- Open questions flagged — no false certainty

## When to Use

Use autoresearch when:

- The topic is complex, nuanced, or contested
- You need to evaluate source credibility
- Conflicting information exists across sources
- You need synthesis, not just retrieval
- You want a confidence score and open questions flagged

Use `web_search` when:

- You need quick facts or current headlines
- The answer is uncontested
- A single search pass will suffice

## The 4-Pass Research Loop

### Pass 1 — Broad Sweep

3–5 diverse queries to map the landscape. Identifies key terms, major players, and source types. Output: 10–15 candidate sources.

### Pass 2 — Source Evaluation

For each candidate source: assign a tier (1/2/3), check the date, note bias, and flag primary vs. secondary. Eliminates Tier 3 sources unless no better option exists.

### Pass 3 — Deep Dive

2–3 focused queries using findings from Pass 1. Fills gaps, resolves contradictions, and locates primary sources. Uses `web_fetch` for detailed page content.

### Pass 4 — Synthesis

Writes the research brief. Scores confidence 0–10. Lists open questions. Notes conflicting evidence.
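The loop above can be sketched as a data pipeline. The skill itself is prompt-driven, so the code below is purely illustrative: every name (`Source`, `ResearchBrief`, `run_research`, the query variants, the confidence formula) is a hypothetical stand-in for an agent step, not part of the skill's actual interface.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Source:
    url: str
    tier: int            # 1 = primary, 2 = reputable secondary, 3 = weak
    note: str = ""

@dataclass
class ResearchBrief:
    topic: str
    findings: List[str] = field(default_factory=list)
    sources: List[Source] = field(default_factory=list)
    open_questions: List[str] = field(default_factory=list)
    confidence: int = 0  # 0-10

def run_research(topic: str,
                 search: Callable[[str], List[Source]],
                 deep_dive: Callable[[Source], str]) -> ResearchBrief:
    brief = ResearchBrief(topic)
    # Pass 1 — broad sweep: several diverse queries to collect candidates
    candidates = [s for q in (topic, f"{topic} evidence", f"{topic} criticism")
                  for s in search(q)]
    # Pass 2 — source evaluation: keep Tier 1/2; fall back to Tier 3 only if empty
    kept = [s for s in candidates if s.tier < 3] or candidates
    # Pass 3 — deep dive: fetch detail from the strongest remaining sources
    for src in sorted(kept, key=lambda s: s.tier)[:3]:
        brief.findings.append(deep_dive(src))
        brief.sources.append(src)
    # Pass 4 — synthesis: naive confidence from the tier mix
    tier1 = sum(1 for s in brief.sources if s.tier == 1)
    brief.confidence = min(10, 4 + 2 * tier1)
    if not tier1:
        brief.open_questions.append("No primary sources located")
    return brief
```

In the real skill, `search` and `deep_dive` correspond to the agent's `web_search` and `web_fetch` tools, and Pass 4's judgement is qualitative rather than a formula.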


## Source Quality Tiers

| Tier | What it includes |
|------|------------------|
| Tier 1 | Primary sources, official government/agency releases, peer-reviewed research, official statistics, direct documents |
| Tier 2 | Major news outlets (Reuters, AP, BBC), established industry publications, reputable think tanks |
| Tier 3 | Blogs, tabloids, opinion pieces, unnamed sources, social media claims without corroboration |
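A first-pass tier guess can come from the URL alone, though the real judgement is qualitative. This sketch is only illustrative: the hint lists are example domains, not an authoritative allow-list, and `rough_tier` is a hypothetical helper, not part of the skill.

```python
# Example domain hints only — real tiering requires reading the source.
TIER1_HINTS = (".gov", ".mil", "doi.org")
TIER2_HINTS = ("reuters.com", "apnews.com", "bbc.co")

def rough_tier(url: str) -> int:
    """Rough URL-based tier guess: 1 = primary, 2 = major outlet, 3 = everything else."""
    url = url.lower()
    if any(h in url for h in TIER1_HINTS):
        return 1
    if any(h in url for h in TIER2_HINTS):
        return 2
    return 3
```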

## Confidence Score

| Score | Meaning |
|-------|---------|
| 9–10 | Multiple Tier 1 sources in strong agreement; minimal gaps |
| 7–8 | Solid Tier 1/2 consensus; minor unresolved questions |
| 5–6 | Mixed sources; some contradictions; moderate gaps |
| 3–4 | Limited Tier 1; heavy Tier 2/3 reliance; significant gaps |
| 1–2 | Scarce credible sources; major contradictions; high uncertainty |
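Read mechanically, the rubric above maps a source profile to the bottom of a band. The function below is a rough sketch of that reading, not the skill's actual scoring logic, and its thresholds are assumptions:

```python
def confidence_band(tier1: int, tier2: int, contradictions: int) -> int:
    """Map counts of Tier 1/2 sources and contradictions to the low end
    of a confidence band from the rubric (illustrative thresholds only)."""
    if tier1 >= 3 and contradictions == 0:
        return 9   # multiple Tier 1 sources, strong agreement
    if tier1 >= 2 and contradictions <= 1:
        return 7   # solid Tier 1/2 consensus, minor open questions
    if tier1 >= 1 or tier2 >= 3:
        return 5   # mixed sources, moderate gaps
    if tier2 >= 1:
        return 3   # limited Tier 1, heavy Tier 2/3 reliance
    return 1       # scarce credible sources
```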

## Output Format

```markdown
# Research Brief: [Topic]

## Key Findings
- [Finding 1] — Source: [Tier 1/2/3 source]
- [Finding 2] — Source: ...

## Source Quality Summary
- X Tier 1 sources used
- Y Tier 2 sources used
- Z Tier 3 sources (noted where used)

## Conflicting Evidence
[Where sources disagree and why]

## Confidence Score: X/10
[Rationale]

## Open Questions
- [What we don't know yet]
- [What would increase confidence]
```
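For tooling that consumes research briefs, the format is simple enough to render mechanically. This is a minimal formatting sketch; the function and its parameter names are assumptions, not part of the skill:

```python
def render_brief(topic, findings, tiers, conflicts, score, rationale, open_qs):
    """Render the research-brief format above from plain Python values.
    findings: list of (text, tier) pairs; tiers: {tier: count} dict."""
    lines = [f"# Research Brief: {topic}", "", "## Key Findings"]
    lines += [f"- {text} — Source: Tier {tier}" for text, tier in findings]
    lines += ["", "## Source Quality Summary"]
    for t in (1, 2, 3):
        lines.append(f"- {tiers.get(t, 0)} Tier {t} sources used")
    lines += ["", "## Conflicting Evidence", conflicts or "None noted."]
    lines += ["", f"## Confidence Score: {score}/10", rationale]
    lines += ["", "## Open Questions"] + [f"- {q}" for q in open_qs]
    return "\n".join(lines)
```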

## Usage

```
use autoresearch to investigate [topic]
deep research on [subject] — I need source quality scores
run autoresearch on UAP disclosure news
research the history of [topic] with confidence scoring
```

## Worked Example — UAP Research

A research brief on UAP (Unidentified Aerial Phenomena) disclosure might include:

- Tier 1: Congressional hearing transcripts, DoD/AARO reports, declassified documents
- Tier 2: Reuters/AP reporting on Congressional testimony, established defence publications
- Tier 3: Podcasts, blogs, social media claims (noted but low weight)
- Confidence: 6/10 — solid on documented testimony, low on physical evidence claims
- Open questions: nature of retrieved materials, chain of custody for physical evidence

## File Structure

```
autoresearch/
├── SKILL.md                  ← OpenClaw entry point
├── README.md                 ← This file
├── benchmark/
│   ├── tasks.json            ← Sample research tasks
│   └── scorer.md             ← Quality scoring rubric
├── examples/
│   └── uap_example.md        ← Worked UAP research brief
├── templates/
│   └── research_brief.md     ← Research brief template
└── runner/
    └── run_research.md       ← Research loop instructions
```

## Support the Project

Ko-fi · 💛 GitHub Sponsors


## License

MIT © 2026 — free to use, modify, and redistribute.
