interviewer



Turn vague ideas, feature requests, and half-written tickets into structured, actionable specifications -- without writing a single line of documentation yourself.

You get a Jira ticket that says "add caching." Caching where? What eviction policy? What latency target? What data? The ticket doesn't say. You start building, make assumptions, get halfway through, and learn in review that the PM meant something completely different. Three days wasted.

interviewer fixes this. It runs an adaptive Socratic interview that asks the right questions in the right order, detects what's already in your codebase so it doesn't ask obvious things, scores how ambiguous your requirements still are, and produces a REQUIREMENTS.md that any engineer can pick up and build from.

What it produces

Given a vague request like "we need better error handling," the plugin interviews you through 3-15 rounds and generates:

# Requirements: Structured Error Handling

## Goal
Implement consistent error handling across all API endpoints with
categorized error types, structured JSON responses, and centralized logging.

## Scope
### In Scope
- Custom error classes for validation, auth, not-found, and internal errors
- Global error middleware that catches and formats all errors
- Structured error logging with correlation IDs

### Out of Scope
- Client-side error display changes
- Retry logic for downstream service failures

## Acceptance Criteria
- [ ] All API endpoints return errors in { code, message, details } format
- [ ] Unhandled exceptions are caught by global middleware, not per-route
- [ ] Error logs include correlation ID, stack trace, and request context

Ambiguity Score: 0.12/1.0
Rounds: 7

No guessing. No assumptions. Just the clarity you needed before writing the first line of code.

Who is this for

  • Engineers who are tired of building the wrong thing because the requirements were vague
  • Tech leads who want to front-load clarity before sprint work begins
  • Product managers who want a structured way to communicate intent
  • Teams doing async work where "just ask in standup" doesn't scale
  • Anyone who has ever said "wait, that's not what I meant" during code review

Prerequisites

  • A working Claude Code installation (the plugin is installed and run through the claude CLI)

Installation

From GitHub:

claude plugin install fatihgune/interviewer

From a local clone:

git clone https://github.com/fatihgune/interviewer.git
claude --plugin-dir ./interviewer

Getting started

Navigate to any project directory and run:

/interview add user authentication with SSO

Or start without context:

/interview

The plugin will ask "What are we building or solving?" and take it from there.

Brownfield detection

If you run the interview inside an existing project, the plugin automatically scans your codebase for project markers (package.json, pyproject.toml, go.mod, etc.), detects your tech stack, and asks confirmation-style questions instead of generic discovery ones:

  • "I see Express.js with JWT middleware in src/auth/. Should the new feature use this existing auth?"
  • NOT: "Do you have any authentication set up?"

This means fewer rounds, better questions, and no redundant answers.
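The marker-based scan can be pictured with a short sketch. This is illustrative only -- the plugin's actual detection rules live in skills/interview/SKILL.md, and the Cargo.toml entry here is an added example, not one the README lists:

```python
from pathlib import Path

# Marker files mapped to the stack they indicate. The first three come
# from the README's examples; Cargo.toml is a hypothetical addition.
MARKERS = {
    "package.json": "Node.js",
    "pyproject.toml": "Python",
    "go.mod": "Go",
    "Cargo.toml": "Rust",
}

def detect_stack(project_dir: str) -> list[str]:
    """Return the stacks whose marker files exist at the project root."""
    root = Path(project_dir)
    return [stack for marker, stack in MARKERS.items() if (root / marker).exists()]
```

A project containing only a go.mod would be detected as Go, letting the interviewer open with confirmation questions about that stack instead of generic discovery.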

The interview loop

Each round presents 1-2 focused questions with suggested answers you can pick or override. The plugin adapts every question based on everything you've said so far -- no fixed question lists.

Q3: What latency is acceptable for cached responses?

  [1] Under 50ms (in-memory cache like Redis)
  [2] Under 500ms (acceptable for CDN or distributed cache)
  [3] Custom answer

(say 'done' when you have enough clarity)

Ambiguity scoring

When you say "done," the plugin scores your requirements across three dimensions:

Dimension            Weight   What it measures
Goal Clarity         40%      Is the goal specific and well-defined?
Constraint Clarity   30%      Are constraints and limitations specified?
Success Criteria     30%      Are success criteria measurable and verifiable?

If the ambiguity score is above 0.2, the plugin tells you which dimensions are weak and suggests targeted follow-up questions. You can continue or proceed anyway.
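The weighting above amounts to a simple weighted sum. A minimal sketch, assuming each dimension is rated from 0.0 (fully clear) to 1.0 (fully ambiguous) -- the rating scale and function names are illustrative, not the plugin's internals:

```python
# Weights from the table above.
WEIGHTS = {
    "goal_clarity": 0.40,
    "constraint_clarity": 0.30,
    "success_criteria": 0.30,
}

def ambiguity_score(ratings: dict[str, float]) -> float:
    """Combine per-dimension ambiguity ratings into one weighted score."""
    return round(sum(WEIGHTS[d] * ratings[d] for d in WEIGHTS), 2)

# A spec with a crisp goal but somewhat fuzzy constraints:
score = ambiguity_score({
    "goal_clarity": 0.05,
    "constraint_clarity": 0.30,
    "success_criteria": 0.10,
})
# score == 0.14 -- under the 0.2 threshold, so the spec would pass
```

Because goal clarity carries the largest weight, a vague goal drags the score up faster than a missing constraint does.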

Output

The final REQUIREMENTS.md is written to your working directory, ready for implementation:

Requirements written to REQUIREMENTS.md.
Ambiguity score: 0.12.
Ready for implementation.

Commands

Command                Purpose
/interview <context>   Start an interview with initial context (ticket, idea, bug description)
/interview             Start a blank interview -- the plugin asks what to work on

How it works

Questioning strategy

The interview follows a deliberate sequence to reduce ambiguity as fast as possible:

  1. The "what" -- What exactly is being built? What problem does it solve?
  2. Scope boundaries -- What is explicitly out of scope?
  3. Constraints -- Technical constraints, deadlines, dependencies
  4. Acceptance criteria -- How will we know this is done?
  5. Edge cases -- What happens when things fail?

It uses ontological questions to cut through vagueness:

  • "What IS this, precisely?"
  • "Is this the root cause or a symptom?"
  • "What are we assuming here?"
  • "What's the simplest version that still delivers value?"

Adaptive behavior

  • If an answer reveals a new ambiguity, the plugin pivots immediately
  • If an answer is vague ("it should be fast"), the plugin pushes for specifics ("what latency? 100ms? 1s?")
  • If you contradict a previous answer, it surfaces the contradiction directly
  • Detailed tickets need 3-5 rounds; vague ideas need 8-15

Output structure

REQUIREMENTS.md          # Written to your working directory
  # Requirements: <title>
  ## Goal
  ## Context              # Type (greenfield/brownfield), stack, source
  ## Scope                # In scope / out of scope
  ## Constraints
  ## Acceptance Criteria   # Checkboxes, measurable
  ## Edge Cases
  ## Interview Summary
  Ambiguity Score: X/1.0
  Rounds: N

Contributing

Contributions are welcome. Here are some ways to help:

  • Report bugs: Open an issue at github.com/fatihgune/interviewer/issues
  • Improve the interviewer agent: The questioning logic lives in agents/interviewer.md. If the plugin asks bad questions or misses important dimensions, improve the rules.
  • Improve the skill flow: The interview orchestration is in skills/interview/SKILL.md. If the flow should handle new scenarios (multi-team projects, incident response, etc.), extend it there.
  • Add brownfield detectors: The skill currently detects common project markers. If your stack uses a different marker, add it to the detection list.

Related Concepts

If you've been searching for any of the following, this plugin is what you're looking for:

  • AI requirements gathering tool
  • Automated requirements engineering
  • Socratic interview for software specifications
  • Vague ticket to clear spec converter
  • Requirements clarification chatbot
  • Ambiguity scoring for requirements
  • Jira ticket refinement tool
  • Product requirements document generator
  • Claude Code plugin for requirements
  • Sprint planning clarification tool
  • Feature spec generator from conversation
  • PRD generator from vague ideas

GitHub Topics: claude-code-plugin requirements-gathering requirements-engineering specification-generator developer-tools project-planning agile product-management ambiguity-reduction technical-interview

License

MIT
