Score prose · Diagnose failures · Rewrite clean · Two-pass validation
AI writing has patterns, and once you know them you'll never read the same way again: predictable phrases, recycled structures, rhythms that repeat so consistently they feel algorithmic, because they are. This skill teaches Claude to find them, name them, and cut them out at the root. Use it when you're drafting, rewriting, or editing anything where "good enough" isn't the standard.
Run your text through any "humanizer" and it'll happily swap "utilize" for "use," delete every "furthermore," and call it done.
A real person reads it and still spots it instantly.
Because the giveaway isn't the fancy words. It's the patterns underneath them: the lazy connectors, the topic sentences that spoil the whole idea in the first line, the paragraphs that circle back and say exactly what they said at the start. Those deeper patterns survive every surface-level polish.
This skill catches them.
Before:
"Here's the thing: building products is hard. Not because the technology is complex. Because people are complex. Let that sink in."
After:
"Building products is hard. Technology is manageable. People aren't."
What changed: Opener removed. Binary contrast cut. Emphasis crutch gone. Three direct statements.
Before (subtle failure, passes most tools):
"When you're deep in a project, it's easy to lose sight of the original goal. This happens to most teams."
After:
"Most teams drift. Nobody notices until the deadline."
What changed: "When you're deep in" is a slow conditional entry. "This happens to most teams" observes from a distance instead of naming who does the thing. Neither phrase appears on a standard banned list. Both produce the same softness. The skill catches both.
Layer 1: Sentence-level patterns
| Category | Examples caught |
|---|---|
| Throat-clearing openers | "Here's the thing:" · "It turns out" · "The uncomfortable truth is" |
| Emphasis crutches | "Let that sink in." · "Full stop." · "Make no mistake" |
| Business jargon | navigate · leverage · lean into · circle back · game-changer |
| All adverbs | really · just · literally · genuinely · actually · crucially |
| Meta-commentary | "Let me walk you through..." · "In this section, we'll..." |
| Vague declaratives | "The implications are significant" · "The stakes are high" |
| False agency | "the decision emerges" · "the culture shifts" · "the data tells us" |
| Binary contrasts | "Not because X. Because Y." · "The answer isn't X. It's Y." |
| Passive voice | every construction where a human subject can be named instead |
| Em dashes | removed, always |
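To make the mechanics concrete, here is a minimal Python sketch of what a Layer 1 scan could look like. The three-item list is a hypothetical sample; the full lists live in references/phrases.md, and the real skill applies judgment rather than literal string matching.

```python
import re

# Hypothetical samples; the complete lists live in references/phrases.md.
BANNED_PHRASES = [
    "here's the thing",   # throat-clearing opener
    "let that sink in",   # emphasis crutch
    "circle back",        # business jargon
]
ADVERBS = re.compile(r"\b(really|just|literally|genuinely|actually|crucially)\b", re.I)

def layer1_flags(text: str) -> list[str]:
    """Scan a sentence or paragraph for sentence-level tells."""
    flags = []
    lowered = text.lower()
    for phrase in BANNED_PHRASES:
        if phrase in lowered:
            flags.append(f"banned phrase: {phrase!r}")
    flags += [f"adverb: {m.group(0)!r}" for m in ADVERBS.finditer(text)]
    if "\u2014" in text:  # em dashes: removed, always
        flags.append("em dash")
    return flags

print(layer1_flags("Here's the thing: we should literally circle back."))
# -> flags the opener, the jargon, and the adverb
```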
Layer 2: Second-generation tells (what survives the first pass)
These don't appear on any standard list. They survive because they look clean sentence-by-sentence. They fail at the paragraph level.
| Pattern | Why it fails |
|---|---|
| "This means that..." | Lazy connector β the writer didn't earn the transition |
| "When you X, you Y" as an opener | Conditional filler β sounds thoughtful, reads as stall |
| "There's something about X that..." | Circling instead of landing |
| "What this looks like in practice..." | Announcing the example instead of giving it |
| "It's a reminder that..." | Moralizing closer |
| "And yet." (standalone) | Performative tension |
| 3+ sentences opening with "The" or "This" | Opener uniformity |
| Last sentence of a paragraph restating the first | Semantic repetition |
| Every paragraph beginning with a topic sentence | Topic sentence pattern |
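The paragraph-level checks can also be sketched mechanically. A minimal Python illustration follows; the 0.5 overlap threshold is a guess for demonstration, since the skill itself judges semantic repetition, not raw word overlap.

```python
import re

def sentences(paragraph: str) -> list[str]:
    # Naive splitter; good enough for a sketch.
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", paragraph) if s.strip()]

def opener_uniformity(paragraph: str) -> bool:
    """Flag 3+ sentences opening with 'The' or 'This'."""
    firsts = [s.split()[0] for s in sentences(paragraph) if s.split()]
    return sum(w in ("The", "This") for w in firsts) >= 3

def semantic_repetition(paragraph: str, threshold: float = 0.5) -> bool:
    """Flag a last sentence that restates the first (word-overlap proxy)."""
    sents = sentences(paragraph)
    if len(sents) < 2:
        return False
    first = set(sents[0].lower().split())
    last = set(sents[-1].lower().split())
    jaccard = len(first & last) / max(len(first | last), 1)
    return jaccard >= threshold  # hypothetical cutoff
```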
Start every dimension at 10. Deductions are specific and enumerable: same text produces the same score every session.
| Dimension | Deduct when |
|---|---|
| Directness | Context-setting before the point · restated conclusion · passive constructions |
| Naturalness | Banned phrases · second-generation tells · adverbs · register shifts |
| Rhythm | Uniform sentence lengths · repeated openers · consecutive punchy closers · em dashes |
| Clarity | Vague declaratives · inanimate subjects doing human verbs · lazy extremes |
| Density | Paragraph's last sentence restates its first · same idea across paragraphs · cuttable sentences |
Thresholds:
| Score | Action |
|---|---|
| 46–50 | Clean. Deliver as-is. |
| 36–45 | Light edit. Fix and deliver. |
| ≤35 | Full rewrite before delivering. |
Two-pass rule: After any rewrite, the skill runs all checks again before scoring. The second pass catches what the first one missed.
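As a sketch of the arithmetic (assuming one point per enumerated failure, a weighting SKILL.md actually defines), the scoring and the two-pass rule reduce to something like:

```python
DIMENSIONS = ("directness", "naturalness", "rhythm", "clarity", "density")

def score(deductions: dict[str, int]) -> int:
    # Each dimension starts at 10; assume one point per enumerated failure.
    return sum(max(10 - deductions.get(d, 0), 0) for d in DIMENSIONS)

def action(total: int) -> str:
    if total >= 46:
        return "clean: deliver as-is"
    if total >= 36:
        return "light edit: fix and deliver"
    return "full rewrite before delivering"

def deliver(text, diagnose, rewrite):
    """Two-pass rule: any rewrite triggers a full re-check before scoring."""
    report = diagnose(text)           # dict of deductions per dimension
    if score(report) < 46:
        text = rewrite(text, report)  # light edit or full rewrite
        report = diagnose(text)       # pass two catches what pass one missed
    return text, score(report), action(score(report))
```

A draft with, say, two directness failures and one rhythm failure scores 47 under this reading and ships as-is.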
Most people use one of three things to clean AI writing: a vague prompt, an AI detector, or a humanizer tool. None of them solve the actual problem.
"Make it sound more human" is the usual prompt. Vague instruction, vague result: the model removes a few obvious phrases and calls it done. Nothing is pinned down, nothing is consistent, and the next session produces a different result on the same text.
| Feature | Stop-the-Slop | Vague AI prompt |
|---|---|---|
| Catches second-generation tells | ✅ | ❌ rarely |
| Paragraph-level pattern diagnosis | ✅ | ❌ |
| Opener uniformity check | ✅ | ❌ |
| Semantic repetition across paragraphs | ✅ | ❌ |
| Topic sentence pattern detection | ✅ | ❌ |
| Calibrated deduction scoring | ✅ | ❌ |
| Same score on same text every session | ✅ | ❌ varies |
| Two-pass validation before delivery | ✅ | ❌ |
| Asks before inventing specifics | ✅ | ❌ guesses |
| Tells you exactly what failed and why | ✅ | ❌ |
Detectors tell you if you failed. They don't tell you what to fix or how. A score of 78% AI tells you nothing about which sentence caused it, why it reads that way, or what a better version looks like. They're a verdict without a diagnosis.
They also update constantly. A technique that scores 0% today scores 60% next quarter when the model retrains.
| Feature | Stop-the-Slop | AI detectors |
|---|---|---|
| Tells you what to fix | ✅ | ❌ |
| Rewrites the problem | ✅ | ❌ |
| Diagnosis with exact failing phrase | ✅ | ❌ |
| Works on voice quality, not statistics | ✅ | ❌ |
| Results don't expire when detector updates | ✅ | ❌ |
| Paragraph-level pattern detection | ✅ | ❌ |
| Catches false agency and passive voice | ✅ | ❌ |
| Catches opener uniformity | ✅ | ❌ |
| Free to use | ✅ | ❌ $10–$30/mo |
They optimize for one thing: lowering a detector score. The technique is word swapping and sentence length variation, mechanical changes that fool a statistical model. The output often reads worse than the input. Their own before/after examples show broken grammar at high intensity.
They also stop at the sentence level. Paragraph architecture, opener uniformity, semantic repetition across sections: none of that is touched.
| Feature | Stop-the-Slop | Humanizer tools |
|---|---|---|
| Output reads better than input | ✅ | ❌ |
| Works at paragraph level | ✅ | ❌ |
| Second-generation tell detection | ✅ | ❌ |
| Doesn't break grammar | ✅ | ❌ |
| Diagnosis with exact failing phrase | ✅ | ❌ |
| Calibrated scoring | ✅ | ❌ |
| Doesn't expire when detectors update | ✅ | ❌ |
| Asks before adding specifics | ✅ | ❌ invents |
| Free to use | ✅ | ❌ subscription |
| Voice-preserving | ✅ | ❌ replaces voice |
Every other tool asks: does this pass a detector?
This skill asks: does this sound like a person wrote it?
Those are different questions. The first has an expiry date. The second doesn't.
- Writers who use AI to draft and need the output to sound like them, not like a model trying to sound helpful.
- Ghostwriters reviewing AI-assisted drafts before they go to clients.
- Content teams who run everything through a final quality check before publishing.
- Founders and operators who write their own content and want a fast second pass before hitting send.
It won't make writing sound like you specifically: that's a voice problem, not a slop problem. This skill removes what's wrong. A separate skill handles installing what's yours.
It won't guarantee AI detection scores. Detection tools measure statistical patterns. This skill measures whether prose sounds like a person wrote it. Those are different problems. Optimizing for detector scores produces broken grammar. This doesn't.
stop-the-slop/
├── SKILL.md              # Core instructions, scoring, quick checks
├── references/
│   ├── examples.md       # 13 before/after examples (5 obvious, 8 subtle)
│   ├── phrases.md        # Banned phrases + second-generation tells
│   └── structures.md     # Structural patterns with generative rules
└── LICENSE
Claude Projects: Upload SKILL.md and the references/ folder to project knowledge.
Claude Code: Add this folder as a skill.
API / system prompt: Include SKILL.md in your system prompt. Reference files load on demand.
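For the API route, a minimal sketch using the Anthropic Python SDK might look like this. The model name and file path are illustrative, not prescribed by the skill.

```python
from pathlib import Path
import anthropic

skill = Path("stop-the-slop/SKILL.md").read_text()

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
response = client.messages.create(
    model="claude-sonnet-4-5",  # illustrative; use any current model
    max_tokens=1024,
    system=skill,               # the skill rides along as the system prompt
    messages=[{"role": "user", "content": "Score my writing:\n\n<your draft>"}],
)
print(response.content[0].text)
```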
Say any of these and the skill activates:
"rewrite this" Β· "edit this" Β· "does this sound AI?" Β· "clean this up" Β· "make this sound human" Β· "score my writing" Β· "review this draft" Β· "too formal" Β· "too robotic" Β· "this doesn't sound like me"
Or paste text and ask for feedback. If prose quality is the question, the skill runs.
MIT. Use freely, share widely.