From 0e200a1df7d0f472807176b4cac4680a2b791c35 Mon Sep 17 00:00:00 2001
From: Ambar
Date: Sat, 28 Feb 2026 11:59:32 +0800
Subject: [PATCH] Update README.md

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index b74bb3b..ba84921 100644
--- a/README.md
+++ b/README.md
@@ -108,7 +108,7 @@ When you give an LLM the ability to take real actions — send emails, write fil
 2. **The LLM violates a rule you declared.** "Never delete records", "never send more than 20 emails per hour", "stop if the error rate exceeds 5%" — these rules need to be enforced, not just stated in a prompt.
 3. **Something crashes mid-run and leaves state inconsistent.** An action was half-applied and you don't know what happened.
 
-Prompt engineering ("please don't do bad things") is not enforcement — the model can hallucinate past it. Post-hoc filtering fires after the damage. ClampAI enforces at the **execution layer**: the LLM produces a decision; the kernel decides whether it executes. The two are separate, and neither can override the other.
+Prompt engineering ("please don't do bad things") is not enforcement, the model can hallucinate past it. Post-hoc filtering fires after the damage. ClampAI enforces at the **execution layer**: the LLM produces a decision; the kernel decides whether it executes. The two are separate, and neither can override the other.
 
 ---