Claude Code plugin · Open source · Human-in-the-loop
claude-human-review — Plain-English Diff Narration for Claude Code
claude-human-review is a Claude Code plugin that pauses on every diff and tells you, in plain English, what the agent is about to do. Why it's renaming the function. Why it's deleting the test. Why it touched the migration file.
Why I built it
I love what Claude Code can do in a tight loop. I do not love the failure mode where the agent rewrites three unrelated files because the prompt was ambiguous and you only notice on the PR. The fix isn't "don't use the agent" — it's "make every edit cheap to review."
claude-human-review hooks into Claude Code's edit lifecycle and emits a short narration before each apply. Reviewing becomes a paragraph of English instead of a diff hunt. (I am, hah, the demographic for this.)
How it works
The plugin registers a pre-edit hook with Claude Code. When the agent proposes a change, the plugin asks Claude to summarize the diff against the user's original request, surfaces that summary in the terminal, and waits for approve / reject / amend. The narration is structured: what file, what change, why it serves the goal.
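The pre-edit flow can be sketched as a small hook script. This is a minimal illustration, not the plugin's actual code: the payload field names (`tool_input`, `file_path`, `old_string`, `new_string`) and the `--hook` flag are assumptions for the sketch, and the "exit 2 to block" convention follows Claude Code's documented hook behavior.

```python
#!/usr/bin/env python3
"""Sketch of a pre-edit narration hook (illustrative, not the real plugin).

Assumed wiring: Claude Code invokes this script before applying an edit,
passing a JSON payload on stdin. A non-zero exit (2) blocks the edit.
"""
import json
import sys


def narrate(tool_input: dict) -> str:
    """Build the structured narration: what file, what change.

    The "why it serves the goal" line would come from asking the model to
    summarize the diff against the user's original request; here it is a
    placeholder, since that call is plugin-specific.
    """
    path = tool_input.get("file_path", "(unknown file)")
    old = tool_input.get("old_string", "")
    new = tool_input.get("new_string", "")
    return (
        f"File: {path}\n"
        f"Change: replaces {len(old.splitlines())} line(s) "
        f"with {len(new.splitlines())} line(s)\n"
        f"Why: (model-generated summary of how this serves the request)"
    )


def main() -> int:
    event = json.load(sys.stdin)  # hook payload arrives on stdin
    print(narrate(event.get("tool_input", {})), file=sys.stderr)
    answer = input("approve / reject / amend? ").strip().lower()
    # Exit 0 lets the edit through; exit 2 blocks it and surfaces feedback.
    return 0 if answer == "approve" else 2


if __name__ == "__main__" and "--hook" in sys.argv:
    sys.exit(main())
```

In this sketch "amend" is treated the same as "reject" (the edit is blocked and the user's note flows back to the agent); a fuller implementation would thread the amendment text into the model's next turn.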
The implementation is small on purpose. Most of the value isn't in the code — it's in the discipline it forces. Every edit becomes a checkpoint instead of a silent step in a sequence.
What I take from it
Human-in-the-loop doesn't have to be friction. With the right narration you can move at agent speed for routine work and slow down only when the explanation doesn't match expectation. That's the right shape of trust for AI agent development on real codebases — and, I suspect, the part of the practice that ages best as the underlying models keep changing.