AI
How does Claude Code find a regression that the test suite did not catch?
A captured `claude --print` session tracks down and reverts a real regression introduced by a refactor commit. The user's complaint named a specific wrong number (0.111 instead of 0.1 for a month with one cancellation out of ten). Claude ran `git log`, jumped straight to `git show <sha>` on the suspect commit because its message named the affected feature, then ran `git revert` and `npm test`. Five tool calls in total; the article shows why this works here, why it would not work for a regression that had been sitting in the codebase for weeks, and what gap in the test suite let the bug through in the first place.
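
A minimal sketch of how the session can be retraced by hand, assuming a Node project: the prompt wording and exact flags are illustrative rather than quoted from the transcript, and `<sha>` stands in for the real commit hash.

```bash
# Driving Claude Code non-interactively; the prompt wording is invented for illustration.
claude --print "Cancellation rate shows 0.111 for a month with 1 cancellation out of 10; expected 0.1. Find the regression and revert it."

# The command sequence the session boils down to; <sha> is the suspect refactor commit.
git log --oneline            # scan recent history for a likely culprit
git show <sha>               # inspect the commit whose message names the affected feature
git revert --no-edit <sha>   # back the refactor out
npm test                     # confirm the suite passes after the revert
```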