A captured `claude --print` session against the demo, with a UserPromptSubmit hook that appends the current branch, the last 3 commits, and `git diff --stat` to every prompt. The user asked “Summarize what work is currently in flight” with two uncommitted changes on disk, and Claude’s thinking block confirmed it read the injection (“I can see from the git status and recent commits…”) before running 2 Bash calls to verify the actual diff content. The hook turned a vague prompt into one Claude could answer in 3 turns at $0.12, and surfaced a real Bash-matcher footgun: the allow rule did not catch the form Claude reached for.
The setup
This is the fifth and final hook lifecycle phase wired into the demo. The other four (SessionStart, PreToolUse, PostToolUse, Stop) are covered in their own articles. The minimal settings.json snippet that adds just UserPromptSubmit:
```json
{
  "hooks": {
    "UserPromptSubmit": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "\"$CLAUDE_PROJECT_DIR\"/.claude/hooks/inject-work-in-flight.sh"
          }
        ]
      }
    ]
  }
}
```
UserPromptSubmit does not support matchers; it fires on every prompt. Add this entry under the existing top-level hooks object, alongside any other lifecycle hooks. The hook script:
```bash
#!/usr/bin/env bash
set -euo pipefail
cd "${CLAUDE_PROJECT_DIR:-$(pwd)}"

branch=$(git rev-parse --abbrev-ref HEAD 2>/dev/null || echo "no-git")
last_commits=$(git log --oneline -3 2>/dev/null || true)
diff_stat=$(git diff --stat HEAD 2>/dev/null | head -10 || true)
unstaged=$(git diff --name-only 2>/dev/null | wc -l | tr -d ' ' || echo 0)
untracked=$(git ls-files --others --exclude-standard 2>/dev/null | head -5 || true)

# Stay silent on a clean tree so we do not burn context tokens on
# prompts that have nothing in flight to summarize.
if [ "$unstaged" = "0" ] && [ -z "$untracked" ]; then
  exit 0
fi

cat <<EOF
[work-in-flight context, injected by UserPromptSubmit hook]
branch: $branch
last commits:
$last_commits
uncommitted changes (${unstaged} files):
$diff_stat
untracked:
$untracked
EOF
exit 0
```
Two design choices worth flagging. First, the script exits silently on a clean tree. Every prompt pays the hook latency anyway (a fork plus a couple of git plumbing calls), but this saves the model from seeing irrelevant context on prompts where nothing is in flight. Second, the script uses plain stdout, which the docs describe as the simplest path: any text printed gets appended as additional context. The structured `additionalContext` JSON path renders more discreetly (the docs note it shows up “more discretely” in the transcript), but plain stdout is enough for a summary block that benefits from being visible.
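For comparison, a minimal sketch of the structured alternative, assuming `jq` is available on the machine running the hook. Instead of printing the context block directly, the tail of the script wraps it in the documented `hookSpecificOutput` envelope; the `context` value here is a stand-in for the block the script above assembles.

```bash
#!/usr/bin/env bash
# Sketch: emit the structured JSON form instead of plain stdout.
# The envelope shape follows the documented hookSpecificOutput path
# for UserPromptSubmit; $context stands in for the assembled block.
context="[work-in-flight context, injected by UserPromptSubmit hook]
branch: main"

jq -cn --arg ctx "$context" '{
  hookSpecificOutput: {
    hookEventName: "UserPromptSubmit",
    additionalContext: $ctx
  }
}'
```

Using `jq` for the serialization handles the JSON string escaping that a hand-rolled `printf` would get wrong on multi-line context.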
The pre-capture state
Before running the capture, two uncommitted changes on disk:
```diff
+ cancelAllForCitizen(_citizenId: string, _reason?: string): Booking[] {
+   throw new Error('cancelAllForCitizen: not implemented yet')
+ }

+ describe.skip('cancelAllForCitizen', () => {
+   it('cancels every booking belonging to the citizen', () => {
+     // pending implementation
+   })
+ })
```
A stub method and a skipped test placeholder. The shape of “I started this, did not finish, would forget by tomorrow” that the hook is supposed to surface.
The prompt
```text
Summarize what work is currently in flight in this repo. Be concise.
```
Vague on purpose. With nothing injected, Claude would have to start by running git status or git log to figure out what to summarize. The interesting question is how much shorter the session gets when the hook hands those results over for free.
What Claude did, in 3 turns and 2 Bash calls
The session ran 15 seconds, 3 turns, $0.12. The tool-call census:
| Turn | Tool | Input |
|---|---|---|
| 1 | Bash | `git -C /Users/kalle/Documents/kalle.works/active/answerqa-demo-app diff HEAD` |
| 2 | Bash | `git diff HEAD -- src/booking/service.ts src/booking/service.test.ts` |
The first Bash call was rejected with "This command requires approval". The demo’s allow list includes `Bash(git diff:*)` but not the `-C <path>` form Claude reached for. The settings.json prefix matcher matches the literal string `git diff` at the start of the command, so `git -C ... diff HEAD` does not match. Claude noticed the rejection, stripped the `-C`, and used the relative-path form on the second try, which matched.
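The mismatch is easy to reproduce outside Claude Code. Here is a crude shell model of the `Bash(<prefix>:*)` allow rule — an approximation of the documented literal-prefix behavior, not the real implementation — showing why the `-C` form falls through:

```bash
# Crude model of the Bash(<prefix>:*) allow rule: the command must
# start with the literal prefix string. Not the real implementation.
matches() {
  case "$2" in
    "$1"*) return 0 ;;
    *)     return 1 ;;
  esac
}

matches "git diff" "git diff HEAD -- src/booking/service.ts" \
  && echo "allowed"
matches "git diff" "git -C /full/path diff HEAD" \
  || echo "requires approval"
# prints "allowed" then "requires approval"
```

The second command is semantically identical to the first from the right cwd, but the literal prefix check never sees past `git -C`.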
Claude’s first thinking block, captured in events.jsonl, gave away that the injection landed:
```text
The user wants a summary of what work is currently in flight. I can
see from the git status and recent commits that there are
uncommitted changes in `src/booking/service.test.ts` and
`src/booking/service.ts`. Let me read those files to understand
what's in progress.
```
“I can see from the git status and recent commits” is the tell. Without the hook, Claude would have started with a tool call (git status, git log, or both), not a thinking block that names them as known facts. The hook saved Claude one to two tool calls at the start. It did not save the verification step (git diff to read actual content), which is fine. The injection is a starter, not a substitute for reading what changed.
The final summary (the result event, with one em-dash lightly normalized to a comma):
```text
**Work in flight:** a `cancelAllForCitizen` method on
`BookingService`, stub added to `service.ts` (throws "not
implemented") with a matching `describe.skip` test placeholder in
`service.test.ts`. The implementation hasn't been written yet.

Recent commits also show active work on Claude Code hooks: a
UserPromptSubmit hook that injects work-in-flight context (just
landed), and a Stop hook that blocks session end while tests fail.
```
Two paragraphs. The first describes the current uncommitted work (which came from git diff, not the hook’s stat alone). The second describes recent committed work (which came from the hook’s last commits block). The hook’s value is the second paragraph: Claude would not have grepped git log unprompted for “what was committed recently.”
The events.jsonl gap
UserPromptSubmit fires on every prompt. This session had one prompt, so the hook fired once. Searching events.jsonl for hook-related system events:
```console
$ jq -c 'select(.type == "system" and (.subtype | startswith("hook_")))' events.jsonl
{"subtype":"hook_started","hook_event":"SessionStart",...}
{"subtype":"hook_response","hook_event":"SessionStart","outcome":"success",...}
```
Only SessionStart shows. UserPromptSubmit fired (the thinking block proves it), but the events stream has no hook_started, hook_response, or notification event for it. This matches the pattern from PreToolUse (article #37: surfaces as tool_result errors), PostToolUse (article #39: surfaces nothing), and Stop (article #41: surfaces a misleadingly-named stop-hook-error notification only on block). UserPromptSubmit beats them all for invisibility: there is no event at all when it fires, blocking or not.
If you need to verify a UserPromptSubmit hook ran, the only options are: write a side-effect log inside the hook script (touch a file, append a line, increment a counter), or grep the user message in events.jsonl for the injected text. The events stream alone will not tell you the hook executed.
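A minimal sketch of the side-effect log, to be dropped near the top of the hook script. The log path is an assumption; anywhere writable works.

```bash
# One appended line per invocation, so `wc -l` on the log tells you
# how many times the hook actually fired this session.
# The path is an assumption; anywhere writable works.
log="${CLAUDE_PROJECT_DIR:-.}/.claude/hooks/userpromptsubmit.log"
mkdir -p "$(dirname "$log")"
printf '%s fired\n' "$(date -u +%Y-%m-%dT%H:%M:%SZ)" >> "$log"
```

Because the append happens inside the hook process itself, it works regardless of whether the events stream ever acknowledges the hook.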
What the model sees
The hook’s stdout is appended to the user’s prompt as additional context. The user message Claude saw was approximately:
```text
Summarize what work is currently in flight in this repo. Be concise.

[work-in-flight context, injected by UserPromptSubmit hook]
branch: main
last commits:
5b1f9a6 chore(hooks): wire up UserPromptSubmit hook injecting work-in-flight context
5e1aa01 docs(scenarios): add #41 Stop hook gating session end on tests
43bbc7a chore(hooks): wire up Stop hook refusing finish while tests fail
uncommitted changes (2 files):
src/booking/service.test.ts | 6 ++++++
src/booking/service.ts      | 4 ++++
2 files changed, 10 insertions(+)
```
Claude treats the injected block as part of the user’s intent. It does not see a separate “system” boundary between the user’s actual words and the hook’s appended text. That is by design (the docs call it “additional context for Claude”) but worth knowing: the model cannot tell what was the human and what was the script.
If you want a clearer separation, the JSON `additionalContext` path through `hookSpecificOutput` is documented as appearing more discreetly in the transcript. For most “give Claude context” use cases, plain stdout appended to the prompt is fine: the model handles it correctly, and the lower setup cost wins. For “inject a rule the model should follow but the user did not ask for,” the structured path is safer, because the user can audit hook-injected directives separately from their own prompts.
Footguns
Bash matcher prefix-strictness bites the injection. Claude reached for `git -C /full/path diff HEAD`, which is functionally identical to `git diff HEAD` from the right cwd, but the allow list had `Bash(git diff:*)` and the call was rejected. Why this matters: the hook’s injected context primes Claude to verify with git, and the most natural verification command may not match the allow rule you wrote. Either widen the rule (`Bash(git*)`, at the cost of granting more than you wanted), or write the hook injection to suggest the form that matches (`git diff HEAD -- <files>` rather than `git -C ... diff`). Allow-rule prefix matching is literal from the very start of the command, so `-C` after `git` shifts the matchable prefix.
Every prompt pays the hook cost, including prompts where the context is irrelevant. A 50-prompt session with this hook runs the script 50 times. Each run is a fork plus a couple of git invocations, on the order of 200ms locally. That is 10 seconds of latency added across the session. Why this matters: if your hook does anything heavier (a database query, a remote call, a full file read), you compound the latency on every prompt regardless of whether the user is asking about the thing the hook injects. The clean-tree silent-exit pattern this article uses is one mitigation; another is to gate the hook on a quick keyword check of the user’s prompt (read stdin, decide whether to inject).
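A sketch of that keyword gate, assuming the stdin envelope carries the user’s text under a `prompt` field (verify the field name against the hook input documentation for your version) and that `jq` is available:

```bash
#!/usr/bin/env bash
# Read the hook's stdin envelope, extract the prompt, and inject
# only when it looks like a question about in-flight work.
# Assumes a .prompt field in the envelope; verify against the docs.
prompt=$(jq -r '.prompt // ""')

case "$prompt" in
  *"in flight"*|*diff*|*commit*|*uncommitted*) ;;  # relevant: fall through
  *) exit 0 ;;                                     # irrelevant: inject nothing
esac

echo "[work-in-flight context, injected by UserPromptSubmit hook]"
# ...assemble and print the rest of the block as in the script above
```

The keyword list is a blunt instrument, but it runs in milliseconds and keeps the git calls off the hot path for prompts that will never use the context.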
The injection is invisible to the user. The user sent “Summarize what work is in flight.” Claude’s response cited specific file paths and line counts that came from the hook’s stat output, not from the user’s typing. To the user, it looks like Claude knows things it should not know. Why this matters: in a transcript review or audit (compliance, security review, post-incident), the hook-injected context is buried in the user message in events.jsonl rather than surfaced as a separate system event. Document the hook explicitly to your team, and consider writing the hook to prefix its output with a clear marker (this article’s script uses [work-in-flight context, injected by UserPromptSubmit hook]) so a transcript reader can spot what came from where.
Block decisions are silent in events.jsonl too. The article’s hook always allows; if it returned {"decision": "block", "reason": "..."} to refuse a prompt entirely, the prompt would be erased from context and the user would see only the reason. There is no hook_response event for that either. Why this matters: a UserPromptSubmit hook that blocks prompts (e.g., “no prod commands in this session”) is a real production pattern, but the verifiability story is the same as the additive case. Instrument the script with a side-effect log if you ship blocking behavior.
Hook injection counts against the context window. Every prompt that has work in flight gets the branch, three commit lines, and a diff stat appended. On a fresh session that is fine; on a 50-prompt session with the same uncommitted state, the hook prefix is repeated 50 times, eating tokens that compound across the session. Why this matters: if the underlying context (branch, last commits) is stable across the session, prefer SessionStart for the static part (the --continue and --resume documentation cover the durability story). Use UserPromptSubmit only for the genuinely per-prompt facts, like a fast-changing diff stat.
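One way to sketch that split as a settings.json fragment, with the static part moved to SessionStart; `inject-static-context.sh` is a hypothetical script name for the once-per-session injection, not something the demo ships.

```json
{
  "hooks": {
    "SessionStart": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "\"$CLAUDE_PROJECT_DIR\"/.claude/hooks/inject-static-context.sh"
          }
        ]
      }
    ],
    "UserPromptSubmit": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "\"$CLAUDE_PROJECT_DIR\"/.claude/hooks/inject-work-in-flight.sh"
          }
        ]
      }
    ]
  }
}
```

The per-prompt hook then only needs to print the fast-changing diff stat, not the branch and commit history on every turn.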
The model treats the injection as user intent. Claude responded as though the user had asked “summarize the work in src/booking/service.ts and the recent hook commits” because both paths were in the appended context. Why this matters: a hook that injects too much context will steer the model in directions the user did not intend. Keep the injection tight and topical to the kind of prompt the hook is meant to accelerate. If your hook injects a 500-line full diff, the next prompt about an unrelated file will still be primed by that diff.
When to use a UserPromptSubmit context-injection hook
- Per-prompt freshness matters. “What did I just do?” “What is in flight?” These questions need data from now, not from session boot. SessionStart’s once-per-session injection goes stale within minutes if the user is actively committing.
- The relevant context is small and bounded. Branch name, top of git log, diff stat: bounded growth, useful coverage. A full git diff is not bounded and is the wrong shape for this hook.
- The hook output is general-purpose enough to help most prompts. “Working tree summary” applies to any prompt about active work. “Latest sales figures” applies to one prompt out of fifty and pollutes the rest.
- You want the model to know something the user assumed without telling Claude. “Today is 2026-05-10,” “This repo uses pnpm not npm,” “There is a feature flag X gating this code path” are good fits when they cannot be reliably derived from the codebase or stale CLAUDE.md.
When NOT to use this pattern
- CLAUDE.md or SessionStart already covers it. Static project conventions belong in CLAUDE.md. One-time context (recent commits at boot) belongs in SessionStart. UserPromptSubmit is the right phase only when the data changes during the session.
- The hook is slow. A hook that takes 2 seconds adds 2 seconds to every prompt’s latency. If your “context” requires a network call, cache it elsewhere (a sidecar daemon, a file the hook reads in milliseconds) rather than running it inline on every prompt.
- Different prompts need different context. Prompt-shape-aware injection is a reasonable pattern, but the simple “always inject” form does not handle it. Write the hook to read the prompt from stdin (Claude Code passes the JSON envelope), classify, and inject conditionally.
- The injected text is sensitive. Branch names, file paths, and commit messages can contain customer names, ticket IDs, security findings. The injected context goes into the events.jsonl transcript, which may be backed up or analyzed by tooling. If you would not paste it into Slack, do not inject it on every prompt.
- Context window is already tight. On a 50-prompt session with a verbose hook, the cumulative injection eats meaningful budget. Track it: dump the events.jsonl, count the hook-prefix tokens (each prompt gets the full block), and decide whether the saved tool calls are paying for the burned tokens. They usually are; not always.
- You need predictable, verifiable behavior. UserPromptSubmit is the most invisible of the lifecycle hooks in events.jsonl. If your workflow needs an audit trail of what context Claude saw on each turn, instrument the hook to log to disk, or use a different mechanism (CLAUDE.md, an explicit slash command the user invokes).