Summarize the current coding session. Shows learnings and accomplishments in terminal, appends to today's Obsidian daily note under "## Notes >
Log your coding session progress to Obsidian daily notes, organized by project.
Read ~/.claude/settings.json and extract:
- env.OBSIDIAN_VAULT - Path to Obsidian vault
- env.SESSION_EXPORTS_BASE - External storage for session exports (default: ~/.claude/session-exports)

If OBSIDIAN_VAULT is not set, display: "Vault not configured. Run /vault first."
Read vault structure from <vault_path>/vault-config.yaml.
Paths used from vault-config.yaml:
- daily_notes → Daily Notes folder
- projects → Projects folder
- processed_coding → Vault location for lightweight summaries (full exports go to SESSION_EXPORTS_BASE)

Section format: ## Progress → ### [[Project Name]]
The rating is an optional assessment of how well Claude performed during the session.
Rating scale:
| Rating | Meaning |
|---|---|
| 7 | Exceptional - exceeded expectations |
| 6 | Great - very helpful |
| 5 | Good - solid assistance |
| 4 | Okay - got the job done |
| 3 | Poor - struggled significantly |
| 2 | Bad - mostly unhelpful |
| 1 | Terrible - counterproductive |
Input methods:
- /devlog 5 or /devlog 5 --slack

Comment: an optional free-text comment providing qualitative feedback about the session. It can be provided inline after the rating (e.g., /devlog 5 great session). If omitted, the user is prompted. Stored as string or null. See Step 4 for full parsing rules.
First, read <vault_path>/vault-config.yaml to get the daily_notes folder name.
Then check if today's daily note exists:
<vault_path>/<daily_notes>/YYYY-MM-DD.md
Use today's date in YYYY-MM-DD format.
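The existence check above can be sketched in shell. VAULT and DAILY are placeholder assumptions here; the real values come from ~/.claude/settings.json and vault-config.yaml:

```shell
# Sketch only -- VAULT and DAILY are assumed values; resolve the real
# ones from ~/.claude/settings.json and <vault_path>/vault-config.yaml.
VAULT="$HOME/Obsidian/Vault"
DAILY="Daily Notes"
TODAY=$(date +%Y-%m-%d)
NOTE="$VAULT/$DAILY/$TODAY.md"
if [ -f "$NOTE" ]; then
  echo "daily note exists: $NOTE"    # continue to Step 1.5
else
  echo "daily note missing: $NOTE"
fi
```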
If the file does NOT exist:
If the file exists: Continue to Step 1.5.
Determine if the current working directory is a git worktree (e.g., .claude/worktrees/worktree-*):
GIT_DIR=$(git rev-parse --path-format=absolute --git-dir)
GIT_COMMON_DIR=$(git rev-parse --path-format=absolute --git-common-dir)
if [ "$GIT_DIR" != "$GIT_COMMON_DIR" ]; then
IS_WORKTREE=true
MAIN_REPO_PATH=$(dirname "$GIT_COMMON_DIR")
else
IS_WORKTREE=false
MAIN_REPO_PATH=$(pwd)
fi
# Guard: real worktrees always have /worktrees/ in GIT_DIR
[ "$IS_WORKTREE" = true ] && [[ "$GIT_DIR" != */worktrees/* ]] && IS_WORKTREE=false && MAIN_REPO_PATH=$(pwd)
MAIN_REPO_BASENAME=$(basename "$MAIN_REPO_PATH")
# Encoded path for session-exports directory (matches Claude's project-path encoding)
MAIN_PROJECT_PATH=$(echo "$MAIN_REPO_PATH" | sed 's|/|-|g')
- If IS_WORKTREE=true, MAIN_REPO_PATH points to the main repository root
- If IS_WORKTREE=false, MAIN_REPO_PATH = CWD (no behavior change)

Use MAIN_REPO_BASENAME and MAIN_PROJECT_PATH in subsequent steps instead of CWD-derived values.
Identify which project this session belongs to:
- .md files in the Projects folder (<vault_path>/<projects> from vault-config.yaml)
- MAIN_REPO_BASENAME from Step 1.5 (e.g., personal-toolkit — uses main repo name even when running from a worktree)
- The # Title heading of each project file (e.g., ### [[Thought Organizer Agent]])

If no matching project found: Use "General" as the project name.
Export session to external storage (outside the vault) for future reference, with a lightweight summary in the vault.
Note: If session identification fails (session ID not resolved AND transcript not found), raise an error and DO NOT CONTINUE
Get current session info:
${CLAUDE_SESSION_ID}
(Claude Code replaces this literal with the actual UUID at skill load time -- no hooks needed)

SESSION_ID="<the substituted UUID visible above>"
TRANSCRIPT_PATH=$(find ~/.claude/projects/ -name "${SESSION_ID}.jsonl" -type f 2>/dev/null | head -1)
PROJECT_PATH=$(echo "$TRANSCRIPT_PATH" | sed "s|^$HOME/.claude/projects/||" | sed "s|/${SESSION_ID}.jsonl$||")
TMPBASE="${CLAUDE_CODE_TMPDIR:-/tmp/claude-$(id -u)}"

If the transcript is not found at the primary location, also check:
- $TMPBASE/ matching the CWD project
- ~/.claude/projects/{project-path}/{session-id}.jsonl

Verify with content fingerprinting:
Before proceeding, confirm the transcript actually contains THIS conversation. Multiple sessions can exist for the same project path, and the session ID may point to the wrong one.
Build a session fingerprint (reused in Part B2 -- build it once, store for later):
- Slash commands invoked (e.g., /stylize, /scribe)

Score the candidate transcript by grepping for all fingerprint terms at once:
grep -cE "term1|term2|term3|term4|term5" "$TRANSCRIPT_PATH"
Count how many distinct terms matched (each term that appears at least once = 1 match).
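Note that grep -c counts matching lines rather than distinct terms, so a per-term loop is a safer way to get the distinct count. A sketch, where TERMS is a hypothetical fingerprint:

```shell
# Count how many DISTINCT fingerprint terms appear at least once.
# TERMS is a hypothetical example fingerprint; substitute the real one.
TERMS="clippings worktree devlog obsidian"
matched=0
total=0
for t in $TERMS; do
  total=$((total + 1))
  grep -q "$t" "$TRANSCRIPT_PATH" && matched=$((matched + 1))
done
echo "fingerprint: $matched/$total terms matched"
```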
Decide:
Fallback: Content-based session discovery
If the primary transcript fails fingerprinting:
Find .jsonl files in ~/.claude/projects/{project-path}/ modified in the last 24 hours:

find ~/.claude/projects/{project-path}/ -maxdepth 1 -name "*.jsonl" -mtime -1
Score each candidate with the same grep -cE approach. Track match counts per file. If a different transcript wins, report:

Session ID transcript mismatch detected.
Using transcript {session-id} based on content matching.
Could not identify the correct session transcript.
Please provide the session ID manually.
Detect previous transcripts (recursive with LLM verification):
CRITICAL: Only count REAL continuation references, not casual mentions.
Compaction vs Continuation:
Compaction stays in the same JSONL file (same session ID). It writes a
compact_boundary system message followed by a summary user message. The
summary's "read the full transcript at:" path points to itself (self-reference).
Continuation creates a new JSONL file pointing to the previous file.
When discovery finds a self-referencing path (extracted path == file being searched),
skip it — it's a compaction, not a separate transcript.
A transcript reference is ONLY valid if it appears in a continuation message - either:
Both end with "read the full transcript at:" followed by the path.
DO NOT count:
JSONL Structure Reference: Each line is a JSON object with fields:
- type: "user", "assistant", "system", "progress", "queue-operation"
- message.content: either a string (direct message) or array (tool results)
- queue-operation type: content is at top level (not nested in message)

Step A: Find candidate lines
grep -n "read the full transcript at:" <transcript.jsonl>
Step B: Verify each candidate using JSON structure
For each candidate line, examine the JSON:
- type == "user" OR type == "queue-operation"
- message.content is a string (NOT an array)
- For "queue-operation" type: check content at top level (not nested)
- Content starts with "Implement the following plan:" (plan-mode) or "This session is being continued" (compaction)
- Extract the path with grep -o "read the full transcript at: [^\"\\]*\.jsonl"

Example verification:
Line 2: {"type":"user","message":{"content":"Implement the following plan:...read the full transcript at: /path/to/abc.jsonl"}}
✓ type == "user"
✓ message.content is a string
✓ Contains "Implement the following plan:"
→ VALID plan-mode continuation → abc.jsonl
Line 4: {"type":"queue-operation","content":"Implement the following plan:...read the full transcript at: /path/to/abc.jsonl"}
✓ type == "queue-operation"
✓ content is a string at top level
✓ Contains "Implement the following plan:"
→ VALID plan-mode continuation → abc.jsonl
Line 356: {"type":"user","message":{"content":[{"type":"tool_result",...}]}}
✓ type == "user"
✗ message.content is an ARRAY (tool_result)
→ SKIP (false positive from grep output)
Grep-based validation (using for-loop to avoid subshell issues):
Note: Do NOT use jq for validation - JSONL lines often contain unescaped control characters (tabs, newlines) that cause jq parse errors. Use grep pattern matching instead.
# IMPORTANT: Do NOT use "grep | while read" - the while loop runs in a subshell
# and output is lost. Use a for-loop over line numbers instead.
for linenum in $(grep -n "read the full transcript at:" "$TRANSCRIPT_PATH" | cut -d: -f1); do
line=$(sed -n "${linenum}p" "$TRANSCRIPT_PATH")
# Validate using grep pattern matching (not jq - avoids control char errors)
# Check for either "user" type OR "queue-operation" type
is_user=$(echo "$line" | grep -c '"type":"user"')
is_queue=$(echo "$line" | grep -c '"type":"queue-operation"')
if [ "$is_user" -eq 1 ]; then
# For user type, content must be a string (not array)
echo "$line" | grep -q '"content":"' || continue
elif [ "$is_queue" -eq 1 ]; then
# For queue-operation, content is at top level
echo "$line" | grep -q '"content":"' || continue
else
continue
fi
# Check for valid continuation patterns and extract reference
if echo "$line" | grep -q '"content":"Implement the following plan'; then
ref=$(echo "$line" | grep -o 'read the full transcript at: [^"\\]*\.jsonl' | sed 's/read the full transcript at: //')
# Skip self-references (compaction within same file)
[ "$ref" = "$TRANSCRIPT_PATH" ] && continue
[ -f "$ref" ] && echo "$ref"
elif echo "$line" | grep -q '"content":"This session is being continued'; then
ref=$(echo "$line" | grep -o 'read the full transcript at: [^"\\]*\.jsonl' | sed 's/read the full transcript at: //')
# Skip self-references (compaction within same file)
[ "$ref" = "$TRANSCRIPT_PATH" ] && continue
[ -f "$ref" ] && echo "$ref"
fi
done
Step C: Recursive discovery
For each valid transcript found, repeat Steps A-B until no new transcripts are discovered.
Step D: Determine chronological order using reference topology
DO NOT use file modification times - they are unreliable. Use the reference chain: transcripts that are referenced but don't reference others come FIRST.
Example from a real session:
Discovery:
- 830e4d0f (current) has plan implementation referencing de2864e0
- de2864e0 has plan implementations referencing 2bb50f3b AND c9c826aa
- c9c826aa has plan implementation referencing 2bb50f3b
- 2bb50f3b has no plan implementation references (ROOT)
Reference topology:
- 2bb50f3b: referenced by c9c826aa and de2864e0, references nothing → ROOT
- c9c826aa: referenced by de2864e0, references 2bb50f3b → SECOND
- de2864e0: referenced by current, references both above → THIRD
- 830e4d0f: references de2864e0 → CURRENT (last)
Final order: 2bb50f3b → c9c826aa → de2864e0 → 830e4d0f
Verification output: After discovery, list all found transcripts showing:
Handling compaction: If the conversation context has been compacted (summarized), the continuation reference may not be visible in the current context. In this case:
Detect compact_boundary entries for metadata:
grep -n '"subtype":"compact_boundary"' "$TRANSCRIPT_PATH"
Each match is a JSON line with type: "system", subtype: "compact_boundary",
compactMetadata.trigger ("auto" or "manual"), and compactMetadata.preTokens.
Collect these per-transcript for the compactions field in metadata.json.
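The per-transcript collection can be sketched with the same grep-not-jq pattern matching used elsewhere in this skill:

```shell
# Extract trigger and preTokens from each compact_boundary line.
# Uses grep/cut pattern matching instead of jq, since JSONL lines may
# contain unescaped control characters that break jq.
grep -n '"subtype":"compact_boundary"' "$TRANSCRIPT_PATH" | while IFS=: read -r num line; do
  trigger=$(echo "$line" | grep -o '"trigger":"[^"]*"' | cut -d'"' -f4)
  pre=$(echo "$line" | grep -o '"preTokens":[0-9]*' | grep -o '[0-9]*$')
  echo "line=$num trigger=$trigger pre_tokens=$pre"
done
```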
Generate task title:
Bind canonical session-log paths (a single source of truth referenced by Steps 4.5.5, 7, 7.5.2, 12, 13 — do NOT re-derive locally):
- VAULT_SUMMARY_FILENAME = {YYYY-MM-DD} {Project Name} {Task Title}.md — the bare filename Step 12 writes
- VAULT_SUMMARY_RELPATH = <processed_coding>/<VAULT_SUMMARY_FILENAME> — vault-relative path (e.g., Processed/Coding/2026-05-12 Atlas Karakeep Volume Fix.md)
- VAULT_SUMMARY_URLENCODED = VAULT_SUMMARY_RELPATH with spaces replaced by %20 — the form Obsidian needs inside markdown link parens and inside obsidian://open?...&file= URLs

Create external session folder:
- Base: {SESSION_EXPORTS_BASE}/{project-path}/ (expand ~ to $HOME)
- If IS_WORKTREE=true, use MAIN_PROJECT_PATH from Step 1.5 as the {project-path} instead of the transcript-derived path. This ensures exports land at the main repo's directory (e.g., -Users-henrybae-Files-Startup-Projects-personal-toolkit/).
- Otherwise use PROJECT_PATH derived from the transcript path in Part A (e.g., "-Users-henrybae-Files-Startup-Projects-thought-organizer")
- Folder name: {YYYY-MM-DD} {Project Name} {Task Title}/
- Example: ~/.claude/session-exports/-Users-henrybae-Files-Startup-Projects-thought-organizer/2026-01-22 Thought Organizer Agent Clippings Fix/
- Create with mkdir -p "$EXPORT_PATH"
- Source transcripts stay where they are (~/.claude/projects/). Only the export DESTINATION changes.

Copy transcripts:
- Single transcript: session.jsonl
- Multiple transcripts: session-1.jsonl, session-2.jsonl, etc. (simplest, always works)
- Example for a four-transcript chain: session-1.jsonl, session-2.jsonl, session-3.jsonl, session-4.jsonl

Detect and copy plan files:
- Grep transcripts for "slug":"[^"]*" to extract ALL unique plan slugs
- Source: ~/.claude/plans/<slug>.md
- Single plan: plan.md; multiple plans: plan-1.md, plan-2.md (in chronological order)
- Record the # heading from each plan file

Copy session folder contents (subagents, tool-results, etc.):
~/.claude/projects/{project-path}/{session-id}/
(project-path is derived from transcript path, e.g., "-Users-henrybae-Files-Startup-Projects-thought-organizer")

- Check existence: ls -d "$session_folder" 2>/dev/null
- Copy: cp -r "$session_folder"/* "$EXPORT_PATH/"
- Contents include:
  - subagents/ - Subagent JSONL transcripts (Explore, Plan, etc.)
  - tool-results/ - Large tool outputs

Capture CLAUDE.md configuration files:
Discover and copy CLAUDE.md files that were active during the session.
{project_path} below refers to MAIN_REPO_PATH from Step 1.5.
- ~/.claude/CLAUDE.md
- {project_path}/CLAUDE.md
- {project_path}/.claude/CLAUDE.md
- Worktree sessions (IS_WORKTREE=true): also check the worktree CWD for CLAUDE.md files that may differ from the main repo (e.g., worktree-specific .claude/CLAUDE.md)

For each that exists:
- mkdir -p "$EXPORT_PATH/claude-md"
- Copy to claude-md/global.md, claude-md/project.md, or claude-md/project-dot-claude.md
- Record in metadata: {"scope": "global|project|project-dot-claude", "source": "/abs/path", "file": "claude-md/global.md"}
- If none exist, omit claude_md from metadata

8.5. Capture memory snapshot:
Snapshot the per-project auto-memory folder so future sessions can see what memory looked like at the time of this devlog.
- Source: ~/.claude/projects/{project-path}/memory/ with PROJECT_PATH derived from the transcript path in Part A
- If IS_WORKTREE=true, prefer MAIN_PROJECT_PATH from Step 1.5 (main repo path, encoded) — memory is keyed by project path, so use the same path the runtime uses
- If the folder is missing or empty, omit memory from metadata
- mkdir -p "$EXPORT_PATH/memory"
- cp -r "$SOURCE_MEMORY"/. "$EXPORT_PATH/memory/"
- count=$(find "$EXPORT_PATH/memory" -maxdepth 1 -name "*.md" | wc -l | tr -d ' ')
- Record: {"source": "<absolute-source-path>", "count": <count>}

Memory files are tiny (typically <100KB total even with many entries), so full snapshots are cheap. No diff logic needed — if you later want to compare memory across sessions, diff the memory/ folders of the two exports.
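The snapshot can be sketched as one runnable unit. SOURCE_MEMORY and EXPORT_PATH stand in for the values resolved earlier in the skill:

```shell
# Sketch of the 8.5 memory snapshot; SOURCE_MEMORY and EXPORT_PATH are
# resolved earlier in the run, shown here with assumed values.
SOURCE_MEMORY="$HOME/.claude/projects/$MAIN_PROJECT_PATH/memory"
if [ -d "$SOURCE_MEMORY" ] && [ -n "$(ls -A "$SOURCE_MEMORY" 2>/dev/null)" ]; then
  mkdir -p "$EXPORT_PATH/memory"
  cp -r "$SOURCE_MEMORY"/. "$EXPORT_PATH/memory/"
  count=$(find "$EXPORT_PATH/memory" -maxdepth 1 -name "*.md" | wc -l | tr -d ' ')
  printf '{"source": "%s", "count": %s}\n' "$SOURCE_MEMORY" "$count"
fi  # folder missing or empty: omit the memory object from metadata
```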
Detect subagent information for metadata:
- Check whether $EXPORT_PATH/subagents/ exists
- List its agent-*.jsonl files (agent ID comes from the filename, agent-{id}.jsonl)
- Extract each agent's plan slug ("slug":"...") and parent session ID ("sessionId":"...")

Extract git commits made during session:
Step A: Detect session-relevant git repos
Only track repos where files were actually created, edited, or committed during this session (including continuations from previous transcripts in the chain). Do NOT blindly scan all subdirectories.
CRITICAL: CWD being a git repo does NOT automatically make it session-relevant. Pre-existing uncommitted changes (those already present in the gitStatus snapshot at session start) do NOT count. Editing files outside any git repo (e.g., ~/.claude/settings.json) does NOT make CWD session-relevant.
- gitStatus: If CWD repo's uncommitted changes ALL appear in the session-start gitStatus, they are pre-existing — do NOT count CWD as session-relevant
- Confirm each candidate is a git repo with git -C <path> rev-parse --git-dir
- If IS_WORKTREE=true, use MAIN_REPO_PATH from Step 1.5 as the CWD repo path (not the worktree directory)
- Record sub-repos with no session activity as untracked_repos (informational only)
- If no session-relevant repos remain, set git: null and skip the rest of Step 10

Step B: Check repo cleanliness (session-relevant repos only)
Only check repos identified in Step A — ignore unrelated repos entirely.
B1: Uncommitted changes
Run git -C "$repo_path" status --porcelain for each session-relevant repo
If no repo is dirty → continue to B2
If any repo is dirty → evaluate session readiness using session context before blocking.
Ready only when ALL of the following hold:
- No leftover debug markers (TODO(fixme), WIP:, stray commented-out blocks)

If READY → auto-commit and continue:
Review git -C "$repo_path" diff, then run git -C "$repo_path" add -A followed by git -C "$repo_path" commit -m "<message>". Generate a Conventional Commits-style message (type(scope): summary, optional body) from session context.

If NOT READY → block and explain why:
Uncommitted changes detected in:
- claude-config
Session does not look ready to commit:
- <one-line reason, e.g., "test run failed at the end and was not rerun">
Finish the work (or commit manually), then run /devlog again.
B2: Unpushed commits → auto-push
- Detect unpushed commits: git -C "$repo_path" log @{u}..HEAD --oneline 2>/dev/null
- If any exist, run git -C "$repo_path" push. No readiness check here: /devlog is a session-end signal, and any commits that exist (whether from B1 auto-commit or earlier manual commits) were deliberate, so pushing is the natural follow-through.

If the push fails, report:

Push failed:
- claude-config: <error summary>
Resolve manually, then run /devlog again.
Priority: B1 runs before B2 (can't push what isn't committed). If B1 blocks, B2 is skipped. If B1 commits or passes cleanly, B2 pushes.
Step C: Get session start timestamp
Extract the first timestamp from the oldest transcript in the session chain:
# Skip line 1 (file-history-snapshot has no timestamp), get timestamp from line 2
head -2 "$OLDEST_TRANSCRIPT" | tail -1 | python3 -c "import sys,json; print(json.loads(sys.stdin.read())['timestamp'])"
The timestamp format is ISO 8601: 2026-01-27T23:51:46.497Z
Step D: Query commits per-repo
For each tracked repo:
git -C "$repo_path" log --all --since="$SESSION_START_TIMESTAMP" --format="%H" --reverse
- --all searches across all branches (catches commits made in worktrees)
- --reverse ensures commits are in chronological order (oldest first)
- %H gives full commit hashes

Step E: Build per-repo tracking data
For each tracked repo with commits:
commits=( $(git -C "$repo_path" log --all --since="$SESSION_START_TIMESTAMP" --format="%H" --reverse) )  # chronological order
start_commit=$(git -C "$repo_path" rev-parse "${commits[0]}^")     # parent of first commit
end_commit="${commits[${#commits[@]}-1]}"                          # last commit
commit_range="${start_commit}..${end_commit}"
remote=$(git -C "$repo_path" remote get-url origin 2>/dev/null)    # empty (null) if no remote
name=$(basename "$repo_path")
Edge case - if first commit has no parent (initial commit):
git -C "$repo_path" rev-parse $FIRST_COMMIT^ 2>/dev/null || echo "ROOT"
If ROOT, set start_commit: null and commit_range: "..end_commit"
If no commits found for a repo, exclude it from repos[].
Step F: Filter commits by conversation context (per-repo)
Apply the LLM-based conversation-context filter independently per repo.
Get commit messages for all commits found in each repo:
git -C "$repo_path" log --format="%H %s" <commit1> <commit2> ...
Compare each commit against the current conversation context:
For each commit, determine if it matches:
Only include matching commits in the repo's tracking data
Discard repo entries with zero matching commits after filtering
Example (multi-repo):
claude-config: fa31b22 "Bump devlog schema" → MATCH, e7c9d01 "Update slackbot" → NO MATCH
personal-toolkit: 3a8f1cc "Rewrite Step 9 for multi-repo" → MATCH
Step G: Assemble git object
- repos[] = all repos with matching commits after filtering
- untracked_repos = sub-repos found in single-repo mode (if any); omit field if none found
- If repos[] is empty, set git: null
- See references/SCHEMA.md v0.11 for field structure and examples

10.5. Compute Session Cost:
After git tracking completes, query ccusage for per-session cost and token usage. This step is non-blocking — if ccusage fails for any reason, set cost: null in metadata.json and continue.
Runner: npx ccusage@latest (matches the existing daily-stats hook). Do not pass --offline — ccusage's cached pricing table lags new model releases (e.g., claude-opus-4-7 is absent from the cache, causing totalCost: 0). Let ccusage fetch fresh pricing.
Procedure:
Extract every session_id from phases[] (collected in Part A). Note: ccusage's session --json (without --id) groups by project directory, so it cannot filter by chat UUID — per-id queries are required for accurate per-session costs.
Run the N per-id queries in parallel (typically 1-4 phases; serial would add ~0.8s per phase). Example shell:
for sid in $SESSION_IDS; do
npx ccusage@latest session --id "$sid" --json 2>/dev/null > "$TMPDIR/cost-$sid.json" &
done
wait
Aggregate the per-phase JSON blobs. Each blob has totalCost, totalTokens, and entries[] (with model + token counts). ccusage does not fill per-entry costUSD in --id mode, so we track token-level breakdown per model but only record the authoritative aggregate totalCost:
import json, sys
blobs = json.load(sys.stdin)
totals = {"totalCost": 0.0, "inputTokens": 0, "outputTokens": 0,
"cacheCreationTokens": 0, "cacheReadTokens": 0}
by_model = {}
for b in blobs:
if not b or "totalCost" not in b: continue
totals["totalCost"] += b["totalCost"]
for e in b.get("entries", []):
m = e.get("model") or "unknown"
mb = by_model.setdefault(m, {"modelName": m, "inputTokens": 0,
"outputTokens": 0, "cacheCreationTokens": 0, "cacheReadTokens": 0})
for k in ("inputTokens", "outputTokens", "cacheCreationTokens", "cacheReadTokens"):
totals[k] += e.get(k, 0)
mb[k] += e.get(k, 0)
totals["modelBreakdowns"] = list(by_model.values())
print(json.dumps(totals))
Handle failures gracefully:
- If npx ccusage errors or returns empty for every phase → cost: null
- Do not abort /devlog on a ccusage failure; log a one-line warning and proceed

Store the resulting object as cost for use in Step 11.
Verification: The computed totalCost should match the statusline $X.XX readout at the time /devlog is run (within ~$0.01 for cache-pricing rounding).
{
"schema_version": "0.14",
"date": "2026-01-22",
"project": "Thought Organizer Agent",
"project_slug": "thought-organizer",
"project_path": "/Users/henrybae/Files/Startup/Projects/thought-organizer",
"worktree": {
"path": "/Users/henrybae/Files/Startup/Projects/thought-organizer/.claude/worktrees/worktree-feature",
"name": "worktree-feature"
},
"task": "Fix clippings heading duplication",
"task_title": "Clippings Fix",
"rating": 5,
"comment": "Solid session but got stuck on type inference",
"phases": [
{"name": "planning", "file": "planning.jsonl", "session_id": "aaa"},
{"name": "implementation-1", "file": "implementation-1.jsonl", "session_id": "bbb"}
],
"plan_files": [
{
"slug": "virtual-strolling-bee",
"file": "plan-1.md",
"title": "Plan: Add Plan File Export to Devlog",
"phase": "planning"
}
],
"subagents": [
{"agent_id": "a946520", "slug": "joyful-splashing-lake", "session_id": "aaa", "session_num": 1},
{"agent_id": "b123456", "slug": "gentle-flowing-river", "session_id": "bbb", "session_num": 2}
],
"git": {
"repos": [
{
"name": "thought-organizer",
"path": "/Users/henrybae/Files/Startup/Projects/thought-organizer",
"remote": "https://github.com/BaeHenryS/thought-organizer.git",
"start_commit": "9b75d4a",
"end_commit": "def5678",
"commits": ["abc1234", "def5678"],
"commit_range": "9b75d4a..def5678"
}
]
},
"compactions": [
{"file": "session-2.jsonl", "line": 430, "timestamp": "2026-03-11T09:24:10.107Z", "trigger": "auto", "pre_tokens": 169072}
],
"claude_md": [
{"scope": "global", "source": "/Users/henrybae/.claude/CLAUDE.md", "file": "claude-md/global.md"},
{"scope": "project", "source": "/Users/henrybae/Files/Startup/Projects/thought-organizer/CLAUDE.md", "file": "claude-md/project.md"}
],
"memory": {
"source": "/Users/henrybae/.claude/projects/-Users-henrybae-Files-Startup-Projects-thought-organizer/memory",
"count": 10
},
"files_modified": {
".": ["src/clipper.py"]
},
"cost": {
"totalCost": 4.237,
"inputTokens": 12453,
"outputTokens": 115821,
"cacheCreationTokens": 920348,
"cacheReadTokens": 1796588,
"modelBreakdowns": [
{
"modelName": "claude-opus-4-6",
"inputTokens": 12104,
"outputTokens": 110389,
"cacheCreationTokens": 900211,
"cacheReadTokens": 1750432
}
]
},
"linear": {
"identifier": "HEN-12",
"url": "https://linear.app/henrybae/issue/HEN-12",
"issue_id": "<UUID>",
"project_id": "<UUID>",
"candidates_detected": ["HEN-12"],
"state_before": "In Progress",
"state_after": "Done",
"comment_id": "<comment UUID>",
"created_retroactively": false
},
"outcome": "completed"
}
Schema reference: See references/SCHEMA.md in this skill folder for version history and field documentation.
- phases for a single transcript: [{"name": "session", "file": "session.jsonl", "session_id": "xxx"}]
- plan_files array: include only if plan files were exported
- subagents array: include only if subagent files were found
- compactions array: include only if compact_boundary entries were found in any transcript
- claude_md array: include only if CLAUDE.md files were found
- memory object: include only if the project's memory folder exists and has files (see Step 8.5)
- worktree object: include only when IS_WORKTREE=true from Step 1.5 (contains path and name); omit entirely when not in a worktree
- project_path: always uses MAIN_REPO_PATH from Step 1.5 (main repo path, not worktree path)
- git object: uses repos[] array (even single-repo = one element); set to null if no commits or no git repos
- cost object: set from Step 10.5; null if ccusage unavailable or every phase query failed
- linear object: include only when LINEAR_ISSUE from Step 4.5 is non-null. Captures the issue identifier, URL, internal UUIDs, the candidates auto-detected, state transition outcome, the closing comment's ID, and whether the issue was created retroactively. Omit entirely otherwise.
- files_modified: Object (dict) keyed by directory path. Use "." for files within CWD (with repo-prefixed paths in multi-repo mode, e.g., "claude-config/skills/devlog/SKILL.md"). For files outside CWD, use ~-relative directory paths as keys (e.g., "~/Library/CloudStorage/.../Segment-B-Bookface": ["spotify.md", "netflix.md"]). See references/SCHEMA.md v0.12 for full specification and examples.

After all files are exported and metadata.json is written, verify the copied transcripts actually contain this conversation's content.
Step 1: Reuse the session fingerprint built in Part A (Step 3, item 1). Add the approximate conversation flow (e.g., "started with planning, then implemented X, then fixed bug Y").
Step 2: Launch verification agent
Agent tool:
subagent_type: "Explore"
description: "Verify exported transcripts"
prompt: |
Verify that the exported session transcripts in {EXPORT_PATH}
contain the conversation from my current session.
Session fingerprint:
- Topics: {list of 3-5 topics from fingerprint}
- Files touched: {list of key files from fingerprint}
- Key actions: {list of notable things done from fingerprint}
- Flow: {brief conversation arc from fingerprint}
Verification approach:
1. Use grep to count mentions of key topics across session-*.jsonl files
(e.g., grep -c "topic_keyword" session-*.jsonl)
2. Use grep to confirm each key file path appears in the transcripts
3. Use targeted reads (head/tail) on each JSONL to verify the
conversation flow matches (start of earliest, end of latest)
4. Check line counts (wc -l) to confirm transcripts are non-trivial
Return:
PASS - with a brief summary of evidence (topic counts, files found,
line counts, flow confirmation)
WARN - if something seems off, with a brief explanation
Step 3: Handle result
Transcript verification warning:
[warning details from agent]
Continuing with devlog. Review exported files if needed.
Determine vault summary path:
<vault_path>/<processed_coding>/{YYYY-MM-DD} {Project Name} {Task Title}.md

This is a flat .md file, NOT a folder. NEVER create a summary.md inside a directory. The path ends in .md.

Write vault summary file:
Read assets/vault-summary-template.md (in this skill directory) as your base. Substitute every {placeholder} with the resolved value. For lines marked # OPTIONAL, uncomment and substitute only when the relevant value is non-null per the rules below; drop the # OPTIONAL marker on kept lines. Write to the path from step 12 (flat .md file, never a folder).
Section semantics:
- ## Key Changes mirrors the daily-note bullet's outcome register (Step 7).
- ## Technical Notes holds implementation depth (version bumps, function-level fixes, perf numbers, gotchas, config tweaks). Skip the section entirely when there's nothing to record.
- ## Files Modified — flat list when all files live in one directory; group under ### {dir} headers when files span multiple directories (rules below).

Multi-repo vault summary: When multiple repos have commits, group Key Changes by repo:
## Key Changes
### claude-config
- [Change in claude-config]
### personal-toolkit
- [Change in personal-toolkit]
If only one repo has commits (even in multi-repo mode), keep the flat format without sub-headers.
For ## Files Modified, group files by directory when they span multiple locations. For files within CWD, use relative paths (with repo prefix in multi-repo mode). For files outside CWD, show the ~-relative directory path as a sub-header. When a directory has many files (>10), summarize with count and a few examples instead of listing all.
Comment in frontmatter: Only include comment: if non-null. Wrap in quotes for YAML safety.
Cost in frontmatter: Only include cost_usd: if cost from Step 10.5 is non-null. Format to 2 decimal places (e.g., cost_usd: 4.24). Omit entirely when cost is unavailable.
Linear in frontmatter: Only include linear_issue: and linear_issue_url: when LINEAR_ISSUE from Step 4.5 is non-null. Use the identifier exactly as Linear returned it (e.g., HEN-12), and the full https URL. Omit both fields when LINEAR_ISSUE is null.
Important: Use the full absolute path (expand ~ to $HOME) for the file:// URL to work in Obsidian.
Vault summary notes:
- session_path in frontmatter for programmatic access

External folder structure:
~/.claude/session-exports/
└── {project-path}/ # e.g., "-Users-henrybae-Files-Startup-Projects-thought-organizer"
└── {YYYY-MM-DD} {Project Name} {Task Title}/ # e.g., "2026-01-27 Personal Toolkit Devlog Fix"
├── metadata.json
├── session.jsonl # Main transcript (or session-1.jsonl, etc.)
├── plan.md # (if exists)
├── claude-md/ # (if exists - CLAUDE.md files)
│ ├── global.md
│ └── project.md
├── memory/ # (if exists - auto-memory snapshot)
│ ├── MEMORY.md
│ ├── feedback_*.md
│ ├── project_*.md
│ └── ...
├── subagents/ # (if exists - copied from session folder)
│   └── agent-*.jsonl
└── tool-results/ # (if exists - copied from session folder)
    └── toolu_*.txt
Vault folder structure (lightweight):
<vault>/Processed/Coding/{YYYY-MM-DD} {Project Name} {Task Title}.md # Links to external session
Store the session path (e.g., ~/.claude/session-exports/-Users-henrybae-Files-Startup-Projects-thought-organizer/2026-01-22 Thought Organizer Agent Clippings Fix/) for use in Step 6.
Parse arguments: Strip --slack and --no-linear flags first, then:
- /devlog 5 great pair programming session → rating=5, comment="great pair programming session"
- /devlog 5 worked well but got stuck on types --slack → rating=5, comment="worked well but got stuck on types", slack=true
- /devlog 5 --no-linear → rating=5, comment: prompt user, linear=false
- /devlog 5 → rating=5, comment: prompt user
- /devlog → prompt for rating, then prompt for comment

Store the parsed flags: SLACK_ENABLED (bool), LINEAR_ENABLED (bool; default true, set false when --no-linear is present).
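The parsing order (strip flags, then rating, then comment) can be sketched as follows, where ARGS stands in for the raw /devlog argument string:

```shell
# Sketch of the Step 4 parse; ARGS is a sample /devlog input.
ARGS="5 worked well but got stuck on types --slack"
SLACK_ENABLED=false
LINEAR_ENABLED=true
rest=""
for a in $ARGS; do
  case "$a" in
    --slack)     SLACK_ENABLED=true ;;      # strip flags first
    --no-linear) LINEAR_ENABLED=false ;;
    *)           rest="$rest $a" ;;
  esac
done
set -- $rest                                # re-split with flags removed
RATING=""
case "$1" in
  [1-7]) RATING="$1"; shift ;;              # first token is the rating if 1-7
esac
COMMENT="$*"                                # everything else is the comment
echo "rating=$RATING slack=$SLACK_ENABLED linear=$LINEAR_ENABLED comment=$COMMENT"
```

If RATING or COMMENT comes out empty, the user is prompted as described above.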
Get rating (if not provided inline):
Rate this session (1-7):
7 - Exceptional (exceeded expectations)
6 - Great (very helpful)
5 - Good (solid assistance)
4 - Okay (got the job done)
3 - Poor (struggled significantly)
2 - Bad (mostly unhelpful)
1 - Terrible (counterproductive)
Do NOT use AskUserQuestion for this — it caps at 4 options and produces inconsistent groupings. Wait for the user to reply with a number, then validate it is 1-7.
Get comment (if not provided inline):
- If the user skips or provides no comment, store null

Store both rating and comment for use in subsequent steps
This step cross-references the session with a Linear issue (creating one if needed) so the daily note bullet, vault summary frontmatter, and metadata all link to Linear. It is skipped entirely in either of these cases:
- LINEAR_ENABLED is false (user invoked with --no-linear)
- The matched vault project file has no linear_project_id field in its frontmatter

When skipped: set LINEAR_ISSUE = null, LINEAR_STATE_TARGET = null, and continue to Step 5. The rest of the run behaves identically to v1.7.x.
Otherwise, run the substeps below.
All Linear MCP calls in the substeps below use mcp__<LINEAR_MCP>__linear_*, where <LINEAR_MCP> is resolved from PROJECT_LINEAR.workspace (set in 4.5.1):
| linear_workspace | MCP server prefix |
|---|---|
| henrybae | linear-server-henry |
| monte-inc | linear-server-monte |
Substitute the prefix when emitting the calls (e.g., for monte-inc: mcp__linear-server-monte__linear_search_issues_by_identifier(...)). If PROJECT_LINEAR.workspace is unrecognized, treat as a best-effort failure: print Linear MCP not configured for workspace "<ws>", set LINEAR_ISSUE = null, and continue to Step 5.
Read the vault project file matched in Step 2. Extract these frontmatter fields:
- linear_workspace (e.g., henrybae or monte-inc)
- linear_team (e.g., HEN or MON — the team prefix)
- linear_project_id (e.g., atlas-7f939e4fd078)

Store as PROJECT_LINEAR = { workspace, team, project_id }. If linear_project_id is empty, this step is skipped (see trigger above). The project URL is reconstructed as linear://<workspace>/project/<project_id> whenever needed.
Aggregate distinct issue identifiers found in these sources (de-dupe; preserve order):
- Current branch name: git -C "$MAIN_REPO_PATH" rev-parse --abbrev-ref HEAD

Apply regex (?i)\b(HEN|MON)-(\d+)\b. Normalize matches to uppercase prefix (e.g., hen-12 → HEN-12). HEN lives in the henrybae workspace, MON lives in the monte-inc workspace.

Store as CANDIDATES = [list of normalized identifiers]. Track which source matched each ID for the prompt message.
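The extraction and normalization can be sketched as follows (the `sources` shape and helper name are illustrative):

```python
import re

# (?i) case-insensitive; matches are normalized to uppercase afterward.
ISSUE_RE = re.compile(r"\b(HEN|MON)-(\d+)\b", re.IGNORECASE)

def extract_candidates(sources):
    """sources: ordered (source_name, text) pairs, e.g. branch name, commits.

    Returns de-duped, order-preserving (identifier, source_name) pairs,
    tracking which source matched each ID for the prompt message.
    """
    seen, out = set(), []
    for name, text in sources:
        for m in ISSUE_RE.finditer(text):
            ident = f"{m.group(1).upper()}-{m.group(2)}"  # hen-12 → HEN-12
            if ident not in seen:
                seen.add(ident)
                out.append((ident, name))
    return out
```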
Branch on len(CANDIDATES):
0 candidates:
Linear issue for this session? Options:
1. Enter an issue ID (e.g., HEN-12)
2. Type "new" to create a retroactive Linear issue in Done state
3. Type "skip" to record this devlog without Linear
> _
1 candidate (HEN-12):
Detected Linear issue HEN-12 (from {branch | commits | plan}).
Use it? [Y]es (Enter) / different / new / skip
> _
Enter or Y → accept. different → re-prompt for an ID. new → retroactive create flow. skip → set LINEAR_ISSUE = null.
2+ candidates:
Multiple Linear issues detected:
1. HEN-12 (from branch)
2. HEN-13 (from commit abc1234)
Pick a number, type "different", "new", or "skip".
> _
Do NOT use AskUserQuestion for these prompts — they need free-form input.
When the user picks or enters an identifier like HEN-12:
- Call mcp__<LINEAR_MCP>__linear_search_issues_by_identifier(identifier="HEN-12") (see 4.5.0 for MCP selection) to validate it exists and fetch metadata.
- On failure: print Linear lookup failed: <error summary>, set LINEAR_ISSUE = null, and continue to Step 5. Local writes still proceed; the rest of the run behaves as if skip was chosen.
- On success, store:

LINEAR_ISSUE = {
  identifier: "HEN-12",
  url: <issue url>,
  issue_id: <UUID>,
  team_prefix: "HEN",
  project_id: <UUID>,
  current_state_name: "<state name>",
  current_state_id: <UUID>,
}
The work is already finished. Create a fresh issue in Done state directly so Linear has the historical record.
- Prompt: Title? [default: <task title from Step 3 Part A item 3>]. Enter accepts the default.
- Prompt: Priority? [N]one / [L]ow (default, Enter) / [M]edium / [H]igh / [U]rgent. Map to Linear's integer scale: None=0, Urgent=1, High=2, Medium=3, Low=4.
- Resolve the Done state UUID for PROJECT_LINEAR.team (see "Linear state UUID caching" below). If cache miss, run a one-shot linear_get_teams to populate.
- Create the issue; its identifier and URL feed metadata.json and the daily-note bullet:

mcp__<LINEAR_MCP>__linear_create_issue(
  teamId=<cached team UUID>,
  projectId=PROJECT_LINEAR.project_id,
  title=<user title>,
  priority=<integer>,
  stateId=<Done state UUID>,
  description=<rendered shape-B template; see Step 7.5.2>
)
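The priority mapping can be sketched as follows (`resolve_priority` is a hypothetical helper; the integer scale is the one given above):

```python
# Linear's priority integers: None=0, Urgent=1, High=2, Medium=3, Low=4.
PRIORITY_MAP = {"N": 0, "U": 1, "H": 2, "M": 3, "L": 4}

def resolve_priority(answer):
    """Map the single-letter prompt response; Enter/empty defaults to Low."""
    key = (answer or "").strip().upper()[:1] or "L"
    return PRIORITY_MAP.get(key, 4)  # unrecognized input falls back to Low
```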
- The obsidian link uses VAULT_SUMMARY_FILENAME from Step 3 Part A (substep 3.5 below), which is computable from data already bound by Step 3 — substep 4.5.5 runs before Step 12 (the actual vault file write), but the filename was pinned earlier.
- Store LINEAR_ISSUE as in substep 4.5.4, with current_state_name = "Done" and current_state_id = <Done UUID>.
- Set LINEAR_STATE_TARGET = null (no transition needed; already Done). Set CREATED_RETROACTIVELY = true. Skip substep 4.5.6.

Substep 4.5.6 only runs when LINEAR_ISSUE was populated via substep 4.5.4 (not 4.5.5).
Mark HEN-12 (currently {current_state_name}) as:
[D]one (default, Enter)
[I]n Review
[L]eave unchanged
> _
Map response to LINEAR_STATE_TARGET:
- D or Enter → "Done"
- I → "In Review"
- L → null (leave unchanged)

Set CREATED_RETROACTIVELY = false.
State UUIDs are cached in <vault_path>/vault-config.yaml under a top-level linear: key, keyed by workspace:
```yaml
linear:
  workspaces:
    henrybae:
      url_base: linear://henrybae
      token_op_uri: op://Private/Linear API Key/credential
      teams:
        HEN:
          team_id: "<UUID>"
          states:
            Done: "<UUID>"
            "In Review": "<UUID>"
    monte-inc:
      url_base: linear://monte-inc
      token_op_uri: op://Private/Linear API Key Monte/credential
      teams:
        MON:
          team_id: "<UUID>"
          states:
            Done: "<UUID>"
            "In Review": "<UUID>"
  states_cached_at: "YYYY-MM-DD"
```
Lookup procedure: When a state UUID is needed for (workspace, team, state_name):
1. Check linear.workspaces[workspace].teams[team].states[state_name]; if present, use it.
2. On cache miss, call mcp__<LINEAR_MCP>__linear_get_teams() (see 4.5.0 for MCP selection). The response contains each team's workflow states. Find the team by prefix (HEN/MON), pluck id for the team and id/name pairs for each state.
3. Write the results back and set states_cached_at to today. Subsequent lookups within the same /devlog run hit the now-updated in-memory copy.

If the linear: section doesn't exist yet in vault-config.yaml → create it during the first cache write.
Cache invalidation: Manual only. If a state is renamed in Linear, delete the relevant entry from vault-config.yaml and the next run repopulates.
Cross-workspace caching: Each workspace caches its own team UUIDs under linear.workspaces[workspace].teams[team] — a HEN-team Done UUID is meaningless in monte-inc and vice versa. The substep-1 lookup is keyed on (PROJECT_LINEAR.workspace, PROJECT_LINEAR.team, state_name), and substep-2's linear_get_teams call hits the workspace-scoped MCP per 4.5.0, so the response naturally returns only that workspace's teams.
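The cache-first lookup above can be sketched as follows. The `fetch_teams` callable stands in for the linear_get_teams MCP call, and its response shape here is an assumption:

```python
import datetime

def lookup_state_uuid(config, workspace, team, state_name, fetch_teams):
    """Cache-first workflow-state lookup against vault-config.yaml.

    config: the parsed vault-config.yaml dict (mutated in place on a miss).
    fetch_teams: stands in for the linear_get_teams MCP call; assumed to
    return [{"key": "HEN", "id": ..., "states": [{"name": ..., "id": ...}]}].
    """
    linear = config.setdefault("linear", {})
    teams = (linear.setdefault("workspaces", {})
                   .setdefault(workspace, {})
                   .setdefault("teams", {}))
    cached = teams.get(team, {}).get("states", {}).get(state_name)
    if cached:
        return cached
    for t in fetch_teams():  # one-shot fetch on cache miss
        if t["key"] == team:
            entry = teams.setdefault(team, {})
            entry["team_id"] = t["id"]
            entry["states"] = {s["name"]: s["id"] for s in t["states"]}
            linear["states_cached_at"] = datetime.date.today().isoformat()
            return entry["states"].get(state_name)
    return None  # team not found; caller treats this as best-effort failure
```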
Review the current conversation to identify what you learned and what you shipped.
Output the rating AND both sections to the terminal:
## Session Rating: X/7 (Meaning)
> "comment text here"
## What I Learned
- [Learning 1]
- [Learning 2]
## What I Shipped
- [Accomplishment 1]
- [Accomplishment 2]
Replace X with the rating number and (Meaning) with the corresponding description from the rating scale (e.g., "5/7 (Good - solid assistance)").
Only show the comment blockquote if the comment is non-null.
Keep bullet points concise (one line each).
Read the current daily note and append to the project's section under ## Progress.
Constraints:
- Keep implementation detail out of the daily note; it belongs in the Session Log's ## Technical Notes (Step 13).
- Lead with user-visible outcomes (e.g., robocandy.henrybae.com, a published HF repo, a new endpoint, a new repo path). Avoid version bumps and internal function names.
- Include the rating as (X/7) after the top-level summary.
- When LINEAR_ISSUE is set: add ([{identifier}]({url})) between the rating and the Session Log link (e.g., ([HEN-12](https://linear.app/henrybae/issue/HEN-12))). Omit this segment entirely when LINEAR_ISSUE is null.
- Link the summary as ([Session Log](Processed/Coding/{url-encoded-filename}.md)).
- Encode spaces in the link as %20 (e.g., 2026-01-11%20Thought%20Organizer%20Feature%20Name.md); Obsidian's parser stops at unescaped spaces and creates a junk date-only note.
- session_path links to the full external session.
- Keep the .md extension in the filename (e.g., 2026-01-11 Thought Organizer Feature Name.md).
- Use - [x] for completed items and - [ ] for in-progress/incomplete items.

Examples (drawn from real recent devlogs):
Good — sub-bullets describing high-level steps:
- [x] Shipped Robocandy account-rotation API end-to-end (5/7) ([Session Log](...))
    - Tesla account creation now fully automated end-to-end
    - Exposed at robocandy.henrybae.com behind Cloudflare Tunnel
    - iPhone Shortcut hits the API to rotate the active account on demand

Good — specific top-line stands alone, no sub-bullets needed:
- Deployed Cloudflare DDNS for PiVPN with auto-updating vpn.henrybae.com endpoint (7/7)
- Migrated devlog to folder-based structure with metadata.json
- Built CHM → Markdown converter for Infineon iLLD docs (5/7)

Bad — implementation chatter (push to Session Log Technical Notes):
- Bumped Unsloth 2026.4.4 → 2026.4.8; rewrote hub_publish.publish_lora_to_hub for new tokenizer API
- Codex sends apply_patch as type: "custom" which /v1/responses rejects on llama.cpp 9010/9020
- Per-character typing with random delays + bezier cursor drift in fill_signup_*.py
- FSDP wrap_policy + 2 multimodal/text-only crashes in agent_loop.py + config + preprocessor

Locating/Creating Sections:
- Find the ## Progress section (top-level, NOT under ## Notes).
- If missing, create it (before ## Weekly Tracking if present, otherwise at end of file).
- Find ### [[Project Name]] under ## Progress (case-insensitive match on project name).
- If missing, create it under ## Progress.

Format to write:
## Progress
### [[Thought Organizer Agent]]
- [x] Rewrote clippings pipeline so re-imports no longer duplicate headings (5/7) ([Session Log](Processed/Coding/2026-01-11%20Thought%20Organizer%20Agent%20Clippings%20Fix.md))
    - Re-import is now idempotent — running it twice on the same input produces the same vault state
    - Nested headings now parse cleanly across every daily-note category
### [[Video Generation Pipeline]]
- [x] Added audio sync feature (6/7) ([Session Log](Processed/Coding/2026-01-11%20Video%20Generation%20Pipeline%20Audio%20Sync.md))
With Linear link (when LINEAR_ISSUE is set):
### [[Atlas]]
- [x] Fixed Karakeep volume mount so containers survive recreation (6/7) ([HEN-12](https://linear.app/henrybae/issue/HEN-12)) ([Session Log](Processed/Coding/2026-05-12%20Atlas%20Fix%20Karakeep%20Volume%20Mount.md))
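The bullet composition shown above, including the %20 escaping rule, can be sketched as follows (both helper names are hypothetical):

```python
from urllib.parse import quote  # escapes spaces as %20 by default

def session_log_link(filename):
    """Obsidian requires %20 — an unescaped space truncates the link."""
    return f"([Session Log](Processed/Coding/{quote(filename)}))"

def progress_bullet(summary, rating, filename, linear=None):
    """Compose the '- [x]' bullet; linear is (identifier, url) or None."""
    parts = [f"- [x] {summary} ({rating}/7)"]
    if linear:  # Linear segment sits between the rating and the Session Log
        parts.append(f"([{linear[0]}]({linear[1]}))")
    parts.append(session_log_link(filename))
    return " ".join(parts)
```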
After appending, confirm to the user that their progress has been logged.
Skip entirely if LINEAR_ISSUE is null.
This step writes back to Linear once all local vault writes have succeeded. Local writes are sacred; Linear writes are best-effort — failures here log a warning and continue. Never abort the run on a Linear API error.
Concurrency: when both 7.5.1 and 7.5.2 fire (existing-issue path with a state transition), issue them in parallel — they target the same issue but have no data dependency between them, and parallelization saves ~100-500ms. Aggregate outcomes after both return.
Skip when any of the following hold:
- CREATED_RETROACTIVELY is true (issue was created in Done state by substep 4.5.5)
- LINEAR_STATE_TARGET is null (user chose "Leave unchanged")
- LINEAR_STATE_TARGET equals LINEAR_ISSUE.current_state_name (already in the target state — record state_after = state_before and move on; do NOT call the API)
Otherwise, resolve the target state UUID from the cache (substep 4.5.7 lookup), then:
mcp__<LINEAR_MCP>__linear_edit_issue(
  issueId=LINEAR_ISSUE.issue_id,
  stateId=<target state UUID>
)
Outcomes for metadata.json's linear object:
- On success: state_before = LINEAR_ISSUE.current_state_name, state_after = LINEAR_STATE_TARGET. Omit state_change_failed.
- On failure: print Linear state transition failed: <error summary> to the terminal. Record state_before = LINEAR_ISSUE.current_state_name, state_after = LINEAR_ISSUE.current_state_name (Linear's actual state didn't change), state_change_failed = true.

This is the single shared template used in two places:
- As the retroactive issue's description at creation time.
- As a closing comment via linear_create_comment when LINEAR_ISSUE is set AND CREATED_RETROACTIVELY = false.

Skip the comment call when CREATED_RETROACTIVELY = true — the description already carries this content; a comment would duplicate it.
Shape B template (Markdown):
{OUTCOME_SENTENCE}
[Session log](obsidian://open?vault={vault_name}&file={VAULT_SUMMARY_URLENCODED})
That's it. Two lines. No bullets, no rating, no cost, no scope rehash.
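Rendering the two-line body is mechanical; a sketch (`render_shape_b` is a hypothetical helper — it takes the already-encoded binding rather than re-deriving it, per the notes below):

```python
def render_shape_b(outcome_sentence, vault_name, vault_summary_urlencoded):
    """Render the two-line Shape B body: outcome + obsidian:// session link.

    vault_summary_urlencoded is the pre-encoded binding from Step 3 Part A
    item 4 — reused as-is, not re-derived.
    """
    return (f"{outcome_sentence}\n"
            f"[Session log](obsidian://open?vault={vault_name}"
            f"&file={vault_summary_urlencoded})")
```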
Composing OUTCOME_SENTENCE:
- Keep it to one past-tense sentence describing the user-visible outcome.
- If a specific artifact is worth citing (e.g., a commit "abc1234"), include it inline.
- Don't restate that the work is finished — the Done state already conveys completion.
- rating and comment — stay in metadata.json and the daily-note bullet (the (X/7) parenthetical) only.
- cost_usd — metadata.json only.

Notes:
- {vault_name} is the basename of OBSIDIAN_VAULT (e.g., Henry).
- {VAULT_SUMMARY_URLENCODED} is the binding from Step 3 Part A item 4. Reuse it; do not re-derive.

On failure (closing comment only): print Linear closing comment failed: <error summary>. Record comment_id = null.
On success (closing comment only): record comment_id = <returned comment UUID>.
For CREATED_RETROACTIVELY = true, omit comment_id from metadata entirely (no comment was attempted).
Aggregate the outcomes for Step 9's final confirmation:
If any Linear write failed, append Linear sync partial — see warnings above to the final confirmation.

Skip this step unless the user invoked /devlog with --slack.
After logging is complete, send a summary to Slack:
Format the message using "What I Shipped" from Step 5:
📝 *Devlog: {PROJECT_NAME}*
{TASK_SUMMARY}
*Shipped:*
• {ACCOMPLISHMENT_1}
• {ACCOMPLISHMENT_2}
...
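A sketch of the message assembly from the template above (`format_slack_message` is a hypothetical helper):

```python
def format_slack_message(project, task_summary, shipped):
    """Assemble the Slack devlog message from 'What I Shipped' bullets."""
    lines = [f"📝 *Devlog: {project}*", task_summary, "", "*Shipped:*"]
    lines += [f"• {item}" for item in shipped]
    return "\n".join(lines)
```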
Invoke the slackbot skill:
Use the Skill tool to call: /slackbot #{CHANNEL} "{formatted_message}"
Use the channel from "Slack Configuration" above.
Confirm to user: "Slack notification sent to #{CHANNEL}"
After all steps complete, provide a brief final confirmation to the user that includes the session rating.
- [x] for done, [ ] for incomplete
- --slack to send notification