The 150KB Bug That Wasn’t
For two days, Miracle Mode was broken. Every execution for WaltersFL and PropertyManagement failed. The error was clear: “Context file exceeds 150KB hard limit. Claude Code hangs on oversized stdin; refusing to start.”
We had coded the limit ourselves. And we were wrong.
The Symptom
WaltersFL issue #10, “Portfolio Truth-Up, 11 Real Projects.” The execution context assembled from the Smart Prompt analysis was 153KB. Our gate, added in a “hardening” commit, refused to start Claude Code with anything over 150KB.
❌ Context file is 153 KB, exceeds hard limit of 150 KB.
Claude Code hangs on oversized stdin; refusing to start.
The context fell through to the legacy Claude API path. That path had only 3000 max_tokens, which truncated the response. The truncated JSON failed to parse. The parser triggered the hallucination detector (fictional directories). The whole pipeline crashed.
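That last stretch of the cascade is easy to reproduce in isolation. A minimal sketch (the payload shape is hypothetical, not our actual response format): a response cut off by a too-small max_tokens is no longer valid JSON, so the parser throws before any content reaches the rest of the pipeline.

```javascript
// A complete JSON response, then the same response with its tail missing,
// simulating truncation by a low max_tokens ceiling.
const fullResponse = JSON.stringify({
  projects: [{ name: "Portfolio Truth-Up", dirs: ["src", "docs"] }],
});
const truncated = fullResponse.slice(0, fullResponse.length - 10);

let parsed = null;
let parseError = null;
try {
  parsed = JSON.parse(truncated);
} catch (err) {
  parseError = err; // SyntaxError: unterminated structure
}

console.log(parseError !== null); // true: truncated output never parses
```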
The False Fix
We wrote a context trimmer. Stripped HTML comments, collapsible widgets, UI chrome. Saved 10KB. Still over the limit. We trimmed documentation plans. Commit strategies. Compliance checklists. Got it to 141KB. The run started; Claude Code hung for 90 seconds, produced no output, and was killed by the startup timer.
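The trimmer itself was simple string surgery. A minimal sketch, with illustrative patterns rather than the exact ones we shipped: strip the byte-heavy markup that carries no meaning for the model, then collapse the holes it leaves behind.

```javascript
// Strip cheap fat from an assembled context: HTML comments and
// collapsible <details> widgets cost bytes but add nothing for the model.
function trimContext(text) {
  return text
    .replace(/<!--[\s\S]*?-->/g, "")               // HTML comments
    .replace(/<details>[\s\S]*?<\/details>/g, "")  // collapsible UI chrome
    .replace(/\n{3,}/g, "\n\n");                   // collapse leftover blank runs
}

const context =
  "# Plan\n<!-- internal note -->\n<details>long widget body</details>\n\nReal content.";
const trimmed = trimContext(context);
console.log(trimmed.length < context.length); // true, but the savings are bounded
```

Surgery like this has a floor: once the chrome is gone, what remains is the content itself, which is exactly why we stalled at 141KB.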
We were chasing our own tail.
The Real Story
Wilson Lumber, the galaxy that ran successfully the night before, had processed context files of 124KB, 140KB, 149KB, and 154KB. All succeeded. Same CLI version. Same model. Same flags.
The 150KB limit was wrong. We had added it based on a single hang that was probably a network issue, then coded it as a permanent gate.
We asked Claude directly: “Does a 150KB file hang Claude Code?”
A 150KB file won’t hang or be rejected. You’re nowhere near the limit. The hard caps are 30MB per file. 150KB is about 0.5% of the ceiling.
The hang was a transient network issue. Our “fix” was the bug.
The Lesson
Three lessons, each more uncomfortable than the last:
- Test your assumptions before coding them as gates. We had one data point (a hang) and coded a permanent hard limit. The limit blocked every subsequent execution.
- Fallback paths must be tested. The legacy Claude API fallback had been broken for months (missing function parameters, 3000 max_tokens). Nobody noticed because the primary path always worked, until our gate blocked it.
- Remove the gate. We deleted the hard limit. Replaced it with a diagnostic log. Context size is noted, not blocked. The fix was deletion.
// Before: gate that blocks
if (sizeKB >= 150) return { success: false, error: 'too large' };
// After: log that informs
if (sizeKB >= 100) this.log(`Context file is ${sizeKB} KB, large but proceeding.`);
The Cascade Pattern
This bug exposed a pattern we keep seeing in AI-assisted development: cascading assumptions.
1. Something fails (network hang)
2. You hypothesize a cause (context too large)
3. You code a fix (hard limit)
4. The fix creates a new failure path (fallback)
5. The fallback has its own bugs (truncation)
6. Those bugs create more failures (JSON parse, hallucination)
7. You spend two days debugging bugs 4-6 instead of questioning assumption 2
The discipline: when something fails, verify the assumption before coding the fix. "Wilson succeeded at 154KB" was the data point that should have killed hypothesis 2 immediately.
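The verification is mechanical once you frame it as falsification. A sketch of the discipline, assuming you log size and outcome for every run (the helper name is ours, not from the codebase):

```javascript
// A hypothesized hard limit is falsified by any successful run
// at or above that limit.
function limitContradicted(runs, hypothesizedLimitKB) {
  return runs.some((r) => r.ok && r.sizeKB >= hypothesizedLimitKB);
}

// Wilson Lumber's runs from the night before.
const runs = [
  { sizeKB: 124, ok: true },
  { sizeKB: 140, ok: true },
  { sizeKB: 149, ok: true },
  { sizeKB: 154, ok: true },
];

console.log(limitContradicted(runs, 150)); // true: 154KB succeeded, so the 150KB gate was wrong
```

One line against existing logs would have ended the investigation on day one.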
The commit: fix(miracle): remove context hard limit