Debugging

Investigate bugs with hypothesis-driven parallel testing. Debug mode generates ranked hypotheses, spawns agents to gather evidence for each one, and attempts reproduction — all before suggesting a fix.

When to use

Run /uc:debug-mode {bug description} when you have a bug report, unexpected behavior, or a production issue. Provide as much context as you have — symptoms, reproduction steps, what changed recently.

Debug mode diagnoses and plans — it does not implement fixes. The output is a plan for /uc:plan-execution that includes regression tests.

What happens

1. Understand

Extracts symptoms, expected behavior, reproduction context, and impact. Asks for any missing information — even a detailed report gets probed for unstated assumptions and environmental factors.

2. Hypothesis generation

After initial research (including git log for recent changes), the AI generates 2–5 hypotheses ranked by likelihood and presents them to you. You confirm, reject, or refine hypotheses before any agents are spawned. This gate prevents wasting investigation effort on the wrong theories.

3. Parallel investigation

For each confirmed hypothesis, an Explore agent gathers evidence from the codebase (with file:line references). Simultaneously, a System Tester agent attempts to reproduce the bug — following exact steps, trying variations, identifying boundary conditions.
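
The fan-out in this step can be sketched as one investigation per confirmed hypothesis, run concurrently and collected in order. This is purely illustrative: the tool spawns agents, not Python threads, and `investigate` is a hypothetical stand-in for an Explore agent.

```python
from concurrent.futures import ThreadPoolExecutor

def investigate(hypothesis):
    """Stand-in for an Explore agent gathering evidence for one hypothesis."""
    return {"hypothesis": hypothesis, "evidence": f"code references for {hypothesis!r}"}

hypotheses = ["stale cache entry", "race in session refresh"]
# Each hypothesis is investigated independently; results come back in
# the same order the hypotheses were confirmed.
with ThreadPoolExecutor() as pool:
    results = list(pool.map(investigate, hypotheses))
```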

4. Fix plan

Hypotheses are ranked by evidence strength. The AI presents a root cause summary, a confidence level, a fix approach with a blast radius assessment, and any open questions. The resulting plan always includes regression tests that demonstrate the bug before the fix is applied.
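
The re-ranking can be illustrated with a toy example, assuming hypothetical field names: after investigation, ordering follows evidence strength rather than the initial prior, so a low-prior hypothesis with strong evidence rises to the top.

```python
# Hypothetical sketch of evidence-based re-ranking. The initial prior
# ("gut feeling") is kept for reference but does not decide the order.
findings = [
    {"hypothesis": "clock skew",     "prior": 0.6, "evidence_items": 0},
    {"hypothesis": "stale cache",    "prior": 0.3, "evidence_items": 4},
    {"hypothesis": "race condition", "prior": 0.1, "evidence_items": 1},
]
ranked = sorted(findings, key=lambda f: f["evidence_items"], reverse=True)
root_cause = ranked[0]["hypothesis"]  # evidence outranks the prior
```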

How hypothesis-first differs from other debuggers

Most debugging tools jump straight to "find the problem." Debug mode forces a deliberate process:

  • Hypotheses before investigation — you and the AI agree on what to look for before spawning agents
  • Evidence-based ranking — each hypothesis is scored by the evidence found, not gut feeling
  • Blast radius assessment — every fix plan includes an analysis of what else could break
  • Mandatory reproduction — the System Tester tries to trigger the bug independently, which validates that the fix can be verified
  • Regression criteria — every fix task includes a test that would have caught the bug originally
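
The regression criterion above can be sketched with a hypothetical example: the test encodes the correct behavior, so it fails against the unfixed code and passes once the fix lands. The bug, the function `parse_port`, and the test name are all invented for illustration.

```python
def parse_port(url):
    """Fixed version: https URLs with no explicit port default to 443."""
    scheme, _, rest = url.partition("://")
    host = rest.split("/", 1)[0]
    if ":" in host:
        return int(host.rsplit(":", 1)[1])
    return 443 if scheme == "https" else 80

def test_https_default_port():
    # Against the buggy version (which returned 80 for every scheme),
    # this assertion fails; it would have caught the bug originally.
    assert parse_port("https://example.com/api") == 443
```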

What to provide in your bug report

The more context you give, the better the hypotheses. Include:

  • What you expected vs. what happened
  • Steps to reproduce (if known)
  • When it started (or when it was last known to work)
  • Any recent changes to the area
  • Error messages or logs
  • Environment details (if relevant)

Even with a detailed report, the AI will still ask probing questions; good context makes those questions sharper, not unnecessary.

What's next

The fix plan is written to documentation/plans/. Run /uc:plan-execution {plan-name} to execute the fix with agent teams that include regression testing.