Feature Planning
Plan features with a senior tech lead who challenges your scope, researches your codebase, and produces structured plans before any code is written.
When to use
Run /uc:feature-mode {description} when you're adding new functionality, making changes that touch multiple files, or working on something where you're unsure about the right approach. Feature mode is for work that benefits from upfront planning.
Not for small fixes. For bugs, use /uc:debug-mode. For library/pattern/market research, run /uc:research.
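For instance (the feature description below is purely illustrative):

```
/uc:feature-mode Add CSV export to the monthly reports page
```

Anything you can describe in a sentence or two works as the description; the Understand stage will sharpen it from there.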
The 4-stage process
Feature mode follows a mandatory four-stage framework. No stage can be skipped, and no files are written until the final stage.
1. Understand
The AI challenges your feature request before accepting it. It asks "why?", splits vague requests into concrete pieces, identifies hidden complexity, surfaces edge cases, and proposes its own hypotheses about the right approach. This is a dialogue — the AI brings its perspective for you to react to, not just collects requirements.
2. Research
Agents survey your codebase and documentation in parallel, finding existing patterns, potential conflicts, and relevant architecture decisions. If external libraries are involved, the cache-first /uc:research skill fetches focused documentation and writes it to the project knowledge base under documentation/technology/research/.
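As a sketch, cached research for a hypothetical payments feature might land in the knowledge base like this (the file names are illustrative; only the directory path comes from this guide):

```
documentation/technology/research/
├── stripe-checkout.md
├── stripe-webhooks.md
└── ...
```

Because the cache is durable, later plans that touch the same libraries reuse these files instead of re-fetching documentation.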
3. Discuss
The AI synthesizes all findings and presents its perspective. You brainstorm the approach together. If the AI identifies genuine concerns, it pushes back — respectfully but firmly. The goal is convergence toward an approach you both believe in.
4. Write
The plan is written to documentation/plans/{NNN}-{name}/README.md with structured tasks. Each task has a description, file list, success criteria, and dependencies. You review and explicitly approve before anything executes.
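Putting the paths from this guide together, a plan directory for a hypothetical "user-invites" feature might look like this (the plan number and task count are illustrative):

```
documentation/plans/012-user-invites/
├── README.md            # plan overview + flat index of tasks
└── tasks/
    ├── task-1/task.md
    ├── task-2/task.md
    └── task-3/task.md
```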
How scope challenge works
The AI acts as a Head of Technology who refuses to plan until scope is razor-sharp. It will:
- Ask "why?" — understanding the problem before jumping to solutions
- Split compound features — "add auth and user management" becomes two separate scoped items
- Predict implementation challenges — based on experience, where will the real complexity hide?
- Surface failure modes — what happens when things go wrong?
- Flag dependencies — what could this break? What needs to exist first?
You'll be asked at least three questions before research begins. A detailed request makes those questions sharper; it doesn't make them unnecessary.
Task structure in plans
Each task in a plan is a complete vertical slice — database through API and UI, not split by tech layer. The plan README holds only a flat index of task headings; each task's authoritative content lives in its own file at documentation/plans/{plan}/tasks/task-N/task.md. Each task.md includes:
- Description — what to build
- Product context — links to relevant product docs and requirements
- Files — expected files to create or modify
- Patterns — architecture and standards files the executor must follow
- Research — per-task pointers to durable research files under documentation/technology/research/, populated from Stage 2's /uc:research results. Each pointer has a one-line "why this matters for this task" gloss.
- Success criteria — numbered list of how to verify it's done
- Dependencies — other tasks that must complete first
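A minimal task.md following this structure might look like the sketch below (every name, path, and criterion here is hypothetical):

```
# Task 2: Invite acceptance flow

## Description
Let an invited user accept an invite link and create an account.

## Product context
- documentation/product/invites.md

## Files
- src/api/invites.ts
- src/db/migrations/0042_invites.sql
- src/ui/AcceptInvite.tsx

## Patterns
- documentation/technology/standards/api-errors.md

## Research
- documentation/technology/research/token-signing.md — why: invite links carry signed tokens

## Success criteria
1. Visiting a valid invite link creates an account
2. Expired links show an error, not a crash

## Dependencies
- task-1
```

Note how every section maps one-to-one onto the field list above, which is what lets executor agents load a task from disk with no extra context.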
Task sizing targets ~7-20 files per task. Fewer than 7 gets absorbed into a larger task; more than 20 gets split along feature boundaries.
Why a separate file per task? During execution, every team member (Executor, Reviewer, Tester) reads its task's directory on startup. Keeping per-task content in dedicated files means spawn prompts carry no inline content, crash re-spawns reload context from disk with zero Lead intervention, and amendments are a simple file edit + broadcast instead of a mass message round-trip.
Stub plans from /uc:roadmap
If you've used /uc:roadmap to decompose your product, feature mode detects existing stub plans automatically. When a stub is found:
- The scope boundary is already defined — the AI verifies and refines rather than discovering
- Questions focus on implementation approach and technical decisions, not what to build
- The stub's Out of Scope section is a hard boundary
What's next
After approving the plan, run /uc:plan-execution {plan-name} to execute it with coordinated agent teams.