# Technology Standards
Define coding standards once, have them enforced automatically on every task, and verify they hold over time. This is what separates prototyping with AI from building production systems with AI.
## The loop

Standards enforcement in Ultra Claude is a three-part cycle that runs continuously:

1. **Define**: write standards as documentation
2. **Enforce**: the Reviewer checks every task
3. **Verify**: audit for drift over time
## Define
Standards live in `documentation/technology/standards/` as markdown files. Each file covers a specific area and describes conventions (how things should be done), patterns (approved approaches, with examples), and anti-patterns (what to avoid, and why).

When you run `/uc:migrate`, standards are auto-generated from the code patterns detected in your project. Review and refine them — they're a starting point, not the final word.
### Example standards files

| File | What it guards |
|---|---|
| `error-handling.md` | All API handlers must use typed error responses with consistent status codes. No raw `throw` without context. Error boundaries required in React components. |
| `api-design.md` | REST endpoints follow the `/resource/:id/sub-resource` convention. Pagination uses a cursor-based approach. All responses are wrapped in a `{ data, error, meta }` envelope. |
| `database.md` | Migrations are reversible. No raw SQL in application code — use the ORM query builder. Foreign keys and indexes on all relationship columns. |
| `testing.md` | Integration tests hit a real database, not mocks. Every API endpoint has at least one happy-path and one error-path test. Frontend components are tested with user-event interactions, not implementation details. |
| `auth.md` | JWT tokens expire after 15 minutes. Refresh tokens are stored in `httpOnly` cookies. All protected routes use the auth middleware — no inline token checks. |
| `naming.md` | Components use `PascalCase`. Hooks use the `use` prefix. Database columns use `snake_case`. Environment variables use `SCREAMING_SNAKE_CASE` with an app prefix. |
These are real examples of the kind of rules the Reviewer enforces. Each violation gets cited with the exact standards file and section.
## Enforce
During plan execution, every task team includes a Code Reviewer agent. The Reviewer:
- Reads your standards and architecture docs before any code is written
- Gives advisory feedback on the Executor's implementation plan
- Reads files as the Executor writes them, building context continuously
- Performs formal code review after implementation, citing specific standards docs for any violations
Review verdicts are binary: PASS or FAIL. There is no "pass with reservations." Every `[PATTERN]` failure references the exact standards document and section. "I prefer" is an invalid review comment — enforcement is standards-based only.
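In that spirit, a failing review comment might read like the following. The exact format and section name here are illustrative, not Ultra Claude's literal output:

```text
FAIL [PATTERN] src/api/users.ts:42
Violates: documentation/technology/standards/error-handling.md,
section "Typed error responses": raw throw without context.
Return a typed error response with a status code instead.
```

The point is traceability: every failure maps back to a sentence you wrote in a standards file, not to a reviewer's taste.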
The Reviewer also queries external documentation. If your task uses external libraries, the Reviewer reads the Knowledge Brief (synthesized by Lead at Phase 1.8) and the research files it points to under `documentation/technology/research/libraries/`. For any uncovered questions, the Reviewer sends `QUERY: {question}` to Lead, who answers via `/uc:research`.
## Verify
Standards drift over time — new patterns emerge, quick fixes bypass conventions, and documentation falls behind. Verification mode catches this:
- Checker agents compare your standards docs against actual code patterns
- Discrepancies are classified: is the standard wrong, or has the code drifted?
- You decide what to fix — the system never auto-resolves ambiguous cases
Run verification after major changes, before releases, or on a regular cadence to keep standards and code in sync.
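As a toy illustration of drift, consider the `naming.md` rule that database columns use `snake_case`. A real Checker agent reasons over code semantically, but the core comparison can be sketched with a hypothetical function:

```typescript
// Toy drift check: naming.md says database columns use snake_case.
// Checker agents reason semantically; this regex version only
// illustrates comparing a documented standard against real code.
const SNAKE_CASE = /^[a-z][a-z0-9_]*$/;

function findDrift(columns: string[]): string[] {
  return columns.filter((name) => !SNAKE_CASE.test(name));
}

// "createdAt" drifted from the documented convention. The system
// flags it; a human decides whether to fix the column or amend
// the standard.
const drifted = findDrift(["id", "created_at", "createdAt", "user_id"]);
```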
## What makes this different from a linter
Linters check syntax and simple patterns. Ultra Claude's standards enforcement operates at a higher level:
- **Semantic understanding** — the Reviewer understands your architecture, not just syntax rules
- **Context-aware** — it reads the full implementation plan, related files, and your standards docs together
- **Continuous** — it reads files as they're being written, catching issues before they compound
- **Bidirectional** — verification checks that code matches docs AND that docs match code
- **Human-in-the-loop** — ambiguous cases are flagged for you, never auto-resolved
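For example, both handlers below are lint-clean, yet only one satisfies an error-handling rule like the one in the table above. A linter cannot tell them apart; a standards-aware review can. The handler shapes are hypothetical:

```typescript
// Both versions pass a typical linter. Only the second satisfies a
// standard such as "no raw throw without context; use typed error
// responses". Telling them apart requires reading the standard.

// Anti-pattern: raw throw, no status code, no context.
function getUserBad(id: number): { id: number } {
  if (id < 0) throw new Error("bad");
  return { id };
}

// Pattern: typed error response with a consistent status code.
type Result = { status: number; data?: { id: number }; error?: string };

function getUserGood(id: number): Result {
  if (id < 0) return { status: 400, error: "Invalid user id: must be non-negative" };
  return { status: 200, data: { id } };
}
```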
## Testing standards
Testing configuration lives in `documentation/technology/testing/`. It tells the Tester agent how to run tests, what coverage to expect, and how to validate frontend behavior — including browser-based verification.
### What testing standards define

- **Test commands** — how to run unit tests, integration tests, and the full suite
- **Coverage thresholds** — minimum coverage percentages per area
- **Testing patterns** — what to test (behavior, not implementation), how to structure test files
- **Browser testing** — when and how to validate UI in a real browser
### Browser testing with Claude-in-Chrome
For frontend tasks, the Tester agent can launch your application in Chrome and visually verify that the UI actually renders and works. This goes beyond unit tests — it catches issues that only appear in a real browser:
- **Navigation flows** — clicks through routes, verifies pages load without errors
- **Form interactions** — fills inputs, submits forms, checks validation messages
- **Visual verification** — reads the page content, checks that elements are present and visible
- **Console errors** — monitors browser console for JavaScript errors during interaction
- **Network requests** — verifies API calls are made correctly and responses render
### Example testing standards file

```markdown
# Testing Standards

## Commands

- Unit tests: `npm test`
- Integration tests: `npm run test:integration`
- Full suite: `npm run test:all`

## Coverage

- API routes: 80% minimum
- Business logic: 90% minimum
- UI components: 70% minimum

## Patterns

- Test behavior, not implementation details
- Use `describe/it` structure with clear test names
- Integration tests hit a real database, not mocks
- Each API endpoint: at least one happy-path and one error-path test

## Browser Testing (frontend tasks)

- **Mandatory** for any task that creates or modifies UI components
- Start the dev server, navigate to the affected pages
- Verify: page renders, interactive elements work, no console errors
- For forms: test validation states, submission, and error display
- Take screenshots for visual evidence in test reports
```
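Following the "one happy-path and one error-path test per endpoint" pattern, a minimal pair might look like this. The `createUser` handler is a stand-in, and in a real project these checks would live in a `describe/it` block under your test runner:

```typescript
// Sketch of the one-happy-path, one-error-path pattern.
// createUser is a hypothetical handler, not part of Ultra Claude.
type CreateUserResult = { status: number; data?: { email: string }; error?: string };

function createUser(input: { email?: string }): CreateUserResult {
  if (!input.email || !input.email.includes("@")) {
    return { status: 422, error: "email is required and must be valid" };
  }
  return { status: 201, data: { email: input.email } };
}

// Happy path: valid input creates the user.
const created = createUser({ email: "ada@example.com" });

// Error path: invalid input returns a typed error, not a throw.
const rejected = createUser({ email: "not-an-email" });
```

Both paths exercise behavior (status and payload), not implementation details, which is what the patterns above ask for.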
## Creating effective standards
Standards work best when they are:
- **Specific** — "Use try/catch with typed error responses in all API handlers" beats "Handle errors properly"
- **Example-driven** — include code examples for patterns and anti-patterns
- **Reasoned** — explain why a convention exists, so the Reviewer can judge edge cases
- **Scoped** — one file per area (error handling, API design, testing) rather than one giant document
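Putting those qualities together, one entry in a standards file might look like this. The contents are illustrative, not a prescribed template:

```markdown
## Typed error responses

**Rule:** All API handlers return typed error responses with a
status code. No raw `throw` without context.

**Why:** Raw throws lose the status code and leak stack traces;
typed responses keep client behavior predictable.

**Pattern:**
    return { status: 404, error: "User not found" };

**Anti-pattern:**
    throw new Error("not found"); // no status, no context
```

One rule, a reason, and a paired pattern/anti-pattern give the Reviewer everything it needs to judge edge cases.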
The `/uc:migrate` skill generates initial standards from your codebase patterns. Use these as a starting point and refine them based on your team's decisions.