We promised you the framework.
In Build or Die, we made the case: companies that rely on SaaS will lose to competitors who own their software. In Clean Architecture + The 7-Gate Gauntlet, we showed you how to constrain LLMs with architectural boundaries and enforce quality with 41 guardrails across 7 gates.
You asked: "Where's the scaffold?"
Here it is: `npx @pqai/mcp-4-llm`
One command. Production-ready. But here's what we didn't expect -- it enables self-improving AI.
Quick Recap: Why Clean Context Matters
LLMs write broken code because they have no constraints. TODO comments everywhere. `as any` type hacks. Layer violations. Zero test coverage. The output _looks_ right, but it cuts every corner it can find.
Clean Architecture = Clean Context. When you constrain the LLM with strict layer boundaries, it can't sprawl and it can't cheat. But knowing the theory isn't enough. You need a scaffold that enforces this from the first line of code.
The Scaffold: Clean Context in a Box
`npx @pqai/mcp-4-llm` gives you everything we described in the gauntlet article, wired up and ready to go:
- Clean Architecture layers: Domain, Application, Infrastructure, and MCP -- with strict dependency rules baked in
- ESLint boundaries that block layer violations at lint time, not in code review
- 41 pre-commit quality checks -- the full gauntlet, automated
- 80% test coverage minimum -- enforced, not aspirational
- Full BDD workflow with Cucumber, from Gherkin feature files to running tests
No decisions to make. No boilerplate to write. Just build.
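To make the "strict dependency rules" concrete: one common way to enforce Clean Architecture boundaries in ESLint is the eslint-plugin-boundaries plugin. The sketch below is illustrative, not the scaffold's exact configuration -- the directory layout and rule set are assumptions:

```javascript
// eslint.config.js -- illustrative sketch, not the scaffold's actual config
const boundaries = require("eslint-plugin-boundaries");

module.exports = [
  {
    plugins: { boundaries },
    settings: {
      // Map each Clean Architecture layer to a directory (assumed layout)
      "boundaries/elements": [
        { type: "domain", pattern: "src/domain/**" },
        { type: "application", pattern: "src/application/**" },
        { type: "infrastructure", pattern: "src/infrastructure/**" },
        { type: "mcp", pattern: "src/mcp/**" },
      ],
    },
    rules: {
      // Everything is forbidden unless explicitly allowed:
      // dependencies only point inward, never outward
      "boundaries/element-types": ["error", {
        default: "disallow",
        rules: [
          { from: "application", allow: ["domain"] },
          { from: "infrastructure", allow: ["domain", "application"] },
          { from: "mcp", allow: ["domain", "application", "infrastructure"] },
        ],
      }],
    },
  },
];
```

With `default: "disallow"`, any import the rules don't explicitly permit -- say, domain reaching out to infrastructure -- fails the lint run before it ever reaches code review.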
What's in the Box
The scaffold isn't just a project template. It's an opinionated system designed to make LLM-generated code production-ready from the start:
- CLAUDE.md / AGENTS.md files that guide AI assistants through the architecture, conventions, and workflows -- the LLM reads these and understands what it's building within
- Structured errors with `suggestedFix` and `isRetryable` fields, so failures are actionable rather than cryptic
- Zod schemas enforced at the use case level -- validation isn't optional, it's structural
- Barrel exports per layer -- the LLM can't skip them, and the guardrails catch it if it tries
- Real Gherkin to Cucumber to Implementation workflow -- behaviour-driven development as the default, not an afterthought
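The structured-error idea is easy to sketch. The field names `suggestedFix` and `isRetryable` come from the scaffold; everything else here -- the interface, the classifier, the error codes -- is a hypothetical illustration:

```typescript
// Illustrative sketch of structured errors; only the two field names
// (suggestedFix, isRetryable) are taken from the scaffold itself.
interface ToolError {
  code: string;
  message: string;
  suggestedFix: string;   // actionable next step for the caller (or the LLM)
  isRetryable: boolean;   // transient failures can be retried automatically
}

// Classify a raw failure into a structured, actionable error.
function toToolError(err: Error): ToolError {
  const transient = /ECONNRESET|ETIMEDOUT|429/.test(err.message);
  return {
    code: transient ? "TRANSIENT" : "INTERNAL",
    message: err.message,
    suggestedFix: transient
      ? "Retry with exponential backoff"
      : "Check the tool's input against its schema",
    isRetryable: transient,
  };
}
```

The point is that an LLM consuming this error doesn't have to guess: the error itself says whether retrying is worthwhile and what to try instead.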
Version 1.6.1 on npm. Battle-tested across real projects.
How It Works With LLMs
Whether you're using Claude, opencode.ai, or any MCP-compatible system, the workflow is the same:
1. The LLM reads AGENTS.md and understands the architecture, the layer boundaries, and the conventions
2. You describe the capability you need, and the LLM writes a Gherkin feature file for it
3. The LLM implements it layer by layer -- domain, application, infrastructure, MCP
4. Every commit runs the full gauntlet of 41 guardrails, including the 80% coverage minimum
The LLM can't cheat. Either every check passes, or it doesn't ship. The scaffold is the constraint, and the constraint is what makes the output reliable.
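To ground the Gherkin step, here is a minimal sketch of the kind of feature file the LLM might start from. The feature, tool name, and steps are illustrative, not taken from the scaffold:

```gherkin
Feature: Retrieve a document by id
  The MCP tool exposes document lookup to the LLM.

  Scenario: Document exists
    Given a document with id "doc-1" is stored
    When the get_document tool is called with id "doc-1"
    Then the response contains the document body

  Scenario: Document does not exist
    Given no document with id "doc-404" is stored
    When the get_document tool is called with id "doc-404"
    Then the error is not retryable and suggests checking the id
```

Cucumber binds each step to a step definition, and the implementation isn't done until both scenarios pass -- including the failure path.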
The Unexpected Discovery: Self-Improving AI
Here's the part we didn't plan for.
When you pair this scaffold with an agentic harness, something remarkable happens: the AI builds its own MCP tools.
It reads AGENTS.md. It writes Gherkin for a new capability. It implements the domain entity, the use case, and the MCP tool. It passes all 41 guardrails. It extends itself.
Self-improving AI. Controlled. Safe. Real.
This isn't science fiction. It's happening now:
1. The LLM identifies a missing capability
2. It writes a Gherkin feature file describing the new tool's behaviour
3. It implements the domain entity, the use case, and the MCP tool
4. The new tool passes all 41 guardrails and becomes part of the LLM's own toolset
The scaffold is the safety net. The same architectural boundaries and quality gates that make LLM-generated code production-ready also make self-extension safe. The AI can't add a tool that violates the architecture. It can't ship a capability without test coverage. It can't bypass validation. The guardrails apply to self-generated tools exactly as they apply to human-requested ones.
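The "guardrails apply to self-generated tools" idea can be sketched as a registry that simply refuses non-compliant tools. Every name here (`Tool`, `registerTool`, the fields) is hypothetical -- this is not the scaffold's API, just the shape of the argument:

```typescript
// Hypothetical sketch: a tool registry that enforces the same gates
// regardless of whether a human or the LLM wrote the tool.
interface Tool {
  name: string;
  hasFeatureFile: boolean;  // a Gherkin spec exists for this tool
  coverage: number;         // test coverage of the tool's implementation (0..1)
}

function registerTool(registry: Map<string, Tool>, tool: Tool): void {
  // Self-generated tools hit exactly the same checks as human-requested ones.
  if (!tool.hasFeatureFile) {
    throw new Error(`${tool.name}: missing Gherkin feature file`);
  }
  if (tool.coverage < 0.8) {
    throw new Error(`${tool.name}: coverage below the 80% minimum`);
  }
  registry.set(tool.name, tool);
}
```

There is no privileged path for the AI's own output: a tool either clears the gates and registers, or it never exists.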
The Progression
The three articles in this series follow a deliberate progression:
Clean Context -> Clean Scaffold -> Self-Improving AI.
Each step depends on the one before it. You can't have safe self-improving AI without a scaffold that enforces quality. You can't have a reliable scaffold without Clean Architecture providing clean context. And none of it matters if you're renting your competitive advantage from a SaaS vendor instead of owning it.
Companies using raw LLM output without guardrails rebuild the same scaffold every project, ship broken code, and can't let AI improve itself safely. Companies with the right scaffold own their tools and their evolution.
Build or Die. And if you're going to build, build on a scaffold that grows with you.
`npx @pqai/mcp-4-llm` -- and start building.