The Seedwork Manifesto
For software where AI behavior is the product
When an LLM's capabilities are the core of your product—not just a tool to write code faster—the rules change. Building these systems is not construction. It is cultivation.
Values:
- Emergent capabilities over specified requirements
- Cultivation over construction
- Discovery through use over upfront design
- Native strengths over forced behaviors
That is, while the items on the right of each pair have value, the items on the left matter more.
Principles behind the Seedwork Manifesto
1. The highest priority is to discover and cultivate the unexpected value that emerges when AI meets problems.
2. Welcome non-determinism as a feature, not a bug. The same prompt yielding different responses is what makes the system valuable.
3. Ship to users early and often. Emergent capabilities cannot be discovered in a sandbox.
4. The most powerful applications emerge when working with the grain of what language models naturally do well, not against it.
5. Build the minimum scaffolding necessary. The intelligence is in the model; the job is to connect it to context and action.
6. Prompts are conversations, not specifications. They evolve through dialogue with the system, not upfront design.
7. Discovery is the primary measure of progress. "It also does this" is more valuable than "it does exactly what was specified."
8. Trust the model's latent capabilities. The potential exists before finding it—the work is exploration, not creation.
9. Evaluate outcomes in context, not against fixed metrics. Success often looks different than expected.
10. Simplicity is essential. Remove code between the model and the problem.
11. The best AI applications emerge from rapid experimentation, real-world feedback, and willingness to be surprised.
12. At regular intervals, ask: "What is this system doing that wasn't planned for?" Then cultivate the valuable surprises and prune the rest.
Why "Seedwork"
A seed contains latent potential. A plant is not built—conditions for growth are provided. Native species thrive in their environment when worked with, not against. What emerges has a life of its own.
Large language models are similar. The capabilities exist in the weights before a line of code is written. The work is cultivation: providing the right context, the right tools, the right connection to real problems. And as in gardening, there must be a willingness to be surprised by what grows.
Seedwork replaces the construction metaphor (building, architecting, engineering) with a cultivation metaphor (planting, nurturing, discovering, pruning).
Asymmetric Pair Programming
Seedwork assumes a specific working relationship: human and AI as collaborators, but not equals.
In traditional pair programming, roles swap. Both hold context. Both produce.
In asymmetric pair programming:
- The human architects, the AI implements
- The human holds why, the AI holds how
- The human initiates and prunes, the AI proposes and implements
- The human's time is precious; the AI multiplies it
This asymmetry is what makes cultivation possible.
The AI proposes with confidence, and confidence is easily mistaken for certainty. The human's job is to stay critical.
The human becomes editor, not writer. Architect, not builder. Gardener, not bricklayer.
The Three Eras
| Era | Metaphor | Key Insight | Methodology |
|---|---|---|---|
| Waterfall | Blueprint | Changes are expensive; plan thoroughly | Specify → Build → Test |
| Agile | Iteration | Requirements change; embrace it | Iterate → Ship → Learn |
| Seedwork | Cultivation | The system has latent capabilities to discover | Plant → Cultivate → Discover → Prune |
What Seedwork Means in Practice
Instead of: Writing detailed specifications for AI behavior
Try: Quick experiments to discover what the model naturally does well
Instead of: Extensive upfront prompt engineering
Try: Shipping early and evolving prompts through real-world feedback
Instead of: Treating unexpected outputs as bugs
Try: Asking "Is this actually valuable?" before calling it a problem
Instead of: Measuring success against predetermined metrics
Try: Watching for emergent value and updating the definition of success
Instead of: Building complex orchestration around AI
Try: Minimal scaffolding that connects the model to real context and real action
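The habit of watching for emergent value, and of asking "what is this system doing that wasn't planned for?", can be as simple as a log that separates planned behavior from surprises for periodic review. A minimal sketch; the class and category names are illustrative, not a prescribed taxonomy:

```python
from collections import defaultdict

# Illustrative discovery log: record what the system did versus what was
# planned, so valuable surprises can be cultivated and the rest pruned.
class DiscoveryLog:
    def __init__(self, planned_behaviors):
        self.planned = set(planned_behaviors)
        self.observations = defaultdict(list)

    def record(self, behavior: str, example: str) -> None:
        """File each observation as planned or as a surprise."""
        bucket = "planned" if behavior in self.planned else "surprise"
        self.observations[bucket].append((behavior, example))

    def surprises(self):
        """What is this system doing that wasn't planned for?"""
        return self.observations["surprise"]
```

At review time, each surprise gets one of two fates: cultivate it (make it a supported behavior) or prune it.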