Patterns of Domain Modeling with Language Models
Patterns of Prompt-Driven Domains
Introduction: Modeling Is How We Care About What Matters
Software design begins with belief — with the realization that something must be kept true.
When we write tests, define domain logic, or name a function carefully, we’re expressing care for the behaviors that matter. A system that encodes that care is more resilient, more navigable, and more aligned with its purpose.
Generative AI introduces new ways to work with these beliefs: to express them in richer ways, to check for semantic drift, to propose new forms of structure. But the tools that help humans build coherent systems also help AIs. What we do well — naming things, capturing expectations, shaping good examples — gives structure for AI to participate meaningfully.
This essay explores a set of patterns that integrate modeling, testing, and generative AI — not as a break from current practice, but as a deepening of it.
What We Already Know: Testing as Belief Capture
We often think of tests as mechanical checks. But in practice, they’re where beliefs get pinned down.
A well-written test answers deeper questions:
- What behavior matters here?
- What needs to stay visible?
- What would be confusing or dangerous to lose?
Models — in the form of shared language, functions, or domain boundaries — give shape to those beliefs. Tests verify that they still hold. Together, they create a structure where meaning persists.
Design starts not with diagrams, but with a recognition: this rule, this behavior, this relationship must hold. That belief is what drives structure. Assertions-first design reflects this — the system grows around what we’ve chosen to defend.
Modeling as an Explicit Design Discipline
In well-shaped systems, domain modeling isn’t emergent — it’s deliberate.
- Types are used to encode state and prevent illegal combinations
- Transitions are modeled as discrete functions or processes, not scattered conditionals
- Side effects are isolated, and the domain reflects only what must be known to uphold meaning
- Relationships between entities are structured around lifecycle and responsibility
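A minimal sketch of the first two bullets, in TypeScript (the Subscription type and startPaidPlan transition are hypothetical names, not from any particular codebase):

```ts
// A subscription lifecycle encoded as a discriminated union:
// illegal combinations (a trial carrying a paidSince date) cannot be constructed.
type Subscription =
  | { status: "trial"; trialEndsAt: Date }
  | { status: "paid"; paidSince: Date }
  | { status: "cancelled"; cancelledAt: Date };

// The trial-to-paid transition is one named function,
// not a conditional scattered across handlers.
function startPaidPlan(sub: Subscription, now: Date): Subscription {
  if (sub.status !== "trial") {
    throw new Error(`cannot start a paid plan from status "${sub.status}"`);
  }
  return { status: "paid", paidSince: now };
}
```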
Assertions then become more than just safety checks — they’re commitments to domain behavior. We write them to defend the moments that matter:
- A refund is issued
- A user transitions from trial to paid
- A feature becomes available or restricted
These moments reflect not just system behavior, but business policy, product value, and social contract. Tests are how we ensure those meanings persist.
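A sketch of what defending one of those moments can look like as a test, reusing the Subscription sketch above (Vitest is assumed as the runner; the names are illustrative):

```ts
import { test, expect } from "vitest";

// Defends the belief: a trial user who pays becomes a paid user, effective now.
test("a user transitions from trial to paid", () => {
  const trial: Subscription = { status: "trial", trialEndsAt: new Date("2025-02-01") };
  const now = new Date("2025-01-15");

  const paid = startPaidPlan(trial, now);

  expect(paid.status).toBe("paid");
  if (paid.status === "paid") {
    // Narrowing keeps the assertion type-safe.
    expect(paid.paidSince).toEqual(now);
  }
});
```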
Good systems test:
- What matters to users — visible outcomes, permissions, error states
- What matters to the business — contractual behavior, access, fraud boundaries
- What matters to developers — clarity of roles, resistance to regression, stability of intent
When modeling is sharp, these tests reinforce structure. When it’s vague, tests compensate for gaps — and often drift as a result.
Modeling well is not about layering structure for its own sake. It’s about creating the space where belief can be stated, refined, and defended across time.
The Hard Part: Defining and Sustaining Systems
Systems are difficult for two reasons: they are hard to define well at the start, and they degrade over time.
- Developers often begin without shared language or clear separation of concerns
- Behaviors and responsibilities accumulate in ad hoc structures
- Tests are shaped by failure cases, not by modeled expectations
- Over time, domain rules become fragmented, duplicated, or hidden
Generative AI helps address both of these problems. It can synthesize structure from intent, reflect latent patterns, and help teams regain clarity. It can also make drift visible and offer suggestions for re-alignment.
More importantly, it allows us to be more ambitious. When we can rely on tooling to help maintain structure and meaning, we can model more deeply — reflecting edge cases, behavioral nuance, and value-sensitive rules we might otherwise consider too volatile or subtle to encode.
Clarity benefits everyone — but ambiguity is not a blocker. AI systems can often reason in contexts where human-maintained structure has started to dissolve. But when we build models explicitly, we get stronger collaboration between developer, system, and AI.
Pattern Library: Structures That Express and Preserve Belief
1. Semantic Sidecars
Maintain human-readable domain documents alongside implementation:
- Invariants and expectations in natural language
- Descriptions of entities, workflows, and constraints
- Prompt scaffolds for AI code tools
- Commentary on examples and what they’re meant to illustrate
Sidecars are where meaning lives outside of code — for AI tools, but also for humans trying to get oriented quickly.
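As one possible shape, here is a hypothetical billing.domain.md sidecar. The format is illustrative, not a standard:

```markdown
# Billing domain (sidecar for src/billing/)

## Invariants
- A refund never exceeds the original charge.
- A subscription is in exactly one state: trial, paid, or cancelled.

## Entities and workflows
- Subscription: owned by one User; state changes only through named transitions.

## Prompting notes
- When generating billing code, prefer the named transitions in subscription.ts
  over new conditionals on `status`.

## Example commentary
- The "trial to paid" test exists to defend the upgrade moment, not the date math.
```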
2. Belief-Driven Examples
Examples work best when they reflect real expectations: product behaviors, user flows, edge cases that matter.
Good examples clarify ambiguous logic, reveal modeling gaps, and force decisions about structure. They work because they reflect belief, not just structure.
Pattern: Use examples to expose ambiguity. Use tests to commit to behavior.
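A small sketch of that two-step move, again assuming a Vitest-style runner (the refund rule and isRefundable helper are hypothetical):

```ts
import { test, expect } from "vitest";

// Hypothetical rule under discussion: refunds within 30 days of cancellation.
const isRefundable = (cancelledAt: Date, requestedAt: Date): boolean =>
  requestedAt.getTime() - cancelledAt.getTime() <= 30 * 24 * 60 * 60 * 1000;

// Step 1: the example exposes an ambiguity the model doesn't answer yet.
test.todo("refund requested after cancellation: allowed within 30 days, or never?");

// Step 2: once the team decides, a test commits to the behavior.
test("refunds are allowed within 30 days of cancellation", () => {
  expect(isRefundable(new Date("2025-01-01"), new Date("2025-01-20"))).toBe(true);
  expect(isRefundable(new Date("2025-01-01"), new Date("2025-03-01"))).toBe(false);
});
```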
3. Structural Rules
Tools like Cursor or linting frameworks use file structure and naming patterns to enforce domain boundaries. These rules can be used to:
- Prevent non-domain concerns from leaking into core logic
- Keep domain logic free of infrastructure or framework-specific artifacts
- Ensure predictable structure for AI agents and humans alike
These boundaries help protect modeling clarity over time — especially in large or multi-team systems.
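For example, a lint rule along these lines can keep infrastructure out of domain code. This sketch uses ESLint's built-in no-restricted-imports rule in a flat config file; the directory layout and restricted packages are assumptions:

```ts
// eslint.config.ts (sketch): keep the domain layer free of infrastructure.
export default [
  {
    files: ["src/domain/**/*.ts"],
    rules: {
      "no-restricted-imports": [
        "error",
        {
          patterns: [
            {
              group: ["**/infrastructure/**", "express", "pg"],
              message: "Domain code must not depend on infrastructure or frameworks.",
            },
          ],
        },
      ],
    },
  },
];
```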
4. Modeling Signals
AI systems can detect areas where modeling may need more structure:
- Test examples with excessive mocking or unrelated setup
- Logic repeated across handlers that isn’t named
- Inconsistent behavior around shared entities
- Conditionals that suggest unstated rules
These are indicators that more structure may help. Modeling creates affordances — not just constraints.
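A before-and-after sketch of one such signal, with hypothetical names: a conditional repeated across handlers becomes a named domain rule.

```ts
// Signal: the same conditional keeps appearing across handlers,
// hinting at a rule the model never names:
//   if (user.plan === "trial" && daysSince(user.createdAt) > 14) { ... }

// Response: name the rule once in the domain and let handlers call it.
const TRIAL_LENGTH_DAYS = 14;

function isTrialExpired(user: { plan: string; createdAt: Date }, now: Date): boolean {
  const daysSinceSignup =
    (now.getTime() - user.createdAt.getTime()) / (24 * 60 * 60 * 1000);
  return user.plan === "trial" && daysSinceSignup > TRIAL_LENGTH_DAYS;
}
```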
5. Hybrid Test Layers
Good systems combine multiple layers of validation:
- Deterministic assertions: fast checks of known structure
- Semantic validation: AI-based assessment of contextual alignment
- Narrative evaluation: holistic checks of expected behavior under variation
Each layer supports a different kind of guarantee. Together, they give developers both precision and perspective.
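A sketch of how the first two layers can sit side by side in one suite. The semanticJudge helper is hypothetical and stubbed with a placeholder heuristic; in practice it would call an LLM and parse a verdict:

```ts
import { test, expect } from "vitest";

// Deterministic layer's subject: a plain rendering function.
function renderCancellationEmail({ refundCents }: { refundCents: number }): string {
  return `We're sorry to see you go. A refund of $${(refundCents / 100).toFixed(2)} is on its way.`;
}

// Hypothetical semantic layer: a real setup would call an LLM judge here;
// stubbed so the sketch stays self-contained.
async function semanticJudge(text: string, expectation: string): Promise<boolean> {
  return /sorry/i.test(text) && !/upgrade|deal/i.test(text);
}

// Layer 1: deterministic assertion (fast, exact, structural).
test("cancellation email includes the refund amount", () => {
  expect(renderCancellationEmail({ refundCents: 1299 })).toContain("$12.99");
});

// Layer 2: semantic validation (contextual alignment, judged by a model).
test("cancellation email reads as apologetic, not promotional", async () => {
  const email = renderCancellationEmail({ refundCents: 1299 });
  expect(await semanticJudge(email, "apologetic in tone; no upsell language")).toBe(true);
});
```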
6. Domain-First Generation Loops
For systems with clear models, we can invert the usual workflow:
- Developer expresses a rule or domain behavior in a sidecar or example
- AI generates the initial logic
- Tests verify behavior against the expressed belief
- The result is evaluated, refined, and stabilized
This creates tested, flexible domain models — shaped around intent, not just structure. They can be adapted for persistence, exported as APIs, or extended with new behavior while maintaining coherence.
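In sketch form, the loop might look like this. The generateImplementation and runTests helpers stand in for whatever AI tool and test runner a team uses; neither is a real API:

```ts
// Assumed helpers, not a real API:
declare function generateImplementation(spec: string): Promise<string>;
declare function runTests(code: string): Promise<{ passed: boolean; failures: string[] }>;

// Domain-first generation loop (sketch).
async function domainFirstLoop(sidecarSpec: string, maxAttempts = 3): Promise<string> {
  let feedback = "";
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    // Steps 1-2: the developer-expressed belief goes in; the AI proposes logic.
    const candidate = await generateImplementation(sidecarSpec + feedback);
    // Step 3: tests verify behavior against the expressed belief.
    const result = await runTests(candidate);
    // Step 4: stabilize a passing candidate; otherwise feed failures back.
    if (result.passed) return candidate;
    feedback = `\nPrevious attempt failed: ${result.failures.join("; ")}`;
  }
  throw new Error("No candidate satisfied the expressed beliefs; refine the spec.");
}
```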
Working With Generative AI
Generative tools are at their best when they’re in dialogue with structure.
- With clarity: they generate meaningful, intention-aligned code
- With ambiguity: they reflect back patterns that can be named and shaped
- With examples: they extrapolate edge behavior and test coverage
- With sidecars: they stay grounded in what matters
These systems don’t require perfection to be useful. But they benefit from belief, just as tests and developers do.
Toward Systems That Understand What They Defend
We already build systems that protect logic. We’re learning to build systems that protect meaning.
Beliefs aren’t always strict invariants — some are contextual, or narrative, or social. But they matter. They shape how users experience the product and how developers reason about the system.
With better modeling patterns and the right use of AI, we can encode more of that meaning into our systems — and build tools that help us keep it alive.
Conclusion
Good modeling is how we serve both users and developers. It gives structure to values, shape to behavior, and stability to change.
Generative AI expands our modeling surface — but only when we anchor it in belief. These patterns are tools to help us model with care, test with clarity, and evolve with confidence.