
How to Give AI Better Context: Vibe Modeling Your Domain Before You Prompt

The prompt said “add a notification system.” Claude Code generated 400 lines of notification logic — email, push, in-app, the works. All wired directly into the user service. Three days later you needed to add SMS notifications and realized the entire notification layer was coupled to user state, billing state, and subscription state in ways that made any change a game of Jenga.

The code wasn’t wrong. The context was.

Vibe modeling is the practice of visually exploring domain events, system boundaries, and user flows with AI before writing code. It gives developers structured context and shared understanding, so vibe coding starts from a clear model instead of a vague prompt.

The quality of AI-generated code is directly tied to the quality of context you provide. Here’s how domain modeling makes that context dramatically better.

The context gap

When you prompt an AI coding tool with “add a notification system,” the AI fills in every decision you didn’t make. Where do notifications live? What triggers them? Do they know about billing state? Should they retry on failure? The AI answers all of these implicitly, based on patterns from its training data, not based on your system’s actual architecture.

This is the context gap: the distance between what the AI needs to know about your domain and what your prompt actually tells it.

A vague prompt produces plausible code. A prompt with structured context — “Notifications are a separate bounded context. They receive events from Billing and User contexts but never query their state directly. These are the domain events that trigger notifications: InvoiceOverdue, SubscriptionPaused, TrialExpiring” — produces code that fits your architecture.

What structured context looks like

After a ten-minute session on a vibe modeling board, you have a visual map of your domain. Translate that into a context block for your AI coding tool:

Domain: Subscription Management
Bounded contexts: Users, Billing, Notifications

Events in Billing context:
- SubscriptionCreated
- PaymentProcessed
- PaymentFailed
- SubscriptionPaused
- SubscriptionCancelled

Notifications context listens to:
- PaymentFailed → send retry warning email
- SubscriptionPaused → send pause confirmation
- TrialExpiring → send upgrade prompt

Notifications does NOT query User or Billing state directly. It receives events only.

That block of context, pasted before your prompt, changes what the AI generates. Instead of building notifications as a function inside userService.js, it creates a separate module with an event listener pattern. The architecture matches your model because the AI had your model.

Before and after

Here’s the same feature prompted two ways.

Without domain context: “Add a notification system that sends emails when payments fail and subscriptions are paused.”

The AI generates notification logic inside the billing module. Email sending is called synchronously after payment processing. The notification code directly reads user preferences from the user table. Everything works, everything is coupled.
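To make the coupling concrete, here's a minimal sketch of the shape the vague prompt tends to produce. The names (processPayment, chargeCard) and the in-memory stand-ins for the database and outbox are hypothetical, just enough to make the pattern runnable:

```typescript
// Hypothetical stand-ins so the sketch runs; in a real codebase these
// would be your database and payment gateway.
const users: Record<string, { email: string }> = {
  u1: { email: "dana@example.com" },
};
const outbox: string[] = [];

function chargeCard(userId: string): boolean {
  return false; // simulate a failed payment
}

// The coupled shape: notification logic lives inside the billing flow,
// runs synchronously after payment processing, and reads user state directly.
function processPayment(userId: string): void {
  const ok = chargeCard(userId);
  if (!ok) {
    const user = users[userId]; // Billing code reaching into User state
    outbox.push(`retry warning email to ${user.email}`); // inline send, no boundary
  }
}

processPayment("u1");
```

Every new notification channel means another branch inside processPayment, and every schema change in the user table ripples into billing code.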

With domain context: “Add a Notifications bounded context. It listens to PaymentFailed and SubscriptionPaused events from the Billing context. It does not query User or Billing tables directly — it receives all needed data through the event payload. Use an event handler pattern.”

The AI generates a separate notifications module with event handlers. Billing publishes events without knowing who consumes them. Notifications are decoupled from billing logic. When you add SMS notifications next week, you add another handler — you don’t touch billing code.
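A minimal sketch of that decoupled shape, assuming Node's EventEmitter as the event bus (any pub/sub mechanism works the same way); the function and payload names are hypothetical, following the model above:

```typescript
import { EventEmitter } from "node:events";

// Event payloads carry everything Notifications needs — handlers
// never query Billing or User state.
interface PaymentFailed { email: string; amount: number; }
interface SubscriptionPaused { email: string; }

const bus = new EventEmitter();
const outbox: string[] = [];

// Notifications context: one handler per domain event.
bus.on("PaymentFailed", (e: PaymentFailed) => {
  outbox.push(`retry warning email to ${e.email}`);
});
bus.on("SubscriptionPaused", (e: SubscriptionPaused) => {
  outbox.push(`pause confirmation email to ${e.email}`);
});

// Billing context publishes events without knowing who consumes them.
function recordFailedPayment(e: PaymentFailed): void {
  // ...billing logic stays here...
  bus.emit("PaymentFailed", e);
}

recordFailedPayment({ email: "dana@example.com", amount: 49 });

// Next week's SMS feature is one more handler — billing code untouched.
bus.on("PaymentFailed", (e: PaymentFailed) => {
  outbox.push(`retry warning SMS about $${e.amount}`);
});
recordFailedPayment({ email: "dana@example.com", amount: 49 });
```

The boundary from the domain model shows up directly in the code: Billing's only job is to emit, and Notifications' only input is the payload.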

Same AI. Same coding tool. Different context, different architecture.

The ten-minute habit

You don’t need a two-hour modeling workshop to improve your prompts. The practice is lightweight:

  1. Open the board and spend five minutes placing domain events for the feature you’re about to build
  2. Notice which events cluster together — those are your bounded contexts
  3. Ask the AI consultant: “What’s missing? What events connect these contexts?”
  4. Write down the boundaries and the events that cross them
  5. Paste that as context before your coding prompt

That’s ten minutes. The code you generate afterward will have cleaner boundaries, fewer implicit decisions, and less rework when requirements change.

The difference between a good prompt and a great prompt isn’t cleverness — it’s context. A domain model gives your AI the context it can’t infer from a sentence.

When context matters most

Not every prompt needs a domain model behind it. “Add a loading spinner to the submit button” doesn’t require understanding your bounded contexts. But some situations benefit enormously from structured context:

New features that cross domain boundaries. Anything involving users and billing and notifications. The AI needs to know which pieces talk to which.

Refactoring coupled code. Before you ask the AI to “clean up the payment module,” model what the target architecture looks like. Give it the destination, not just the starting point.

Onboarding a new AI tool to your codebase. When you switch from Cursor to Claude Code or start using Copilot Workspace, structured context helps the new tool understand your system faster than reading code files.

Team handoffs. When another developer takes over a feature, a domain model is better context than a Slack message saying “the billing stuff is in services/payments.”

The pattern is consistent: wherever the AI needs to understand relationships between parts of your system, vibe modeling gives it what it can’t figure out from code alone. The architecture decisions, the boundaries, the things that should stay separate — that’s the context that turns generated code from plausible to correct.

Try it yourself

Map your domain events. Explore bounded contexts with AI. Walk away confident.

Open the Board