
Event Storming Meets AI: What Changes When Your Board Has a Consultant

If you’ve done event storming before — orange stickies on a wall, a room full of developers and domain experts arguing about what “OrderPlaced” really means — you know the power of making domain events visible. You also know the constraints: you need a facilitator, a room, and enough people who understand the domain to catch the gaps.

Vibe modeling is the practice of visually exploring domain events, system boundaries, and user flows with AI before writing code. It gives developers structured context and shared understanding, so vibe coding starts from a clear model instead of a vague prompt.

It’s event storming’s natural evolution for a world where developers build with AI tools. Here’s what changes when your board has an AI consultant sitting in.

What stays the same

The core mechanic doesn’t change. You place domain events on a timeline. Orange sticky notes — “UserRegistered,” “PaymentProcessed,” “OrderShipped.” You look at the flow, spot clusters, draw boundaries. The visual exploration of your domain is still the foundation.

The insight that makes event storming powerful — that placing events spatially reveals structure you can’t see in lists or documents — remains exactly true with AI in the room. The board is still the thinking surface. Events are still the unit of exploration.
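To make the mechanic concrete, here is a minimal sketch of stickies-as-data: a hypothetical `DomainEvent` structure and a timeline of the events named above. The field names and ordering scheme are illustrative assumptions, not a prescribed format.

```python
from dataclasses import dataclass

@dataclass
class DomainEvent:
    name: str      # past-tense event name, e.g. "OrderPlaced"
    actor: str     # who or what triggers it
    position: int  # rough left-to-right position on the timeline

# A first pass at an ordering flow, as it might sit on the board
timeline = [
    DomainEvent("UserRegistered", "customer", 1),
    DomainEvent("OrderPlaced", "customer", 2),
    DomainEvent("PaymentProcessed", "payment gateway", 3),
    DomainEvent("OrderShipped", "warehouse", 4),
]

# The spatial insight survives the translation: sorting by
# position recovers the timeline you laid out visually.
for event in sorted(timeline, key=lambda e: e.position):
    print(event.name)
```

Nothing here is clever, and that's the point: the unit of exploration is still a named event in a sequence, whether it lives on a sticky or in a structure.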

If you know event storming, you already know how to do vibe modeling. The vocabulary is the same. The visual language is the same. What changes is who else is at the board.

What changes: the AI catches what you miss

In a traditional event storming session, the quality of the exploration depends on who’s in the room. A senior developer spots that “PaymentFailed” needs a retry flow. A domain expert knows that refunds have different rules for subscriptions versus one-time purchases. A facilitator notices when two people use the same term to mean different things.

An AI consultant on the board partially fills all three roles. It sees the events you’ve placed and identifies patterns:

“You have InvoiceGenerated and PaymentProcessed, but nothing between them. What happens if the payment is pending for 48 hours?”

“These seven events all touch user state. Is ‘User’ one bounded context or are authentication and profile management separate concerns?”

“You mentioned OrderShipped but there’s no event for partial shipments. Does your system handle split orders?”
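One way to picture the kind of structural check the AI is running: given the set of events on the board, flag failure events that have no follow-up. This is a deliberately crude, hypothetical heuristic (the event names and expected follow-ups are made up for illustration), but it captures the shape of the question "what happens after this fails?"

```python
# Events currently placed on the board
events = {
    "InvoiceGenerated",
    "PaymentProcessed",
    "PaymentFailed",
    "OrderShipped",
}

# Assumption: every failure event should be followed by at least
# one of these recovery events somewhere on the board.
expected_follow_ups = {
    "PaymentFailed": {"PaymentRetried", "OrderCancelled"},
}

gaps = []
for failure, follow_ups in expected_follow_ups.items():
    if failure in events and not follow_ups & events:
        gaps.append(
            f"{failure} has no follow-up: expected one of {sorted(follow_ups)}"
        )

for gap in gaps:
    print(gap)
```

A human facilitator does this by intuition; the AI does it exhaustively, across every event on the board, every time.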

These aren’t questions a human facilitator would skip — they’re questions a developer working alone at 11pm would miss. The AI expands the range of what one person can explore.

What changes: you can model alone

Traditional event storming requires a group. Alberto Brandolini designed it as a collaborative workshop specifically because no single person holds enough domain knowledge to model a system alone. The diversity of perspectives is the point.

But here’s the reality for most developers building with AI coding tools: they’re working solo or in small teams. There’s no budget for a two-day workshop. There’s no room full of domain experts. There’s a developer, a product spec, and a prompt.

Vibe modeling makes solo domain exploration viable. Not because the AI replaces the domain experts — it doesn’t — but because it fills enough of the facilitation gap that a single developer can do useful exploration. It asks the probing questions. It notices the gaps. It suggests patterns from its training data that might apply to your domain.

Is this as thorough as a workshop with eight people? No. Is it dramatically better than going straight from a product spec to Claude Code? Yes.

What changes: the output is structured context

After a traditional event storming session, you have photos of sticky notes. Maybe a Miro board. The insights live in participants’ heads and in whatever documentation someone writes up afterward. Translating that into code is a manual interpretive step.

After a vibe modeling session on a board with AI, the output is structured: named events, defined bounded contexts, documented relationships between contexts. That structure maps directly to prompts for AI coding tools.

“Implement the Billing context. It publishes these events: InvoiceGenerated, PaymentProcessed, PaymentFailed. It subscribes to these events from the Subscription context: PlanUpgraded, SubscriptionCancelled.”

The AI coding tool receives architecture, not ambiguity. The code it generates reflects the boundaries you discovered because you told it about them explicitly.
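As a sketch of what "structured context" can mean in practice, here is the Billing example above expressed as data that renders into a prompt. The `BoundedContext` type and `to_prompt` method are hypothetical names, assumed for illustration; the point is only that a modeled context is mechanically translatable, while a pile of sticky-note photos is not.

```python
from dataclasses import dataclass, field

@dataclass
class BoundedContext:
    name: str
    publishes: list[str] = field(default_factory=list)
    subscribes: list[str] = field(default_factory=list)

    def to_prompt(self) -> str:
        # Render the model as explicit architecture for a coding tool
        return (
            f"Implement the {self.name} context. "
            f"It publishes these events: {', '.join(self.publishes)}. "
            f"It subscribes to these events: {', '.join(self.subscribes)}."
        )

billing = BoundedContext(
    name="Billing",
    publishes=["InvoiceGenerated", "PaymentProcessed", "PaymentFailed"],
    subscribes=["PlanUpgraded", "SubscriptionCancelled"],
)

print(billing.to_prompt())
```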

What changes: iteration speed

Event storming workshops happen once per quarter, maybe. They’re expensive in calendar time, travel, and preparation. The model you create is a snapshot that ages quickly as the system evolves.

An AI-powered board is always available. After you ship a feature and discover that the billing context actually needs to know about refund state, you spend five minutes updating the model. After a retrospective reveals that the notification system is too coupled, you open the board and explore a cleaner boundary.

Event storming taught us that visual exploration reveals domain structure. AI means you don’t need a workshop to do it.

The modeling practice becomes continuous instead of periodic. Your domain model stays current because updating it takes minutes, not days of coordination.

What doesn’t change: you still need domain knowledge

The AI is not a domain expert. It can identify patterns, surface questions, and suggest common architectures from its training data. But it doesn’t know that your specific business requires refunds to be processed within 24 hours for regulatory compliance. It doesn’t know that “Order” means something different in your warehouse system than in your customer-facing app.

Domain knowledge still comes from humans — from you, from your team, from your stakeholders. The AI makes the exploration more thorough by catching structural gaps, but the substance of the model depends on what you know about your domain.

If traditional event storming is a room full of people exploring together, vibe modeling is you exploring with an AI partner that never gets tired, never forgets to ask “what happens if this fails,” and never needs a lunch break. The domain expertise is yours. The exploration capacity is multiplied.

Getting started if you know event storming

The transition is natural. Open a board. Place your domain events the same way you would with stickies. Arrange them on a timeline. When the AI asks questions, treat them like you’d treat questions from a facilitator — some are gold, some miss the mark. You decide.

The biggest adjustment is trust calibration. In a workshop, you trust the facilitator because they’re experienced. With an AI, you trust the questions but own the answers. Every boundary you draw, every context you name, every event you keep or remove — that’s your judgment. The AI helps you think. It doesn’t think for you.

If you’ve never done event storming, start here. The concepts are the same. The barrier to entry is lower. And the code you write afterward will be better for having thought about the domain first.

Try it yourself

Map your domain events. Explore bounded contexts with AI. Walk away confident.

Open the Board