Deep Engineering #4: Alessandro Colla and Alberto Acerbis on Domain-Driven Refactoring at Scale
Why understanding the domain beats applying patterns—and how to refactor without starting over
Welcome to the fourth issue of Deep Engineering.
In enterprise software systems, few challenges loom larger than refactoring legacy systems to meet modern needs. These efforts can feel like open-heart surgery on critical applications that are still running in production. Systems requiring refactoring are often business-critical, poorly modularized, and resistant to change by design.
To understand how Domain-Driven Design (DDD) can guide this process, we spoke with Alessandro Colla and Alberto Acerbis—authors of Domain-Driven Refactoring (Packt, 2025) and co-founders of the "DDD Open" and "Polenta and Deploy" communities.
Colla brings over three decades of experience in eCommerce systems, C# development, and strategic software design. Acerbis is a Microsoft MVP and backend engineer focused on building maintainable systems that deliver business value. Together, they offer a grounded, pattern-skeptical view of what DDD really looks like in legacy environments—and how teams can use it to make meaningful change without rewriting from scratch.
You can watch the full interview and read the full transcript here—or keep reading for our distilled take on the principles, pitfalls, and practical steps that shape successful DDD refactoring.
Principles over Patterns: Applying DDD to Legacy Systems with Alessandro Colla and Alberto Acerbis
Legacy systems are rarely anyone’s favorite engineering challenge. Often labeled “big balls of mud,” these aging codebases resist change by design—lacking tests, mixing concerns, and coupling business logic to infrastructure in ways that defy modular thinking. Yet they remain critical. “It’s more common to work on what we call legacy code than to start fresh,” Acerbis notes from experience. Their new book, Domain-Driven Refactoring, was born from repeatedly facing large, aging codebases that needed new features. “The idea behind the book is to bring together, in a sensible and incremental way, how we approach the evolution of complex legacy systems,” explains Colla. Rather than treat DDD as something only for new projects, Colla and Acerbis show how DDD’s concepts can guide the incremental modernization of existing systems.
They begin by reinforcing core DDD concepts—what Colla calls their “foundation”—before demonstrating how to apply patterns gradually. This approach acknowledges a hard truth: when a client asks for “a small refactor” of a legacy system, “it’s never small. It always becomes a bigger refactor,” Acerbis says with a laugh. The key is to take baby steps. “Touching a complex system is always difficult, on many levels,” Colla cautions, so the team must break down the work into manageable changes rather than trying an all-at-once overhaul.
Modular Monoliths Before Microservices
One of the first decisions in a legacy overhaul is whether to break a monolithic application into microservices. But Colla and Acerbis urge caution here—hype should not dictate architecture.
“Normally, a customer comes to us asking to transform their legacy application into a microservices system because — you know — ‘my cousin told me microservices solve all the problems,’” Acerbis jokes. The reality is that blindly carving up a legacy system into microservices can introduce as much complexity as it removes. “Once you split your system into microservices, your architecture needs to support that split,” he explains, from infrastructure and deployment to data consistency issues.
Instead, the duo advocates an interim step: first evolve the messy monolith into a well-structured modular monolith. “Using DDD terms, you should move your messy monolith into a good modular monolith,” says Acerbis. In a modular monolith, clear boundaries are drawn around business subdomains (often aligning with DDD bounded contexts), but the system still runs as a single deployable unit. This simplification and ordering within the monolith can often deliver the needed agility and clarity. “We love monoliths, OK? But modular ones,” Colla admits. With a modular monolith in place, teams can implement new features more easily and see if further decomposition is truly warranted. Only if needed—due to scale or independent deployment demands—should you “split it into microservices. But that’s a business and technical decision the whole team needs to make together,” Acerbis emphasizes.
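To make the boundary idea concrete, here is a minimal C# sketch of what a modular monolith module can look like; the module and type names (Orders, Shipping) are hypothetical, not from the book. Each bounded context hides behind a narrow public contract while its implementation stays internal, even though everything still ships as one deployable unit:

```csharp
using System;

namespace Shop.Modules.Orders
{
    // The only surface other modules may touch: a narrow public contract.
    public interface IOrdersModule
    {
        OrderSummary GetOrder(Guid orderId);
    }

    public record OrderSummary(Guid OrderId, decimal Total, string Status);

    // Implementation details stay internal to the module.
    internal sealed class OrdersModule : IOrdersModule
    {
        public OrderSummary GetOrder(Guid orderId) =>
            new(orderId, 0m, "Pending"); // would load from the module's own tables
    }
}

namespace Shop.Modules.Shipping
{
    using Shop.Modules.Orders;

    // Shipping depends only on the Orders contract, never on its internals.
    public sealed class ShipmentPlanner
    {
        private readonly IOrdersModule _orders;

        public ShipmentPlanner(IOrdersModule orders) => _orders = orders;

        public bool CanShip(Guid orderId) =>
            _orders.GetOrder(orderId).Status == "Paid";
    }
}
```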
By following this journey, teams often find full microservices unnecessary. Colla notes that many times they’ve been able to meet all business requirements just by going modular, without ever needing microservices. The lesson: choose the simplest architecture that solves the problem and avoid microservices sprawl unless your system’s scale and complexity absolutely demand it.
First Principles: DDD as a Mindset, Not a Checklist
A central theme from Colla and Acerbis is that DDD is fundamentally about understanding the problem domain, not checking off a list of patterns. “Probably the most important principle is that DDD is not just technical — it’s about principles,” says Acerbis. Both engineers stress the importance of exploration and ubiquitous language before diving into code. “Start with the strategic patterns — particularly the ubiquitous language — to understand the business and what you’re dealing with,” Colla advises. In practice, that means spending time with domain experts, clarifying terminology, and mapping out the business processes and subdomains. Only once the team shares a clear mental model of “what actually needs to be built” should they consider tactical design patterns or write any code.
Colla candidly shares that he learned this the hard way.
“When I started working with DDD, CQRS, and event sourcing, I made the mistake of jumping straight into technical modeling — creating aggregates, entities, value objects — because I’m a developer, and that’s what felt natural. But I skipped the step of understanding why I was building those classes.
I ended up with a mess.”
Now he advocates understanding the why before the how. “We spent the first chapters of the book laying out the principles. We wanted readers to understand the why — so that once you get to the code, it comes naturally,” Colla says.
This principle-centric mindset guards against a common trap: applying DDD patterns by rote or “cloning” a solution from another project.
“I’ve seen situations where someone says, ‘I’ve already solved a similar problem using DDD — I’ll just reuse that design.’ But no, that’s not how it works,” Acerbis warns.
Every domain is different, and DDD is “about exploration. Every situation is different.” By treating DDD as a flexible approach to learning and modeling the domain—rather than a strict formula—teams can avoid over-engineering and build models that truly fit their business.
From Strategic to Tactical: Applying Patterns Incrementally
Once the team has a solid grasp of the domain, they can start to apply DDD’s tactical patterns (entities, value objects, aggregates, domain events, etc.) to reshape the code. But which pattern comes first? Colla doesn’t prescribe a one-size-fits-all sequence. “I don’t think there’s a specific pattern to apply before others,” he says. The priority is dictated by the needs of the domain and the pain points in the legacy code. However, the strategic understanding guides the tactical moves: by using the ubiquitous language and bounded contexts identified earlier, the team can decide where an aggregate boundary should be, where to introduce a value object for a concept, and so on.
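As one small illustration of this kind of tactical move, here is a hedged C# sketch (the names are hypothetical, not the book's code) of introducing a value object for a money concept that legacy code might pass around as a raw decimal. The point is that the concept becomes immutable, compared by value, and impossible to construct in a meaningless state:

```csharp
using System;

// A value object: immutable, compared by value, and valid by construction.
public sealed record Money
{
    public decimal Amount { get; }
    public string Currency { get; }

    public Money(decimal amount, string currency)
    {
        if (amount < 0)
            throw new ArgumentOutOfRangeException(nameof(amount));
        if (string.IsNullOrWhiteSpace(currency))
            throw new ArgumentException("A currency is required.", nameof(currency));

        Amount = decimal.Round(amount, 2);
        Currency = currency.ToUpperInvariant();
    }

    public Money Add(Money other) =>
        other.Currency == Currency
            ? new Money(Amount + other.Amount, Currency)
            : throw new InvalidOperationException("Cannot add different currencies.");
}
```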
Acerbis emphasizes that their book isn’t a compendium of all DDD patterns—classic texts already cover those. Instead, it shows how to practically apply a selection of patterns in a legacy refactoring context. The aim is to go from “a bad situation — a big ball of mud — to a more structured system,” he says. A big win of this structure is that new features become easier to add “without being afraid of introducing bugs or regressions,” because the code has clear separation of concerns and meaningful abstractions.
Exploring the domain comes first. Only then should the team “bring in the tactical patterns when you begin touching the code,” says Colla. In other words, let the problem guide the solution. By iteratively applying patterns in the areas that need them most, the system gradually transforms—all while continuing to run and deliver value. This incremental refactoring is core to their approach; it avoids the risky big-bang rewrite and instead evolves the architecture piece by piece, in sync with growing domain knowledge.
Balancing Refactoring with Rapid Delivery
In theory, it sounds ideal to methodically refactor a system. In reality, business stakeholders are rarely patient—they need new features yesterday. Colla acknowledges this tension:
“This is the million-dollar question. As in life, the answer is balance. You can't have everything at once — you need to balance features and refactoring.”
The solution is to weave refactoring into feature development, rather than treating it as a separate project that halts new work.
“Stakeholders want new features fast because the system has to keep generating value,” Colla notes. Completely pausing feature development for months of cleanup is usually a non-starter (“We’ve had customers say, ‘You need to fix bugs and add new features — with the same time and budget.’”). Instead, Colla’s team refactors in context: “if a new feature touches a certain area of the system, we refactor that area at the same time.” This approach may slightly slow down that feature’s delivery, but it pays off in the long run by preventing the codebase from deteriorating further. Little by little (“always baby steps,” as Colla says), they improve the design while still delivering business value.
Acerbis adds that having a solid safety net of tests is what makes this sustainable. Often, clients approach them saying it’s too risky or slow to add features because “the monolith has become a mess.” The first order of business, then, is to shore up test coverage.
“We usually start with end-to-end tests to make sure that the system behaves the same way after changes,” he explains.
Writing tests for a legacy system can be time-consuming initially, but it instills confidence.
“In the beginning, it takes time. You have to build that infrastructure and coverage. But as you move forward, you’ll see the benefits — every time you deploy a new feature, you’ll know it was worth it.”
With robust tests in place, the team can refactor aggressively within each iteration, knowing they will catch any unintended side effects before they reach users.
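As one illustration of what such a safety net can look like in .NET, the sketch below pins current HTTP behavior with xUnit and ASP.NET Core's WebApplicationFactory. It assumes a hypothetical /orders endpoint and that the application's Program class is visible to the test project; adapt both to your own system:

```csharp
using System.Net;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc.Testing;
using Xunit;

// A characterization test: it pins today's observable behavior so a
// refactoring that accidentally changes it fails the build immediately.
public class OrdersEndpointTests : IClassFixture<WebApplicationFactory<Program>>
{
    private readonly WebApplicationFactory<Program> _factory;

    public OrdersEndpointTests(WebApplicationFactory<Program> factory)
        => _factory = factory;

    [Fact]
    public async Task Fetching_an_order_keeps_its_current_contract()
    {
        var client = _factory.CreateClient(); // spins up an in-memory test server

        var response = await client.GetAsync("/orders/42");

        Assert.Equal(HttpStatusCode.OK, response.StatusCode);
        var body = await response.Content.ReadAsStringAsync();
        Assert.Contains("\"status\"", body); // assert on the contract as it is today
    }
}
```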
Aligning Architecture with Organization
Even the best technical refactoring will falter if organizational structure is at odds with the design. This is where Conway’s Law comes into play—the notion that software systems end up reflecting the communication structures of the organizations that build them.
“When introducing DDD, it’s not just about technical teams. You need involvement from domain experts, developers, stakeholders — everyone,” says Acerbis.
In practice, this means that establishing clean bounded contexts in code may eventually require realigning team responsibilities or communication paths in the company.
Of course, changing an organization chart is harder than changing code. Colla and Acerbis therefore approach it in phases. “Context mapping is where we usually begin — understanding what each team owns and how they interact,” Colla explains. They first try to fix the code boundaries while not breaking any essential communication between people or teams. For instance, if two modules should only talk via a well-defined interface, they might introduce an anti-corruption layer in code, even if the same two teams still coordinate closely as they always have. Once the code’s boundaries stabilize and prove beneficial, the case can be made to align the teams or management structure accordingly.
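Here is a hedged sketch of what such an anti-corruption layer might look like in C# (all names are hypothetical): the Billing module defines its own model and interface, and a single adapter is the only code that knows the legacy CRM's shape.

```csharp
using System;

namespace Legacy.Crm
{
    // The old model, exactly as the legacy system exposes it.
    public class CrmCustomerRecord
    {
        public string CUST_ID = "";
        public string STATUS_FLG = ""; // "A" = active, "S" = suspended, ...
    }
}

namespace Modules.Billing
{
    // The Billing module's own language: small and intention-revealing.
    public record Customer(string Id, bool CanBeInvoiced);

    public interface ICustomerProvider
    {
        Customer Find(string id);
    }

    // The anti-corruption layer: the only place that knows both models.
    public sealed class CrmCustomerAdapter : ICustomerProvider
    {
        private readonly Func<string, Legacy.Crm.CrmCustomerRecord> _crmLookup;

        public CrmCustomerAdapter(Func<string, Legacy.Crm.CrmCustomerRecord> crmLookup)
            => _crmLookup = crmLookup;

        public Customer Find(string id)
        {
            var raw = _crmLookup(id);
            // Translate legacy flags into Billing's ubiquitous language.
            return new Customer(raw.CUST_ID, CanBeInvoiced: raw.STATUS_FLG == "A");
        }
    }
}
```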
“The hardest part is convincing the business side that this is the right path,” Acerbis admits. Business stakeholders control budgets and priorities, so without their buy-in, deep refactoring stalls. The key is to demonstrate value early and keep them involved. Ultimately, “it only works if the business side is on board — they’re the ones funding the effort,” he says. Colla concurs: “everyone — developers, architects, business — needs to share the same understanding. Without that alignment, it doesn’t work.” DDD, done right, becomes a cross-discipline effort, bridging tech and business under a common language and vision.
Building a Safety Net: Tools and Testing Techniques
Given the complexity of legacy transformation, what tools or frameworks can help? Colla’s answer may surprise some: there is no magic DDD framework that will do it for you. “There aren’t any true ‘DDD-compliant’ frameworks,” he says. DDD isn’t something you can buy off-the-shelf; it’s an approach you must weave into how you design and code. However, there are useful libraries and techniques to smooth the journey, especially around testing and architecture fitness.
“What’s more important to me is testing — especially during refactoring. You need a strong safety net,” Colla emphasizes. His team’s rule of thumb: start by writing end-to-end tests for current behavior. “We always start with end-to-end tests. That way, we make sure the expected behavior stays the same,” Colla shares. These broad tests cover critical user flows so that if a refactoring accidentally changes something it shouldn’t, the team finds out immediately. Next, they add architectural tests (often called fitness functions) to enforce the intended module boundaries. “Sometimes, dependencies break boundaries. Architectural tests help us catch that,” he notes. For instance, a test might ensure that code in module A never calls code in module B directly, enforcing decoupling. And of course, everyday unit tests are essential for the new code being written: “unit tests, unit tests, unit tests,” Colla repeats for emphasis. “They prove your code does what it should.”
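In the .NET world, one common way to express such a rule is a library like NetArchTest. The sketch below is a minimal example with hypothetical module namespaces; the test fails the build the moment one module reaches into another's internals:

```csharp
using NetArchTest.Rules;
using Xunit;

// An architectural fitness function enforcing a module boundary.
public class ModuleBoundaryTests
{
    [Fact]
    public void Sales_must_not_depend_on_Warehouse()
    {
        var result = Types.InAssembly(typeof(ModuleBoundaryTests).Assembly)
            .That().ResideInNamespace("Shop.Modules.Sales")
            .ShouldNot().HaveDependencyOn("Shop.Modules.Warehouse")
            .GetResult();

        Assert.True(result.IsSuccessful, "Sales must not depend on Warehouse internals.");
    }
}
```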
Acerbis agrees that no all-in-one DDD framework exists (and maybe that’s for the best). “DDD is like a tailor-made suit. Every time, you have to adjust how you apply the patterns depending on the problem,” he says. Instead of relying on a framework to enforce DDD, teams should rely on discipline and tooling – especially the kind of automated tests Colla describes – to keep their refactoring on track. Acerbis also offers a tip on using AI assistance carefully: tools like GitHub Copilot can be helpful for generating code, but “you don’t know how it came up with that solution.” He prefers to have developers write the code with understanding, then use AI to review or suggest improvements. This ensures that the team maintains control over design decisions rather than blindly trusting a tool.
Event-Driven Architecture: Avoiding the "Distributed Monolith"
DDD often goes hand-in-hand with event-driven architecture for decoupling. Used well, domain events can keep bounded contexts loosely coupled. But Colla and Acerbis caution that it’s easy to misuse events and end up with a distributed mess. Acerbis distinguishes two kinds of events with very different roles: domain events and integration events. “Domain events should stay within a bounded context. Don’t share them across services,” he warns. If you publish your internal domain events for other microservices to consume, you create tight coupling: “when you change the domain event — and you will — you’ll need to notify every team that relies on it. That’s tight coupling, not decoupling.”
The safer pattern is to keep domain events private to a service or bounded context, and publish separate integration events for anything that truly needs to be shared externally. That way, each service can evolve its internal model (and its domain event definitions) independently. Colla admits he’s learned this by making the mistakes himself. The temptation is to save effort by reusing an event “because it feels efficient,” but six months later, when one team changes that event’s schema, everything breaks. “We have to resist that instinct and think long-term,” he says. Even if it requires a bit more work upfront to define distinct integration events, it prevents creating what he calls a “distributed monolith that’s impossible to evolve” – a system where services are theoretically separate but so tightly coupled by data contracts that they might as well be a single unit.
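A minimal C# sketch of this separation (the names and the broker stand-in are hypothetical, not the authors' code): the domain event stays internal to its bounded context, and a thin translator publishes a deliberately small, versioned integration event for everyone else.

```csharp
using System;

namespace Modules.Orders
{
    // Domain event: internal to the bounded context, free to change.
    internal sealed record OrderPaid(Guid OrderId, Guid CustomerId, decimal Total, DateTime PaidAt);

    // A thin handler translates the private fact into a public contract.
    internal sealed class OrderPaidPublisher
    {
        private readonly Action<object> _publishToBroker; // stand-in for your message bus

        public OrderPaidPublisher(Action<object> publishToBroker)
            => _publishToBroker = publishToBroker;

        public void Handle(OrderPaid domainEvent) =>
            // Expose only what other contexts genuinely need, and version it.
            _publishToBroker(new Contracts.OrderPaidIntegrationEvent(
                Version: 1,
                OrderId: domainEvent.OrderId,
                AmountPaid: domainEvent.Total));
    }
}

namespace Contracts
{
    // Integration event: a versioned, deliberately stable public contract.
    public sealed record OrderPaidIntegrationEvent(int Version, Guid OrderId, decimal AmountPaid);
}
```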
Another often overlooked aspect of event-driven systems is the user experience in an eventually consistent world. Because events introduce asynchrony, UIs must be designed to handle the delay. Acerbis mentions using task-based UIs, where screens are organized around high-level business tasks rather than low-level CRUD forms, to better set user expectations and capture intent that aligns with back-end processes. The bottom line is that events are powerful, but they come with their own complexities – teams must design and version them thoughtfully, and always keep the end-to-end system behavior in mind.
The Road Ahead: DDD in the Age of AI
As the software industry evolves, so does the context in which we apply DDD. One trend on everyone’s mind is AI-assisted development. Both Acerbis and Colla have started using large language models to help with their work, but they approach it cautiously. Acerbis finds AI useful for parsing and summarizing domain knowledge. For instance, he might feed a lengthy requirements document to an LLM and ask, “Can you suggest a process model from this?” or “What are the likely events?” This can spark ideas and “help shorten the gap between developers and business stakeholders,” he says, by quickly extracting the business language and concepts. However, the goal isn’t to have AI replace human domain experts or designers. It’s an accelerant for understanding, not a substitute for the hard work of design.
Colla agrees that AI is “incredibly helpful — if you know what you’re doing.” The caveat is that AI can only remix what it’s seen before. “It can only reason about what it already knows. It doesn’t know how to reason about the unknown,” he observes. By definition, truly innovative software (and the hardest legacy refactoring issues) involve solving problems unique to your situation, which no pretrained model can fully grasp. Colla also reminds us that “AI code is trained on 40 years of existing code — half of it legacy!” So blindly trusting AI suggestions can just automate the propagation of old mistakes.
The pair’s advice is to use AI as a partner, not an oracle. Let it generate ideas or boilerplate, but use your own judgment to vet them. “AI is a great assistant, but you need to be able to evaluate what it gives you,” Colla says. If an LLM doesn’t know the answer, it may still sound confident—so developers must spot when something is off or inapplicable (the classic AI hallucination problem). Perhaps the biggest risk is that AI makes it easier to reuse solutions that almost fit but not quite, tempting teams to bypass the essential domain exploration. Even with smarter tools, “even if a business looks similar to one you’ve seen before, it’s not the same. You can’t recycle your old models,” Colla insists. The heart of DDD—deep collaboration with domain experts and thoughtful modeling—remains as critical as ever, and that’s a human-centric activity.
What this means for you
Don’t jump in without understanding the domain: Slow down and truly grasp the domain’s events and logic before coding them.
Focus on principles before patterns: DDD isn’t a bag of technical tricks, but a way to deeply understand the business domain before coding solutions.
Align with the business: Technical architecture must reflect the domain and may require organizational buy-in and alignment (think Conway’s Law).
Beware the golden hammer: “Use DDD where it makes sense. You don’t need to apply it to your entire system,” Acerbis advises. Focus DDD efforts on the core domain (where the competitive advantage lies), and keep supporting domains simple.
Modular monolith first: Instead of rushing into microservices, untangle your “big ball of mud” into a well-structured modular monolith; often that’s enough.
No “Franken-events”: If you see an “and” in an event name, that’s a red flag: it likely violates the single-responsibility principle for events and will cause trouble when one part of the event changes and the other doesn’t.
Refactor in baby steps: Integrate refactoring tasks into regular feature work, supported by a strong safety net of tests, to balance improvement with delivery.
Never allow invalid data by design: A subtle but dangerous practice is allowing objects or aggregates to exist in an invalid state (for example, by using flags like isValid). “Your aggregates should always be in a valid state,” Acerbis emphasizes, meaning your constructors or factories should enforce invariants so you don’t have to constantly check validity later (see the sketch after this list).
Don’t split the system before it’s ready: Splitting into microservices too early introduces complexity instead of removing it. “Once you split your system into microservices, your architecture needs to support that split,” Acerbis warns.
“Simple” versus “easy” code: “Simple code is not the same as easy code. Simple code takes effort. Easy code is quick, but it’s hard to maintain,” says Acerbis. What feels “easy” in the moment (quick-and-dirty hacks, copy-paste coding, skipping tests) leads to a tangled mess; writing simple, clear code requires more thought and discipline, but it pays off in maintainability.
Evolve, don’t rewrite: Aim to evolve the system through continuous small changes rather than costly complete rewrites.
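Here is the always-valid aggregate sketch referenced above—a minimal C# illustration with hypothetical names, not code from the book. Invariants are enforced in the constructor and in every state transition, so no isValid flag is ever needed:

```csharp
using System;
using System.Collections.Generic;

// An always-valid aggregate: it cannot exist in an invalid state.
public class Order
{
    private readonly List<OrderLine> _lines;

    public Guid Id { get; }
    public bool Confirmed { get; private set; }

    public Order(Guid id, IEnumerable<OrderLine> lines)
    {
        if (lines is null)
            throw new ArgumentNullException(nameof(lines));

        _lines = new List<OrderLine>(lines);
        if (_lines.Count == 0)
            throw new ArgumentException("An order needs at least one line.", nameof(lines));

        Id = id;
    }

    // State transitions guard their own preconditions.
    public void Confirm()
    {
        if (Confirmed)
            throw new InvalidOperationException("Order is already confirmed.");
        Confirmed = true;
    }
}

public sealed record OrderLine(string Sku, int Quantity);
```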
If you found Colla and Acerbis’ insights useful, their book Domain-Driven Refactoring offers a deeper, hands-on perspective, showing with substantial code examples how to incrementally apply DDD principles in real systems under active development. Here is an excerpt that covers how to integrate events within a CQRS architecture. You’ll learn how commands and events work together to manage state changes, how asynchronous messaging supports decoupling and scalability, and what it takes to design systems that remain resilient even as complexity grows.
Expert Insight: Integrating Events with CQRS by Alessandro Colla and Alberto Acerbis
An Excerpt from “Chapter 7: Integrating Events with CQRS” in the book Domain-Driven Refactoring by Alessandro Colla and Alberto Acerbis (Packt, May 2025)
In this chapter, we will explore how to effectively integrate events into your system using the Command Query Responsibility Segregation (CQRS) pattern. As software architectures shift from monolithic designs to more modular, distributed systems, adopting event-driven communication becomes essential. This approach offers scalability, decoupling, and resilience, but also brings complexity and challenges such as eventual consistency, fault tolerance, and infrastructure management.
The primary goal of this chapter is to guide you through the implementation of event-driven mechanisms within the context of a CQRS architecture. By the end of this chapter, you will have a clear understanding of how events and commands operate in tandem to manage state changes, communicate between services, and optimize both the reading and writing of data.
[In this excerpt] you will learn about the following:
The benefits and trade-offs of transitioning from synchronous to asynchronous communication
How event-driven architectures improve system scalability and decoupling
The difference between commands (which trigger state changes) and events (which signal that something has happened); a minimal sketch of this distinction follows the list
How to apply proper message-handling patterns for both
The principles of CQRS, and why separating read and write models enhances performance and scalability
How to implement the separation of command and query responsibilities with a focus on read and write optimization
How to introduce a message broker for handling asynchronous communication
How to capture and replay the history of state changes with event sourcing
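To ground the command/event distinction before you read the excerpt, here is a minimal C# sketch; the names and the in-memory publisher are illustrative, not the book's code:

```csharp
using System;

// A command is an instruction in the imperative: the system may reject it.
public sealed record PlaceOrder(Guid OrderId, Guid CustomerId, decimal Total);

// An event is a fact in the past tense: it has already happened and cannot fail.
public sealed record OrderPlaced(Guid OrderId, DateTime PlacedAt);

public sealed class PlaceOrderHandler
{
    private readonly Action<object> _publish; // stand-in for a message broker

    public PlaceOrderHandler(Action<object> publish) => _publish = publish;

    public void Handle(PlaceOrder command)
    {
        if (command.Total <= 0)
            throw new InvalidOperationException("Command rejected: invalid total.");

        // ... persist the state change on the write side ...

        // Only after the change succeeds is the fact announced; a read-side
        // projection can consume it to update a query-optimized view.
        _publish(new OrderPlaced(command.OrderId, DateTime.UtcNow));
    }
}
```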
Domain-Driven Refactoring by Alessandro Colla and Alberto Acerbis (Packt, May 2025) is a practical guide to modernizing legacy systems using Domain-Driven Design. Through real-world C# examples, the authors show how to break down monoliths into modular architectures—whether evolving toward microservices or improving maintainability within a single deployable unit.
The book covers both strategic and tactical patterns, including bounded contexts, aggregates, and event-driven integration. You’ll learn how to align code with business domains, identify architectural pain points, and apply refactoring patterns incrementally—without halting delivery.
Use code DOMAIN20 for 20% off at packtpub.com — valid through June 16, 2025.
🛠️ Tool of the Week
Context Mapper 6.12.0 — Strategic DDD Refactoring, Visualized
Context Mapper is an open source modeling toolkit for strategic DDD, purpose-built to define and evolve bounded contexts, map interrelationships, and drive architectural refactorings. It offers a concise DSL for creating context maps and includes built-in transformations for modularizing monoliths, extracting services, and analyzing cohesion/coupling trade-offs.
The latest version (v6.12.0, released August 2024) continues its focus on reverse-engineering context maps from Spring Boot and Docker Compose projects, along with support for automated architectural refactorings—making it ideal for teams modernizing legacy systems or planning microservice transitions.
Highlights:
Iterative Refactoring: Apply “Architectural Refactorings” to improve modularity without rewriting everything.
Reverse Engineering: Extract bounded context candidates from existing codebases using the Context Map Discovery library.
Multi-Format Output: Export maps to Graphviz, PlantUML, MDSL, or Freemarker-based text formats.
IDE Integrations: Available as plugins for Eclipse and VS Code, or use it directly in Gitpod without local setup.
Whether you’re visualizing legacy architecture, decomposing a monolith, or planning new bounded contexts, Context Mapper makes strategic design tangible and testable.
📰 Tech Briefs
Architecture Refactoring Towards Service Reusability in the Context of Microservices by Daniel et al.: This paper proposes a catalog of architectural refactorings—Join API Operations with Heterogeneous Data, Introduce Metadata, and Extract Pluggable Processors—to improve service reusability in microservice architectures by reducing code duplication, decoupling data from processing logic, and supporting heterogeneous inputs, and validates these patterns through impact analysis on three real-world case studies.
DDD & LLMs - Eric Evans - DDD Europe 2024: In this keynote, Evans reflects on the transformative potential of large language models in software development, urging the community to embrace experimentation, learn through hands-on projects, and explore how DDD might evolve—or be challenged—in an era increasingly shaped by AI-assisted systems.
Domain Re-discovery Patterns for Legacy Code - Richard Groß - DDD Europe 2024: In this talk, Groß introduces domain rediscovery patterns for legacy systems—ranging from passive analysis techniques like mining repositories and activity logging to active refactoring patterns and visualization tools—all aimed at incrementally surfacing domain intent, guiding safe modernization without full rewrites, and avoiding hidden technical and organizational costs of starting from scratch.
Legacy Modernization meets GenAI by Ferri et al., Thoughtworks: This article discusses how GenAI can address core challenges in legacy system modernization—such as reverse engineering, capability mapping, and high-level system comprehension—arguing for a human-guided, evolutionary approach while showcasing Thoughtworks’ internal accelerator, CodeConcise, as one practical application of these ideas.
Refactor a monolith into microservices by the Google Cloud Architecture Center: This guide outlines how to incrementally refactor a monolithic application into microservices using DDD, bounded contexts, and asynchronous communication—emphasizing the Strangler Fig pattern, data and service decoupling strategies, and operational considerations like distributed transactions, service boundaries, and database splitting.
Anemic Domain Model: The Silent Drain on Your Software by Iterators: This article explains how Anemic Domain Models—data-centric designs lacking encapsulated business logic—undermine software maintainability, scalability, and clarity, and offers practical guidance, examples, and refactoring strategies (especially in Scala) for evolving toward Rich Domain Models aligned with DDD principles.
That’s all for today. Thank you for reading the fourth issue of Deep Engineering. We’re just getting started, and your feedback will help shape what comes next.
Take a moment to fill out this short survey—as a thank-you, we’ll add one Packt credit to your account, redeemable for any book of your choice.
We’ll be back next week with more expert-led content.
Stay awesome,
Divya Anne Selvaraj
Editor-in-Chief, Deep Engineering