I was about to make a mistake that would have cost me months of refactoring — and a friend’s question saved me. It was the early 2000s, but I vividly remember a project where everything changed for me. I was deep into implementing the system, and I had a design doubt that, although I expressed it in a rudimentary way like “how do I do this?”, I knew would condition the future of that software. I’d just devoured The Design Patterns Smalltalk Companion and Kent Beck’s Smalltalk Best Practice Patterns, and I was eager to apply the ideas in those books, ready to fit patterns like Singleton, Command, or Strategy into every corner of the design.

At the time, we used to consult colleagues and friends for design feedback. While I enthusiastically explained to one of them how one pattern would solve this problem and another would fit perfectly into that other part, this friend interrupted me with a phrase that stuck in my mind: “But why, instead of trying to fit patterns into your design, don’t you focus on properly modeling what your models should do? From that, you might see known patterns emerge, but it’s from modeling correctly that they arise.”

Everything seemed to happen in slow motion at that moment.

It wasn’t about forcing preconceived technical structures from the outside in, but about capturing the essence of the real world that the software aimed to represent and modeling it modestly from its parts toward the totality of the desired behavior.

This meant refraining from overly premature bottom-up solution building and instead adopting a modest bottom-up approach guided by a top-down understanding of the domain.

My confused ego felt the cold bucket of water, but my inner Aristotle enthusiastically approved the insight of guiding the design down that path.

This insight not only transformed my approach to programming but also showed me how software design can be more resilient over time, flexible, and maintainable.

And it’s obvious why.

Implementation details can change like a library update, but the essence, the reasons for something’s existence, is much harder to change. If the structure successfully captures that, it will be much more long-lived and resistant to the passage of time.

This principle becomes even more relevant now that AIs have inverted the cost of execution.

When code generation is cheap, design quality becomes your competitive advantage.

We’ve always been accustomed and adapted to the idea that executing a project is the expensive part. “Everything” in the industry is optimized for that. But with synthetic assistants, producing functional code is no longer the most costly part.

Reality vs. Necessary/Unnecessary Abstractions

At its core, software is nothing more than a simulation of the real world. A digital mirror that reflects processes, entities, and relationships in the domain we’re trying to solve. When we prioritize modeling this reality—the actors, their responsibilities, and inherent behaviors—we create systems that flow naturally, like a river following its course.

Every system is necessarily an abstraction of reality, one we say “works well” when it models that reality with high fidelity for what we wanted to manage. But if it has no bugs and its utilitarian purpose doesn’t distract us, we quickly forget that it’s a reflection of reality: that it’s still a structural and dynamic model of real-world processes and objects with their natural tendencies.

Invention vs. Discovery

Kent Beck captured it perfectly in his works: “Patterns are not invented; they are discovered.” They aren’t recipes we apply from a book to impress, but “accidental consequences of design” that appear organically when proper and complete modeling has been done. Forcing a pattern, like imposing a Singleton where it wasn’t entirely necessary, creates a restriction that will have consequences.

Speaking of Singleton, this pattern in particular doesn’t let you scale horizontally whatever you’re modeling. Almost everything you initially thought to solve with a Singleton was actually a way to move the project forward by referencing that object from different places in the codebase, because at the start of a project it isn’t so clear how those “different places” should be properly coupled. You solve it with a Singleton, but later you figure out it was better to instantiate the object normally at startup and pass it downstream to the modules that need it.
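As a minimal sketch of that shift (all names here are hypothetical, not from any real project): instead of a globally accessible Singleton, the object is built once at a composition root and handed explicitly to the modules that need it.

```python
# Hypothetical names throughout; this only illustrates the shape of the fix.

class Config:
    """A plain object: no global access point, no hidden coupling."""
    def __init__(self, db_url: str):
        self.db_url = db_url


class OrderRepository:
    # The dependency arrives explicitly, passed downstream from startup,
    # instead of being fetched from a Singleton.
    def __init__(self, config: Config):
        self.config = config

    def connection_target(self) -> str:
        return self.config.db_url


def main() -> str:
    # Composition root: instantiate once at startup, then pass it along.
    config = Config(db_url="postgres://localhost/shop")
    repo = OrderRepository(config)
    return repo.connection_target()
```

Nothing stops you from creating a second `Config` in a test or in a second deployment of the same process, which is exactly the horizontal flexibility the Singleton would have taken away.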

The sequence in which your mind found the best chain of abstractions was not linear, and that’s natural for us. But we need to be mindful now, because AIs have a different nature and might not share that restriction (they have other limitations of their own).

And this philosophy transcends languages: in Smalltalk, where everything is an object, it’s easy to fall into the temptation of patterns by default. But the same happens in less pure languages: you can perfectly well get every separation of concerns wrong in your Rust modules.

Even in generative AI environments, the principle is not only the same but its value is renewed and amplified.

Ask your design: “Does this reflect the real domain, or is it a forced abstraction?” By doing so, you’ll see how patterns emerge on their own, making your design more future-proof—capable of evolving without breaking—and flexible to adapt to new requirements without massive refactorings.

Contrast this with a hacked design: forced patterns create high coupling and scattered logic, confusing even advanced AIs, and leaving your project slower and more complicated to maintain. In a thought experiment, imagine asking an AI to add a feature to an e-commerce system. If the domain is well-modeled—products that “know” how to promote themselves, carts that calculate totals on their own—the AI generates precise and maintainable code. If not, it ends up patching hacks, perpetuating technical debt even while a few scattered unit tests pass and tell you everything’s fine. Maybe in that commit, but in the next feature you add, you’ll feel bogged down.

Barriers

The pressure of deadlines pushes us toward quick solutions: “I couple this here from another module that has nothing to do with it and solve it today.” Not all tech debt is bad, but keeping it under control requires being attentive to its consequences.

Another barrier is technical inertia: in large teams, it’s tempting to copy existing structures, even if they don’t model the current domain. I do it this way because they solved it that way over there, “it’s safe.” And with AIs, some think “the AI will fix it,” but poor modeling only amplifies problems in automatic generations.

A particularly subtle barrier is how our ability to solve technical problems—those ingenious hacks that save the day with “clever” implementations—can negatively condition high-level design. It’s easy to fall into the trap of thinking in bottom-up solutions, where low-level code dictates the architecture, instead of top-down, where the domain guides the design. This leads to rigid systems, where the technical “how” eclipses the “what” of the real domain. The details, convenient at first, later distance you from reality.

Techniques to avoid falling into this problem

Explicit Separation of Phases: Divide the process into clear stages. First, dedicate exclusive time to modeling the domain without touching code: use whiteboards, diagrams, or even conversations with stakeholders to capture real entities and flows. From those flows, you can get an idea of which methods are essential, and thus a notion of what would result in a minimal, elegant, and maintainable API. Only then, implement. This defends you from low-level details invading the high level. For AIs, ask them to generate domain models based on natural descriptions before code, and have them unblock you or give you more than one approach to solving the problem.
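One way to capture the output of that modeling phase before any implementation exists is a bare interface sketch. In Python that could be a `typing.Protocol` (the domain and method names below are hypothetical, lifted straight from the verbs stakeholders would use):

```python
from typing import Protocol

class Order(Protocol):
    """Model-first sketch: only the essential messages from the real flow,
    no implementation yet. Each method name came from the whiteboard."""

    def add_line(self, sku: str, quantity: int) -> None: ...
    def confirm(self) -> None: ...
    def total(self) -> float: ...
```

The value is in what is absent: no database fields, no framework base classes, no premature helpers. The implementation phase then has to fit this minimal API rather than the other way around.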

Competing Design Ideation: Before committing to an implementation, create an ideation phase where multiple design approaches compete for approval. This used to be costly and laborious — which explains the natural resistance to doing it — but it’s become brutally cheap now. Generate at least three different ways to model the solution to the same problem, ranging from minimalistic to comprehensive, each with its own trade-offs and domain alignment. Make these design ideas explicitly compete by evaluating them against domain fidelity, maintainability, and future flexibility. They will be fantastic conversation starters between engineers. It prevents premature commitment to a single approach that might be a forced pattern in disguise. The winning design should emerge from how well it captures reality and prepares you for the future, not from technical convenience. With AIs, ask them to generate multiple architectural approaches for the same problem, then contrast them. Have them argue for and against each approach from a domain modeling perspective. This competitive ideation surfaces hidden assumptions and forces you to justify design choices based on reality before patterns.

Test-Driven Development with Focus on Behaviors (BDD): Write tests that describe domain behaviors in natural language (using tools like Cucumber or SpecFlow). This forces you to think about “what the system does” from the user or domain perspective, not “how I implement it.” Your low-level skills are used to pass the tests, but they don’t dictate the design. AIs can generate these initial tests, reinforcing the high-level focus. Try having an AI make you a PRD and ask it to generate user stories and those tests. They usually do a great job with that part.
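Even without Cucumber or SpecFlow, plain tests can carry the same behavior-first discipline if their names and assertions speak the domain’s language. A hedged sketch (the `Cart` here is a throwaway illustration, implemented only to make the behaviors pass):

```python
class Cart:
    """Minimal implementation, written after the behaviors below."""
    def __init__(self):
        self._prices = []

    def add(self, price: float) -> None:
        self._prices.append(price)

    def total(self) -> float:
        return round(sum(self._prices), 2)


# The tests describe WHAT the system does, in the domain's words,
# not HOW it is implemented.
def test_an_empty_cart_owes_nothing():
    assert Cart().total() == 0.0

def test_a_cart_totals_what_the_customer_put_in_it():
    cart = Cart()
    cart.add(19.99)
    cart.add(5.01)
    assert cart.total() == 25.0
```

Notice the test names read like requirements; your low-level skills go into passing them, not into shaping them.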

Analogies and Real-World Metaphors: Before designing, relate the problem to a non-technical scenario. For example, compare an order system to a real restaurant: Who handles what? This elevates thinking above code-specific hacks. It makes you think about the “universality” of a certain process, and your mind will be more eager to capture its essence. Ask AIs for metaphors, have them give you several to expand perspectives. It doesn’t matter if not all fit; the technique is that by reading them, you’ll find the one that fits best faster because you have better context than the AIs.

Retrospectives and Design Journaling: At the end of each iteration, ask yourself: “Does my design reflect the domain or my technical preferences?” Keep a journal of decisions to detect conditioning patterns. In my repos, I even use a DECISIONS.md for this. In teams with AIs, use prompts so they analyze your code and point out if the low level dominates.

Exercises in Refactoring Toward Domain: Take existing code with hacks and refactor it focusing only on moving behaviors to domain-correct actors, ignoring premature optimizations. Do it as a regular practice. Ask an AI to propose refactorings, comparing versions to internalize the insight. Try telling it that you have a preference for “One-way dependency discipline.” It will definitely propose things that improve Separation of Concerns and how modules depend on each other.
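A before/after sketch of one such exercise, moving a behavior to the actor that owns it (the invoice example and all names are invented for illustration):

```python
# Before: a free-floating helper reaching into raw data. The "how"
# (a dict with a magic key) leaks into every caller.
def is_overdue_hack(invoice_dict: dict, today: int) -> bool:
    return invoice_dict["due_day"] < today


# After: the Invoice knows its own rules. Callers depend one way, on
# the domain object, never on its internal representation.
class Invoice:
    def __init__(self, due_day: int):
        self.due_day = due_day

    def is_overdue(self, today: int) -> bool:
        return self.due_day < today
```

Both versions compute the same answer; the refactoring changes who owns the knowledge, which is exactly what “one-way dependency discipline” is about.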

Interdisciplinary Collaboration: Involve non-programmers (like domain experts) in modeling sessions. Their inputs keep the focus on reality, diluting technical bias. If it’s hard to schedule, customized AIs can simulate these experts via role-playing in prompts.

These techniques aren’t just tricks; they’re tools to cultivate a mindset where the domain rules and technical details serve, not the other way around. By applying them, you not only avoid common traps but also prepare your systems for a future where humans and AIs co-create without friction, with designs that evolve gracefully instead of breaking under the weight of accumulated hacks. It’s like going from being a reactive coder to a visionary architect, where every decision honors the reality of the problem.

The understandability of your codebase becomes a feature.

A good design is never an accident.

It’s always deeply intentional.

And often, the best one is discovered after removing everything that’s not essential.

Does your inner Aristotle approve?

Before writing a single line of code, start your next feature by asking:

  • “What is the real-world process I’m modeling?”
  • “What are the restrictions I’ll be imposing with this necessarily incomplete model of the real world I’m coding?”