1. Introduction: The Myth of Emergent Design
Test-Driven Development (TDD) promises a simple, seductive idea:
If you write tests first...
I think the core of the post is:
If you are not willing to break the tests, the application is not advancing.
Instead of writing tests first, I have a "write tests as early as possible" attitude.
If I'm still exploring the domain, broken tests or throwing away loads of tests are part of the progress.
Software is not like a house: you can replace load-bearing walls if you have a good strategy.
The core of the article is that tests are for testing, not for designing. It's like using a fork to eat soup - wrong tool for the job.
As long as tests are used to protect non-negotiable behaviour, they’re an excellent tool.
If they’re used to avoid or replace proper domain modelling, then they’re being misapplied.
You are right. But to be honest, I never had the idea of using TDD as a modeling tool.
First you model, then you write the tests that support the model. Then write the code.
When the model changes, write the tests for the updated model and write the code.
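To make that order concrete, here's a minimal sketch in Python (Money, its fields, and the currency rule are hypothetical, invented for illustration): the model comes first, and the test exists to support it.

```python
# A minimal sketch of "model first, then tests that support the model".
from dataclasses import dataclass

import pytest


@dataclass(frozen=True)
class Money:
    amount: int  # minor units (cents) to avoid float rounding
    currency: str

    def add(self, other: "Money") -> "Money":
        # Invariant taken from the model: only same-currency amounts add up.
        if self.currency != other.currency:
            raise ValueError("currency mismatch")
        return Money(self.amount + other.amount, self.currency)


# The test is written after the model and exists to support it.
def test_add_enforces_the_currency_invariant():
    assert Money(100, "EUR").add(Money(50, "EUR")) == Money(150, "EUR")
    with pytest.raises(ValueError):
        Money(100, "EUR").add(Money(100, "USD"))
```

When the model changes, this test changes with it; the test follows the model, not the other way around.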
Treating tests as if they were written in stone is the worst thing you can do to an application. That is why I highlighted that part as the core.
I don't think TDD is to blame for people not willing to put in the work.
Yes, true — the fundamental goal of TDD is to fulfill the behavior defined by the tests. That’s why TDD isn’t inherently a design driver: it simply tells you to make red → green.
The problem arises when “green” is treated as a definition of done. The continuous design of a rich domain model becomes invisible in the process and is all too easily skipped. That’s why the risk of treating TDD as a false idol is very real.
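As a hypothetical sketch of that risk (price_for and the discount rule are invented for illustration): the test below goes from red to green with the shallowest possible implementation, so "green" alone proves nothing about the domain design.

```python
# Red: the behaviour-level test comes first and fails
# (price_for doesn't exist yet).
def test_bulk_orders_get_a_ten_percent_discount():
    assert price_for(quantity=100, unit_price=10) == 900


# Green: the minimal code that makes the test pass. TDD is satisfied
# here, yet no domain concept (pricing policy, discount tiers) has
# actually been designed; "green" says nothing about the model.
def price_for(quantity: int, unit_price: int) -> int:
    total = quantity * unit_price
    return int(total * 0.9) if quantity >= 100 else total
```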
In my experience, rich domain models don’t suit TDD very well. Implementing the model is part of discovering and learning about it, and there’s often no single “behavior” to capture at the start — writing lots of tests for evolving domains just isn’t practical.
Isn't that another way of saying "don't change the tests"?
How can you write lots of tests for an evolving domain? You can only write tests for the parts you know.
What is the chance the base functionality of an evolving domain changes?
No, it’s not another way of saying “don’t change the tests.”
What I mean is that the initial implementation is often treated as complete the moment the tests turn green. The task becomes “fulfill the user story,” not “first understand the domain.” The ongoing design of a rich domain model becomes invisible, and that’s where the risk lies.
When I say a domain is evolving, I mean our understanding of the domain is evolving. Early on you’re not just coding entities; you’re discovering invariants, boundaries, policies, and relationships. Entities are domain objects, yes — but the domain model is much more than its entities, and those other parts tend to shift significantly as insights emerge.
Because of that, early tests rarely survive long in rich models. They’re based on the first, shallowest interpretation of the domain, so they end up fossilizing assumptions that later turn out to be wrong.
Trivial or low-level tests don’t help much here. They almost never catch real bugs, but they add friction and rework whenever the model evolves. The only tests that remain valuable are the ones that protect non-negotiable, high-level business behaviour.
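A small hypothetical contrast (Order and its rules are invented for illustration): the first test below merely restates the implementation and breaks on any internal change, while the second pins a non-negotiable business behaviour that survives refactoring.

```python
import pytest


class OrderCancelled(Exception):
    pass


class Order:
    def __init__(self):
        self.items = []
        self._cancelled = False

    def cancel(self):
        self._cancelled = True

    def add_item(self, sku: str, qty: int):
        if self._cancelled:
            raise OrderCancelled("cannot add items to a cancelled order")
        self.items.append((sku, qty))


# Low-value: restates the implementation and breaks on any internal
# rename or restructuring, while catching no real bug.
def test_order_stores_items():
    order = Order()
    order.add_item("book", 1)
    assert order.items == [("book", 1)]


# High-value: protects a non-negotiable business rule that must hold
# no matter how the model underneath evolves.
def test_cancelled_order_rejects_new_items():
    order = Order()
    order.cancel()
    with pytest.raises(OrderCancelled):
        order.add_item("book", 1)
```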
If by “tests” you mean the business-facing, domain-level ones, then yes — those stay stable. But classic TDD’s fine-grained, implementation-first testing simply doesn’t match how rich domain models evolve. It just multiplies the amount of work every time the domain deepens or changes.
I agree that using TDD to the letter is not a viable practice. But even the inventor of TDD says you don't have to do that.
That is true of every concept. You embrace the good parts and drop the rules that make you do silly things.
That looks to me like a developer problem, instead of a TDD problem.
Just because an architect created the domain model doesn't mean that I, as a developer, mindlessly follow what has been modeled.
The best way of working is by getting everyone on the same page. Sometimes a technical issue makes it impossible to follow the model; other times the architect has found a better model. It is working together to create the product that makes it good.
That is a given, whether you are working with models or not. But this brings us back to not being willing to do the work.
Like I mentioned before, I'm not a TDD user. But I do like the TDD idea of writing tests as early as possible. When I have the first piece of code that feels solid enough to build on, I start writing tests.
I think this is where we see things differently.
Writing tests for the sake of writing tests quickly turns into a “must-have,” and for trivial logic those tests will never catch a real bug — they only duplicate the implementation and later become friction. As long as a process (like strict TDD) implicitly pushes you toward lots of granular tests or high coverage, it guarantees extra work every time the domain shifts.
I’m absolutely in favour of putting in the work where it matters: deep domain behaviour, non-negotiable rules, regression around complex interactions. But if extra work can be avoided by not writing tests that will never find an actual defect, that seems like a much more practical trade-off. In fact, I’ve seen this approach result in fewer defects than a large test suite. A huge number of tests often gives a false sense of security rather than real safety.
For me, that’s why rich domain modelling and strict TDD don’t pair well: one is exploratory, while the other front-loads test obligations before the model is even understood.
I agree that isn't useful.
I'm writing tests to make sure that when I refactor, the existing behavior keeps working.
Of course, if the behavior changes, the tests have to change too, but then the tests make sure the extended behavior keeps working wherever it hasn't changed.
Most bugs appear because there is nothing monitoring the code changes. For me, that monitoring is the main goal of having tests. Additionally, tests provide developers with code examples, but that is just a bonus.
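A minimal sketch of that monitoring role, with hypothetical names: because the test pins observable behaviour rather than internals, the implementation underneath can be swapped out freely while the test keeps watching.

```python
import re


def slugify(title: str) -> str:
    # First implementation: plain string operations.
    return title.lower().replace(" ", "-")


def slugify_refactored(title: str) -> str:
    # A later rewrite; the behaviour contract stays the same.
    return re.sub(r"\s+", "-", title.strip()).lower()


# The test pins observable behaviour, not implementation details,
# so it monitors both versions of the code unchanged.
def test_slug_is_lowercase_and_hyphenated():
    assert slugify("Hello World") == "hello-world"
    assert slugify_refactored("Hello World") == "hello-world"
```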
TDD also doesn't require you to write tests for tests' sake; that is the interpretation of the people who use the concept. A literal reading of a theory does more harm than good.
Fundamentally so true.
TDD can simply verify that code modifications do not break covered existing behaviours; if the tests are written before the operational code, they can prove that the modification complies with the given specifications, and thus they serve as an element of proof that the job is "done". And that's already a great thing: nobody needs to "believe me", the tests are there for that.
Design emerges when the software engineer:
Or in short, the right design emerges when software engineers have a clear idea of the big picture instead of a little snippet.
You’ve hit the nail on the head. I completely agree — developers are most effective when they truly understand the business domain. Treat them like assembly-line workers, implementing snippets without seeing the bigger picture, and you end up with bloated applications, technical debt, and fragile designs. Rich domain modelling requires discovery and understanding, not just executing “red → green.” In my experience, TDD and large test suites often only add to the bloat. There’s no real shortcut for ensuring developers deeply understand the business domain.
Good information provided