Why we TDD

TDD has been a hot-button topic for decades now. Some are passionate advocates; others are equally passionate critics. Few if any developers feel ambivalent about it. In this piece, I want to look specifically at why we practise it at Sodium Skies.
What matters
People read so much into this topic. I want to be clear about what I am and am not talking about.
TDD is the practice of writing one new failing test, making it pass, then optimising your code. For some value of 'optimising'. It's often expressed as the 'TDD cycle' – some variant of:
- Write a new failing test
- Make that test pass, without breaking any of the previous tests
- Refactor your code as necessary
- Repeat…
Or as TDD's succinct mantra puts it: Red / Green / Refactor.
Key to this is:
- We're writing the test before the production code
- We're only writing one new failing case at a time
That's it. That's what matters.
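To make the cycle concrete, here's a minimal sketch of a single pass through it. The language (Python with pytest) and the little function itself are invented purely for illustration; nothing about the point depends on them.

```python
# Red: write one new failing test for behaviour that doesn't exist yet.
# (Hypothetical example – a small slug-building helper.)
def test_slug_replaces_spaces_with_hyphens():
    assert make_slug("hello world") == "hello-world"


# Green: write just enough production code to make that test pass,
# without breaking any previously passing tests.
def make_slug(title):
    return title.replace(" ", "-")

# Refactor: with everything green, tidy the code as needed (rename,
# extract, simplify), re-running the suite after each change.
# Then repeat with the next failing test.
```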
What doesn't matter (at least for this discussion)
Detroit School vs London School
Yes, there are two flavours of TDD. For the purposes of this article, the distinction is something we can ignore.
Unit tests / Functional tests / System tests / etc
A lot of people expend a lot of energy debating the precise differences between these. I don't consider it terribly important. You write your test at the lowest level that will express the new functionality, because this is the most efficient (quickest) level to run the test at. Does it matter whether we call it a unit test or a functional test? Not really.
TDD and BDD
Following from the above, any technological difference between these evaporates. The key distinction that remains is that BDD embodies a conversation between developers and whoever is commissioning a feature from them, 'typically' a Product Manager or similar. Details about the use of Given / When / Then clauses are incidental features of a specific tool.
Is TDD really testing?
A bone of contention, particularly amongst testing professionals, is that because TDD 'tests' are written before the production code, they're not really tests of the code at all. Early BDD responded by characterising its outputs as executable specifications rather than tests.
In my view, the semantics of whether or not TDD 'tests' are tests is unimportant. The practice exists and it has impacts – positive, negative or neutral.
Note, however, that the assertion that tests must follow the thing being tested is not necessarily accurate. Many tests lead their subject: the Turing test comes to mind, formulated many decades before any kind of candidate AI existed.
Does TDD make manual testing irrelevant?
No.
What do we gain from TDD at Sodium Skies?
Note the emphasis here. It's what we gain from TDD. This is not an exhaustive list of TDD's benefits. Nor is it a strong claim that everyone should experience the same benefits.
Maintainability 1: Regression suite
Most famously, TDD gives you a regression test suite. This outcome is de-emphasised by a lot of TDD advocates (on the surface it doesn't differentiate TDD from writing tests after the fact) but its value shouldn't be underestimated. It provides a vital safety net for new feature development, and even more so for refactoring and re-engineering.
Our regression suite from TDD has pulled us from the fire time and again. We have re-engineered substantial portions of the Simple Data System codebase with confidence; the tests have alerted us to both gross and subtle behaviour changes.
Maintainability 2: Code comprehensibility
Coding with TDD produces shorter, more comprehensible functions than without. Is it impossible to produce code like this without TDD? Of course not. In practice, though, development without TDD tends to lead to functions that grow in length and complexity, while TDD tends to limit this. Looking at a codebase, you can literally see whether the team has been practising TDD.
We find TDD code more readable and comprehensible. It makes future maintenance and development simpler.
TDD eliminates many bugs
Working without TDD, you write your production code, then explore its inputs and outputs. When (not if) you find an edge case that doesn't meet your expectations or needs, you go back into the function to resolve it. Then you start exploring once more…
The process of TDD is in large part about exploring and defining those edge cases up front. You expect the function to break down with a negative input? You actively code a test for that and define its behaviour.
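For instance, here's a hypothetical sketch (Python and pytest again, with invented names): the expectation that a negative input is an error is written down as a test before the implementation, so it becomes part of the function's defined behaviour rather than a surprise found later.

```python
import pytest

# The edge case is specified first: negative quantities must be rejected.
def test_allocate_rejects_negative_quantity():
    with pytest.raises(ValueError):
        allocate_stock(quantity=-1)


# The implementation then has to honour that definition to go green.
def allocate_stock(quantity):
    if quantity < 0:
        raise ValueError("quantity must be non-negative")
    return quantity
```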
Entire categories of bugs, and the tedious process of finding them, simply vanish. This is one reason why…
TDD makes us faster
There's a low bar for this and a high bar.
The low bar is that TDD is faster than writing tests after the production code. TDD necessarily creates highly testable code – after all, the tests have already been written before the implementation. A set of functions written without TDD is much harder to test, and filling in those tests afterwards will take longer.
Of course, the more likely outcome in practice is that the team simply implements less test automation.
The high bar is that TDD is faster than no automated testing. Partly this is because of the safety net provided by the regression suite. Partly it is because TDD enables us to focus on much smaller pieces of functionality and let a larger solution evolve, rather than trying to keep the state of the larger solution in mind during development. Partly it's because of the up-front definition around the edges described above.
You may question whether TDD meets this high bar. But bear in mind that to sidestep it you have to forgo test automation. That's a very steep price.
What would we NOT recommend TDD for?
Clearly we're TDD enthusiasts at Sodium Skies. Let's generalise. Is there any domain for which we would not recommend TDD?
I was once challenged that TDD leads to "premature hill-climbing". It finds what the developer in question considered a "local maximum", i.e. a working code solution but not the best possible code. Well, it all depends on what "best possible" means.
TDD optimises for supportable, comprehensible code. That means it does not optimise for something else. It does not optimise for the fastest possible code. For most of us, maintainability is a much greater concern than raw processing speed. Are there times when our code runs too slow? Absolutely. And for those times, TDD provides the support that enables refactoring and re-engineering.
If you are truly coding in a domain where the priority is shaving off every possible CPU cycle, then TDD is not for you. Algorithm development. Low-level communications primitives. Maybe some realtime applications.
Most of us use the outputs of those domains in our work. That's certainly the case at Sodium Skies – any algorithms we use come from third party libraries. We value speed of development, maintainability, our regression suite and low bug rates. That's why we TDD.