Test driven development has become the status quo at hot start-ups, made its way into university computer science curricula, and grown into a fairly significant movement (religion, even) among software developers.
However, across my various jobs and several personal projects, I have found that unit testing is often more trouble than it’s worth.
What brought me to this conclusion?
First, writing tests takes time – usually far too much, in my opinion. Tests should be simple and obvious to write, but somehow they almost never are. In my experience, test code feels like the worst kind of hacked code. You often have to get down and dirty with low-level knowledge of the language in order to fake things so that they appear to work for the method under test – whether by injecting fake implementations of classes or by replacing functions and methods at run-time. I find this counter-intuitive and far more complex than simply finding ways to make user testing easier. And making code easily testable, I have found, adds complexity of its own: logic ends up spread all over the place, and a simple process becomes difficult to follow when everything is only pulled together at run-time.
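To make the "injecting fake implementations" point concrete, here is a minimal sketch of the pattern (all names – `FakeMailer`, `send_welcome_email` – are hypothetical): to test code that sends email without touching a real mail server, a fake class is wired in where the production implementation would go.

```python
class FakeMailer:
    """Stands in for a real SMTP-backed mailer class in tests."""
    def __init__(self):
        self.sent = []

    def send(self, to, subject, body):
        # Record the call instead of talking to a mail server.
        self.sent.append((to, subject, body))


def send_welcome_email(mailer, user_email):
    # Code under test: depends on whatever "mailer" is injected.
    mailer.send(user_email, "Welcome!", "Thanks for signing up.")


# The test wires in the fake at run-time.
mailer = FakeMailer()
send_welcome_email(mailer, "alice@example.com")
assert mailer.sent == [("alice@example.com", "Welcome!", "Thanks for signing up.")]
```

Even in this tiny example, the production code has to accept its dependency as a parameter rather than just using it – which is exactly the kind of indirection the paragraph above complains about.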
Second, tests can take a while to run. In the time it takes to run them – even if it’s just one minute – I often find myself distracted. My ability to focus on the code at hand is greatly reduced as my brain context-switches: writing code, starting the test suite, getting distracted and reading something on reddit.com, remembering the tests were running, and finally getting back to the code. It’s said that the brain takes about 15 minutes to regain focus after a disruption; test driven development rarely allows for 15 minutes of focused coding before switching to running tests.
Finally, in a productive work environment – high-level languages, knowledgeable people, good logging and notification systems, and systems in place for quick and easy deployments to production – any bugs that do make it into the wild should be quickly reported and usually simple to fix. This is generally the case when changes are small. Bigger changes should be user tested, even when a solid set of automated tests is in place.
There are cases where I do find tests useful – generally when I catch myself accidentally breaking the same thing more than once. Usually, though, these tests are superficial: do all the URLs in my web app resolve correctly? Will a function run at all? Beyond that, automated testing is usually called for when sensitive money-handling code is being developed, or when public APIs need to be security hardened.
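The superficial tests described above can be sketched in a few lines. This assumes a toy routing table standing in for a real web framework (a real app would use its framework’s test client instead); the point is just to check that every registered URL resolves to something callable and that the handler runs without blowing up.

```python
# Hypothetical routing table: URL -> view function.
ROUTES = {
    "/": lambda: "home",
    "/about": lambda: "about",
    "/signup": lambda: "signup form",
}


def smoke_test_routes(routes):
    """Return the URLs whose handlers are missing or raise when called."""
    failures = []
    for url, handler in routes.items():
        try:
            handler()  # "will the function run?"
        except Exception:
            failures.append(url)
    return failures


# An empty failure list means every URL resolved and ran.
assert smoke_test_routes(ROUTES) == []
```

Tests like this are cheap to write and cheap to run, which is why they are the ones I actually keep around.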
Having worked with teams on big projects that relied only on user testing, and with teams that relied heavily on automated testing, I found productivity much higher on the user-tested code, while code quality (in terms of bugs – not code organization/structure) was close to the same. But of course your mileage may vary.