
TDD vs. Design

It's quite likely that you've read David's recent write-up, 'TDD is dead. Long live testing.' (along with its follow-ups, 'Test-induced design damage' and 'Slow database test fallacy'), and the ensuing debate on Twitter.

TDD is quite certainly not dead, and while those were the words David chose at the time, I can understand why they've 'irked' as many in the community as they have. Many have made TDD their 'mantra' to the point that it forms a large part of their business, such as those who offer TDD training in a professional capacity, not to mention the followers of Pivotal Labs.

David's main point was that following TDD, as it is meant to be followed, leads to tests driving design. Going against this would technically not be TDD, hence his naming the article 'TDD is dead'.

A better name for what David is proposing would be Design Driven Testing and Development, or DDTD. The concept is simple: to get into the spirit of DDTD, just imagine that you're trying something out. You're building a feature, but it's more of a proof of concept.

Let's dive down to the ubiquitous 'Unit', considering a simple PORO (plain old Ruby object). Concerns here typically fall into the public vs. private API of the object, instance vs. class (singleton) methods, method visibility, and so on. One would also pay attention to exceptions and how they would be handled.
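To make those concerns concrete, here's a minimal sketch of such a PORO. The class and method names (`PriceCalculator`, `total`, and so on) are hypothetical, but the shape shows each concern: a narrow public API, a class-level convenience method, a private helper, and a custom exception for invalid input.

```ruby
class PriceCalculator
  # A custom exception gives callers something specific to rescue.
  class InvalidQuantityError < StandardError; end

  # Class (singleton) method: a convenience constructor.
  def self.for(unit_price)
    new(unit_price)
  end

  def initialize(unit_price)
    @unit_price = unit_price
  end

  # Public API: the only method callers should depend on.
  def total(quantity)
    raise InvalidQuantityError, "quantity must be positive" unless quantity.positive?

    apply_rounding(@unit_price * quantity)
  end

  private

  # Private helper: an implementation detail, free to change.
  def apply_rounding(amount)
    amount.round(2)
  end
end
```

Deciding which of these methods deserve direct tests is exactly the 'surface of the object' question raised below: the public `total` clearly does, while `apply_rounding` is only exercised indirectly.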

Sure, at the unit level testing is relatively simple, as one is encouraged to test in isolation. But consider the surface of an object: how do we determine the best form of API for something we've yet to build?

Quite simply put, at this initial stage you're trying to assemble a blueprint for an object without much to go on, and it may be hard to see the forest for the trees with more involved features.

It makes sense to try a design and backfill tests once you've managed to get things working; and while testing in isolation has its place, one also needs to cover orchestration.

In most larger projects one would see the use of everything from Service Objects to Decorators/Presenters, Dependency Injection, and much more. At this point it starts to make sense to shift the focus towards the system, taking a more outside-in approach (as in BDD) with tools such as Capybara, which even supports testing JavaScript through Selenium.
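As a small illustration of two of those patterns together, here is a sketch of a Service Object whose collaborator is injected through the constructor. The names (`UserSignup`, `deliver_welcome`, `FakeMailer`) are invented for this example, but the shape is typical: the orchestration lives in one object, and each collaborator can be swapped for a lightweight fake when testing.

```ruby
class UserSignup
  # Dependency Injection: the mailer is passed in rather than
  # hard-coded, so tests can substitute a fake.
  def initialize(mailer:)
    @mailer = mailer
  end

  # Orchestrates the steps of a signup and returns the new record.
  def call(email)
    user = { email: email, created_at: Time.now }
    @mailer.deliver_welcome(email)
    user
  end
end

# A test double standing in for a real mailer; it just records
# which emails it was asked to deliver.
class FakeMailer
  attr_reader :deliveries

  def initialize
    @deliveries = []
  end

  def deliver_welcome(email)
    @deliveries << email
  end
end
```

The same service would be exercised end-to-end by an outside-in Capybara feature spec; the unit-level fake simply keeps the orchestration logic testable without the full stack.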

I'm looking forward to seeing what this debate ultimately leads to, until then, happy hacking.

Addendum

My post here is not in blind agreement with all of DHH's sentiments, and I'd like to reflect a bit on his comments regarding Jim Weirich's talk 'Decoupling from Rails'. First, some words from Uncle Bob:

Last October Jim gave a talk named Decoupling from Rails in which he showed how to refactor a rails app in order to decouple business logic from rails framework code. The talk is spectacular. The hour you spend watching it will pay you back many times over.

I too saw this talk, and my reaction was, 'gosh, that's just incredible!'. I had also previously watched Uncle Bob's talk 'Architecture: The Lost Years', the low point of which was that he didn't end up showing how to achieve that sort of decoupling in Rails.

Personally, Jim's repository-based approach feels elegant.
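To sketch the general idea (this is not Jim's actual code, just the shape of the pattern): business logic talks to a repository object, and the persistence backing can be swapped, an in-memory store here, an ActiveRecord-backed implementation in production, without the domain code ever referencing Rails.

```ruby
# A plain domain value, with no knowledge of the framework.
Post = Struct.new(:id, :title, :published)

# An in-memory repository; a production variant would expose the
# same methods backed by ActiveRecord.
class InMemoryPostRepository
  def initialize
    @posts = {}
    @next_id = 0
  end

  def save(post)
    post.id ||= next_id
    @posts[post.id] = post
    post
  end

  def find(id)
    @posts.fetch(id)
  end

  def published
    @posts.values.select(&:published)
  end

  private

  def next_id
    @next_id += 1
  end
end
```

Because the repository is plain Ruby, the business logic built on top of it can be tested without loading Rails at all, which is precisely the decoupling (and the fast test suite) the talk is after.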

As far as DHH's stance on TDD goes, the only bit for which I'd raise my hand and say 'yup, been there, done that' is this: when I'm sketching out a feature (and I'm not entirely sure of its limits), I'll resort to getting the feature working and backfilling tests.

While this gets the job done, and the tests do help to drive out certain edge cases, the backfilling does mean that I have to be rather careful with my test suite. The trick is to go back to the feature and simply comment out the code that implemented it; my litmus test at this stage is that doing so must cause the tests to go red. After all, TDD is red-green-refactor.
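A toy illustration of that litmus test, with a hypothetical `slugify` feature: a backfilled check should pass against the real implementation and fail once the implementation is stubbed out, proving the test actually covers the code.

```ruby
# The implemented feature.
def slugify(title)
  title.downcase.strip.gsub(/\s+/, "-")
end

# A backfilled check, parameterised over the implementation so we
# can run it against both the real method and a stubbed-out one.
def slug_check_passes?(impl)
  impl.call("Hello World") == "hello-world"
end
```

Running `slug_check_passes?(method(:slugify))` should be green; swapping in a stub that returns nothing (simulating the commented-out body) should turn it red.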

I don't, however, practice this 'backfilling' style of DDTD all the time; mostly it's when I'm hacking on something to get a sense of what a first attempt would look like.