June 22nd, 2010 | Published in Google Testing
By Philip Zembrod

In an earlier post on trying out TDD, I wrote about how my mindset while coding changed from fear of bugs in the new code to eager anticipation of seeing the new code run through and eventually pass the already written tests. Today I want to tell you about integrating components by writing integration tests first.
In a new project, we decided to follow TDD from the start. We happily created components, “testing feature after feature into existence” (a phrase I love; I picked it up from a colleague), reaching test coverage of around 90% from the start. When it came to integrating the components into a product, the obvious choice was to do that test-driven, too. So how did that go?
What I would have done traditionally was select a large enough set of components that, once integrated, would make up something I could play with. Since at least a minimal UI would be needed, plus something that does visible or useful things, preferably both, this something would likely have been largish, integrating quite a few components. While playing around and trying things out, I’d have ended up debugging, because of course it wouldn’t work on the first attempt. The not-too-small number of integrated components would have made tracking down the cause of failures hard, and anticipating all this while coding, I’d have met the well-known fearful mindset again, slowing me down, as I described in my initial TDD post.
How did TDI change this game for me? I realized: With my unit test toolbox that can test any single component, I can also test an integration of 2 components regardless of whether they have a UI or do something visible. That was the key to a truly incremental process of small steps.
First, write the test for 2 components, run it, and see it fail, to make sure the integration code I’m about to write is actually executed by the test. Then write that bit of code, run the test, and see it succeed. If it still fails, fix what’s broken and repeat. Finding what’s broken in this mode is usually easy because the increments are small. If the test failure doesn’t make obvious what’s wrong, adding a few verifications or some logging does the trick. A debugger should never be needed; automated tests are, after all, a bit like recorded debugging sessions that you can replay any time in the future.
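The post doesn’t name a language or framework, so here is a minimal sketch of one such step in Python with `unittest`. The components (`Tokenizer`, `Counter`), the glue class `WordCountPipeline`, and the test name are all hypothetical stand-ins; the point is only the shape of the step: a test that exercises exactly the wiring between two already-unit-tested components.

```python
import unittest

# Hypothetical components, each assumed to already have its own unit tests.
class Tokenizer:
    def tokenize(self, text):
        return text.split()

class Counter:
    def count(self, tokens):
        return len(tokens)

# The integration code this step brings into existence:
# wiring Tokenizer's output into Counter.
class WordCountPipeline:
    def __init__(self):
        self.tokenizer = Tokenizer()
        self.counter = Counter()

    def word_count(self, text):
        return self.counter.count(self.tokenizer.tokenize(text))

# Written first, run to see it fail (before WordCountPipeline exists),
# then run again to see it pass once the glue code is written.
class TwoComponentIntegrationTest(unittest.TestCase):
    def test_tokenizer_feeds_counter(self):
        pipeline = WordCountPipeline()
        self.assertEqual(pipeline.word_count("to be or not to be"), 6)
```

The test deliberately touches only the seam between the two components, which is what keeps failure diagnosis easy: if it breaks, the wiring is the prime suspect, since each component is covered by its own tests.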
Repeating this for the 3rd component added, then the 4th, and so on, I could watch my product grow, with new passing tests every day. Small steps, low risk in each, no fear of debugging; instead, continuous progress. Every day this roaring thrill: it works, it works! Something’s running that didn’t run yesterday. And tomorrow morning I’ll start another test that will pass by tomorrow evening, most likely, or maybe already at lunchtime. Imagine the motivation and acceleration this gives you. Better yet, try it out for yourself. I hope you’ll be as amazed and excited as I am.
What are the benefits? As with plain TDD, the most striking effect of TDI for me is the fun factor: the dread of debugging is replaced by eagerness to write the next test, so as to be able to write and run the next bit of code.
The process is also much more systematic. Once you have specified your expectations at each level of integration, you can verify them continuously from then on, just by running the tests. Compare that to how reproducible, thorough, and lasting your verification of the integration would be if you did it manually.
And if you wrote an integration test for every function or feature you cared about during integration, you can make sure each of them is in shape at any time just by running the tests. I suspect one can’t appreciate the level of confidence in the code that this creates until one has experienced it. I find it amazing. I dare you to try it yourself!
P.S. On top of this come all the other usual benefits of well-tested code that would probably be redundant to enumerate here, so I won’t. ;-)