But it works on my machine!
September 25th, 2007 | Published in Google Testing
We thought you might be interested in another article from our internal monthly testing newsletter called CODE GREEN... Originally titled: "Opinion: But it works on my machine!"
We hear so much about "make your tests small and fast." While that advice is important for quick CL verification, system level testing is important too, and it doesn't get enough air time.
You write cool features. You write lots and lots of unit tests to make sure your features work. You make sure the unit tests run as part of your project's continuous build. Yet when the QA engineer tries out a few user scenarios, she finds many defects. She logs them as bugs. You try to reproduce them, but ... you can't!
Sound familiar? It might to a lot of you who deal with complex systems that touch many other dependent systems. Want to test a simple service that just talks to a database? Easy: write a few unit tests with a mocked-out database, something like the sketch below. Want to test a service that talks to an authentication service to manage user accounts, a risk engine, a biller, and a database? Now that's a different story!
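Here's a minimal sketch of that mocked-out-database style of unit test in Python. The AccountService class, its lookup_email method, and the fetch_user call are hypothetical stand-ins, not code from any real service:

```python
import unittest
from unittest.mock import Mock


class AccountService:
    """A toy service that delegates its storage to an injected database."""

    def __init__(self, db):
        self.db = db

    def lookup_email(self, user_id):
        # In production this would issue a real database query.
        return self.db.fetch_user(user_id)["email"]


class AccountServiceTest(unittest.TestCase):
    def test_lookup_email_returns_stored_address(self):
        # Mock out the database entirely: no server, no schema, no I/O.
        fake_db = Mock()
        fake_db.fetch_user.return_value = {"email": "alice@example.com"}

        service = AccountService(fake_db)

        self.assertEqual("alice@example.com", service.lookup_email(42))
        fake_db.fetch_user.assert_called_once_with(42)


if __name__ == "__main__":
    unittest.main()
```

Tests like this run in milliseconds, which is exactly why they're great for quick CL verification, and exactly why they can't catch deployment-level surprises.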
So what are system level tests again?
System level tests to the rescue. They're also referred to as integration tests, scenario tests, and end-to-end tests. No matter what they're called, these tests are a vital part of any test strategy. They wait for screen responses, they punch in HTML form fields, they click on buttons and links, and they verify text on the UI (sometimes in different languages and locales). Heck, sometimes they even poke open inboxes and verify email content!
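To make that concrete, here's a minimal sketch of such a test using Selenium WebDriver in Python (one common tool for this kind of testing, not one the article names). The URL, element IDs, and confirmation text are all hypothetical placeholders for whatever your application actually serves:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()  # drives a real browser, end to end
try:
    # Load the page just like a user would.
    driver.get("https://signup.example.com")

    # Punch in the HTML form fields and click the button.
    driver.find_element(By.ID, "username").send_keys("testuser")
    driver.find_element(By.ID, "email").send_keys("testuser@example.com")
    driver.find_element(By.ID, "submit").click()

    # Verify the text the user actually sees on the UI.
    body = driver.find_element(By.TAG_NAME, "body").text
    assert "Thanks for signing up" in body
finally:
    driver.quit()
```

Notice what gets exercised here that no unit test touches: the deployed server, the rendered page, and the browser itself.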
But I have a gazillion unit tests and I don't need system level tests!
Sure you do. Unit tests are useful for quickly verifying that your latest code changes haven't caused your existing code to regress. They're an invaluable part of the agile developer's toolkit. But when code is finally packaged and deployed, it can look and behave very differently. And no amount of unit tests can tell you whether that awesome UI feature you designed works the way it was intended, or whether one of the services your feature depends on is broken or misbehaving. If you think of a "testing diet," system level tests are like carbohydrates: a crucial part of your diet, but only in the right amount!
System level tests provide a sense of comfort that everything works the way it should when it lands in the customer's hands. In short, they're the closest thing to simulating your customers. And that makes them pretty darn valuable.
Wait a minute -- how stable are these tests?
Very good question. It should be pretty obvious that if you test a full-blown deployment of any large, complex system, you're going to run into some stability issues, especially since such systems consist of components that talk to many other components, sometimes asynchronously. And real-world systems aren't perfect. Sometimes the database doesn't respond at all, sometimes the web server responds a few seconds late, and sometimes a simple confirmation message takes forever to reach an email inbox!
Automated system level tests are sensitive to such issues and sometimes report false failures. The key is using them effectively: quickly identify and fix false failures, and pair them with the right set of small, fast tests.
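One common way to cut down on those false failures is to replace fixed sleeps with explicit waits that poll until a condition holds or a timeout expires, so a slow but healthy system doesn't get reported as broken. Here's a sketch using Selenium's WebDriverWait against the same hypothetical page as above; the element ID, expected text, and timeout are illustrative assumptions:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

driver = webdriver.Chrome()
try:
    driver.get("https://signup.example.com")

    # Poll for up to 30 seconds for the confirmation to appear, instead of
    # reporting a false failure the moment a slow server responds late.
    confirmation = WebDriverWait(driver, 30).until(
        EC.visibility_of_element_located((By.ID, "confirmation"))
    )
    assert "Account created" in confirmation.text
finally:
    driver.quit()
```

The test still fails when something is genuinely broken; it just stops failing because the real world took a few extra seconds.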