Survival techniques for acceptance tests of web applications (Part 1)
April 28th, 2009 | Published in Google Testing
Automated tests are often touted as a solution for software testing, and effective automated tests certainly have their place and can deliver vital confidence in the software being tested. However, many tests fail to deliver value, either now or in the future, and there are plenty of projects whose automated tests are broken, forlorn, and unloved – cluttering up projects and wasting time and resources.
How to create acceptance tests
Tests need to do something useful to survive. Automated tests should help the team to make the next move by providing justified confidence a bug has been fixed, confirming refactored code still works as intended, or demonstrating that new features have been successfully implemented. (See Alistair Cockburn’s discussion, referenced in the “Further Information” section below, on intermediate work products – do they remind? inform? or inspire?) There should be sufficient tests – neither more nor fewer: more increase the support burden, fewer leave us open to unpleasant surprises in production.
Acceptance tests must meet the needs of several groups, including the users and the developers. Long-lived tests must be written in the language of each group, using terms users will recognize and a programming language and style in which the developers are competent.
We create tests by modelling the purpose of a test from the user’s perspective: send a message, order a book, etc. Each test is decomposed into individual actions: to send a message, a user must be logged in, select the compose message icon, specify one or more recipients, type a minimum of either a subject or a message, then select Send. From this list of actions, create a skeleton in the programming language of choice and create a method name that reflects each action. Show these to both the users and programmers and ask them to tell you what they think each step represents. Now is a great time to refine the names and decide which methods are appropriate: before you’ve invested too much time in the work. If you wait until later, your natural protective instincts will make it harder for you to accept good suggestions and make useful changes.
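The send-a-message decomposition above might start life as a skeleton like the following sketch. All the class and method names here are illustrative, not from a real project; the point is that each method reflects one user action and the stubs can be reviewed with users and programmers before any automation code is written.

```java
// Skeleton for the "send a message" test, decomposed into the actions
// listed above. The bodies are stubs: each one simply records that the
// action happened, to be replaced later with browser-automation code.
public class SendMessageTest {

    // Records each action so the skeleton can be walked through and
    // discussed before any real implementation effort is invested.
    final java.util.List<String> actions = new java.util.ArrayList<String>();

    void logIn(String account, String password) { actions.add("logIn"); }
    void selectComposeMessageIcon() { actions.add("selectComposeMessageIcon"); }
    void specifyRecipients(String... recipients) { actions.add("specifyRecipients"); }
    void typeSubject(String subject) { actions.add("typeSubject"); }
    void selectSend() { actions.add("selectSend"); }

    public static void main(String[] args) {
        SendMessageTest test = new SendMessageTest();
        test.logIn("example_user", "secret");
        test.selectComposeMessageIcon();
        test.specifyRecipients("friend@example.com");
        test.typeSubject("Hello");
        test.selectSend();
        System.out.println("actions recorded: " + test.actions);
    }
}
```

Showing a skeleton like this to both groups costs little and invites renaming suggestions while they are still cheap to accept.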
For each method, we need to work out how to implement it in code. How could an automated test select the compose message icon? Do alternative ways exist? An understanding of HTML, CSS, and JavaScript will help you if you plan to use browser automation tools. All the visible elements of a web application are reflected in the HTML page’s Document Object Model (DOM), and they can be addressed in various ways: the directions from the root of the document to the element using XPath; unique identifiers; or characteristics possessed by the elements, such as class names, attributes, or link text. Some examples of these addressing options are shown in the Navigation Options illustration below. (Notes: navigation using XPath is much slower than using IDs; and IDs should be unique.)
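To make the addressing options concrete, here is a small, self-contained sketch that evaluates the different styles of locator against a fragment of markup using the JDK’s built-in XPath support. The markup and element names are invented for illustration; in a real test, a browser automation tool would evaluate similar locators against the live page.

```java
import java.io.StringReader;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.xml.sax.InputSource;

public class NavigationOptions {
    // An invented fragment of markup standing in for part of a web page.
    static final String PAGE =
        "<body>" +
        "<a id='compose' class='icon compose' href='/compose'>Compose</a>" +
        "<a id='inbox' class='icon inbox' href='/inbox'>Inbox</a>" +
        "</body>";

    // Evaluates an XPath expression against the fragment and returns the
    // matched element's text, or "" when nothing matches.
    static String find(String xpath) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
            .newDocumentBuilder()
            .parse(new InputSource(new StringReader(PAGE)));
        return XPathFactory.newInstance().newXPath().evaluate(xpath, doc);
    }

    public static void main(String[] args) throws Exception {
        // Directions from the root of the document (brittle if the layout changes).
        System.out.println(find("/body/a[1]"));
        // A unique identifier (fast, and stable while IDs are kept unique).
        System.out.println(find("//a[@id='compose']"));
        // A characteristic of the element, here part of its class attribute.
        System.out.println(find("//a[contains(@class,'compose')]"));
        // The link text the user actually sees.
        System.out.println(find("//a[text()='Compose']"));
    }
}
```

All four locators find the same element; choosing between them is a trade-off between speed and resilience to page changes.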
Some actions can be initiated using JavaScript running in the browser. For devices such as the iPhone, changes in orientation when the phone is rotated are triggered this way (see Handling Orientation Events in the Safari Reference Library).
Typically, automated web application tests use JavaScript, either directly or indirectly, to interact with the web application being tested.
Utilities such as recording tools can help reduce the effort required to discover how to interact with the web application. The open-source test automation tool Selenium (http://seleniumhq.org/) includes Selenium IDE, a simple record-and-playback tool that runs in the Firefox browser. Recorded scripts can help bootstrap your automated tests. However, don’t be tempted to treat the recorded scripts as automated tests: they’re unlikely to be useful for long. Instead, plan to design and implement your test code properly, using good software design techniques. Read on to learn how to use the PageObject design pattern to design your test code.
Two of the tools I find most useful are Firebug (http://getfirebug.com/), a Swiss Army knife for the Web Browser, and Wireshark (http://www.wireshark.org/), a network protocol analysis tool with a distinguished pedigree. Firebug is extremely useful when learning how to interact with a web application or debug mysterious problems with your tests when they seem to be misbehaving. I encourage you to persist when learning to use these tools – it took me a while to get used to their foibles, but I wouldn’t be without either of them these days.
Homogeneous languages and tools
Several years of experience across multiple project teams have taught us that tests are more likely to survive when they’re familiar and close to the developers. Use their programming language, put the tests in their codebase, use their test automation framework (and even their operating system). We need to reduce the effort of maintaining the tests to a minimum. Get the developers to review the automated tests (whether they write them or you do) and actively involve them when designing and implementing the tests.
Typically, our acceptance tests use the xUnit framework; for example, JUnit for Java projects (see http://www.junit.org/). A good source of inspiration for creating effective tests is Gerard Meszaros’ work (see http://www.xunitpatterns.com).
By using effective test designs, we can make tests easier to implement and maintain. The initial investment is minor compared to the benefits. One of my favourite designs is called Page Objects (see PageObjects on the Google Code site). A PageObject represents part or all of a page in a web application – something a user would interact with. A PageObject provides services to your test automation scripts and encapsulates the nitty-gritty details of how these services are performed. By encapsulating the nitty-gritty stuff, many changes to the web application, such as the reordering or renaming of elements, can be reflected in one place in your tests. A well-designed PageObject separates the ‘what’ from the ‘how’.
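A minimal sketch of the PageObject pattern follows. The `Browser` interface is a stand-in for whatever automation API you use (WebDriver, for example); its methods, and all the locators, are illustrative rather than a real API.

```java
// A sketch of the PageObject pattern: page objects offer the *what*
// (services a user would recognise) and encapsulate the *how*
// (locators and interaction details).
public class PageObjectSketch {

    // Hypothetical stand-in for a browser automation API.
    interface Browser {
        void type(String locator, String text);
        void click(String locator);
        boolean isVisible(String locator);
    }

    static class LoginPage {
        private final Browser browser;

        LoginPage(Browser browser) { this.browser = browser; }

        // One user-visible service, one method. If the page's element
        // names change, only these locators need updating - not every test.
        InboxPage logInAs(String account, String password) {
            browser.type("id=username", account);
            browser.type("id=password", password);
            browser.click("id=enter");
            return new InboxPage(browser);
        }
    }

    static class InboxPage {
        private final Browser browser;

        InboxPage(Browser browser) { this.browser = browser; }

        boolean isDisplayed() { return browser.isVisible("id=inbox"); }
    }

    public static void main(String[] args) {
        // A fake browser so the sketch runs on its own.
        Browser fake = new Browser() {
            public void type(String locator, String text) { System.out.println("type " + locator); }
            public void click(String locator) { System.out.println("click " + locator); }
            public boolean isVisible(String locator) { return true; }
        };
        InboxPage inbox = new LoginPage(fake).logInAs("example_user", "secret");
        System.out.println("inbox displayed: " + inbox.isDisplayed());
    }
}
```

Note that `logInAs` returns the next page object, so a test reads as a chain of user-level services while every locator stays hidden inside the page objects.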
Another effective test design is based on three simple words: ‘given’, ‘when’, and ‘then’. As a trio they reflect the essential elements of many tests: given various preconditions and expectations, when such-and-such happens, then I expect a certain result.
// Given I have a valid user account and am at the login page,
// When I enter the account details and select the Enter button,
// Then I expect the inbox to be displayed with the most recent email selected.
The previous code consists of three programming comments that are easy for users to read. The actual programming code is entered immediately below each comment. Programming concepts such as literate programming are intended to make the code almost as readable as the textual comments.
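Fleshed out, those comments and the code beneath them might look like the following. The application here is a trivial in-memory stand-in, invented so the example runs on its own; in a real acceptance test these steps would drive the browser instead.

```java
// A self-contained illustration of the given/when/then shape.
public class LoginGivenWhenThen {

    public static void main(String[] args) {
        // Given I have a valid user account and am at the login page,
        FakeWebMail app = new FakeWebMail();
        app.createAccount("alice", "secret");
        app.openLoginPage();

        // When I enter the account details and select the Enter button,
        app.enterCredentials("alice", "secret");
        app.selectEnter();

        // Then I expect the inbox to be displayed.
        System.out.println("inbox displayed: " + app.isInboxDisplayed());
    }

    // Minimal stand-in for the application under test.
    static class FakeWebMail {
        private String account, password, page = "home";
        private String typedAccount, typedPassword;

        void createAccount(String a, String p) { account = a; password = p; }
        void openLoginPage() { page = "login"; }
        void enterCredentials(String a, String p) { typedAccount = a; typedPassword = p; }
        void selectEnter() {
            if ("login".equals(page) && account != null
                    && account.equals(typedAccount)
                    && password.equals(typedPassword)) {
                page = "inbox";
            }
        }
        boolean isInboxDisplayed() { return "inbox".equals(page); }
    }
}
```

Keeping the given/when/then comments above the corresponding code preserves a narrative that users can read even when the code beneath changes.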
Isolate things that change from those that don’t. For example, separate user account data from your test code. The separation makes changes easier, faster, and safer to implement, compared to making updates in the code for each test.
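One simple way to separate account data from test code in Java is a properties file read at run time. In this sketch the property names are invented, and the file content is held in a string only to keep the example self-contained; in practice it would live in a separate file (for example, accounts.properties) alongside, but not inside, the test code.

```java
import java.io.StringReader;
import java.util.Properties;

public class TestDataSeparation {
    // Stand-in for the contents of an external accounts.properties file.
    static final String ACCOUNT_DATA =
        "account.name=example_user\n" +
        "account.password=not-a-real-password\n";

    // Loads the account details; swapping the data source later (a file,
    // a database) changes this one method, not the tests.
    static Properties loadAccountData() throws Exception {
        Properties p = new Properties();
        p.load(new StringReader(ACCOUNT_DATA));
        return p;
    }

    public static void main(String[] args) throws Exception {
        Properties accounts = loadAccountData();
        // Tests ask for data by name; updating the account details now
        // means editing data, not editing every test.
        System.out.println("logging in as " + accounts.getProperty("account.name"));
    }
}
```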
Gaining skills
Writing automated tests may be easy for some of you. In my case, I started with some simple example tests and tweaked them to suit my needs. I received boosts from working with more experienced practitioners who were able to correct my course and educate me in how to use various tools effectively. I recommend pairing with one of the developers of the software to be tested when you face a new testing requirement. Their intimate knowledge of the code and your understanding of the tests can form a potent combination. For instance, by working with one of the developers on a recent project, we were able to implement bi-directional injection of JSON messages and capture the responses from the server to test a key interaction between the server and client that was causing problems in production.
I encourage you to try out examples, tweak them, experiment, and plunge into writing your first automated tests. Learn about AJAX – it underpins many modern web applications. And learn from more experienced practitioners – I’ve added links at the end of the article to some of the people I respect who write great acceptance tests, including Antony Marcano and Alan Richardson.
Part 2 of this series helps you create more specialized tests (for example, to emulate mobile web browsers) and gives advice on how to increase the utility and effectiveness of your tests.
Further Information
Intermediate work products
‘The intermediate work products have only one real purpose in life: “to help the team make their next move”.’ ‘An intermediate work product might be measured for “sufficiency” – was it sufficient to remind, inform or inspire? Any amount of effort or detail beyond sufficiency is extraneous to the purpose of the team and the purpose of the work product.’ – Cooperative game manifesto for software development (Alistair Cockburn)
Cooperative game manifesto for software development at http://alistair.cockburn.us.
JUnit info
JUnit in Action, available from Manning Publications Co. (2nd edition, early access or 1st edition)
JUnit Recipes, by J. B. Rainsberger with Scott Stirling, available from Manning Publications Co.
Firebug info
Introduction to Firebug on Estelle Weyl’s blog, "CSS, JavaScript and XHTML Explained"
Firebug tutorials in the Firebug Archive at Michael Sync's blog
Fun with Firebug Tutorial on the Google Code site
WebDriver info
WebDriver on the Google Code site
AJAX resources
Bulletproof Ajax – an incredibly good book on how to write good AJAX code. It starts with the basics and builds reliably and clearly from good foundations. The DOM manipulation code is relevant for implementing your acceptance tests in tools such as WebDriver.
Building a web site with Ajax – again, a book that starts simple and builds a simple application step by step.
Acceptance tests are more A+S than T+G (Antony Marcano, in his blog at testingReflections.com)
A+S => Activities + Specific
T+G => Tasks + General
Alan Richardson: any and everything. For example, see:
A generalised model for User Acceptance Testing and
A little abstraction when testing software with Selenium-RC and Java, both at the Evil Tester blog