An Ingredients List for Testing – Part Three
September 3rd, 2010 | Published in Google Testing
By James Whittaker
Possessing a bill of materials means that we understand the overall size of the testing problem. Unfortunately, the size of most testing problems far outstrips any reasonable level of effort to solve them. And not all of the testing surface is equally important. Certain features simply require more testing than others, so some prioritization must take place. What components must get tested? What features simply cannot fail? What features make up the user scenarios that simply must work?
In our experience, it is the unfortunate case that no one really agrees on the answers to these questions. Talk to product planners and you may get a different assessment than if you talk to developers, salespeople, or executive visionaries. Even users may differ among themselves. It falls to testers to act as user advocates and weigh all of these concerns in deciding how testing resources will be distributed across the entire testing surface.
The term commonly used for this practice is risk analysis, and at Google we take information from all of the project's stakeholders to come up with overall numerical risk scores for each feature. How do we get all the stakeholders involved? That's actually the easy part. All you need to do is assign numbers and then step back and have everyone tell you how wrong you are. We've found being visibly wrong is the best way to get people involved in the hopes they can influence getting the numbers right! Right now we are collecting this information in spreadsheets. By the time GTAC rolls around, the tool we are using for this should be in a demonstrable form.
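The post doesn't describe the scoring formula behind those spreadsheets, but a common risk-analysis approach is to have each stakeholder rate every feature on failure likelihood and failure impact, then combine the averages. A minimal sketch of that idea, with all feature names and ratings hypothetical:

```python
# Hypothetical per-feature risk scoring; the actual formula and tool
# used at Google are not described in the post.

def risk_score(ratings):
    """Combine stakeholder ratings into one risk number.

    Each rating is a (likelihood, impact) pair on a 1-10 scale.
    Risk = average likelihood * average impact.
    """
    avg_likelihood = sum(l for l, _ in ratings) / len(ratings)
    avg_impact = sum(i for _, i in ratings) / len(ratings)
    return avg_likelihood * avg_impact

# Each feature gets one (likelihood, impact) pair per stakeholder --
# here, three stakeholders rating three invented features.
features = {
    "login":  [(3, 9), (4, 10), (2, 8)],
    "search": [(6, 7), (5, 6), (7, 7)],
    "themes": [(2, 2), (3, 3), (2, 1)],
}

scores = {name: risk_score(r) for name, r in features.items()}

# Spend testing effort in descending risk order.
for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.1f}")
```

Publishing numbers like these visibly, as the post suggests, is what draws stakeholders into correcting them.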