Interviewing Insights and Test Frameworks
January 5th, 2010 | Published in Google Testing
By James A. Whittaker
Google is hiring. We have openings for security testers, test tool developers, automation experts and manual testers. That's right, I said manual testers.
As a result of all this interviewing I've been reading a lot of interview feedback and wanted to pass along some insights about how these applicants approach solving the testing problems we ask in our interviews. I think the patterns I note in this post are interesting insights into the mind of the software tester, at least the ones who want to work for Google.
One of the things our interviewers like to ask is 'how would you test product xyz?' The answers help us judge a tester's instincts, but after reading many hundreds of these interviews I have noticed marked patterns in how testers approach solving such problems. It's as though testers have a default testing framework built into their thinking that guides them in choosing test cases and defines the way they approach test design.
In fact, these built-in frameworks seem to drive a tester's thinking to the extent that when I manage to identify the framework a tester is using, I can predict with a high degree of accuracy how they will answer the interviewers' questions. The framework defines what kind of tester they are. I find this intriguing and wonder if others have similar or counter examples to cite.
Here are the frameworks I have seen just in the last two weeks:
The Input Domain Framework treats software as an input-output mechanism. Subscribers to this framework think in terms of sets of inputs, rules about which inputs are more important, and relationships between inputs, input sequences and outputs. This is a common model in random testing, model-based testing and the testing of protocols and APIs. An applicant who uses this framework will talk about which inputs they would use to test a specific application and try to justify why those inputs are important.
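To make that concrete, here is a minimal sketch of input-domain thinking, written in Python with pytest. The parse_date function and the particular input classes are my own invention for illustration; the point is only how an applicant in this mode partitions the input space into representative valid and invalid regions.

```python
# A minimal, hypothetical sketch of input-domain test selection.
import pytest
from datetime import datetime

def parse_date(text):
    """Stand-in for the hypothetical function under test."""
    return datetime.strptime(text, "%Y-%m-%d").date()

# Inputs chosen to cover distinct regions of the input domain:
# typical values and boundaries.
@pytest.mark.parametrize("text", [
    "2010-01-05",   # typical, valid date
    "2010-02-28",   # boundary: last day of February in a non-leap year
    "2008-02-29",   # boundary: leap day
])
def test_valid_inputs_parse(text):
    assert parse_date(text) is not None

# Inputs chosen from the invalid regions of the domain.
@pytest.mark.parametrize("text", [
    "",             # empty input
    "2010-13-01",   # out-of-range month
    "01/05/2010",   # wrong format
])
def test_invalid_inputs_rejected(text):
    with pytest.raises(ValueError):
        parse_date(text)
```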
The Divide and Conquer Framework treats software as a set of features. Subscribers begin by decomposing an app into its features, prioritizing them and then working through that list in order. Often the decomposition is multi-layered, turning one very large testing problem into a bunch of small ones. You don't test the feature so much as you test its constituent parts. An applicant who uses this framework is less concerned with actual test cases and more concerned with reducing the size of the problem to something manageable.
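A tester in this mode often starts with nothing more than a prioritized feature tree. The sketch below is hypothetical (the product, features and priorities are invented) but shows how one large testing problem becomes an ordered worklist of small ones.

```python
# A hypothetical feature decomposition for some mail-like product.
FEATURES = {
    "compose": {"priority": 1, "parts": ["to/cc/bcc fields", "attachments", "drafts"]},
    "search":  {"priority": 2, "parts": ["by sender", "by date range", "by label"]},
    "labels":  {"priority": 3, "parts": ["create", "rename", "nested labels"]},
}

def build_worklist(features):
    """Flatten the feature tree into an ordered list of small test problems."""
    ordered = sorted(features.items(), key=lambda kv: kv[1]["priority"])
    return [(name, part) for name, info in ordered for part in info["parts"]]

for feature, part in build_worklist(FEATURES):
    print(f"test {feature}: {part}")
```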
The Fishbowl Framework is a big picture approach to testing in which we manipulate the application while watching and comparing the results. Put the app in a fishbowl, swirl it around in the water and watch what happens. The emphasis is more on the watching and analyzing than it is on exactly how we manipulate the features. An applicant who uses this framework chooses tests that cause visible output and large state changes.
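As an illustration, here is a fishbowl-style sketch against a made-up ShoppingCart: stir the object with a long, repeatable sequence of actions and watch that coarse, visible invariants keep holding. The class and the invariants are assumptions for the example, not anything from a real product.

```python
# A hypothetical fishbowl-style test: swirl the app around, watch the results.
import random

class ShoppingCart:                      # stand-in for the application under test
    def __init__(self):
        self.items = {}
    def add(self, sku, qty=1):
        self.items[sku] = self.items.get(sku, 0) + qty
    def remove(self, sku):
        self.items.pop(sku, None)
    def total_quantity(self):
        return sum(self.items.values())

def test_swirl_and_watch():
    cart = ShoppingCart()
    rng = random.Random(42)              # fixed seed so the run is repeatable
    for _ in range(1000):
        if rng.random() < 0.7:
            cart.add(f"sku-{rng.randint(1, 20)}")
        else:
            cart.remove(f"sku-{rng.randint(1, 20)}")
        # The emphasis is on watching: quantities never go negative and the
        # visible state stays internally consistent after every manipulation.
        assert cart.total_quantity() >= 0
        assert all(qty > 0 for qty in cart.items.values())
```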
The Storybook Framework consists of developing specific scenarios and making sure the software does what it is supposed to do when presented with those scenarios. Stories start with the expected path and work outward. They don't always get beyond the expected. This framework tests the coherence of behavior more than it probes for subtle errors. Applicants who employ this framework often take a user's point of view and talk about using the application to get real work done.
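Here is what a storybook-style test might look like: a single scenario, told from the user's point of view and walked end to end. The NotesApp class and the story itself are invented purely for illustration.

```python
# A hypothetical storybook-style scenario test.
class NotesApp:                          # stand-in for the application under test
    def __init__(self):
        self._notes = {}
    def create(self, title, body):
        self._notes[title] = body
    def edit(self, title, body):
        self._notes[title] = body
    def read(self, title):
        return self._notes.get(title)
    def delete(self, title):
        self._notes.pop(title, None)

def test_take_meeting_notes_story():
    app = NotesApp()
    # Story: a user jots down notes, revises them after the meeting,
    # re-reads them later, then cleans up.
    app.create("standup", "discuss release dates")
    app.edit("standup", "release slips one week")
    assert app.read("standup") == "release slips one week"
    app.delete("standup")
    assert app.read("standup") is None
```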
The Pessimists Framework starts with edge cases. Subscribers test erroneous input, bad data, misconfigured environments and so on. This is a common strategy on mature products where the main paths are well trodden. Applicants who use this framework like to assume that the main paths will get tested naturally as part of normal dev use and dog-fooding and that the testing challenge is concentrated on lower probability scenarios. They are quick to give prior testing its due, assume it was rational, and pound on the problematic scenarios.
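A pessimist's test file might look something like the sketch below, where every case is an unhappy path. The load_config helper and the specific failure modes are assumptions made for the example.

```python
# A hypothetical pessimist-style test file: nothing but unhappy paths,
# on the assumption that everyday use already exercises the main ones.
import json
import pytest

def load_config(path):
    """Stand-in for the hypothetical code under test."""
    with open(path) as f:
        return json.load(f)

def test_missing_file(tmp_path):
    with pytest.raises(FileNotFoundError):
        load_config(tmp_path / "does_not_exist.json")

def test_corrupt_contents(tmp_path):
    bad = tmp_path / "corrupt.json"
    bad.write_text("{not valid json")
    with pytest.raises(json.JSONDecodeError):
        load_config(bad)

def test_empty_file(tmp_path):
    empty = tmp_path / "empty.json"
    empty.write_text("")
    with pytest.raises(json.JSONDecodeError):
        load_config(empty)
```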
There are more and I am taking furious notes to try and make sense of them all. As I get to know the testers who work in my organization, it doesn't take long to see which frameworks they employ and in what order (many are driven by multiple frameworks). Indeed, after studying an applicant's first interview, I can almost always identify the framework they use to answer testing questions and can often predict how they are going to answer the questions other interviewers ask even before I read that far.
Now some interesting questions come out of this that I am still looking into. Which of these frameworks is best? Which is best suited to certain types of functionality? Which is better for getting a job at Google? Already patterns are emerging.
One thing is for sure, we're interviewing at a rate that will provide me with lots of data on this subject. Contact me if you'd like to participate in this little study!