<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Google Data &#187; James Whittaker</title>
	<atom:link href="https://googledata.org/author/james-whittaker/feed/" rel="self" type="application/rss+xml" />
	<link>https://googledata.org</link>
	<description>Everything Google: News, Products, Services, Content, Culture</description>
	<lastBuildDate>Thu, 19 Mar 2015 22:49:02 +0000</lastBuildDate>
	<language>en-US</language>
		<sy:updatePeriod>hourly</sy:updatePeriod>
		<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.7.5</generator>
	<item>
		<title>Signing Off</title>
		<link>https://googledata.org/uncategorized/signing-off/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=signing-off</link>
		<comments>https://googledata.org/uncategorized/signing-off/#comments</comments>
		<pubDate>Thu, 02 Feb 2012 22:13:00 +0000</pubDate>
		<dc:creator><![CDATA[James Whittaker]]></dc:creator>
				<category><![CDATA[Google Testing]]></category>
		<category><![CDATA[Uncategorized]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=975ece36b5ad970bc05baf6027f58e38</guid>
		<description><![CDATA[By James Whittaker. This will be my last post on this blog. Tomorrow is my last day at Google. It was a great ride and a great pleasure to work alongside such brilliant engineers. I will hand over this blog to another test director and then find a new pl...]]></description>
				<content:encoded><![CDATA[<div dir="ltr" style="text-align: left;" trbidi="on">By James Whittaker<br /><br />This will be my last post on this blog. Tomorrow is my last day at Google. It was a great ride and a great pleasure to work alongside such brilliant engineers. I will hand over this blog to another test director and then find a new place for whatever blogging I do in the future.<br /><br />Follow me on Twitter (@docjamesw) if you are interested in where I land and to find my next blog outlet.<br /><br />Peace.</div><div class="blogger-post-footer"><img width='1' height='1' src='https://blogger.googleusercontent.com/tracker/15045980-7961603654544614438?l=googletesting.blogspot.com' alt='' /></div>]]></content:encoded>
			<wfw:commentRss>https://googledata.org/uncategorized/signing-off/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>How Google Tests Software</title>
		<link>https://googledata.org/uncategorized/how-google-tests-software-2/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=how-google-tests-software-2</link>
		<comments>https://googledata.org/uncategorized/how-google-tests-software-2/#comments</comments>
		<pubDate>Tue, 13 Dec 2011 07:12:00 +0000</pubDate>
		<dc:creator><![CDATA[James Whittaker]]></dc:creator>
				<category><![CDATA[Google Testing]]></category>
		<category><![CDATA[Uncategorized]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=2ca79c31b62056d32325e7b7c7357ed3</guid>
		<description><![CDATA[By James Whittaker. The publishing industry may be slow, but eventually they deliver. Amazon is now listing How Google Tests Software that I wrote with Jason Arbon and Jeff Carollo for presale. Find it at: http://goo.gl/Dg3qG]]></description>
				<content:encoded><![CDATA[<div dir="ltr" style="text-align: left;" trbidi="on">By James Whittaker<br /><br />The publishing industry may be slow, but eventually they deliver. Amazon is now listing <i>How Google Tests Software</i> that I wrote with Jason Arbon and Jeff Carollo for presale. Find it at: http://goo.gl/Dg3qG<br /><br /></div><div class="blogger-post-footer"><img width='1' height='1' src='https://blogger.googleusercontent.com/tracker/15045980-6708622040196997932?l=googletesting.blogspot.com' alt='' /></div>]]></content:encoded>
			<wfw:commentRss>https://googledata.org/uncategorized/how-google-tests-software-2/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>RPF: Google&#8217;s Record Playback Framework</title>
		<link>https://googledata.org/google-testing/rpf-googles-record-playback-framework/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=rpf-googles-record-playback-framework</link>
		<comments>https://googledata.org/google-testing/rpf-googles-record-playback-framework/#comments</comments>
		<pubDate>Thu, 17 Nov 2011 13:26:00 +0000</pubDate>
		<dc:creator><![CDATA[James Whittaker]]></dc:creator>
				<category><![CDATA[Google Testing]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=ee7c87860d97c84a2f1b21dece6a00c0</guid>
		<description><![CDATA[<div dir="ltr">By Jason Arbon<br /><br /><br /><div><span>At </span><a href="http://www.gtac.biz/"><span>GTAC</span></a><span>, folks asked how well the Record/Playback (RPF) framework works in the Browser Integrated Test Environment (</span><a href="http://googletesting.blogspot.com/2011/10/take-bite-out-of-bugs-and-redundant.html"><span>BITE</span></a><span>). We were originally skeptical ourselves, but figured somebody should try. Here is some anecdotal data and some background on how we started measuring the quality of RPF.</span><br /><span>The idea is to just let users use the application in the browser, record their actions, and save them as JavaScript to play back as a regression test or repro later. Like most test tools, especially code-generating ones, it works most of the time, but it&#8217;s not perfect. Po Hu had an early version working and decided to test it out on a real-world product. Po, the developer of RPF, worked with the Chrome Web Store team to see how an early version would work for them. Why the </span><a href="https://chrome.google.com/webstore/"><span>Chrome Web Store</span></a><span>? It is a website with lots of data-driven UX, authentication, and file upload, and it was changing all the time and breaking existing </span><a href="http://seleniumhq.org/"><span>Selenium</span></a><span> scripts: a pretty hard web testing problem. It only targeted the Chrome browser, and, most importantly, the team was sitting 20 feet from us. </span><br /><span></span><br /><span>Before sharing it with Chrome Web Store test developer Wensi Liu, we invested a bit of time in doing something we thought was clever: fuzzy matching and inline updating of the test scripts. Selenium rocks, but after an initial regression suite is created, many teams end up spending a lot of time simply maintaining their Selenium tests as the products constantly change. 
When a certain element isn&#8217;t found, the existing Selenium automation simply fails, requiring manual DOM inspection, updates to the Java code, and then re-deploying, re-running, and re-reviewing the test code. What if the test script just kept running, and updates to the code could be as simple as point and click? We would keep track of all the attributes in the element recorded, and when executing we would calculate the percent match between the recorded attributes and values and those found while running. If the match isn&#8217;t exact, but within tolerances (say only its parent node or class attribute had changed), we would log a warning and keep executing the test case. If the next test steps appeared to be working as well, the tests would keep executing during test passes and only log warnings; or, in debug mode, they would pause and allow for a quick update of the matching rule with point and click via the BITE UI. We figured this might reduce the number of false-positive test failures and make updating them much quicker.</span><br /><span></span><br /><span>We were wrong, but in a good way!</span><br /><span></span><br /><span>We talked to the tester after a few days of leaving him alone with RPF. He&#8217;d already re-created most of his Selenium suite of tests in RPF, and the tests were already breaking because of product changes (it&#8217;s a tough life for a tester at Google, keeping up with the developers&#8217; rate of change). He seemed happy, so we asked him how this new fuzzy-matching fanciness was working, or not. Wensi was like &#8220;oh yeah, that? Don&#8217;t know. Didn&#8217;t really use it...&#8221;. We started to think about how our update UX could have been confusing, not discoverable, or broken. Instead, Wensi said that when a test broke, it was just far easier to re-record the script. 
He had to re-test the product anyway, so why not turn recording on when he manually verified things were still working, remove the old test, and save this newly recorded script for replay later? </span><br /><span></span><br /><span>During that first week of trying out RPF, Wensi found:</span><br /><ul><li><span>77% of the features in Webstore were testable by RPF</span></li><li><span>Generating regression test scripts via this early version of RPF was about 8X faster than building them via Selenium/WebDriver</span></li><li><span>The RPF scripts caught 6 functional regressions and many more intermittent server failures.</span></li><li><span>Common setup routines like login should be saved as modules for reuse (a crude version of this was working soon after)</span></li><li><span>RPF worked on Chrome OS, where Selenium by definition could never run, as it requires client-side binaries. RPF worked because it was a pure cloud solution, running entirely within the browser and communicating with a backend on the web.</span></li><li><span>Bugs filed via BITE provided a simple link, which would install BITE on the developer&#8217;s machine and re-execute the repros on their side. No need for manually crafted repro steps. This was cool.</span></li><li><span>Wensi wished RPF was cross-browser. It only worked in Chrome, but people did occasionally visit the site with a non-Chrome browser.</span></li></ul><span></span><span>So, we knew we were onto something interesting and continued development. In the near term, though, Chrome Web Store testing went back to using Selenium, because that final 23% of features required some local Java code to handle file upload and secure checkout scenarios. In hindsight, a little testability work on the server could have solved this with some AJAX calls from the client.</span><br /><span></span><br /><span>We performed a check of how RPF fared on some of the top sites of the web. 
This is shared on the </span><a href="https://docs.google.com/spreadsheet/ccc?key=0AsbIZrIYVyF0dEJGQV91WW9McW1fMjItRmhzcWkyanc"><span>BITE project wiki</span></a><span>. This is now a little out of date, with lots more fixes since, but it gives you a feel for what doesn&#8217;t work. Consider it Alpha quality at this point. It works for most scenarios, but there are still some serious corner cases. </span><br /><span></span><br /><span>Joe Muharsky drove a lot of the UX (user experience) design for BITE, turning our original, clunky, developer- and functional-centric UX into something intuitive. Joe&#8217;s key focus was to keep the UX out of the way until it is needed, and to make things as self-discoverable and findable as possible. We haven&#8217;t done formal usability studies yet, but we have done several experiments with external crowd testers using these tools with minimal instructions, as well as internal dogfooders filing bugs against Google Maps with little confusion. Some of the fancier parts of RPF have some hidden easter eggs of awkwardness, but the basic record and playback scenarios seem to be obvious to folks.</span><br /><span></span><br /><span>RPF has graduated from the experimental centralized test team to be a formal part of the Chrome team, and it is used regularly for regression test passes. The team also has an eye on enabling non-coding, crowd-sourced testers to generate regression scripts via BITE/RPF.</span><br /><span></span><br /><span>Please join us in maintaining </span><a href="http://code.google.com/p/bite-project/"><span>BITE/RPF</span></a><span>, and be nice to Po Hu and Joel Hynoski, who are driving this work forward within Google. </span></div></div>]]></description>
				<content:encoded><![CDATA[<div dir="ltr" style="text-align: left;" trbidi="on">By Jason Arbon<br /><br /><br /><div style="background-color: transparent;"><span id="internal-source-marker_0.25297700078226626" style="background-color: transparent; font-family: Arial; font-size: 15px; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">At </span><a href="http://www.gtac.biz/"><span style="background-color: transparent; color: #000099; font-family: Arial; font-size: 15px; vertical-align: baseline; white-space: pre-wrap;">GTAC</span></a><span style="background-color: transparent; font-family: Arial; font-size: 15px; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">, folks asked how well the Record/Playback (RPF) framework works in the Browser Integrated Test Environment (</span><a href="http://googletesting.blogspot.com/2011/10/take-bite-out-of-bugs-and-redundant.html"><span style="background-color: transparent; color: #000099; font-family: Arial; font-size: 15px; vertical-align: baseline; white-space: pre-wrap;">BITE</span></a><span style="background-color: transparent; font-family: Arial; font-size: 15px; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">). We were originally skeptical ourselves, but figured somebody should try. Here is some anecdotal data and some background on how we started measuring the quality of RPF.</span><br /><span style="background-color: transparent; font-family: Arial; font-size: 15px; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">The idea is to just let users use the application in the browser, record their actions, and save them as JavaScript to play back as a regression test or repro later. Like most test tools, especially code-generating ones, it works most of the time, but it’s not perfect. Po Hu had an early version working and decided to test it out on a real-world product. 
Po, the developer of RPF, worked with the Chrome Web Store team to see how an early version would work for them. Why the </span><a href="https://chrome.google.com/webstore/"><span style="background-color: transparent; color: #000099; font-family: Arial; font-size: 15px; vertical-align: baseline; white-space: pre-wrap;">Chrome Web Store</span></a><span style="background-color: transparent; font-family: Arial; font-size: 15px; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">? It is a website with lots of data-driven UX, authentication, and file upload, and it was changing all the time and breaking existing </span><a href="http://seleniumhq.org/"><span style="background-color: transparent; color: #000099; font-family: Arial; font-size: 15px; vertical-align: baseline; white-space: pre-wrap;">Selenium</span></a><span style="background-color: transparent; font-family: Arial; font-size: 15px; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"> scripts: a pretty hard web testing problem. It only targeted the Chrome browser, and, most importantly, the team was sitting 20 feet from us. </span><br /><span style="background-color: transparent; font-family: Arial; font-size: 15px; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"></span><br /><span style="background-color: transparent; font-family: Arial; font-size: 15px; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">Before sharing it with Chrome Web Store test developer Wensi Liu, we invested a bit of time in doing something we thought was clever: fuzzy matching and inline updating of the test scripts. Selenium rocks, but after an initial regression suite is created, many teams end up spending a lot of time simply maintaining their Selenium tests as the products constantly change. 
When a certain element isn’t found, the existing Selenium automation simply fails, requiring manual DOM inspection, updates to the Java code, and then re-deploying, re-running, and re-reviewing the test code. What if the test script just kept running, and updates to the code could be as simple as point and click? We would keep track of all the attributes in the element recorded, and when executing we would calculate the percent match between the recorded attributes and values and those found while running. If the match isn’t exact, but within tolerances (say only its parent node or class attribute had changed), we would log a warning and keep executing the test case. If the next test steps appeared to be working as well, the tests would keep executing during test passes and only log warnings; or, in debug mode, they would pause and allow for a quick update of the matching rule with point and click via the BITE UI. We figured this might reduce the number of false-positive test failures and make updating them much quicker.</span><br /><span style="background-color: transparent; font-family: Arial; font-size: 15px; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"></span><br /><span style="background-color: transparent; font-family: Arial; font-size: 15px; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">We were wrong, but in a good way!</span><br /><span style="background-color: transparent; font-family: Arial; font-size: 15px; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"></span><br /><span style="background-color: transparent; font-family: Arial; font-size: 15px; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">We talked to the tester after a few days of leaving him alone with RPF. 
He’d already re-created most of his Selenium suite of tests in RPF, and the tests were already breaking because of product changes (it’s a tough life for a tester at Google, keeping up with the developers’ rate of change). He seemed happy, so we asked him how this new fuzzy-matching fanciness was working, or not. Wensi was like “oh yeah, that? Don’t know. Didn’t really use it...”. We started to think about how our update UX could have been confusing, not discoverable, or broken. Instead, Wensi said that when a test broke, it was just far easier to re-record the script. He had to re-test the product anyway, so why not turn recording on when he manually verified things were still working, remove the old test, and save this newly recorded script for replay later? </span><br /><span style="background-color: transparent; font-family: Arial; font-size: 15px; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"></span><br /><span style="background-color: transparent; font-family: Arial; font-size: 15px; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">During that first week of trying out RPF, Wensi found:</span><br /><ul><li style="background-color: transparent; font-family: Arial; font-size: 15px; list-style-type: disc; text-decoration: none; vertical-align: baseline;"><span style="background-color: transparent; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">77% of the features in Webstore were testable by RPF</span></li><li style="background-color: transparent; font-family: Arial; font-size: 15px; list-style-type: disc; text-decoration: none; vertical-align: baseline;"><span style="background-color: transparent; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">Generating regression test scripts via this early version of RPF was about 8X faster than building them via Selenium/WebDriver</span></li><li style="background-color: transparent; font-family: Arial; font-size: 15px; list-style-type: disc; text-decoration: none; vertical-align: baseline;"><span style="background-color: transparent; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">The RPF scripts caught 6 functional regressions and many more intermittent server failures.</span></li><li style="background-color: transparent; font-family: Arial; font-size: 15px; list-style-type: disc; text-decoration: none; vertical-align: baseline;"><span style="background-color: transparent; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">Common setup routines like login should be saved as modules for reuse (a crude version of this was working soon after)</span></li><li style="background-color: transparent; font-family: Arial; font-size: 15px; list-style-type: disc; text-decoration: none; vertical-align: baseline;"><span style="background-color: transparent; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">RPF worked on Chrome OS, where Selenium by definition could never run, as it requires client-side binaries. RPF worked because it was a pure cloud solution, running entirely within the browser and communicating with a backend on the web.</span></li><li style="background-color: transparent; font-family: Arial; font-size: 15px; list-style-type: disc; text-decoration: none; vertical-align: baseline;"><span style="background-color: transparent; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">Bugs filed via BITE provided a simple link, which would install BITE on the developer’s machine and re-execute the repros on their side. No need for manually crafted repro steps. This was cool.</span></li><li style="background-color: transparent; font-family: Arial; font-size: 15px; list-style-type: disc; text-decoration: none; vertical-align: baseline;"><span style="background-color: transparent; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">Wensi wished RPF was cross-browser. 
It only worked in Chrome, but people did occasionally visit the site with a non-Chrome browser.</span></li></ul><span style="background-color: transparent; font-family: Arial; font-size: 15px; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"></span><span style="background-color: transparent; font-family: Arial; font-size: 15px; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">So, we knew we were onto something interesting and continued development. In the near term, though, Chrome Web Store testing went back to using Selenium, because that final 23% of features required some local Java code to handle file upload and secure checkout scenarios. In hindsight, a little testability work on the server could have solved this with some AJAX calls from the client.</span><br /><span style="background-color: transparent; font-family: Arial; font-size: 15px; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"></span><br /><span style="background-color: transparent; font-family: Arial; font-size: 15px; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">We performed a check of how RPF fared on some of the top sites of the web. This is shared on the </span><a href="https://docs.google.com/spreadsheet/ccc?key=0AsbIZrIYVyF0dEJGQV91WW9McW1fMjItRmhzcWkyanc"><span style="background-color: transparent; color: #000099; font-family: Arial; font-size: 15px; vertical-align: baseline; white-space: pre-wrap;">BITE project wiki</span></a><span style="background-color: transparent; font-family: Arial; font-size: 15px; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">. This is now a little out of date, with lots more fixes since, but it gives you a feel for what doesn’t work. Consider it Alpha quality at this point. It works for most scenarios, but there are still some serious corner cases. 
</span><br /><span style="background-color: transparent; font-family: Arial; font-size: 15px; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"></span><br /><span style="background-color: transparent; font-family: Arial; font-size: 15px; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">Joe Muharsky drove a lot of the UX (user experience) design for BITE, turning our original, clunky, developer- and functional-centric UX into something intuitive. Joe’s key focus was to keep the UX out of the way until it is needed, and to make things as self-discoverable and findable as possible. We haven’t done formal usability studies yet, but we have done several experiments with external crowd testers using these tools with minimal instructions, as well as internal dogfooders filing bugs against Google Maps with little confusion. Some of the fancier parts of RPF have some hidden easter eggs of awkwardness, but the basic record and playback scenarios seem to be obvious to folks.</span><br /><span style="background-color: transparent; font-family: Arial; font-size: 15px; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"></span><br /><span style="background-color: transparent; font-family: Arial; font-size: 15px; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">RPF has graduated from the experimental centralized test team to be a formal part of the Chrome team, and it is used regularly for regression test passes. 
The team also has an eye on enabling non-coding, crowd-sourced testers to generate regression scripts via BITE/RPF.</span><br /><span style="background-color: transparent; font-family: Arial; font-size: 15px; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"></span><br /><span style="background-color: transparent; font-family: Arial; font-size: 15px; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">Please join us in maintaining </span><a href="http://code.google.com/p/bite-project/"><span style="background-color: transparent; color: #000099; font-family: Arial; font-size: 15px; vertical-align: baseline; white-space: pre-wrap;">BITE/RPF</span></a><span style="background-color: transparent; font-family: Arial; font-size: 15px; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">, and be nice to Po Hu and Joel Hynoski, who are driving this work forward within Google. </span></div></div>]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-testing/rpf-googles-record-playback-framework/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
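The fuzzy attribute matching described in the RPF post above (track the recorded element's attributes, compute a percent match at playback, and tolerate small changes with a warning) can be sketched roughly as follows. This is a hypothetical illustration, not the actual BITE/RPF code: the function names, scoring formula, and tolerance value are assumptions.

```python
# Hypothetical sketch of RPF-style fuzzy element matching. The scoring
# formula and tolerance are illustrative assumptions, not the real code.

def attribute_match_score(recorded: dict, found: dict) -> float:
    """Fraction of recorded attribute/value pairs still present on an element."""
    if not recorded:
        return 1.0
    hits = sum(1 for key, value in recorded.items() if found.get(key) == value)
    return hits / len(recorded)

def locate(recorded: dict, candidates: list, tolerance: float = 0.75):
    """Pick the best-matching candidate element.

    Returns (element, warning): an exact match yields no warning, a match
    within tolerance yields a warning while the test keeps running, and no
    acceptable match yields (None, None) so the runner can fail or pause
    for a point-and-click rule update.
    """
    if not candidates:
        return None, None
    best = max(candidates, key=lambda c: attribute_match_score(recorded, c))
    score = attribute_match_score(recorded, best)
    if score == 1.0:
        return best, None
    if score >= tolerance:
        return best, "fuzzy match (%.0f%%): recorded attributes changed" % (score * 100)
    return None, None
```

For example, if only an element's class attribute changed, two of three recorded attributes still match (about 67%), so a tolerance of 0.6 keeps the test running with a logged warning instead of failing it outright.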
		<item>
		<title>GTAC Videos Now Available</title>
		<link>https://googledata.org/google-testing/gtac-videos-now-available/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=gtac-videos-now-available</link>
		<comments>https://googledata.org/google-testing/gtac-videos-now-available/#comments</comments>
		<pubDate>Tue, 15 Nov 2011 15:30:00 +0000</pubDate>
		<dc:creator><![CDATA[James Whittaker]]></dc:creator>
				<category><![CDATA[Google Testing]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=483f5895b61de82842a15d428f6ede46</guid>
		<description><![CDATA[By James Whittaker. All the GTAC 2011 talks are now available at&#160;http://www.gtac.biz/talks&#160;and are also up on YouTube. A hearty thanks to all the speakers who helped make this the best GTAC ever.&#160;Enjoy!]]></description>
				<content:encoded><![CDATA[<div dir="ltr" style="text-align: left;" trbidi="on">By James Whittaker<br /><br />All the GTAC 2011 talks are now available at&nbsp;<a href="http://www.gtac.biz/talks" style="background-color: rgba(255, 255, 255, 0.917969); color: #1155cc; font-family: arial, sans-serif; font-size: 14px; text-align: -webkit-auto;">http://www.gtac.biz/talks</a><span class="Apple-style-span" style="background-color: rgba(255, 255, 255, 0.917969); color: #222222; font-family: arial, sans-serif; font-size: 14px; text-align: -webkit-auto;">&nbsp;and are also up on YouTube. A hearty thanks to all the speakers who helped make this the best GTAC ever.&nbsp;</span><br /><span class="Apple-style-span" style="background-color: rgba(255, 255, 255, 0.917969); color: #222222; font-family: arial, sans-serif; font-size: 14px; text-align: -webkit-auto;"><br /></span><br /><span class="Apple-style-span" style="background-color: rgba(255, 255, 255, 0.917969); color: #222222; font-family: arial, sans-serif; font-size: 14px; text-align: -webkit-auto;">Enjoy!</span></div>]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-testing/gtac-videos-now-available/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Google Test Analytics &#8211; Now in Open Source</title>
		<link>https://googledata.org/google-testing/google-test-analytics-now-in-open-source/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=google-test-analytics-now-in-open-source</link>
		<comments>https://googledata.org/google-testing/google-test-analytics-now-in-open-source/#comments</comments>
		<pubDate>Wed, 19 Oct 2011 20:03:00 +0000</pubDate>
		<dc:creator><![CDATA[James Whittaker]]></dc:creator>
				<category><![CDATA[Google Testing]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=0eafd5dee52c9a0a15da90c187df68fb</guid>
		<description><![CDATA[<div dir="ltr"><br /><div><span>By </span><span>Jim Reardon</span><br /><span></span><span></span><br /><span>The test plan is dead!</span><br /><span></span><br /><span>Well, hopefully. &#160;At a STAR West session this past week, </span><span>James Whittaker</span><span> asked a group of test professionals about test plans. &#160;His first question: &#8220;How many people here write test plans?&#8221; &#160;About 80 hands shot up instantly, a vast majority of the room. &#160;&#8220;How many of you get value or refer to them again after a week?&#8221; &#160;Exactly three people raised their hands.</span><br /><span></span><br /><span>That&#8217;s a lot of time being spent writing documents that are often long-winded, full of paragraphs of detail about a project everyone already knows, only to be abandoned so quickly.</span><br /><span></span><br /><span>A group of us at Google set about creating a methodology that can replace a test plan -- it needed to be comprehensive, quick, actionable, and have sustained value to a project. &#160;In the past few weeks, James has posted a few blogs about this methodology, which we&#8217;ve called ACC. &#160;It's a tool to break down a software product into its constituent parts, and the method by which we created "10 Minute Test Plans" (that only take 30 minutes!)</span><br /><span></span><br /><span>Comprehensive</span><br /><span>The ACC methodology creates a matrix that describes your project completely; several projects that have used it internally at Google have found coverage areas that were missing in their conventional test plans.</span><br /><span></span><br /><span>Quick</span><br /><span>The ACC methodology is fast; we&#8217;ve created ACC breakdowns for complex projects in under half an hour. 
&#160;Far faster than writing a conventional test plan.</span><br /><span></span><br /><span>Actionable</span><br /><span>As part of your ACC breakdown, risk is assessed for the capabilities of your application. &#160;Using these values, you get a heat map of your project, showing the areas with the highest risk -- great places to spend some quality time testing.</span><br /><span></span><br /><span>Sustained Value</span><br /><span>We&#8217;ve built in some experimental features that bring your ACC test plan to life by importing data signals like bugs and test coverage that quantify the risk across your project.</span><br /><span></span><br /><span>Today, I'm happy to announce we're </span><a href="http://code.google.com/p/test-analytics"><span>open sourcing Test Analytics</span></a><span>, a tool built at Google to make generating an ACC simple -- and which brings some experimental ideas we had around the field of risk-based testing that work hand-in-hand with the ACC breakdown.</span><br /><div><a href="http://1.bp.blogspot.com/-t873f8PpQaY/UBwK4O8uvrI/AAAAAAAAAMQ/dwVICl1RimE/s1600/1-Attributes.png"><img border="0" height="254" src="http://1.bp.blogspot.com/-t873f8PpQaY/UBwK4O8uvrI/AAAAAAAAAMQ/dwVICl1RimE/s320/1-Attributes.png" width="320"></a></div><span><br /></span><br /><span></span><br /><span></span><br /><div dir="ltr"><span>Defining a project&#8217;s ACC model.</span></div><span></span><br /><span>Test Analytics has two main parts: first and foremost, it's a step-by-step tool to create an ACC matrix that's faster and much simpler than the Google Spreadsheets we used before the tool existed. 
&#160;It also provides visualizations of the matrix and risks associated with your ACC Capabilities that were difficult or impossible to do in a simple spreadsheet.</span><br /><div><a href="http://2.bp.blogspot.com/-BZtUDMLdwG4/UBwLCBDN1jI/AAAAAAAAAMY/ZdTmW_xYNjA/s1600/2-Capabilities.png"><img border="0" height="253" src="http://2.bp.blogspot.com/-BZtUDMLdwG4/UBwLCBDN1jI/AAAAAAAAAMY/ZdTmW_xYNjA/s320/2-Capabilities.png" width="320"></a></div><span><br /></span><br /><span></span><br /><span></span><br /><div dir="ltr"><span>A project&#8217;s Capabilities grid.</span><span></span></div><span></span><br /><span>The second part is taking the ACC plan and making it a living, automatically updating risk matrix. &#160;Test Analytics does this by importing quality signals from your project: Bugs, Test Cases, Test Results, and Code Changes. &#160;By importing these data, Test Analytics lets you visualize risk that isn't just estimated or guessed, but based on quantitative values. &#160;If a Component or Capability in your project has had a lot of code change or many bugs are still open or not verified as working, the risk in that area is higher. &#160;Test Results can provide a mitigation to those risks -- if you run tests and import passing results, the risk in an area gets lower as you test.</span><br /><div><a href="http://3.bp.blogspot.com/-HzZdQvRghvs/UBwLOBeoy-I/AAAAAAAAAMg/7RoxQlFMtfo/s1600/3-Risk.png"><img border="0" height="256" src="http://3.bp.blogspot.com/-HzZdQvRghvs/UBwLOBeoy-I/AAAAAAAAAMg/7RoxQlFMtfo/s320/3-Risk.png" width="320"></a></div><span><br /></span><br /><span></span><br /><span></span><br /><div dir="ltr"><span>A project&#8217;s risk, calculated as a factor of inherent risk as well as imported quality signals.</span></div><span></span><br /><span>This part's still experimental; we're playing around with how we calculate risk from these signals. 
&#160;However, we wanted to release this functionality early so we can get feedback from the testing community on how well it works for teams so we can iterate and make the tool even more useful. &#160;It'd also be great to import even more quality signals: code complexity, static code analysis, code coverage, external user feedback and more are all ideas we've had that could add an even higher level of dynamic data to your test plan.</span><br /><div><a href="http://1.bp.blogspot.com/-tk56hDpZrX4/UBwLVwCkUHI/AAAAAAAAAMo/s5Qq0tQfprQ/s1600/4-Imported+Data+Summary.png"><img border="0" height="320" src="http://1.bp.blogspot.com/-tk56hDpZrX4/UBwLVwCkUHI/AAAAAAAAAMo/s5Qq0tQfprQ/s320/4-Imported+Data+Summary.png" width="296"></a></div><span><br /></span><br /><span></span><br /><span></span><br /><div dir="ltr"><span>An overview of test results, bugs, and code changes attributed to a project&#8217;s capability. &#160;The Capability&#8217;s total risk is affected by these factors. </span></div><span></span><br /><span>You can check out a </span><a href="https://test-analytics.appspot.com/"><span>live hosted version</span></a><span>, browse or check out </span><a href="http://code.google.com/p/test-analytics"><span>the code</span></a><span> along with </span><a href="http://code.google.com/p/test-analytics/wiki/AccExplained"><span>documentation</span></a><span>, and of course if you have any feedback let us know - there's a </span><a href="http://groups.google.com/group/test-analytics-discuss"><span>Google Group set up for discussion</span></a><span>, where we'll be active in responding to questions and sharing our experiences with Test Analytics so far.</span><br /><span></span><br /><span>Long live the test plan!</span></div></div>]]></description>
				<content:encoded><![CDATA[<div dir="ltr" style="text-align: left;" trbidi="on"><br /><div style="background-color: transparent;"><span id="internal-source-marker_0.7736793777439743" style="background-color: transparent; font-size: 12pt; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">By </span><span style="background-color: transparent; font-size: 12pt; vertical-align: baseline; white-space: pre-wrap;">Jim Reardon</span><br /><span style="background-color: transparent; font-size: 12pt; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"></span><span style="background-color: transparent; font-size: 12pt; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"></span><br /><span style="background-color: transparent; font-size: 12pt; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">The test plan is dead!</span><br /><span style="background-color: transparent; font-size: 12pt; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"></span><br /><span style="background-color: transparent; font-size: 12pt; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">Well, hopefully. &nbsp;At a STAR West session this past week, </span><span style="background-color: transparent; color: #000099; font-size: 12pt; vertical-align: baseline; white-space: pre-wrap;">J</span><span style="background-color: transparent; font-size: 12pt; vertical-align: baseline; white-space: pre-wrap;">ames Whittaker</span><span style="background-color: transparent; font-size: 12pt; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"> asked a group of test professionals about test plans. &nbsp;His first question: “How many people here write test plans?” &nbsp;About 80 hands shot up instantly, a vast majority of the room. 
&nbsp;“How many of you get value or refer to them again after a week?” &nbsp;Exactly three people raised their hands.</span><br /><span style="background-color: transparent; font-size: 12pt; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"></span><br /><span style="background-color: transparent; font-size: 12pt; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">That’s a lot of time spent writing long-winded documents, full of details everyone on the project already knows, only for them to be abandoned so quickly.</span><br /><span style="background-color: transparent; font-size: 12pt; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"></span><br /><span style="background-color: transparent; font-size: 12pt; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">A group of us at Google set about creating a methodology that can replace a test plan -- it needed to be comprehensive, quick, actionable, and have sustained value to a project. &nbsp;In the past few weeks, James has posted a few blogs about this methodology, which we’ve called ACC. 
&nbsp;It's a tool to break down a software product into its constituent parts, and the method by which we created "10 Minute Test Plans" (that only take 30 minutes!)</span><br /><span style="background-color: transparent; font-size: 12pt; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"></span><br /><span style="background-color: transparent; font-size: 12pt; font-weight: bold; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">Comprehensive</span><br /><span style="background-color: transparent; font-size: 12pt; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">The ACC methodology creates a matrix that describes your project completely; several projects that have used it internally at Google have found coverage areas that were missing in their conventional test plans.</span><br /><span style="background-color: transparent; font-size: 12pt; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"></span><br /><span style="background-color: transparent; font-size: 12pt; font-weight: bold; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">Quick</span><br /><span style="background-color: transparent; font-size: 12pt; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">The ACC methodology is fast; we’ve created ACC breakdowns for complex projects in under half an hour. 
&nbsp;Far faster than writing a conventional test plan.</span><br /><span style="background-color: transparent; font-size: 12pt; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"></span><br /><span style="background-color: transparent; font-size: 12pt; font-weight: bold; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">Actionable</span><br /><span style="background-color: transparent; font-size: 12pt; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">As part of your ACC breakdown, risk is assigned to the capabilities of your application. &nbsp;Using these values, you get a heat map of your project, showing the areas with the highest risk -- great places to spend some quality time testing.</span><br /><span style="background-color: transparent; font-size: 12pt; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"></span><br /><span style="background-color: transparent; font-size: 12pt; font-weight: bold; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">Sustained Value</span><br /><span style="background-color: transparent; font-size: 12pt; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">We’ve built in some experimental features that bring your ACC test plan to life by importing data signals like bugs and test coverage that quantify the risk across your project.</span><br /><span style="background-color: transparent; font-size: 12pt; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"></span><br /><span style="background-color: transparent; font-size: 12pt; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">Today, I'm happy to announce we're </span><a href="http://code.google.com/p/test-analytics"><span style="background-color: transparent; color: #000099; font-size: 12pt; vertical-align: baseline; white-space: pre-wrap;">open sourcing Test Analytics</span></a><span 
style="background-color: transparent; font-size: 12pt; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">, a tool built at Google to make generating an ACC simple -- and which brings some experimental ideas we had around the field of risk-based testing that work hand-in-hand with the ACC breakdown.</span><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-t873f8PpQaY/UBwK4O8uvrI/AAAAAAAAAMQ/dwVICl1RimE/s1600/1-Attributes.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="254" src="http://1.bp.blogspot.com/-t873f8PpQaY/UBwK4O8uvrI/AAAAAAAAAMQ/dwVICl1RimE/s320/1-Attributes.png" width="320" /></a></div><span style="background-color: transparent; font-size: 12pt; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"><br /></span><br /><span style="background-color: transparent; font-size: 12pt; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"></span><br /><span style="background-color: transparent; font-size: 12pt; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"></span><br /><div dir="ltr" style="margin-bottom: 0pt; margin-top: 0pt; text-align: center;"><span style="background-color: transparent; font-size: 10pt; font-style: italic; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">Defining a project’s ACC model.</span></div><span style="background-color: transparent; font-size: 12pt; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"></span><br /><span style="background-color: transparent; font-size: 12pt; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">Test Analytics has two main parts: first and foremost, it's a step-by-step tool to create an ACC matrix that's faster and much simpler than the Google Spreadsheets we used before the tool existed. 
&nbsp;It also provides visualizations of the matrix and risks associated with your ACC Capabilities that were difficult or impossible to do in a simple spreadsheet.</span><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-BZtUDMLdwG4/UBwLCBDN1jI/AAAAAAAAAMY/ZdTmW_xYNjA/s1600/2-Capabilities.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="253" src="http://2.bp.blogspot.com/-BZtUDMLdwG4/UBwLCBDN1jI/AAAAAAAAAMY/ZdTmW_xYNjA/s320/2-Capabilities.png" width="320" /></a></div><span style="background-color: transparent; font-size: 12pt; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"><br /></span><br /><span style="background-color: transparent; font-size: 12pt; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"></span><br /><span style="background-color: transparent; font-size: 12pt; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"></span><br /><div dir="ltr" style="margin-bottom: 0pt; margin-top: 0pt; text-align: center;"><span style="background-color: transparent; font-size: 10pt; font-style: italic; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">A project’s Capabilities grid.</span><span style="background-color: transparent; font-size: 10pt; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"></span></div><span style="background-color: transparent; font-size: 12pt; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"></span><br /><span style="background-color: transparent; font-size: 12pt; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">The second part is taking the ACC plan and making it a living, automatically updating risk matrix. &nbsp;Test Analytics does this by importing quality signals from your project: Bugs, Test Cases, Test Results, and Code Changes. 
&nbsp;By importing these data, Test Analytics lets you visualize risk that isn't just estimated or guessed, but based on quantitative values. &nbsp;If a Component or Capability in your project has had a lot of code change or many bugs are still open or not verified as working, the risk in that area is higher. &nbsp;Test Results can provide a mitigation to those risks -- if you run tests and import passing results, the risk in an area gets lower as you test.</span><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-HzZdQvRghvs/UBwLOBeoy-I/AAAAAAAAAMg/7RoxQlFMtfo/s1600/3-Risk.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="256" src="http://3.bp.blogspot.com/-HzZdQvRghvs/UBwLOBeoy-I/AAAAAAAAAMg/7RoxQlFMtfo/s320/3-Risk.png" width="320" /></a></div><span style="background-color: transparent; font-size: 12pt; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"><br /></span><br /><span style="background-color: transparent; font-size: 12pt; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"></span><br /><span style="background-color: transparent; font-size: 12pt; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"></span><br /><div dir="ltr" style="margin-bottom: 0pt; margin-top: 0pt; text-align: center;"><span style="background-color: transparent; font-size: 10pt; font-style: italic; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">A project’s risk, calculated as a factor of inherent risk as well as imported quality signals.</span></div><span style="background-color: transparent; font-size: 12pt; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"></span><br /><span style="background-color: transparent; font-size: 12pt; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">This part's still experimental; we're playing around with how we calculate risk 
from these signals. &nbsp;However, we wanted to release this functionality early so we can get feedback from the testing community on how well it works for teams so we can iterate and make the tool even more useful. &nbsp;It'd also be great to import even more quality signals: code complexity, static code analysis, code coverage, external user feedback and more are all ideas we've had that could add an even higher level of dynamic data to your test plan.</span><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-tk56hDpZrX4/UBwLVwCkUHI/AAAAAAAAAMo/s5Qq0tQfprQ/s1600/4-Imported+Data+Summary.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="320" src="http://1.bp.blogspot.com/-tk56hDpZrX4/UBwLVwCkUHI/AAAAAAAAAMo/s5Qq0tQfprQ/s320/4-Imported+Data+Summary.png" width="296" /></a></div><span style="background-color: transparent; font-size: 12pt; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"><br /></span><br /><span style="background-color: transparent; font-size: 12pt; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"></span><br /><span style="background-color: transparent; font-size: 12pt; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"></span><br /><div dir="ltr" style="margin-bottom: 0pt; margin-top: 0pt; text-align: center;"><span style="background-color: transparent; font-size: 10pt; font-style: italic; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">An overview of test results, bugs, and code changes attributed to a project’s capability. &nbsp;The Capability’s total risk is affected by these factors. 
</span></div><span style="background-color: transparent; font-size: 12pt; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"></span><br /><span style="background-color: transparent; font-size: 12pt; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">You can check out a </span><a href="https://test-analytics.appspot.com/"><span style="background-color: transparent; color: #000099; font-size: 12pt; vertical-align: baseline; white-space: pre-wrap;">live hosted version</span></a><span style="background-color: transparent; font-size: 12pt; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">, browse or check out </span><a href="http://code.google.com/p/test-analytics"><span style="background-color: transparent; color: #000099; font-size: 12pt; vertical-align: baseline; white-space: pre-wrap;">the code</span></a><span style="background-color: transparent; font-size: 12pt; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"> along with </span><a href="http://code.google.com/p/test-analytics/wiki/AccExplained"><span style="background-color: transparent; color: #000099; font-size: 12pt; vertical-align: baseline; white-space: pre-wrap;">documentation</span></a><span style="background-color: transparent; font-size: 12pt; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">, and of course if you have any feedback let us know - there's a </span><a href="http://groups.google.com/group/test-analytics-discuss"><span style="background-color: transparent; color: #000099; font-size: 12pt; vertical-align: baseline; white-space: pre-wrap;">Google Group set up for discussion</span></a><span style="background-color: transparent; font-size: 12pt; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">, where we'll be active in responding to questions and sharing our experiences with Test Analytics so far.</span><br /><span style="background-color: transparent; 
font-size: 12pt; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"></span><br /><span style="background-color: transparent; font-size: 12pt; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">Long live the test plan!</span></div></div>]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-testing/google-test-analytics-now-in-open-source/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Google JS Test, now in Open Source</title>
		<link>https://googledata.org/google-testing/google-js-test-now-in-open-source/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=google-js-test-now-in-open-source</link>
		<comments>https://googledata.org/google-testing/google-js-test-now-in-open-source/#comments</comments>
		<pubDate>Mon, 17 Oct 2011 17:38:00 +0000</pubDate>
		<dc:creator><![CDATA[James Whittaker]]></dc:creator>
				<category><![CDATA[Google Testing]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=7e94a72643427d00c341bbef146983ea</guid>
		<description><![CDATA[<div dir="ltr"><span></span><br /><div>By Aaron Jacobs</div><div><br /></div><div><a href="http://code.google.com/p/google-js-test/" target="blank">Google JS Test</a>&#160;is a JavaScript unit testing framework that runs on the&#160;<a href="http://code.google.com/p/v8/" target="blank">V8 JavaScript Engine</a>, the same open source project that is responsible for Google Chrome&#8217;s super-fast JS execution speed. Google JS Test is used internally by several Google projects, and we&#8217;re pleased to announce that it has been released as an open source project.<br /><br />Features of Google JS Test include:</div><ul><li>Extremely fast startup and execution time, without needing to run a browser.</li></ul><ul><li>Clean, readable output in the case of both passing and failing tests.</li></ul><ul><li>An optional browser-based test runner that can simply be refreshed whenever JS is changed.</li></ul><ul><li>Style and semantics that resemble&#160;<a href="http://code.google.com/p/googletest/" target="blank">Google Test</a>&#160;for C++.</li></ul><ul><li>A built-in mocking framework that requires minimal boilerplate code (e.g. 
no&#160;<span>$tearDown</span>&#160;or&#160;<span>$verifyAll</span>&#160;calls), with style and semantics based on the&#160;<a href="http://code.google.com/p/googletest/" target="blank">Google C++ Mocking Framework</a>.</li></ul><ul><li>A system of&#160;<a href="http://code.google.com/p/google-js-test/wiki/Matchers" target="blank">matchers</a>&#160;allowing for expressive tests and easy-to-read failure output, with many built-in matchers and the ability for the user to add their own.<br /><br /><a href="http://3.bp.blogspot.com/-k6633PjXL0I/ToSssXafqgI/AAAAAAAAAQI/t5Ux_dHrA6E/s1600/image00.png"><img alt="" border="0" src="http://3.bp.blogspot.com/-k6633PjXL0I/ToSssXafqgI/AAAAAAAAAQI/t5Ux_dHrA6E/s320/image00.png"></a></li></ul>See the Google JS Test&#160;<a href="http://code.google.com/p/google-js-test/" target="blank">project home page</a>&#160;for a quick introduction, and the&#160;<a href="http://code.google.com/p/google-js-test/wiki/GettingStarted" target="blank">getting started</a>&#160;page for a tutorial that will teach you the basics in just a few minutes.</div>]]></description>
				<content:encoded><![CDATA[<div dir="ltr" style="text-align: left;" trbidi="on"><span class="Apple-style-span" style="background-color: white; font-family: Arial, Helvetica, sans-serif; font-size: 12px; line-height: 15px;"></span><br /><div style="padding-bottom: 0px;">By Aaron Jacobs</div><div style="padding-bottom: 0px;"><br /></div><div style="padding-bottom: 0px;"><a href="http://code.google.com/p/google-js-test/" >Google JS Test</a>&nbsp;is a JavaScript unit testing framework that runs on the&nbsp;<a href="http://code.google.com/p/v8/" >V8 JavaScript Engine</a>, the same open source project that is responsible for Google Chrome’s super-fast JS execution speed. Google JS Test is used internally by several Google projects, and we’re pleased to announce that it has been released as an open source project.<br /><br />Features of Google JS Test include:</div><ul><li>Extremely fast startup and execution time, without needing to run a browser.</li></ul><ul><li>Clean, readable output in the case of both passing and failing tests.</li></ul><ul><li>An optional browser-based test runner that can simply be refreshed whenever JS is changed.</li></ul><ul><li>Style and semantics that resemble&nbsp;<a href="http://code.google.com/p/googletest/" >Google Test</a>&nbsp;for C++.</li></ul><ul><li>A built-in mocking framework that requires minimal boilerplate code (e.g. 
no&nbsp;<span style="font-family: 'courier new';">$tearDown</span>&nbsp;or&nbsp;<span style="font-family: 'courier new';">$verifyAll</span>&nbsp;calls), with style and semantics based on the&nbsp;<a href="http://code.google.com/p/googletest/" >Google C++ Mocking Framework</a>.</li></ul><ul><li>A system of&nbsp;<a href="http://code.google.com/p/google-js-test/wiki/Matchers" >matchers</a>&nbsp;allowing for expressive tests and easy-to-read failure output, with many built-in matchers and the ability for the user to add their own.<br /><br /><a href="http://3.bp.blogspot.com/-k6633PjXL0I/ToSssXafqgI/AAAAAAAAAQI/t5Ux_dHrA6E/s1600/image00.png"><img alt="" border="0" id="BLOGGER_PHOTO_ID_5657836910083549698" src="http://3.bp.blogspot.com/-k6633PjXL0I/ToSssXafqgI/AAAAAAAAAQI/t5Ux_dHrA6E/s320/image00.png" style="border-bottom-color: rgb(204, 204, 204); border-bottom-style: solid; border-bottom-width: 1px; border-left-color: rgb(204, 204, 204); border-left-style: solid; border-left-width: 1px; border-right-color: rgb(204, 204, 204); border-right-style: solid; border-right-width: 1px; border-top-color: rgb(204, 204, 204); border-top-style: solid; border-top-width: 1px; cursor: pointer; display: block; height: 320px; margin-bottom: 10px; margin-left: auto; margin-right: auto; margin-top: 0px; padding-bottom: 4px; padding-left: 4px; padding-right: 4px; padding-top: 4px; text-align: center; width: 214px;" /></a></li></ul>See the Google JS Test&nbsp;<a href="http://code.google.com/p/google-js-test/" >project home page</a>&nbsp;for a quick introduction, and the&nbsp;<a href="http://code.google.com/p/google-js-test/wiki/GettingStarted" >getting started</a>&nbsp;page for a tutorial that will teach you the basics in just a few minutes.</div>]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-testing/google-js-test-now-in-open-source/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>From Google Dev Day to STAR West</title>
		<link>https://googledata.org/google-testing/from-google-dev-day-to-star-west/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=from-google-dev-day-to-star-west</link>
		<comments>https://googledata.org/google-testing/from-google-dev-day-to-star-west/#comments</comments>
		<pubDate>Mon, 26 Sep 2011 16:41:00 +0000</pubDate>
		<dc:creator><![CDATA[James Whittaker]]></dc:creator>
				<category><![CDATA[Google Testing]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=0f0af1037305069c6b94506fde609262</guid>
		<description><![CDATA[By James WhittakerGoogle Dev Days in Brazil and Argentina are over (sigh) and now I turn my attention to STAR West in Anaheim. Unfortunately, it is too late to register for my tutorials as I was informed both are sold out.If you attend STAR, please tak...]]></description>
				<content:encoded><![CDATA[<div dir="ltr" style="text-align: left;" trbidi="on">By James Whittaker<br /><br />Google Dev Days in Brazil and Argentina are over (sigh) and now I turn my attention to STAR West in Anaheim. Unfortunately, it is too late to register for my tutorials as I was informed both are sold out.<br /><br />If you attend STAR, please take the time to say hello.</div>]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-testing/from-google-dev-day-to-star-west/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Announcing the Final GTAC Agenda</title>
		<link>https://googledata.org/google-testing/announcing-the-final-gtac-agenda/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=announcing-the-final-gtac-agenda</link>
		<comments>https://googledata.org/google-testing/announcing-the-final-gtac-agenda/#comments</comments>
		<pubDate>Mon, 12 Sep 2011 14:46:00 +0000</pubDate>
		<dc:creator><![CDATA[James Whittaker]]></dc:creator>
				<category><![CDATA[Google Testing]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=7004c8dabefa9c913dbe8fabd26e58d2</guid>
		<description><![CDATA[By James WhittakerThe GTAC agenda is now finalized and available at:&#160;http://www.gtac.biz/agenda. Looking forward to seeing everyone there. Stay tuned to this blog for updates to any pre- and post- events.]]></description>
				<content:encoded><![CDATA[<div dir="ltr" style="text-align: left;" trbidi="on">By James Whittaker<br /><br />The GTAC agenda is now finalized and available at:&nbsp;<a href="http://www.gtac.biz/agenda">http://www.gtac.biz/agenda</a>. Looking forward to seeing everyone there. Stay tuned to this blog for updates to any pre- and post- events.</div>]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-testing/announcing-the-final-gtac-agenda/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>The 10 Minute Test Plan</title>
		<link>https://googledata.org/google-testing/the-10-minute-test-plan/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=the-10-minute-test-plan</link>
		<comments>https://googledata.org/google-testing/the-10-minute-test-plan/#comments</comments>
		<pubDate>Thu, 01 Sep 2011 21:28:00 +0000</pubDate>
		<dc:creator><![CDATA[James Whittaker]]></dc:creator>
				<category><![CDATA[Google Testing]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=0b449cb7f45b6dfa2e02a17aa5c1f793</guid>
		<description><![CDATA[By James WhittakerAnything in software development that takes ten minutes or less to perform is either trivial or is not worth doing in the first place. If you take this rule of thumb at face value, where do you place test planning? Certainly it takes ...]]></description>
				<content:encoded><![CDATA[<div dir="ltr" style="text-align: left;" trbidi="on"><br />By James Whittaker<br /><br />Anything in software development that takes ten minutes or less to perform is either trivial or is not worth doing in the first place. If you take this rule of thumb at face value, where do you place test planning? Certainly it takes more than 10 minutes. In my capacity as Test Director at Google I presided over teams that wrote a large number of test plans and every time I asked how long one would take I was told “tomorrow” or “the end of the week” and a few times, early in the day, I was promised one “by the end of the day.” So I’ll establish the task of test planning to be of the hours-to-days duration.<br /><br />As to whether it is worth doing, well, that is another story entirely. Every time I look at any of the dozens of test plans my teams have written, I see dead test plans. Plans written, reviewed, referred to a few times and then cast aside as the project moves in directions not documented in the plan. This begs the question: if a plan isn’t worth bothering to update, is it worth creating in the first place?<br /><br />Other times a plan is discarded because it went into too much detail or too little; still others because it provided value only in starting a test effort and not in the ongoing work. Again, if this is the case, was the plan worth the cost of creating it given its limited and diminishing value?<br /><br />Some test plans document simple truths that likely didn’t really need documenting at all or provide detailed information that isn’t relevant to the day to day job of a software tester. In all these cases we are wasting effort. Let’s face facts here: there is a problem with the process and content of test plans.<br /><br />To combat this, I came up with a simple task for my teams: write a test plan in 10 minutes. 
The idea is simple: if test plans have any value at all, then let’s get to that value as quickly as possible.<br /><br />Given ten minutes, there is clearly no room for fluff. It is a time period so compressed that every second must be spent doing something useful or any hope you have of actually finishing the task is gone. This was the entire intent behind the exercise from my point of view: boil test planning down to only the essentials and cut all fat and fluff. Do only what is absolutely necessary and leave the details to the test executors as opposed to the test planners. If I wanted to end the practice of writing test plans that don’t stand the test of time, this seemed a worthwhile exercise.<br /><br />However, I didn’t tell the people in the experiment any of this. I told them only: here is an app, create a test plan in 10 minutes or less. Remember that these people work for me and, technically, are paid to do as I tell them. And, again,&nbsp;<i>technically</i>, I am uniquely positioned to begin termination procedures with respect to their Google employment. On top of that, I am presuming they have some measure of respect for me, which means they were likely convinced I actually thought they could do it. This was important to me. I wanted them to expect to succeed!<br /><br />As preparation, they could spend some time with the app in question and familiarize themselves with it. However, since many of the apps we used (Google Docs, App Engine, Talk Video, etc.) were tools they used every week, this time was short.<br /><br />So here's how the task progressed:<br /><br />They started, did some work, and when ten minutes passed I interrupted them. They stated they weren't done yet. I responded by telling them they were out of time, nice try, here's a different problem to work on. Ten minutes later, the same thing happened and I changed the problem again. 
They began working faster and trying different angles; things that were too time-consuming or not worth the effort got jettisoned really quickly!<br /><br />In each case, the teams came up with techniques that helped speed things along. They chose to jot down lists and create grids over writing long paragraphs of prose. Sentences … yes, paragraphs … no. They wasted little time on formatting and explanations and chose instead to document capabilities. Indeed, capabilities, or&nbsp;<i>what the software actually does</i>, were the one commonality of all the plans, the one thing all the teams gravitated toward as the most useful way to spend the little time they were given.<br /><br />The three things that emerged as most important:<br /><br />1.&nbsp;<b>Attributes</b>: the adverbs and adjectives that describe the high-level concepts testing is meant to ensure. Attributes such as fast, usable, secure, accessible and so forth.<br /><br />2.&nbsp;<b>Components</b>: the nouns that define the major code chunks that comprise the product. These are classes, module names and features of the application.<br /><br />3.&nbsp;<b>Capabilities</b>: the verbs that describe user actions and activities.<br /><br />None of the teams finished the experiment in the 10 minutes allotted. However, in 10 minutes they were all able to get through both the Attributes and Components (or things that served a similar purpose) and begin documenting Capabilities. At the end of an additional 20 minutes most of the experiments had a large enough set of Capabilities that it would have been a useful starting point for creating user stories or test cases.<br /><br />Which, at least to me, made the experiment a success. I gave them 10 minutes and hoped for an hour. They had 80% of the work complete in 30 minutes. And really, isn’t 80% enough? We know full well that we are not going to test everything, so why document everything? 
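The Attributes/Components/Capabilities breakdown described above can be sketched as a small grid. A minimal illustration follows; the product, attribute, component and capability names are invented for the example, not taken from the original plans:

```python
# A 10-minute test plan sketched as an Attributes/Components/Capabilities
# (ACC) grid. All names below are invented for illustration.

attributes = ["fast", "secure", "usable"]       # adjectives/adverbs to ensure
components = ["editor", "sharing", "storage"]   # nouns: major code chunks

# Capabilities: verbs tying a component to an attribute it must uphold.
capabilities = {
    ("editor", "fast"): ["type text without visible lag"],
    ("editor", "usable"): ["undo and redo edits"],
    ("sharing", "secure"): ["restrict a document to invited users"],
    ("storage", "fast"): ["autosave within seconds of a change"],
}

def plan_rows(caps):
    """Flatten the grid into (component, attribute, capability) rows,
    the starting point for user stories or test cases."""
    return [(comp, attr, cap)
            for (comp, attr), caps_list in sorted(caps.items())
            for cap in caps_list]

for comp, attr, cap in plan_rows(capabilities):
    print(f"{comp:8} {attr:8} {cap}")
```

Flattening the grid into rows yields exactly the kind of list a team could hand to test executors, leaving the detailed test cases to them rather than to the planners.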
We know full well that as we start testing, things (schedules, requirements, architecture, etc.) are going to change, so insisting on planning precision when nothing else obeys such a call for completeness seems out of touch with reality.<br /><br />80% complete in 30 minutes or less. Now that’s what I call a 10-minute test plan!<br /></div>]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-testing/the-10-minute-test-plan/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Google Developer Day 2011</title>
		<link>https://googledata.org/google-testing/google-developer-day-2011/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=google-developer-day-2011</link>
		<comments>https://googledata.org/google-testing/google-developer-day-2011/#comments</comments>
		<pubDate>Fri, 19 Aug 2011 20:26:00 +0000</pubDate>
		<dc:creator><![CDATA[James Whittaker]]></dc:creator>
				<category><![CDATA[Google Testing]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=dfe553d5e45a0f7c7fb11d5c42378973</guid>
		<description><![CDATA[By James WhittakerGoogle Developer Day is gearing up for a fantastic fall season of tours that crawl the continents. And a surprise this year ... yours truly will be the keynote for the Developer Day in Sao Paulo Brazil and Buenos Aires Argentina in Se...]]></description>
				<content:encoded><![CDATA[By James Whittaker<br /><br />Google Developer Day is gearing up for a fantastic fall season of tours that crawl the continents. And a surprise this year ... <i>yours truly</i> will be the keynote speaker for the Developer Days in Sao Paulo, Brazil and Buenos Aires, Argentina in September. <br /><br />Google Developer Day is a deep dive into the future of Web, Mobile and Cloud technologies, crafted specifically for software engineering professionals. And this year we are adding the element of Social to tie it all together. Google+ is only the start. <br /><br />If you are attending, please stop by and say hello!<br /><br />Click <a href="http://www.google.com/events/developerday/2011/">here</a> for more information about dates and agenda.<div class="blogger-post-footer"><img width='1' height='1' src='https://blogger.googleusercontent.com/tracker/15045980-5167430069525587592?l=googletesting.blogspot.com' alt='' /></div>]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-testing/google-developer-day-2011/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>GTAC Speakers and Attendees Finalized</title>
		<link>https://googledata.org/google-testing/gtac-speakers-and-attendees-finalized/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=gtac-speakers-and-attendees-finalized</link>
		<comments>https://googledata.org/google-testing/gtac-speakers-and-attendees-finalized/#comments</comments>
		<pubDate>Thu, 18 Aug 2011 20:29:00 +0000</pubDate>
		<dc:creator><![CDATA[James Whittaker]]></dc:creator>
				<category><![CDATA[Google Testing]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=1ae5fe845732a575edaacb45aea6fe00</guid>
		<description><![CDATA[We've completed the agenda for GTAC 2011 and are in the process of notifying accepted speakers and attendees. Once we have firm accepts we'll be publicizing the agenda.]]></description>
				<content:encoded><![CDATA[We've completed the agenda for GTAC 2011 and are in the process of notifying accepted speakers and attendees. Once we have firm accepts we'll be publicizing the agenda.<div class="blogger-post-footer"><img width='1' height='1' src='https://blogger.googleusercontent.com/tracker/15045980-8283774690424929138?l=googletesting.blogspot.com' alt='' /></div>]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-testing/gtac-speakers-and-attendees-finalized/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Keynote Lineup for GTAC 2011</title>
		<link>https://googledata.org/google-testing/keynote-lineup-for-gtac-2011/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=keynote-lineup-for-gtac-2011</link>
		<comments>https://googledata.org/google-testing/keynote-lineup-for-gtac-2011/#comments</comments>
		<pubDate>Mon, 01 Aug 2011 18:39:00 +0000</pubDate>
		<dc:creator><![CDATA[James Whittaker]]></dc:creator>
				<category><![CDATA[Google Testing]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=f82aca8c3a1de517e983bcbcb5aced9e</guid>
		<description><![CDATA[By James Whittaker: The call for proposals and participation is now closed. Over the next few weeks we will be announcing the full agenda and notifying accepted participants. In the meantime, the keynote lineup is now locked. It consists of two famous Go...]]></description>
				<content:encoded><![CDATA[By James Whittaker<br /><br />The call for proposals and participation is now closed. Over the next few weeks we will be announcing the full agenda and notifying accepted participants. In the meantime, the keynote lineup is now locked. It consists of two famous Googlers and two famous external speakers that I am very pleased to have join us.<br /><br />Opening Keynote: <b>Test is Dead</b> by <b>Alberto Savoia</b><br /><br />The way most software is designed, developed and launched has changed dramatically over the last decade – but what about testing?  Alberto Savoia believes that software testing as we knew it is dead – or at least moribund – in which case we should stick a fork in it and proactively take it out of its misery for good.  In this opening keynote of biblical scope, Alberto will cast stones at the old test-mentality and will try his darnedest to agitate you and convince you that these days most testers should follow a new test-mentality, one which includes shifting their focus and priority from “Are we building it right?” to “Are we building the right it?”  The subtitle of this year’s GTAC is “cloudy with a chance of tests,” and if anyone can gather the clouds into a hurricane, it's Alberto – it might be wise to bring your umbrella.<br /><br /><i>Alberto Savoia is Director of Engineering and Innovation Agitator at Google. In addition to leading several major product development efforts (including the launch of Google AdWords), Alberto has been a lifelong believer, champion, innovator and entrepreneur in the area of developer testing and test automation tools.  He is a frequent keynote speaker and the author of many articles on testing, including the classic booklet “The Way of Testivus” and “Beautiful Tests” in O’Reilly’s Beautiful Code.  
His work in software development tools has won him several awards, including the 2005 Wall Street Journal Technical Innovator Award, InfoWorld’s Technology of the Year award, and no fewer than four Software Development Magazine Jolt Awards.<br /></i><br />Day 1 Closer: <b>Redefining Security Vulnerabilities: How Attackers See Bugs</b> by <b>Herbert H. Thompson</b><br /><br />Developers see features, testers see bugs, and attackers see “opportunities.” Those opportunities are expanding beyond buffer overflows, cross-site scripting, etc. into logical bugs (and features) that allow attackers to use the information they find to exploit trusting users. For example, attackers can leverage a small information disclosure issue in an elaborate phishing attempt. When you add people to the mix, we need to reevaluate which “bugs” are actual security vulnerabilities. This talk is loaded with real-world examples of how attackers are using software “features” and information tidbits (many of which come from bugs) to exploit the biggest weakness of all: trusting users.<br /><br /><i>Dr. Herbert H. Thompson is Chief Security Strategist at People Security and a world-renowned expert in application security. He has co-authored four books on the topic, including How to Break Software Security: Effective Techniques for Security Testing (with Dr. James Whittaker) and The Software Vulnerability Guide (with Scott Chase). In 2006 he was named one of the “Top 5 Most Influential Thinkers in IT Security” by SC Magazine. Thompson continually lends his perspective and expertise on secure software development and has been interviewed by top news organizations including CNN, MSNBC, BusinessWeek, Forbes, Associated Press, and the Washington Post.  He is also Program Committee Chair for RSA Conference, the world’s leading information security gathering. He holds a Ph.D. 
in Applied Mathematics from Florida Institute of Technology, and is an adjunct professor in the Computer Science department at Columbia University in New York.<br /></i><br />Day 2 Opener: <b>Engineering Productivity: Accelerating Google Since 2006</b> by <b>Patrick Copeland</b><br /><br />Patrick Copeland is the founder and architect of Google's testing and productivity strategy, and in this "mini keynote" he tells the story and relates the pain of taking a company from ad hoc testing practices to the pinnacle of what can be accomplished with a well-oiled test engineering discipline. <br /><br />Conference Closer: <b>Secrets of World-Class Software Organizations</b> by <b>Steve McConnell</b><br /><br />Construx consultants work with literally hundreds of software organizations each year. Among these organizations a few stand out as being truly world class. They are exceptional in their ability to meet their software development goals and exceptional in the contribution they make to their companies' overall business success. Do world-class software organizations operate differently from average organizations? In Construx's experience, the answer is a resounding "YES." In this talk, award-winning author Steve McConnell reveals the technical, management, business, and cultural secrets that make a software organization world class.<br /><br /><i>Steve McConnell is CEO and Chief Software Engineer at Construx Software, where he consults to a broad range of industries, teaches seminars, and oversees Construx’s software engineering practices. Steve is the author of Software Estimation: Demystifying the Black Art (2006), Code Complete (1993, 2004), Rapid Development (1996), Software Project Survival Guide (1998), and Professional Software Development (2004), as well as numerous technical articles. 
His books have won numerous awards for "Best Book of the Year," and readers of Software Development magazine named him one of the three most influential people in the software industry along with Bill Gates and Linus Torvalds.<br /></i><div class="blogger-post-footer"><img width='1' height='1' src='https://blogger.googleusercontent.com/tracker/15045980-3742772058603210030?l=googletesting.blogspot.com' alt='' /></div>]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-testing/keynote-lineup-for-gtac-2011/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>How We Tested Google Instant Pages</title>
		<link>https://googledata.org/google-testing/how-we-tested-google-instant-pages/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=how-we-tested-google-instant-pages</link>
		<comments>https://googledata.org/google-testing/how-we-tested-google-instant-pages/#comments</comments>
		<pubDate>Wed, 27 Jul 2011 22:34:00 +0000</pubDate>
		<dc:creator><![CDATA[James Whittaker]]></dc:creator>
				<category><![CDATA[Google Testing]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=71e8c941526f0b39324f06fe7516ce3c</guid>
		<description><![CDATA[By Jason Arbon and Tejas Shah: Google Instant Pages are a cool new way that Google speeds up your search experience.  When Google thinks it knows which result you are likely to click, it preloads that page in the background, so when you click the page it...]]></description>
				<content:encoded><![CDATA[By Jason Arbon and Tejas Shah<br /><br /><a href="http://googlewebmastercentral.blogspot.com/2011/06/announcing-instant-pages.html">Google Instant Pages</a> are a cool new way that Google speeds up your search experience.  When Google thinks it knows which result you are likely to click, it preloads that page in the background, so when you click the page it renders instantly, saving the user about 5 seconds.  Five seconds is significant when you think of how many searches are performed each day--and especially when you consider that the rest of the search experience is optimized for sub-second performance.<br /><br />The testing problem here is interesting. This feature requires client and server coordination, and since we are pre-loading and rendering the pages in an invisible background page, we wanted to make sure that nothing major was broken with the page rendering.<br /><br />The original idea was for developers to test out a few pages as they went. But this doesn’t scale to a large number of sites and is very expensive to repeat. Also, how do you know what the pages should look like? To write Selenium tests to functionally validate thousands of sites would take forever--the product would ship first. The solution was to perform automated test runs that load these pages from search results with Instant Pages turned on, and another run with Instant Pages turned off. The page renderings from each run were then compared.<br /><br />How did we compare the two runs? How do you compare pages when content and ads on web pages are constantly changing and you don't know what the expected behavior is? We could have used cached versions of these pages, but that wouldn’t be the real-world experience we were testing, it would take time to set up, and the timing would have been different. We opted to leverage some other work that compares pages using the Document Object Model (DOM). 
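A DOM-based comparison along these lines can be sketched roughly as follows; the page model, element boxes and scoring below are illustrative assumptions for the sketch, not the actual bot:

```python
# Rough sketch of a DOM-based page comparison ("quality bot" style).
# A page is modeled as a list of (tag, x0, y0, x1, y1) element boxes,
# back to front; a real bot would sample a live rendered page instead.

def element_at(layout, x, y):
    """Return the tag of the topmost element covering point (x, y)."""
    hit = "body"  # default when no box covers the point
    for tag, x0, y0, x1, y1 in layout:
        if x0 <= x < x1 and y0 <= y < y1:
            hit = tag  # later boxes are drawn on top
    return hit

def similarity(layout_a, layout_b, width, height, step=10):
    """Sample a grid of points and score 0-100: the percentage of
    points at which both layouts show the same element."""
    points = [(x, y) for x in range(0, width, step)
                     for y in range(0, height, step)]
    same = sum(element_at(layout_a, x, y) == element_at(layout_b, x, y)
               for x, y in points)
    return 100.0 * same / len(points)

# Two renderings of the "same" page: the ad box moved, the rest matches.
a = [("div#header", 0, 0, 100, 20), ("div#ad", 0, 20, 50, 40)]
b = [("div#header", 0, 0, 100, 20), ("div#ad", 50, 20, 100, 40)]
print(round(similarity(a, b, 100, 100), 1))
```

Sampling element identity rather than raw pixel colors is what makes such a score robust to churning content and ads: the page structure matters, not the exact rendering.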
We automatically scan each page, pixel by pixel, but look at what element is visible at the point on the page, not the color/RGB values. We then do a simple measure of how closely these pixel measurements match. These so-called "quality bots" generate a score of 0-100%, where 100% means all measurements were identical.<br /><br />When we performed the runs, the vast majority (~95%) of all comparisons were almost identical, like we hoped. Where the pages were different, we built a web page that showed the differences between the two pages by rendering both images and highlighting the difference. It was quick and easy for the developers to visually verify that the differences were only due to content or other non-structural differences in the rendering.  Any time test automation is scalable, repeatable, and quantified, and developers can validate the results without us, that is a good thing!<br /><br />How did this testing get organized? As with many things in testing at Google, it came down to people chatting and realizing their work can be helpful for other engineers.  This was bottom up, not top down. Tejas Shah was working on a general quality bot solution for compatibility (more on that in later posts) between Chrome and other browsers. He chatted with the Instant Pages developers when he was visiting their building and they agreed his bot might be able to help. He then spent the next couple of weeks pulling it all together and sharing the results with the team. <br /><br />And now more applications of the quality bot are surfacing. What if we kept the browser version fixed, and only varied the version of the application? Could this help validate web applications independent of a functional spec and without custom validation script development and maintenance?  Stay tuned...<div class="blogger-post-footer"><img width='1' height='1' src='https://blogger.googleusercontent.com/tracker/15045980-5950025234592171231?l=googletesting.blogspot.com' alt='' /></div>]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-testing/how-we-tested-google-instant-pages/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>GTAC: Call for Team Attendance</title>
		<link>https://googledata.org/uncategorized/gtac-call-for-team-attendance/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=gtac-call-for-team-attendance</link>
		<comments>https://googledata.org/uncategorized/gtac-call-for-team-attendance/#comments</comments>
		<pubDate>Thu, 07 Jul 2011 19:54:00 +0000</pubDate>
		<dc:creator><![CDATA[James Whittaker]]></dc:creator>
				<category><![CDATA[Google Testing]]></category>
		<category><![CDATA[Uncategorized]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=a5826d8614cb609b8172f2df892829d2</guid>
		<description><![CDATA[Attending conferences can be a great way to network and learn new concepts. However, taking those concepts back to your office and trying to convince your team to apply them can be daunting. In order to make GTAC attendees more successful at implementing ...]]></description>
				<content:encoded><![CDATA[Attending conferences can be a great way to network and learn new concepts. However, taking those concepts back to your office and trying to convince your team to apply them can be daunting. In order to make GTAC attendees more successful at implementing what they learn at this conference, we are going to give preference to teammates from the same company applying for attendance. Bring another developer or tester (or two or three) and attend as a team so you can discuss what you learn and experience, hopefully increasing your chances of putting it into practice when you return to work.<br /><br />We're extending the deadline for attendees until the end of July to give you a chance to round up some teammates.<div class="blogger-post-footer"><img width='1' height='1' src='https://blogger.googleusercontent.com/tracker/15045980-7880166314041083040?l=googletesting.blogspot.com' alt='' /></div>]]></content:encoded>
			<wfw:commentRss>https://googledata.org/uncategorized/gtac-call-for-team-attendance/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Google at STAR West 2011</title>
		<link>https://googledata.org/uncategorized/google-at-star-west-2011/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=google-at-star-west-2011</link>
		<comments>https://googledata.org/uncategorized/google-at-star-west-2011/#comments</comments>
		<pubDate>Tue, 28 Jun 2011 21:29:00 +0000</pubDate>
		<dc:creator><![CDATA[James Whittaker]]></dc:creator>
				<category><![CDATA[Google Testing]]></category>
		<category><![CDATA[Uncategorized]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=ae88428c533dd8243f9422de3ac7956b</guid>
		<description><![CDATA[By James Whittaker: STAR West will feature something unprecedented this year: back-to-back tutorials by Googlers plus a keynote and track session. The tutorials will be Monday October 3. I have the morning session on "How Google Tests Software" and my col...]]></description>
				<content:encoded><![CDATA[By James Whittaker<br /><br />STAR West will feature something unprecedented this year: back-to-back tutorials by Googlers plus a keynote and track session.<br /><br />The tutorials will be Monday, October 3. I have the morning session on "How Google Tests Software" and my colleague Ankit Mehta has the afternoon session on "Testing Rich Internet AJAX-based Applications." You can spend the whole day in Google Test Land.<br /><br />I highly recommend Ankit's tutorial. He is one of our top test managers and has spent years minding Gmail as it grew up from a simple cloud-based email system into the mass-scale, ubiquitous rich web app that it is today. Ankit now leads all testing efforts around our social offerings (which are already starting to appear). Anyone struggling to automate the testing of rich web apps will have plenty to absorb in his session. He's not spouting conjecture and generalities; he's speaking from the position of actual accomplishment. Bring a laptop. <br /><br />Jason Arbon and Sebastian Schiavone are presenting a track talk on "Google's New Methodology for Risk Driven Testing" and will be demonstrating some of the latest tools coming out of Google Test Labs, tools born of real need and built to serve it. I am expecting free samples! Jason was test lead for Chrome and Chrome OS before taking over Google Test Labs, where incredibly clever code is woven into useful test tools. Sebastian is none other than my TPM (technical program manager) who is well known for taking my vague ideas about how things should be done and making them real. <br /><br />Oh, and the keynote, well, that's me again: something about testing getting in the way of quality. I wrote this talk while I was in an especially melancholy mood about my place in the universe. It's a wake-up call to testers: the world is changing and your relevance is calling ... 
will you answer the call or ignore it and pretend that yesterday is still today?<div class="blogger-post-footer"><img width='1' height='1' src='https://blogger.googleusercontent.com/tracker/15045980-6544260587721157512?l=googletesting.blogspot.com' alt='' /></div>]]></content:encoded>
			<wfw:commentRss>https://googledata.org/uncategorized/google-at-star-west-2011/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Lessons in a 21st Century Tech Career: Failing Fast, 20% Time and Project Mobility</title>
		<link>https://googledata.org/google-testing/lessons-in-a-21st-century-tech-career-failing-fast-20-time-and-project-mobility/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=lessons-in-a-21st-century-tech-career-failing-fast-20-time-and-project-mobility</link>
		<comments>https://googledata.org/google-testing/lessons-in-a-21st-century-tech-career-failing-fast-20-time-and-project-mobility/#comments</comments>
		<pubDate>Thu, 23 Jun 2011 21:05:00 +0000</pubDate>
		<dc:creator><![CDATA[James Whittaker]]></dc:creator>
				<category><![CDATA[Google Testing]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=5345d201d1a96a69f0223aebb7953b08</guid>
		<description><![CDATA[By James Whittaker: If your name is Larry Page, stop reading this now. Let me first admit that as I write this I am sitting in a company lounge reminiscent of a gathering room in a luxury hotel with my belly full of free gourmet food waiting for a meetin...]]></description>
				<content:encoded><![CDATA[By James Whittaker<br /><br />If your name is Larry Page, stop reading this now. <br /><br />Let me first admit that as I write this I am sitting in a company lounge reminiscent of a gathering room in a luxury hotel with my belly full of free gourmet food waiting for a meeting with the lighthearted title "Beer and Demos" to start. <br /><br />Let me secondly admit that none of this matters. It's all very nice, and I hope it continues in perpetuity, but it doesn't matter. Engineers don't need to be spoiled rotten to be happy. The spoiling of engineers has little to do with the essence of a 21st century tech career. <br /><br />Now, what exactly does matter? What is the essence of a 21st century tech career that keeps employees loyal and engaged with productivity that would shame the most seasoned agile-ist? I don't yet have the complete story, but here are three important ingredients:<br /><br /><b>Failing Fast</b>. Nothing destroys morale more than a death march. Projects going nowhere <i>should do so with the utmost haste</i>. The ability of a company to implode pet projects quickly correlates directly with being a great place to work. Engineers working on these projects not only gain valuable engineering experience, they experience first-hand the company's perception of what is important (and, in the case of their project, what is not important). It's a built-in lesson on company priorities and it ensures good engineers don't get monopolized by purposeless projects. You gotta like a company willing to experiment. You have to love a company willing to laugh at itself when the experiments don't pan out.<br /><br /><b>20% Time</b>. Any company worth working for has any number of projects that are worth working on. It's frustrating for many super-sharp engineers to see cool work going on down the hall or in the next building and not be part of it. A day job that takes all day is tiresome. 
Enter 20% time, a concept meant to send a strong message to all engineers: <i>you always have a spare day</i>. Use it wisely.<br /><br /><b>Project Mobility</b>. Staying fresh by changing projects is part of mobility. Continuous cycling of fresh ideas from new project members to existing projects is another part. The downside here is obviously projects with a steep learning curve but I scoff in the general direction of this idea. Whose fault is it when a wicked smart engineer can't learn the system fast enough to be useful in some (even a small) context? Only the weakest organization with the poorest documentation can use that excuse. The only good reason for keeping people on a project is because they have no desire to leave. <br /><br />These three concepts are better than all the lounges and free food any company can provide. Here's an example, a real example, of how it worked recently for an employee I'll call Paul (because that happens to be his name!). <br /><br />Paul joined Google a little over a year ago and spent two months on a project that was then cancelled. He learned enough to be useful anywhere but was new enough that he really didn't have great context on what project he wanted next. Solution: I assigned him to a project that was a good skill set match.<br /><br />Less than a year later, his new project ships. He played an important role in making this happen but in that time he also realized that the role was leaning toward feature development and he was more interested in a pure test development role. However, he was steeped in post-ship duties and working on the next release. A cycle that, happily, can be broken pretty easily here. <br /><br />Another project had a test developer opening that suited Paul perfectly. He immediately signed up for 20% on this new project and spent his 80% ramping down in his old project. At some point these percentages will trade places and he'll spend 20% of his time training his replacement on the old project. 
This is a friction-less process. His manager cannot deny him <i>his day</i> to do as he pleases and now he can spend his time getting off the critical path of his old project and onto the critical path of his new project. <br /><br />Mobility means a constant stream of openings on projects inside Google. It also creates a population of engineering talent with an array of project experiences and a breadth of expertise to fill those positions. 20% time is a mechanism for moving onto and off of projects without formal permissions, interviews and other make-work processes engineers deplore. <br /><br />Let's face it, most benefits are transient. I enjoy a good meal for the time it is in front of me. I enjoy great medical when I am sick. I appreciate luxury when I have time for it. Even my paycheck comes with such monotonous regularity that it is an expectation that brings little joy apart from the brief moment my bank balance takes that joyful upward tick. But if I am unhappy the rest of the day, none of those islands of pampering mean squat. Empower me as an engineer during the much larger blocks of my time when I am doing engineering. Feed my creativity. Remove the barriers that prevent me from working on the things I want to work on. <br /><br />Do these things and you have me. Do these things and you make my entire work day better. This is the essence of a 21st century tech career: <b>make the hours I spend working better</b>. Anything more is so dot com. <br /><br />Ok, Larry you can start reading again.<div class="blogger-post-footer"><img width='1' height='1' src='https://blogger.googleusercontent.com/tracker/15045980-9007507255851290164?l=googletesting.blogspot.com' alt='' /></div>]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-testing/lessons-in-a-21st-century-tech-career-failing-fast-20-time-and-project-mobility/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>GTAC 2011 Keynotes</title>
		<link>https://googledata.org/google-testing/gtac-2011-keynotes/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=gtac-2011-keynotes</link>
		<comments>https://googledata.org/google-testing/gtac-2011-keynotes/#comments</comments>
		<pubDate>Thu, 16 Jun 2011 18:56:00 +0000</pubDate>
		<dc:creator><![CDATA[James Whittaker]]></dc:creator>
				<category><![CDATA[Google Testing]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=f6bfa10eba1aa4dc2986c9cbef906021</guid>
		<description><![CDATA[By James WhittakerI am pleased to confirm 3 of our keynote speakers for GTAC 2011 at the Computer History Museum in Mountain View CA.Google's own Alberto Savoia, aka Testivus.Steve McConnell the best selling author of Code Complete and CEO of Construx ...]]></description>
				<content:encoded><![CDATA[By James Whittaker<br /><br />I am pleased to confirm 3 of our keynote speakers for GTAC 2011 at the Computer History Museum in Mountain View, CA.<br /><br />Google's own <b>Alberto Savoia</b>, aka Testivus.<br /><br /><b>Steve McConnell</b>, the best-selling author of <i>Code Complete</i> and CEO of Construx Software.<br /><br />Award-winning speaker ("the Jon Stewart of Software Security") <b>Hugh Thompson</b>.<br /><br />This is the start of an incredible lineup. Stay tuned for updates concerning their talks and continue to nominate additional speakers and keynotes. We're not done yet and we're taking nominations through mid-July.<br /><br />In addition to the keynotes, we're going to be giving updates on How Google Tests Software from teams across the company including Android, Chrome, Gmail, YouTube and many more.<div class="blogger-post-footer"><img width='1' height='1' src='https://blogger.googleusercontent.com/tracker/15045980-799441657065972719?l=googletesting.blogspot.com' alt='' /></div>]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-testing/gtac-2011-keynotes/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>How Google Tests Software &#8211; Part Seven</title>
		<link>https://googledata.org/uncategorized/how-google-tests-software-part-seven/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=how-google-tests-software-part-seven</link>
		<comments>https://googledata.org/uncategorized/how-google-tests-software-part-seven/#comments</comments>
		<pubDate>Thu, 26 May 2011 20:48:00 +0000</pubDate>
		<dc:creator><![CDATA[James Whittaker]]></dc:creator>
				<category><![CDATA[Google Testing]]></category>
		<category><![CDATA[Uncategorized]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=7422042e45df103f7e7462d697798104</guid>
		<description><![CDATA[By James WhittakerThe Life of a TEThe Test Engineer is a newer role within Google than either SWEs or SETs. As such, it is a role still in the process of being defined. The current generation of Google TEs are blazing a trail which will guide the next ...]]></description>
				<content:encoded><![CDATA[By James Whittaker<br /><br /><span style="font-weight:bold;">The Life of a TE</span><br /><br />The Test Engineer is a newer role within Google than either SWEs or SETs. As such, it is a role still in the process of being defined. The current generation of Google TEs are blazing a trail which will guide the next generation of new hires for this role. It is the process that is emerging as the best within Google that we present here.<br /><br />Not all products require the services of a TE. Experimental efforts and early-stage products without a well-defined mission or user story are certainly projects that won’t get a lot of TE attention. If the product stands a good chance of being cancelled (in the sense that as a proof of concept it fails to pass muster) or has yet to engage users or have a well-defined set of features, testing it is largely something that should be done by the people developing it.<br /><br />Even if it is clear that a product is going to get shipped, Test Engineers have little to do early in the development cycle when features are still in flux and the final feature list and scope are undetermined. Overinvesting in testing too early can mean a lot of things get thrown away. Likewise, early test planning requires fewer test engineers than later-cycle exploratory testing, when the product is close to final form and the hunt for missed bugs has a greater urgency.<br /><br />The trick in staffing a project with Test Engineers has to do with risk and return on investment. Risk to the customer and to the enterprise means more testing effort and requires more TEs. But that effort needs to be in proportion to the potential return. We need the right number of TEs and we need them to engage at the right time and with the right impact.<br /><br />Once engaged, TEs do not have to start from scratch. 
There is a great deal of test engineering and quality-oriented work performed by SWEs and SETs which is the starting point for additional TE work. The initial engagement of the TE is to decide things such as:<br /><br />·       Where are the weak points in the software?<br /><br />·       What are the security, privacy, performance and reliability concerns?<br /><br />·       Do all the primary user scenarios work as expected? For all international audiences?<br /><br />·       Does the product interoperate with other products (hardware and software)?<br /><br />·       In the event of a problem, how good are the diagnostics?<br /><br />All of this combines to speak to the risk profile of releasing the software in question. TEs don’t necessarily do all of this work, but they ensure that it gets done and they leverage the work of others in assessing where additional work is required. Ultimately, test engineers are paid to protect users and the business from bad design, confusing UX, functional bugs, security and privacy issues and so forth. At Google, TEs are the only people on a team whose full-time job is to look at the product or service holistically for weak points. As such, the life of a Test Engineer is much less prescriptive and formalized than that of an SET. TEs are asked to help on projects in all stages of readiness: everything from the idea stage to version 8, or even watching over a deprecated or “mothballed” project. Often, a single TE will even span multiple projects, particularly those requiring specialty skills like security.<br /><br />Obviously, the work of a TE varies greatly depending on the project. Some TEs spend much of their time programming, much like an SET, but with more of a focus on end-to-end user scenarios. Other TEs take existing code and designs, determine failure modes, and look for errors that will cause those failures. In such a role a TE might modify code but not create it from scratch. 
TEs must be more systematic and thorough in their test planning and completeness, with a focus on actual usage and the system experience. TEs excel at dealing with ambiguity in requirements and at reasoning and communicating about fuzzy problems.<br /><br />Successful TEs accomplish all this while navigating the sensitivities and sometimes strong personalities of the development and product team members. When weak points are found, test engineers happily break the software and drive to get these issues resolved with the SWEs, PMs, and SETs.<br /><br />Such a job description is a frightening prospect given the mix of technical skill, leadership, and deep product understanding it requires, and without proper guidance it is a role in which many would expect to fail. But at Google a strong community of test engineers has emerged to counter this. Of all job functions, the TE role is perhaps the best peer-supported role in the company, and the insight and leadership required to perform it successfully means that many of the top test managers in the company come from the TE ranks.<br /><br />There is a fluidity to the work of a Google Test Engineer that belies any prescriptive process for engagement. TEs can enter a project at any point and must assess the state of the project, code, design, and users quickly and decide what to focus on first. If the project is just getting started, test planning is often the first order of business. Sometimes TEs are pulled in late in the cycle to evaluate whether a project is ready for ship or if there are any major issues before an early ‘beta’ goes out. If they are brought into a newly acquired application or one in which they have little prior experience, they will often start doing some exploratory testing with little to no planning. Sometimes projects haven’t been released for quite a while and just need some touchups, security fixes, or UX updates, calling for an altogether different approach. 
One size rarely fits all for TEs at Google.<div class="blogger-post-footer"><img width='1' height='1' src='https://blogger.googleusercontent.com/tracker/15045980-8835282394141014874?l=googletesting.blogspot.com' alt='' /></div>]]></content:encoded>
			<wfw:commentRss>https://googledata.org/uncategorized/how-google-tests-software-part-seven/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>GTAC 2011 Open for Submission</title>
		<link>https://googledata.org/uncategorized/gtac-2011-open-for-submission/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=gtac-2011-open-for-submission</link>
		<comments>https://googledata.org/uncategorized/gtac-2011-open-for-submission/#comments</comments>
		<pubDate>Fri, 20 May 2011 21:58:00 +0000</pubDate>
		<dc:creator><![CDATA[James Whittaker]]></dc:creator>
				<category><![CDATA[Google Testing]]></category>
		<category><![CDATA[Uncategorized]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=023569d2b3f1b2bcf0b62484ff879772</guid>
		<description><![CDATA[By James WhittakerI am happy to announce that GTAC 2011 is now open for nominations. We're going to try and have an executive session, depending on interest, the afternoon/evening of October 25th at the Googleplex in Mountain View. This session is inte...]]></description>
				<content:encoded><![CDATA[By James Whittaker<br /><br />I am happy to announce that <a href="http://www.gtac.biz">GTAC 2011</a> is now <a href="https://spreadsheets0.google.com/a/google.com/spreadsheet/viewform?formkey=dGYzU3V6d25LamhNeU5NYjdxMTVfOUE6MA">open for nominations</a>. We're going to try to have an executive session, depending on interest, the afternoon/evening of October 25th at the Googleplex in Mountain View. This session is intended for top testing decision makers at top web, technology and software companies worldwide. It will be a chance for frank and open discussion about our own and the industry's collective challenges. It's intended to be a meeting of key decision makers and budget owners to share information and ideas and, with a little luck, spur collaborations that will be good for the testing industry overall. Nominate your executive <a href="https://spreadsheets.google.com/a/google.com/spreadsheet/viewform?formkey=dFp0Q3hnR1BweUpkTkJqbGltN25TMFE6MQ">here</a>. <br /><br />The general session is by invitation only and prospective attendees and speakers must register and be selected. Speaker nominees are encouraged to point us to videos of prior presentations and any other material to help make our decision easier. <br /><br />Please leave comments if there is some technology, tool or product you want to hear about so we end up with the best possible agenda. <br /><br />I hope to see a lot of our readers in Mountain View in October!<div class="blogger-post-footer"><img width='1' height='1' src='https://blogger.googleusercontent.com/tracker/15045980-3376970712867282598?l=googletesting.blogspot.com' alt='' /></div>]]></content:encoded>
			<wfw:commentRss>https://googledata.org/uncategorized/gtac-2011-open-for-submission/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>How Google Tests Software &#8211; A Break for Q&amp;A</title>
		<link>https://googledata.org/uncategorized/how-google-tests-software-a-break-for-qa/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=how-google-tests-software-a-break-for-qa</link>
		<comments>https://googledata.org/uncategorized/how-google-tests-software-a-break-for-qa/#comments</comments>
		<pubDate>Wed, 04 May 2011 21:27:00 +0000</pubDate>
		<dc:creator><![CDATA[James Whittaker]]></dc:creator>
				<category><![CDATA[Google Testing]]></category>
		<category><![CDATA[Uncategorized]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=b738452d68460051769f72451df81917</guid>
		<description><![CDATA[By James WhittakerNew material for this series is coming more slowly. I am beginning to get into areas where I want to start posting screen shots of internal Google tools and describe how our infrastructure works. This is material that takes longer...]]></description>
				<content:encoded><![CDATA[By James Whittaker<br /><br />New material for this series is coming more slowly. I am beginning to get into areas where I want to start posting screen shots of internal Google tools and describe how our infrastructure works. This is material that takes longer to develop and also requires some scrutiny before being published externally. So in the meantime, I am pausing to answer some of the questions you've posted in the comments. <br /><br />I am going to start with <a href="http://www.blogger.com/profile/10077447057975752582">Lilia</a> (because she likes Neil Young mainly, but also because she can run further than me and those two things combine to impress me to no small end) who asks about SET-SWE conversion and vice versa, and which I have seen the most. There is also the broader question of whether there is a ceiling on the SET career path. <br /><br />SETs and SWEs are on the same pay scale and virtually the same job ladder. Both roles are essentially 100% coding roles with the former writing test code and the latter doing feature development. From a coding perspective the skill set is a dead match. From a testing perspective we expect a lot more from SETs. But the overlap on coding makes SETs a great fit for SWE positions and vice versa. Personally I think it is a very healthy situation to have conversions. Since I have both roles reporting to me I can speak from firsthand experience that many of my best coders are former SETs and some of my best testers are former SWEs. Each is excellent training ground for the other. On my specific team, conversions from one role to the other are about even. But I suspect that Google-wide there are more SETs who become SWEs. <br /><br />Why convert in the first place? Well, at Google it isn't for the money. It also isn't for the prestige as we have a lot more SWEs than SETs and it is a lot harder to stand out. The scarcity of our SETs creates somewhat of a mystique about these folk. 
Who are these rare creatures who keep our code bases healthy and make our development process run so smoothly? Actually, most SWEs care more about making the SETs happy so they continue doing what they do. Why would any dev team force a conversion of a great developer from SET to SWE when finding a suitable SET replacement is so much harder than adding another feature developer? SWEs ain't that stupid. <br /><br />Now pausing before I take another hit of the corp kool-aid, let me be honest and say that there are far more senior SWEs than SETs. Percentage-wise, we test folk are more outnumbered at the top of the org than at the middle and bottom. But keep in mind that developers have had a large head start on us. We have developers who have been at Google since our founding and testers ... well ... less time than that. <br /><br />Where do TEs fit into this mix? TE is an even newer role than SET but already we have a number climbing to the Staff ranks and pushing on the most senior positions in the company. There is no ceiling, but the journey to the top takes some time. <br /><br /><a href="http://www.blogger.com/profile/15848882648778799408">Raghev</a>, among others, has asked about the career path and whether remaining an IC (individual contributor) is an option over becoming a manager. I have mixed feelings about answering this. As a manager myself, I see the role as one with much honor and yet I hear in your collective voices a hint of <span style="font-style:italic;">why do I have to become a manager?</span> Ok, I admit, Dilbert is funny. <br /><br />For me, being a manager is a chance to impart some of my experience and old-guy judgment on less experienced but more technically gifted ICs. The combination of an experienced manager's vision and an IC's technical skill can be a fighting force of incredible power. And yet, why should someone who does not want to manage be forced to do so in order to continue their career advancement? 
<br /><br />Well, fortunately, Google does not make us choose. Our managers are expected to have IC tasks they perform. They are expected to be engaged technically and lead as opposed to just manage. And our ICs are expected to have influence beyond their personal work area. When you get to the senior/staff positions here you are a leader, period. Some leaders lead more than they manage and some leaders manage more than they lead. <br /><br />But either way, the view from the top means that a lot of people are looking to you for direction ... whether you manage them or not.<div class="blogger-post-footer"><img width='1' height='1' src='https://blogger.googleusercontent.com/tracker/15045980-7478318051965497241?l=googletesting.blogspot.com' alt='' /></div>]]></content:encoded>
			<wfw:commentRss>https://googledata.org/uncategorized/how-google-tests-software-a-break-for-qa/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>How Google Tests Software &#8211; Part Six</title>
		<link>https://googledata.org/uncategorized/how-google-tests-software-part-six/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=how-google-tests-software-part-six</link>
		<comments>https://googledata.org/uncategorized/how-google-tests-software-part-six/#comments</comments>
		<pubDate>Mon, 02 May 2011 19:05:00 +0000</pubDate>
		<dc:creator><![CDATA[James Whittaker]]></dc:creator>
				<category><![CDATA[Google Testing]]></category>
		<category><![CDATA[Uncategorized]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=faa7dc5543860dfa3a540b1157094f2a</guid>
		<description><![CDATA[By James WhittakerThe Life of an SETSETs are Software Engineers in Test. They are software engineers who happen to write testing functionality. First and foremost, SETs are developers and the role is touted as a 100% coding role in our recruiting liter...]]></description>
				<content:encoded><![CDATA[By James Whittaker<br /><br /><span style="font-weight:bold;">The Life of an SET</span><br /><br />SETs are Software Engineers in Test. They are software engineers who happen to write testing functionality. First and foremost, SETs are developers and the role is touted as a 100% coding role in our recruiting literature and internal job promotion ladders. When SET candidates are interviewed, the “coding bar” is nearly identical to the SWE role, with added emphasis on ensuring that SETs know how to test the code they create. In other words, both SWEs and SETs answer coding questions. SETs are expected to nail a set of testing questions as well.<br /><br />As you might imagine, it is a difficult role to fill and it is entirely possible that the low number of SETs isn’t because Google has created a magic formula for productivity but more of a result of adapting our engineering practice around the reality that the SET skill set is really hard to find. We optimize around this very important task and build processes around the people who do it. <br /><br />It is usually the case that SETs are not involved early in the design phase. Their exclusion is not so much purposeful as it is a by-product of how a lot of Google projects are born. A common scenario for new project creation is that some informal 20% effort takes on a life of its own as an actual Google-branded product. Gmail and Chrome OS are both projects that started out as ideas that were not formally mandated by Google but over time grew into shipping products with teams of developers and testers working on them. In such cases, early development is not about quality; it is about proving out a concept and working on things like scale and performance that must be right before quality could even be an issue. If you can't build a web service that scales, testing is not your biggest problem! 
<br /><br />Once it is clear that a product can and will be built and shipped, that's when the development team seeks out test involvement.<br /><br />You can imagine a process like this: someone has an idea, they think about it, write experimental code, seek out opinions of others, write some more code, get others involved, write even more code, realize they are onto something important, write more code to mold the idea into something that they can present to others to get feedback ... somewhere in all this an actual project is created in Google's project database and the project becomes real. Testers don't get involved until it becomes real. <br /><br />Do all real projects get testers? Not by default. Smaller projects and those meant for limited users often get tested exclusively by the people who build them. Others that are riskier to our users or the enterprise (much more about risk later) get testing attention.<br /><br />The onus is on the development teams to solicit help from testers and convince them that their project is exciting and full of potential. Dev Directors explain their project, progress and ship schedule to Test Directors, who then discuss how the testing burden is to be shared and agree on things like SWE involvement in testing, expected unit testing levels and how the duties of the release process are going to be shared. SETs may not be involved at project inception, but once the project becomes real we have vast influence over how it is to be executed.<br /><br />And when I say "testing" I don't just mean exercising code paths. Testers might not be involved from the beginning ... but <span style="font-style:italic;">testing</span> is. In fact, an SET's impact is felt even before a developer manages to check code into the build. 
Stay tuned to understand what I am talking about.<div class="blogger-post-footer"><img width='1' height='1' src='https://blogger.googleusercontent.com/tracker/15045980-1629192441684012507?l=googletesting.blogspot.com' alt='' /></div>]]></content:encoded>
			<wfw:commentRss>https://googledata.org/uncategorized/how-google-tests-software-part-six/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>GTAC 2011: Cloudy with a Chance of Tests</title>
		<link>https://googledata.org/uncategorized/gtac-2011-cloudy-with-a-chance-of-tests/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=gtac-2011-cloudy-with-a-chance-of-tests</link>
		<comments>https://googledata.org/uncategorized/gtac-2011-cloudy-with-a-chance-of-tests/#comments</comments>
		<pubDate>Tue, 26 Apr 2011 19:31:00 +0000</pubDate>
		<dc:creator><![CDATA[James Whittaker]]></dc:creator>
				<category><![CDATA[Google Testing]]></category>
		<category><![CDATA[Uncategorized]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=f85d709b973168544fd52f1526fa7657</guid>
		<description><![CDATA[By James WhittakerHas it only been 179 days since the last GTAC? My how time flies when you have lots of testing to do!It is my pleasure to announce that not only is our collective attention being drawn back to this most intriguing test conference but ...]]></description>
				<content:encoded><![CDATA[By James Whittaker<br /><br />Has it only been 179 days since the last GTAC? My, how time flies when you have lots of testing to do!<br /><br />It is my pleasure to announce that not only is our collective attention being drawn back to this most intriguing test conference but yours truly, along with "Shoeless" Brad Green, has been asked to plan it. Given that Brad and I both spend a lot of time with our heads in the Cloud, this year's theme is <span style="font-style:italic;">cloudy with a chance of tests</span>. <br /><br />The composite meaning of this theme is purposeful: Testing apps that reside in the Cloud is itself a cloudy, as in opaque, process. Clouds on the horizon often signal change and testing in the Cloud certainly changes things. The Cloud breaks old testing paradigms and tools, requiring that even the tried-and-true be rethought and recast. And, yes, the future of testing itself is cloudy and the need for testers and testing as it exists today is unclear. There are clouds gathering on the horizons of the discipline and this conference will be dedicated to interpreting their meaning and planning for their arrival. As always, GTAC will attempt to bring together people who have thought deeply about these subjects and are responsible for actual progress, technology and insights that will benefit others in the community. <br /><br />We are still in the very earliest stages of planning, but here are the details as we know them:<br /><br /><span style="font-weight:bold;">GTAC 2011</span> will be in <span style="font-weight:bold;">Mountain View, CA</span> the week of October 25th and will be held at the <a href="http://www.computerhistory.org/">Computer History Museum</a>. GTAC or no GTAC, this place is well worth a visit, and as an actual conference venue it just might be the coolest location for GTAC ever. 
<br /><br />As in the past, GTAC 2011 will feature a single track, all-keynote format with both internal and external speakers. We will be soliciting feedback from potential attendees about what topics and speakers are the most interesting. Our opening keynote has been determined already and it is none other than our most famous Alberto Savoia, translator of the ancient tome <span style="font-style:italic;"><a href="http://www.agitar.com/downloads/TheWayOfTestivus.pdf">The Way of Testivus</a></span>, and agitator extraordinaire with quotes like "<a href="http://pretotyping.blogspot.com/">Building the right 'it' is more important than building 'it' right.</a>" If anyone can gather the clouds into a hurricane, it's Alberto. You can also expect updates on our open source test tools strategy. <br /><br />One important addition we seek to make this year is to have a <span style="font-weight:bold;">Test Executive Session</span> sometime during the event. The idea is to gather top decision makers and budget owners at the biggest/best/most influential web companies on the planet. It will be a discussion about testing culture, organizational structure, technology deployment, innovation and so forth by the very people who can make change happen within their company. Our readers will be asked to nominate their Directors, VPs, SVPs and so forth who own their companies' testing charter.<br /><br />Stay tuned!<div class="blogger-post-footer"><img width='1' height='1' src='https://blogger.googleusercontent.com/tracker/15045980-5586833814754184673?l=googletesting.blogspot.com' alt='' /></div>]]></content:encoded>
			<wfw:commentRss>https://googledata.org/uncategorized/gtac-2011-cloudy-with-a-chance-of-tests/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>The SET Career Path</title>
		<link>https://googledata.org/uncategorized/the-set-career-path/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=the-set-career-path</link>
		<comments>https://googledata.org/uncategorized/the-set-career-path/#comments</comments>
		<pubDate>Tue, 05 Apr 2011 16:42:00 +0000</pubDate>
		<dc:creator><![CDATA[James Whittaker]]></dc:creator>
				<category><![CDATA[Google Testing]]></category>
		<category><![CDATA[Uncategorized]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=469c3f9ab5c5a962582514a9d87cf365</guid>
		<description><![CDATA[By James WhittakerI've had a number of questions about the SET role and it seems I have confused folks when I say that the SWE is a tester and the SET is a tester and at the same time the SWE is a developer and the SET is a developer. What could possib...]]></description>
				<content:encoded><![CDATA[By James Whittaker<br /><br />I've had a number of questions about the SET role and it seems I have confused folks when I say that the SWE is a tester and the SET is a tester and at the same time the SWE is a developer and the SET is a developer. What could possibly be confusing about that?<br /><br />Oh, yeah. Right.<br /><br />My next series of posts is going to detail the role of the SET and all will eventually be clear, but some clarification on career path seems worthwhile. <br /><br />SETs are developers who write test code and automation as their primary task. They are in every sense of the word a developer. When we interview SETs, SWEs are on the interview loop and SWE questions are asked. They are not all of the interview, but they are part of it. <br /><br />This means that the skill set that our SETs possess makes them perfect candidates for switching to the SWE role. There is neither incentive nor deterrent to do so. SETs and SWEs are on the same pay scale and bonus structure (I have both roles reporting to me so I have real visibility into salary data) and their promotion velocity (again based on actual data) is roughly equivalent. This means that SETs have no outside influences to prompt them one way or the other. <br /><br />The key factor is really the type of work you are doing. SETs who find themselves involved in SWE work usually convert to SWE. SWEs are also drawn in the opposite direction. Much of this happens through our 20% time work. Any SET interested in SWE work can take on a 20% task doing feature development. Any SWE interested in automation can find a group and sign up for a 20%. Right now I have both SWEs and SETs involved in such cross-pollination. <br /><br />The ideal situation is that the title reflects the actual work that you are involved in. So if an SET starts doing more feature dev work than automation, he or she should convert; the same goes for SWEs doing automation work. 
In my time here, conversions in both directions have happened, but it is not all that common. The work of both roles is engaging, interesting and intense. Few Googlers are walking around bored. <br /><br />Bottom line: do the work you are passionate about and capable of and the right job title will find you.<div class="blogger-post-footer"><img width='1' height='1' src='https://blogger.googleusercontent.com/tracker/15045980-1278831857312867460?l=googletesting.blogspot.com' alt='' /></div>]]></content:encoded>
			<wfw:commentRss>https://googledata.org/uncategorized/the-set-career-path/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>How Google Tests Software &#8211; Part Five</title>
		<link>https://googledata.org/google-testing/how-google-tests-software-part-five/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=how-google-tests-software-part-five</link>
		<comments>https://googledata.org/google-testing/how-google-tests-software-part-five/#comments</comments>
		<pubDate>Thu, 24 Mar 2011 03:27:00 +0000</pubDate>
		<dc:creator><![CDATA[James Whittaker]]></dc:creator>
				<category><![CDATA[Google Testing]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=8e8af13b82a69c11668feccf46c8aae8</guid>
		<description><![CDATA[By James WhittakerInstead of distinguishing between code, integration and system testing, Google uses the language of small, medium and large tests emphasizing scope over form. Small tests cover small amounts of code and so on. Each of the three engine...]]></description>
				<content:encoded><![CDATA[By James Whittaker<br /><br />Instead of distinguishing between code, integration and system testing, Google uses the language of small, medium and large tests, emphasizing scope over form. Small tests cover small amounts of code and so on. Each of the three engineering roles may execute any of these types of tests and they may be performed as automated or manual tests. <br /><br /><span style="font-weight:bold;">Small Tests</span> are mostly (but not always) automated and exercise the code within a single function or module. They are most likely written by an SWE or an SET and may require mocks and faked environments to run, but TEs often pick these tests up when they are trying to diagnose a particular failure. For small tests the focus is on typical functional issues such as data corruption, error conditions and off-by-one errors. The question a small test attempts to answer is does this code do what it is supposed to do?<br /><br /><span style="font-weight:bold;">Medium Tests</span> can be automated or manual and involve two or more features and specifically cover the interaction between those features. I've heard any number of SETs describe this as "testing a function and its nearest neighbors." SETs drive the development of these tests early in the product cycle as individual features are completed and SWEs are heavily involved in writing, debugging and maintaining the actual tests. If a test fails or breaks, the developer takes care of it autonomously. Later in the development cycle TEs may perform medium tests either manually (in the event the test is difficult or prohibitively expensive to automate) or with automation. The question a medium test attempts to answer is does a set of near-neighbor functions interoperate with each other the way they are supposed to?<br /><br /><span style="font-weight:bold;">Large Tests</span> cover three or more (usually more) features and represent real user scenarios to the extent possible. 
There is some concern with overall integration of the features but large tests tend to be more results-driven, i.e., did the software do what the user expects? All three roles are involved in writing large tests and everything from automation to exploratory testing can be the vehicle to accomplish it. The question a large test attempts to answer is does the product operate the way a user would expect?<br /><br />The actual language of small, medium and large isn’t important. Call them whatever you want. The important thing is that Google testers share a common language to talk about what is getting tested and how those tests are scoped. When some enterprising testers began talking about a fourth class they dubbed <span style="font-style:italic;">enormous</span> every other tester in the company could imagine a system-wide test covering nearly every feature and that ran for a very long time. No additional explanation was necessary. <br /><br />Deciding what gets tested and how much is a very dynamic process that varies wildly from product to product. Google prefers to release often and leans toward getting a product out to users so we can get feedback and iterate. The general idea is that if we have developed some product or a new feature of an existing product we want to get it out to users as early as possible so they may benefit from it. This requires that we involve users and external developers early in the process so we have a good handle on whether what we are delivering is hitting the mark. <br /><br />Finally, the mix between automated and manual testing definitely favors the former for all three sizes of tests. If it can be automated and the problem doesn’t require human cleverness and intuition, then it should be automated. 
Only those problems, in any of the above categories, which specifically require human judgment, such as the beauty of a user interface or whether exposing some piece of data constitutes a privacy concern, should remain in the realm of manual testing. <br /><br />Having said that, it is important to note that Google performs a great deal of manual testing, both scripted and exploratory, but even this testing is done under the watchful eye of automation. Industry leading recording technology converts manual tests to automated tests to be re-executed build after build to ensure minimal regressions and to keep manual testers always focusing on new issues. We also automate the submission of bug reports and the routing of manual testing tasks. For example, if an automated test breaks, the system determines the last code change that is the most likely culprit, sends email to its authors and files a bug. The ongoing effort to automate to within the “last inch of the human mind” is currently the design spec for the next generation of test engineering tools Google is building. <br /><br />Those tools will be highlighted in future posts. However, my next target is going to revolve around The Life of an SET. I hope you keep reading.<div class="blogger-post-footer"><img width='1' height='1' src='https://blogger.googleusercontent.com/tracker/15045980-6528247807328420736?l=googletesting.blogspot.com' alt='' /></div>]]></content:encoded>
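The small-test category above lends itself to a concrete sketch. What follows is purely illustrative and not Google test code: a hypothetical `clamp` function exercised by a small test that probes exactly the issues the post names for this category, namely boundary behavior, error conditions and off-by-one mistakes.

```python
# A hypothetical "small" test: it exercises one function in complete
# isolation, probing typical functional issues -- boundaries, error
# conditions and off-by-one mistakes. clamp() is an invented example.

def clamp(value, low, high):
    """Clamp value into the inclusive range [low, high]."""
    if low > high:
        raise ValueError("low must not exceed high")
    return max(low, min(value, high))

def test_value_inside_range_is_unchanged():
    assert clamp(5, 0, 10) == 5

def test_boundaries_are_inclusive():  # the classic off-by-one check
    assert clamp(0, 0, 10) == 0
    assert clamp(10, 0, 10) == 10

def test_out_of_range_values_are_clamped():
    assert clamp(-3, 0, 10) == 0
    assert clamp(42, 0, 10) == 10

def test_error_condition_is_reported():
    try:
        clamp(5, 10, 0)
    except ValueError:
        return
    raise AssertionError("expected ValueError for an inverted range")

# Small tests like these need no environment and run in milliseconds,
# which is why they are such natural candidates for automation.
for test in (test_value_inside_range_is_unchanged,
             test_boundaries_are_inclusive,
             test_out_of_range_values_are_clamped,
             test_error_condition_is_reported):
    test()
```

Because a small test has no external dependencies, any of the three roles can run it at any time, which is what makes the scope-based vocabulary more useful than the code/integration/system one.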
			<wfw:commentRss>https://googledata.org/google-testing/how-google-tests-software-part-five/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>How Google Tests Software &#8211; Part Four</title>
		<link>https://googledata.org/google-testing/how-google-tests-software-part-four/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=how-google-tests-software-part-four</link>
		<comments>https://googledata.org/google-testing/how-google-tests-software-part-four/#comments</comments>
		<pubDate>Wed, 02 Mar 2011 18:11:00 +0000</pubDate>
		<dc:creator><![CDATA[James Whittaker]]></dc:creator>
				<category><![CDATA[Google Testing]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=9c99a0eede90a94786e8d5c47bc3fd35</guid>
		<description><![CDATA[By James WhittakerCrawl, walk, run.One of the key ways Google achieves good results with fewer testers than many companies is that we rarely attempt to ship a large set of features at once. In fact, the exact opposite is often the goal: build the core ...]]></description>
				<content:encoded><![CDATA[By James Whittaker<br /><br />Crawl, walk, run.<br /><br />One of the key ways Google achieves good results with fewer testers than many companies is that we rarely attempt to ship a large set of features at once. In fact, the exact opposite is often the goal: build the core of a product and release it the <span style="font-style:italic;">moment it is useful</span> to as large a crowd as feasible, then get their feedback and iterate. This is what we did with Gmail, a product that kept its beta tag for four years. That tag was our warning to users that it was still being perfected. We removed the beta tag only when we reached our goal of 99.99% uptime for a real user’s email data. Obviously, quality is a work in progress!<br /><br />It’s not as cowboy a process as I make it out to be. In fact, in order to make it to what we call the beta channel release, a product must go through a number of other channels and prove its worth. For Chrome, a product I spent my first two years at Google working on, multiple channels were used depending on our confidence in the product’s quality and the extent of feedback we were looking for. The sequence looked something like this:<br /><br /><span style="font-weight:bold;">Canary Channel</span> is used for code we suspect isn’t fit for release. Like a canary in a coalmine, if it failed to survive then we had work to do. Canary channel builds are only for the ultra-tolerant user running experiments and not depending on the application to get real work done.<br /><br /><span style="font-weight:bold;">Dev Channel</span> is what developers use in their day-to-day work. 
All engineers on a product are expected to pick this build and use it for real work.<br /><br /><span style="font-weight:bold;">Test Channel</span> is the build used for internal dog food and represents a candidate beta channel build given good sustained performance.<br /><br />The <span style="font-weight:bold;">Beta Channel</span> or <span style="font-weight:bold;">Release Channel</span> builds are the first ones that get external exposure. A build only gets to the release channel after spending enough time in the prior channels that it gets a chance to prove itself against a barrage of both tests and real usage. <br /><br />This crawl, walk, run approach gives us the chance to run tests and experiment on our applications early and obtain feedback from real human beings in addition to all the automation we run in each of these channels every day. <br /><br />There are analytical benefits to this process as well. If a bug is found in the field a tester can create a test that reproduces it and run it against builds in each channel to determine if a fix has already been implemented.<div class="blogger-post-footer"><img width='1' height='1' src='https://blogger.googleusercontent.com/tracker/15045980-451564545891578874?l=googletesting.blogspot.com' alt='' /></div>]]></content:encoded>
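The channel progression above can be sketched as a simple promotion pipeline. Only the channel names come from the post; the promotion criteria, field names and functions below are invented for illustration and are not Google's release tooling.

```python
# Hypothetical sketch of the crawl-walk-run channel pipeline. A build
# must prove itself in each channel -- against both automation and
# sustained real usage -- before being promoted to the next one.
CHANNELS = ["canary", "dev", "test", "beta"]

def next_channel(channel):
    """Return the channel a build would be promoted to, or None."""
    i = CHANNELS.index(channel)
    return CHANNELS[i + 1] if i + 1 < len(CHANNELS) else None

def promote(build):
    """Promote a build one channel if it has proven its worth.

    The 14-day threshold is an invented stand-in for "enough time
    in the prior channel"."""
    if build["tests_passing"] and build["days_of_real_use"] >= 14:
        target = next_channel(build["channel"])
        if target is not None:
            build["channel"] = target
            build["days_of_real_use"] = 0  # must prove itself again
    return build

build = {"channel": "canary", "tests_passing": True, "days_of_real_use": 14}
build = promote(build)
print(build["channel"])  # prints "dev": a surviving canary moves on
```

The analytical benefit mentioned in the post falls out of this structure for free: a reproducing test can be run against the build currently sitting in each channel to see where a fix first landed.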
			<wfw:commentRss>https://googledata.org/google-testing/how-google-tests-software-part-four/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>How Google Tests Software &#8211; A Brief Interlude</title>
		<link>https://googledata.org/google-testing/how-google-tests-software-a-brief-interlude/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=how-google-tests-software-a-brief-interlude</link>
		<comments>https://googledata.org/google-testing/how-google-tests-software-a-brief-interlude/#comments</comments>
		<pubDate>Tue, 22 Feb 2011 21:28:00 +0000</pubDate>
		<dc:creator><![CDATA[James Whittaker]]></dc:creator>
				<category><![CDATA[Google Testing]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=527b3586ce8bcb4d01df3bd4891d2c48</guid>
		<description><![CDATA[By James WhittakerThese posts have garnered a number of interesting comments. I want to address two of the negative ones in this post. Both are of the same general opinion that I am abandoning testers and that Google is not a nice place to ply this tra...]]></description>
				<content:encoded><![CDATA[By James Whittaker<br /><br />These posts have garnered a number of interesting comments. I want to address two of the negative ones in this post. Both are of the same general opinion that I am abandoning testers and that Google is not a nice place to ply this trade. I am puzzled by these comments because nothing could be further from the truth. One such negative comment I can take as a one-off but two smart people (hey they are reading this blog, right?) having this impression requires a rebuttal. Here are the comments:<br /><br />"A sad day for testers around the world. Our own spokesman has turned his back on us. What happened to 'devs can't test'?" by Gengodo<br /><br />"I am a test engineer and Google has been one of my dream companies. Reading your blog I feel that Testers are so unimportant at Google and can be easily laid off. It's sad." by Maggi<br /><br />First of all, I don't know of any tester or developer for that matter being laid off from Google. We're hiring at a rapid pace right now. However, we do change projects a lot so perhaps you read 'taken off a project' to mean something far worse than the reality of just moving to another project. A tester here may move every couple of years or so and it is a badge of honor to get to the point where you've worked yourself out of a job by building robust test frameworks for others to contribute tests to or to pass off what you've done to a junior tester and move on to a bigger challenge. Maggi, please keep the dream alive. If Google was a hostile place for testers, I would be working somewhere else. <br /><br />Second, I am going to dodge the negative undertones of the developer vs tester debate.  Whether developers can test or testers can code seems downright combative. Both types of engineers share the common goal of shipping a product that will be successful. 
There is enough negativity in this world and testers hating developers seems so 2001.<br /><br />In fact, I feel a confession coming on. I have had sharp words with developers in the past. I have publicly decried the lack of testing rigor in commercial products. If you've seen me present you've probably witnessed me showing colorful bugs, pointing to the screen and shouting "you missed a spot!" I will admit, that was fun. <br /><br />Here are some other quotes I have directed at developers:<br /><br />"You must be smarter than me because I couldn't write this bug if I was trying to."<br /><br />"What happened, did the compiler get your eye?"<br /><br />"What do you say to a developer with two black eyes? Nothing, he's already been told twice." <br /><br />"Did you hear about the developer who locked himself in his car?"<br /><br />Ah, those were the good old days! But it's 2011 now and I am objective enough to give developers credit when they step up to the plate and do their job. At Google, many have and they are helping to shame the rest into following suit. And this is making bugs harder to find. I waste so little time on low-hanging fruit that I get to dig deeper to find the really subtle, really critical bugs. The signal-to-noise ratio is just a whole lot stronger now. Yes, there are fewer developer jokes but this is progress. I have to make myself feel good knowing how many bugs have been prevented instead of how many laughs I can get on stage demonstrating their miserable failures. <br /><br />This is progress. <br /><br />And, incidentally, developers can test. In some cases far better than testers. Modern testing is about optimizing the places where developers test and where testers test. Getting that mix right means a great product. Getting it wrong puts us back in 2001 where my presentations were a heck of a lot funnier.<br /><br />In what cases are developers better testers than we are? 
In what cases are they not only poor testers but we're better off not having them touch the product at all? Well, that's the subject of my next couple of posts. In the meantime...<br /><br />...Peace.<div class="blogger-post-footer"><img width='1' height='1' src='https://blogger.googleusercontent.com/tracker/15045980-4778929911106027914?l=googletesting.blogspot.com' alt='' /></div>]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-testing/how-google-tests-software-a-brief-interlude/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>How Google Tests Software &#8211; Part Two</title>
		<link>https://googledata.org/google-testing/how-google-tests-software-part-two/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=how-google-tests-software-part-two</link>
		<comments>https://googledata.org/google-testing/how-google-tests-software-part-two/#comments</comments>
		<pubDate>Thu, 10 Feb 2011 02:36:00 +0000</pubDate>
		<dc:creator><![CDATA[James Whittaker]]></dc:creator>
				<category><![CDATA[Google Testing]]></category>

		<guid isPermaLink="false"></guid>
		<description><![CDATA[By James WhittakerIn order for the “you build it, you break it” motto to be real, there are roles beyond the traditional developer that are necessary. Specifically, engineering roles that enable developers to do testing efficiently and effectively ...]]></description>
				<content:encoded><![CDATA[By James Whittaker<br /><br />In order for the “you build it, you break it” motto to be real, there are roles beyond the traditional developer that are necessary. Specifically, engineering roles that enable developers to do testing efficiently and effectively have to exist. At Google we have created roles in which some engineers are responsible for making others more productive. These engineers often identify themselves as testers but their actual mission is one of productivity. They exist to make developers more productive and quality is a large part of that productivity. Here's a summary of those roles:<br /> <br />The <span style="font-weight:bold;">SWE</span> or <span style="font-weight:bold;">Software Engineer</span> is the traditional developer role. SWEs write functional code that ships to users. They create design documentation, design data structures and overall architecture and spend the vast majority of their time writing and reviewing code. SWEs write a lot of test code including test-driven design, unit tests and, as we explain in future posts, participate in the construction of small, medium and large tests. SWEs own quality for everything they touch whether they wrote it, fixed it or modified it.  <br /><br />The <span style="font-weight:bold;">SET</span> or <span style="font-weight:bold;">Software Engineer in Test</span> is also a developer role except their focus is on testability. They review designs and look closely at code quality and risk. They refactor code to make it more testable. SETs write unit testing frameworks and automation. They are a partner in the SWE code base but are more concerned with increasing quality and test coverage than adding new features or increasing performance. <br /><br />The <span style="font-weight:bold;">TE</span> or <span style="font-weight:bold;">Test Engineer</span> is the exact reverse of the SET. It is a role that puts testing first and development second. 
Many Google TEs spend a good deal of their time writing code in the form of automation scripts and code that drives usage scenarios and even mimics a user. They also organize the testing work of SWEs and SETs, interpret test results and drive test execution, particularly in the late stages of a project as the push toward release intensifies. TEs are product experts, quality advisers and analyzers of risk. <br /><br />From a quality standpoint, SWEs own features and the quality of those features in isolation. They are responsible for fault-tolerant designs, failure recovery, TDD, unit tests and for working with the SET to write tests that exercise the code for their feature. <br /><br />SETs are developers who provide testing features: a framework that can isolate newly developed code by simulating its dependencies with stubs, mocks and fakes, and submit queues for managing code check-ins. In other words, SETs write code that allows SWEs to test their features. Much of the actual testing is performed by the SWEs; SETs are there to ensure that features are testable and that the SWEs are actively involved in writing test cases. <br /><br />Clearly the SET's primary focus is on the developer. Individual feature quality is the target and enabling developers to easily test the code they write is the primary focus of the SET. This development focus leaves one large hole which I am sure is already evident to the reader: what about the user?<br /><br />User-focused testing is the job of the Google TE. Assuming that the SWEs and SETs performed module and feature level testing adequately, the next task is to understand how well this collection of executable code and data works together to satisfy the needs of the user. TEs act as a double-check on the diligence of the developers. Any obvious bugs are an indication that early cycle developer testing was inadequate or sloppy. 
When such bugs are rare, TEs can turn to their primary task of ensuring that the software runs common user scenarios, is performant and secure, is internationalized and so forth. TEs perform a lot of testing and test coordination tasks among fellow TEs, contract testers, crowd-sourced testers, dogfooders, beta users and early adopters. They communicate among all parties the risks inherent in the basic design, feature complexity and failure avoidance methods. Once TEs get engaged, there is no end to their mission. <br /><br />Ok, now that the roles are better understood, I'll dig into more details on how we choreograph the work items among them. Until next time...thanks for your interest.<div class="blogger-post-footer"><img width='1' height='1' src='https://blogger.googleusercontent.com/tracker/15045980-7753843789033272801?l=googletesting.blogspot.com' alt='' /></div>]]></content:encoded>
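The stubs/mocks/fakes idea in the SET description can be made concrete with a small sketch. Everything here (`FakeSpellServer`, `misspelled_words`) is a hypothetical illustration of the pattern, not a Google API: the SET supplies a fake that stands in for a real dependency so the SWE's feature code can be tested hermetically, without a network or a production backend.

```python
# Hypothetical sketch of an SET-provided testing feature: a fake that
# isolates newly developed code from its real dependency.

class FakeSpellServer:
    """A fake standing in for a networked spelling backend. The SET
    writes this once; every SWE can then test against it quickly and
    deterministically."""

    def __init__(self, known_words):
        self.known_words = set(known_words)

    def lookup(self, word):
        # The real server would do an RPC; the fake answers locally.
        return word.lower() in self.known_words

def misspelled_words(text, server):
    """Feature code under test: it depends only on the server's
    lookup() interface, so a fake slots in transparently."""
    return [w for w in text.split() if not server.lookup(w)]

server = FakeSpellServer(known_words={"testing", "rocks"})
print(misspelled_words("testing roks", server))  # prints ['roks']
```

Because the feature code is written against the interface rather than the concrete backend, the same test can later run against the real dependency in a medium or large test without changing the feature code at all.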
			<wfw:commentRss>https://googledata.org/google-testing/how-google-tests-software-part-two/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>How Google Tests Software</title>
		<link>https://googledata.org/google-testing/how-google-tests-software/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=how-google-tests-software</link>
		<comments>https://googledata.org/google-testing/how-google-tests-software/#comments</comments>
		<pubDate>Tue, 25 Jan 2011 17:08:00 +0000</pubDate>
		<dc:creator><![CDATA[James Whittaker]]></dc:creator>
				<category><![CDATA[Google Testing]]></category>

		<guid isPermaLink="false"></guid>
		<description><![CDATA[By James WhittakerThis is the first in a series of posts on this topic.The one question I get more than any other is "How does Google test?" It's been explained in bits and pieces on this blog but the explanation is due an update. The Google testing st...]]></description>
				<content:encoded><![CDATA[By James Whittaker<br /><br /><i>This is the first in a series of posts on this topic.</i><br /><br />The one question I get more than any other is "How does Google test?" It's been explained in bits and pieces on this blog but the explanation is due an update. The Google testing strategy has never changed but the tactical ways we execute it have evolved as the company has evolved. We're now a search, apps, ads, mobile, operating system, <i>and so on and so forth</i> company. Each of these Focus Areas (as we call them) has to do things that make sense for its problem domain. As we add new FAs and grow the existing ones, our testing has to expand and improve. What I am documenting in this series of posts is a combination of what we are doing today and the direction we are trending toward in the foreseeable future. <br /><br />Let's begin with organizational structure and it's one that might surprise you. There isn't an actual testing organization at Google. Test exists within a Focus Area called Engineering Productivity. Eng Prod owns any number of horizontal and vertical engineering disciplines; Test is the biggest. In a nutshell, Eng Prod is made of:<br /><br />1. A <b>product team</b> that produces internal and open source productivity tools that are consumed by all walks of engineers across the company. We build and maintain code analyzers, IDEs, test case management systems, automated testing tools, build systems, source control systems, code review schedulers, bug databases... The idea is to make the tools that make engineers more productive. Tools are a very large part of the strategic goal of prevention over detection. <br /><br />2. A <b>services team</b> that provides expertise to Google product teams on a wide array of topics including tools, documentation, testing, release management, training and so forth. 
Our expertise covers reliability, security, internationalization, etc., as well as product-specific functional issues that Google product teams might face. Every other FA has access to Eng Prod expertise. <br /><br />3. <b>Embedded engineers</b> that are effectively loaned out to Google product teams on an as-needed basis. Some of these engineers might sit with the same product teams for years, others cycle through teams wherever they are needed most. Google encourages all its engineers to change product teams often to stay fresh, engaged and objective. Testers are no different but the cadence of changing teams is left to the individual. I have testers on Chrome that have been there for several years and others who join for 18 months and cycle off. Keeping a healthy balance between product knowledge and fresh eyes is something a test manager has to pay close attention to. <br /><br />So this means that testers report to Eng Prod managers but identify themselves with a product team, like Search, Gmail or Chrome. Organizationally they are part of both teams. They sit with the product teams, participate in their planning, go to lunch with them, share in ship bonuses and get treated like full members of the team. The benefit of the separate reporting structure is that it provides a forum for testers to share information. Good testing ideas migrate easily within Eng Prod giving all testers, no matter their product ties, access to the best technology within the company. <br /><br />This separation of project and reporting structures has its challenges. By far the biggest is that testers are an external resource. Product teams can't place too big a bet on them and must keep their quality house in order. Yes, that's right: at Google it's the product teams that own quality, not testers. Every developer is expected to do their own testing. The job of the tester is to make sure they have the automation infrastructure and enabling processes that support this self reliance. 
Testers enable developers to test. <br /><br />What I like about this strategy is that it puts developers and testers on equal footing. It makes us true partners in quality and puts the biggest quality burden where it belongs: on the developers who are responsible for getting the product right. Another side effect is that it allows us a many-to-one dev-to-test ratio. Developers outnumber testers. The better they are at testing the more they outnumber us. Product teams should be proud of a high ratio!  <br /><br />Ok, now we're all friends here right? You see the hole in this strategy I am sure. It's big enough to drive a bug through. Developers can't test! Well, who am I to deny that? No amount of corporate kool-aid could get me to deny it, especially coming off <a href="http://www.youtube.com/watch?v=cqwXUTjcabs">my GTAC talk</a> last year where I pretty much made a game of developer vs. tester (spoiler alert: the tester wins).<br /><br />Google's answer is to split the role. We solve this problem by having two types of testing roles at Google to solve two very different testing problems. In my next post, I'll talk about these roles and how we split the testing problem into two parts.<div class="blogger-post-footer"><img width='1' height='1' src='https://blogger.googleusercontent.com/tracker/15045980-180999612361986638?l=googletesting.blogspot.com' alt='' /></div>]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-testing/how-google-tests-software/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>New Year&#8217;s Resolutions</title>
		<link>https://googledata.org/google-testing/new-years-resolutions/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=new-years-resolutions</link>
		<comments>https://googledata.org/google-testing/new-years-resolutions/#comments</comments>
		<pubDate>Tue, 04 Jan 2011 16:10:00 +0000</pubDate>
		<dc:creator><![CDATA[James Whittaker]]></dc:creator>
				<category><![CDATA[Google Testing]]></category>

		<guid isPermaLink="false"></guid>
		<description><![CDATA[By James WhittakerI know many people who laugh at the concept of resolutions, easily made and easily broken. All true. However, I am a runner now because of a resolution I made about a decade ago and my personality has undergone a successful renovation...]]></description>
				<content:encoded><![CDATA[By James Whittaker<br /><br />I know many people who laugh at the concept of resolutions, easily made and easily broken. All true. However, I am a runner now because of a resolution I made about a decade ago and my personality has undergone a successful renovation or two over the years as well. When they stick, resolutions can become habits and the emergence of the occasional butterfly makes them a worthwhile exercise. With the optimism of a new year, I present my Google Testing resolutions for 2011 which I hereby declare the <span style="font-style:italic;">Year of the User</span>. <br /><br /><span style="font-weight:bold;">1. I will listen to users more and developers less.<br /></span><br />Developers, by definition, are engineers lost in the details of implementation. When it comes to testing concerns, such implementation details clog a tester's neural pathways with issues that simply should not be relevant. I resolve to take the high road as often as possible and consider user scenarios, integration issues and end-to-end uses of the system above all other concerns. And, yes, that will mean telling developers "sorry, dude, your broken build simply is not my concern."<br /><br /><span style="font-weight:bold;">2. I will push all implementation testing issues to developers. <br /></span><br />My first resolution may lead readers to believe that testing implementation details isn't important. Let me be clear. Testing implementation details <i>is</i> important. When they go untested they create enough noise that user-oriented testing is compromised by the constant emergence of silly bugs. Silly bugs mask important ones. Find them at the source: ensure that proper unit testing and automated smoke tests are present and owned by the people most qualified to write and maintain them: developers.  
I resolve not to be sidetracked by silly bugs but to push back hard on the developers who are happy to write the bug but neglect to write the test for it.  <br /><br /><span style="font-weight:bold;">3. I will endeavor to tie together all user-oriented testing. <br /></span><br />In the run-up to releasing Chrome OS for pilot last year it was clear that many of the bugs found during dogfood (internal testing), crowd-sourced and out-sourced testing had already been found by my test team. Not only is there a lot of repetitive and wasteful testing being performed, but my team isn't getting enough credit for finding these important issues early. I resolve to introduce technology that will allow all testers to share testing tactics and see each other's work, ultimately erasing the boundaries between these common phases and allowing testers who join a project late to build upon the work of those who've been there for a while. <br /><br />Finally, I resolve to expose more information about how Google tests internally. I am going to return to the conference circuit this year and talk frankly about what we are doing, the good, the bad and the downright embarrassing in the hopes that other testers at other companies do the same. I am also going to push more Google testers to post their experiences on this blog and join me at industry events to discuss these things with anyone struggling with the same issues.<br /><br />Happy New Year! May it be one that sees a higher level of quality from every corner of our industry.<div class="blogger-post-footer"><img width='1' height='1' src='https://blogger.googleusercontent.com/tracker/15045980-5445487344537363890?l=googletesting.blogspot.com' alt='' /></div>]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-testing/new-years-resolutions/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Webinar and GTAC Followup</title>
		<link>https://googledata.org/google-testing/webinar-and-gtac-followup/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=webinar-and-gtac-followup</link>
		<comments>https://googledata.org/google-testing/webinar-and-gtac-followup/#comments</comments>
		<pubDate>Wed, 22 Dec 2010 18:30:00 +0000</pubDate>
		<dc:creator><![CDATA[James Whittaker]]></dc:creator>
				<category><![CDATA[Google Testing]]></category>

		<guid isPermaLink="false"></guid>
		<description><![CDATA[By James WhittakerI've given all the talks I am going to give and said all that I am going to say for 2010. Breath a sigh of relief and raise your glasses to the sweet sound of silence. The aftermath of the uTest webinar is here. Thanks to uTest for ho...]]></description>
				<content:encoded><![CDATA[By James Whittaker<br /><br />I've given all the talks I am going to give and said all that I am going to say for 2010. Breathe a sigh of relief and raise your glasses to the sweet sound of silence. <br /><br />The aftermath of the uTest webinar is <a href="http://blog.utest.com/more-bang-for-your-testing-buck-follow-up-qa-with-james-whittaker/2010/12/">here</a>. Thanks to uTest for hosting and putting up with me. <br /><br />My GTAC 2010 talk is <a href="http://www.youtube.com/watch?v=cqwXUTjcabs">here</a>. But far better is my <a href="http://www.youtube.com/watch?v=ARQTqCpcMlE">introduction by Testivus</a>. If imitation is the sincerest form of flattery, I hereby declare insults to be out-and-out envy. To our female readers, you may need to have some understanding of the male penchant for funny insults - it's a weird guy thing. But I don't care who you are ... this stuff is funny! And to answer your question: NO I did not see Alberto's video of me before I spoke. It was all new to me. <br /><br />Hope to see you next year in the blog-o-sphere and at a couple of conferences. I'll be (at least) at GTAC 2011 which is in Mountain View and Euro STAR in Manchester, England (will lecture for a ticket to a Premier League football match).<br /><br />Much more to come in the new year. Peace to all.<div class="blogger-post-footer"><img width='1' height='1' src='https://blogger.googleusercontent.com/tracker/15045980-4356851086910564375?l=googletesting.blogspot.com' alt='' /></div>]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-testing/webinar-and-gtac-followup/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Webinar Tomorrow</title>
		<link>https://googledata.org/google-testing/webinar-tomorrow/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=webinar-tomorrow</link>
		<comments>https://googledata.org/google-testing/webinar-tomorrow/#comments</comments>
		<pubDate>Thu, 09 Dec 2010 23:38:00 +0000</pubDate>
		<dc:creator><![CDATA[James Whittaker]]></dc:creator>
				<category><![CDATA[Google Testing]]></category>

		<guid isPermaLink="false"></guid>
		<description><![CDATA[By James Whittaker: uTest informs me that the planned webinar tomorrow (see my post below) is a record setter for pre-registration. I hope everyone shows up. Last time I set a uTest record it was for the number of people who read my interview on their blog, onl...]]></description>
				<content:encoded><![CDATA[By James Whittaker<br /><br />uTest informs me that the planned webinar tomorrow (see my post below) is a record setter for pre-registration. I hope everyone shows up. <br /><br />Last time I set a uTest record it was for the number of people who read my interview on their blog, only to have that record broken by Pat Copeland. I hate losing to Pat Copeland.<br /><br />So I am upping the ante with a bribe. I am offering signed books and totally wicked Google testing t-shirts to the folks who ask the best questions at the end of the talk. <br /><br />I know what you're thinking. Ain't it sad that a Kentucky boy has to resort to bribes to beat Pat Copeland? <br /><br />You're right and I don't care. This is about WINNING.<div class="blogger-post-footer"><img width='1' height='1' src='https://blogger.googleusercontent.com/tracker/15045980-8691433279393749597?l=googletesting.blogspot.com' alt='' /></div>]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-testing/webinar-tomorrow/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Chrome OS Pilot Program Announced</title>
		<link>https://googledata.org/google-testing/chrome-os-pilot-program-announced/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=chrome-os-pilot-program-announced</link>
		<comments>https://googledata.org/google-testing/chrome-os-pilot-program-announced/#comments</comments>
		<pubDate>Tue, 07 Dec 2010 21:51:00 +0000</pubDate>
		<dc:creator><![CDATA[James Whittaker]]></dc:creator>
				<category><![CDATA[Google Testing]]></category>

		<guid isPermaLink="false"></guid>
		<description><![CDATA[By James Whittaker: If you've heard me speak anytime over the past year you have heard me talk about Chrome OS and how we are testing it. Well, we're not done testing it but we are announcing a pilot where you, yes you, can get one of the initial sets of...]]></description>
				<content:encoded><![CDATA[By James Whittaker<br /><br />If you've heard me speak anytime over the past year you have heard me talk about Chrome OS and how we are testing it. Well, we're not done testing it but we are announcing a pilot where you, yes you, can get one of the initial sets of new hardware with Chrome OS pre-installed. The hardware is called Cr-48 (a chromium isotope, how nerdy is that?) and is available in very limited numbers. <br /><br />How do you get one? Glad you asked. Go <a href="http://youtube.com/googlechrome">here</a> and tell us why you are an ideal candidate to give it a test drive and provide us with feedback! It's that simple/hard and I would love to see some of these in the hands of the many hardcore testers who read this blog.<div class="blogger-post-footer"><img width='1' height='1' src='https://blogger.googleusercontent.com/tracker/15045980-4924231802596753796?l=googletesting.blogspot.com' alt='' /></div>]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-testing/chrome-os-pilot-program-announced/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>More bang for your testing buck</title>
		<link>https://googledata.org/google-testing/more-bang-for-your-testing-buck/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=more-bang-for-your-testing-buck</link>
		<comments>https://googledata.org/google-testing/more-bang-for-your-testing-buck/#comments</comments>
		<pubDate>Tue, 30 Nov 2010 17:26:00 +0000</pubDate>
		<dc:creator><![CDATA[James Whittaker]]></dc:creator>
				<category><![CDATA[Google Testing]]></category>

		<guid isPermaLink="false"></guid>
		<description><![CDATA[By James Whittaker: I am giving a webinar for uTest that may be of interest to some of you. Date: Friday Dec 10 at 8am PST. I tend to be pretty grouchy about that time of the morning and I have some pretty exhausting plans for the evening before so it mi...]]></description>
				<content:encoded><![CDATA[By James Whittaker<br /><br />I am giving a webinar for uTest that may be of interest to some of you. Date: Friday Dec 10 at 8am PST. I tend to be pretty grouchy about that time of the morning and I have some pretty exhausting plans for the evening before so it might be extra fun. <br /><br />Here's the <a href="https://www2.gotomeeting.com/register/706045986">link</a> to register. <br /><br />And here's the abstract:<br /><br />"If you were going to invest more money in testing, where would you place those bets? Testing early in the cycle? Automation? Manual testing? Better requirements and planning? Better documentation? Torture devices for your devs? James Whittaker takes a critical look at such an investment and draws some very counterintuitive conclusions about maximizing such an investment. He then outlines a set of tools and practices that will help maximize the overall investment and make testing a happier place."<div class="blogger-post-footer"><img width='1' height='1' src='https://blogger.googleusercontent.com/tracker/15045980-3261101204371062023?l=googletesting.blogspot.com' alt='' /></div>]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-testing/more-bang-for-your-testing-buck/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>An Ingredients List for Testing &#8211; Part Seven (of Seven)</title>
		<link>https://googledata.org/google-testing/an-ingredients-list-for-testing-part-seven-of-seven/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=an-ingredients-list-for-testing-part-seven-of-seven</link>
		<comments>https://googledata.org/google-testing/an-ingredients-list-for-testing-part-seven-of-seven/#comments</comments>
		<pubDate>Fri, 12 Nov 2010 15:53:00 +0000</pubDate>
		<dc:creator><![CDATA[James Whittaker]]></dc:creator>
				<category><![CDATA[Google Testing]]></category>

		<guid isPermaLink="false"></guid>
		<description><![CDATA[By James Whittaker: When to stop testing? It’s the age-old testing question that many researchers have tried to quantify. In fact, the best answer requires no science whatsoever: never. Since testing is infinite, we can never really stop. A more practi...]]></description>
				<content:encoded><![CDATA[By James Whittaker<br /><br />When to stop testing? It’s the age-old testing question that many researchers have tried to quantify. In fact, the best answer requires no science whatsoever: never. Since testing is infinite, we can never really stop. A more practical answer also surfaces in the real world: when the software ships, you’re done. Of course, this is only true for the duration of the ship party; after that, testing continues on the next version. <br /><br />At Google we are experimenting with test completeness measures that describe how well actual testing covers the <span style="font-style:italic;">risk landscape</span>. In other words, we are measuring the extent to which our testing covers the things that require the most testing. Tests that cover the high-risk areas well count for more than tests that cover lower-risk features. Testing is, after all, a business of risk mitigation.<br /><br />The set of tools necessary to accomplish this was described in my GTAC 2010 talk, which should appear on YouTube soon, and is collectively being called<span style="font-style:italic;"> Google Test Analytics</span>. More about these tools in future posts.<div class="blogger-post-footer"><img width='1' height='1' src='https://blogger.googleusercontent.com/tracker/15045980-7618750830884366215?l=googletesting.blogspot.com' alt='' /></div>]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-testing/an-ingredients-list-for-testing-part-seven-of-seven/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>An Ingredients List for Testing &#8211; Part Six</title>
		<link>https://googledata.org/google-testing/an-ingredients-list-for-testing-part-six/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=an-ingredients-list-for-testing-part-six</link>
		<comments>https://googledata.org/google-testing/an-ingredients-list-for-testing-part-six/#comments</comments>
		<pubDate>Fri, 05 Nov 2010 16:06:00 +0000</pubDate>
		<dc:creator><![CDATA[James Whittaker]]></dc:creator>
				<category><![CDATA[Google Testing]]></category>

		<guid isPermaLink="false"></guid>
		<description><![CDATA[By James Whittaker: The sixth ingredient is variation. Tests often get stale (i.e., they stop finding bugs) as they are run over and over on build after build as a product is being constructed. On the one hand, it is important to continue running tests t...]]></description>
				<content:encoded><![CDATA[By James Whittaker<br /><br />The sixth ingredient is variation. Tests often get stale (i.e., they stop finding bugs) as they are run over and over on build after build as a product is being constructed. On the one hand, it is important to continue running tests to ensure the product still operates as specified. Indeed, I hesitate to throw any test away. However, becoming reliant on stale tests is too risky. Adding variation to existing tests can range from straightforward reordering of the sequence in which tests are run to more involved solutions of either modifying tests or adding new ones. Hopefully new tests will increase overall coverage and add to our confidence that we’ve tested the software in all the ways it needs to be tested.<div class="blogger-post-footer"><img width='1' height='1' src='https://blogger.googleusercontent.com/tracker/15045980-8194318712956336952?l=googletesting.blogspot.com' alt='' /></div>]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-testing/an-ingredients-list-for-testing-part-six/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>India Withdrawals</title>
		<link>https://googledata.org/google-testing/india-withdrawals/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=india-withdrawals</link>
		<comments>https://googledata.org/google-testing/india-withdrawals/#comments</comments>
		<pubDate>Sun, 31 Oct 2010 13:04:00 +0000</pubDate>
		<dc:creator><![CDATA[James Whittaker]]></dc:creator>
				<category><![CDATA[Google Testing]]></category>

		<guid isPermaLink="false"></guid>
		<description><![CDATA[By James Whittaker: I told the crowd at GTAC during my talk: "I wasn't sure what to expect from India. I was not disappointed." Be careful how you quote me on this statement as getting it even a little wrong can make it seem like an insult. It is no such...]]></description>
				<content:encoded><![CDATA[By James Whittaker<br /><br />I told the crowd at GTAC during my talk: "I wasn't sure what to expect from India. I was not disappointed." Be careful how you quote me on this statement as getting it even a little wrong can make it seem like an insult. It is no such thing. <br /><br />After spending a week there, I still am not sure what to expect. It's a country of such contrasts with extremes on both ends of pretty much every scale you can come up with. India must remain a mystery to me as I have seen so little of it.<br /><br />The Indian people, on the other hand, I think I understand a little better now. Their hunger to contribute. Their hope for the future. Their determination to be part of the solution in every way, shape and form. This is no simple case of outsourcing. That attitude is so last decade. <br /><br />This was the best GTAC yet and the credit must go to the people who ran it and contributed the most to its success. This is a case of India stepping up and doing what London, New York, Seattle, Zurich (and next year Mountain View) did and then raising the bar that much more. Toe-to-toe with the world. <br /><br />There are individuals who can take a bow for GTAC, but the credit has to be far more dispersed. India ... you nailed this one.<br /><br />And I meant what I said at the end of my talk. I am very eager to return. Perhaps one day I will know India well enough to know what to expect. I am very sure I will not be disappointed.<div class="blogger-post-footer"><img width='1' height='1' src='https://blogger.googleusercontent.com/tracker/15045980-533556928153549682?l=googletesting.blogspot.com' alt='' /></div>]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-testing/india-withdrawals/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>GTAC starts tomorrow!</title>
		<link>https://googledata.org/google-testing/gtac-starts-tomorrow/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=gtac-starts-tomorrow</link>
		<comments>https://googledata.org/google-testing/gtac-starts-tomorrow/#comments</comments>
		<pubDate>Wed, 27 Oct 2010 08:31:00 +0000</pubDate>
		<dc:creator><![CDATA[James Whittaker]]></dc:creator>
				<category><![CDATA[Google Testing]]></category>

		<guid isPermaLink="false"></guid>
		<description><![CDATA[By James Whittaker: Wow, India is a lot different than I expected. Visited Golconda fort today and was totally blown away. Plumbing and "telephones" all the way back then. No wonder IIT is such a good university. All this history on this trip, first Engl...]]></description>
				<content:encoded><![CDATA[By James Whittaker<br /><br />Wow, India is a lot different than I expected. Visited Golconda fort today and was totally blown away. Plumbing and "telephones" all the way back then. No wonder IIT is such a good university. All this history on this trip, first England and now India, makes a poor old American wish he had some Native American roots. I feel so ... imported. <br /><br />I'm almost finished with my GTAC talk. Trial run today in Hyderabad for anyone local who wants to come. Tomorrow I close the conference. I hear they've decided where GTAC 2011 is going to be held, but they are keeping it secret until the end of this one. Since I don't know, I can only assume it is not going to be in the Pacific Northwest, but I do expect a return to America.<br /><br />On to GTAC!<div class="blogger-post-footer"><img width='1' height='1' src='https://blogger.googleusercontent.com/tracker/15045980-4599688689153685157?l=googletesting.blogspot.com' alt='' /></div>]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-testing/gtac-starts-tomorrow/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>An Ingredients List for Testing &#8211; Part Five</title>
		<link>https://googledata.org/google-testing/an-ingredients-list-for-testing-part-five/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=an-ingredients-list-for-testing-part-five</link>
		<comments>https://googledata.org/google-testing/an-ingredients-list-for-testing-part-five/#comments</comments>
		<pubDate>Fri, 01 Oct 2010 18:15:00 +0000</pubDate>
		<dc:creator><![CDATA[James Whittaker]]></dc:creator>
				<category><![CDATA[Google Testing]]></category>

		<guid isPermaLink="false"></guid>
		<description><![CDATA[By James Whittaker: One of the problems with testing is that testers don’t possess a common vocabulary for the techniques they apply to actually perform testing. Some testers talk about partitioning the input domain and others gravitate toward boundary...]]></description>
				<content:encoded><![CDATA[By James Whittaker<br /><br />One of the problems with testing is that testers don’t possess a common vocabulary for the techniques they apply to actually perform testing. Some testers talk about partitioning the input domain and others gravitate toward boundary values, but in general there are no catalogues of testing techniques that would allow a conversation such as: “run the Landmark Tour on the bookmark sync feature of Chrome.” Everyone understands that Chrome is a web browser and that it allows users to sync bookmarks, but how does one test it with a Landmark Tour?<br /><br />“Tours” are the metaphor we use at Google to name and describe testing techniques. Every tour encapsulates past testing knowledge, i.e., stuff that worked for other testers on other projects, and can be reused and improved upon. Over time testers get a feel for which tours apply to what type of functionality. It’s a way to identify and store tribal knowledge of the overall team. That's the fifth ingredient: test guidance.<div class="blogger-post-footer"><img width='1' height='1' src='https://blogger.googleusercontent.com/tracker/15045980-3496503341835906188?l=googletesting.blogspot.com' alt='' /></div>]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-testing/an-ingredients-list-for-testing-part-five/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Aftermath of the Google NY event</title>
		<link>https://googledata.org/google-testing/aftermath-of-the-google-ny-event/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=aftermath-of-the-google-ny-event</link>
		<comments>https://googledata.org/google-testing/aftermath-of-the-google-ny-event/#comments</comments>
		<pubDate>Fri, 17 Sep 2010 16:49:00 +0000</pubDate>
		<dc:creator><![CDATA[James Whittaker]]></dc:creator>
				<category><![CDATA[Google Testing]]></category>

		<guid isPermaLink="false"></guid>
		<description><![CDATA[By James Whittaker: First and foremost, apologies to all of those trying to get to our NY event who weren't able to do so. It was an absolutely packed house; frankly, the popularity of it overwhelmed us! Clearly the mixture of a Google tour, Google goodie...]]></description>
				<content:encoded><![CDATA[By James Whittaker<br /><br />First and foremost, apologies to all of those trying to get to our NY event who weren't able to do so. It was an absolutely packed house; frankly, the popularity of it overwhelmed us! Clearly the mixture of a Google tour, Google goodies, food, drink and testing is an intoxicating cocktail. <br /><br />The event was not taped, but GTAC will be, and I'll likely not have been part of a two-hour party before that talk! Some things, I think, are better off unrecorded and off the record...<br /><br />We will be having more of these events in the future. We'll learn from this and make sure you have plenty of warning. <br /><br />Thanks for understanding, and if any rumors emerge from this event about things I may have said on stage...you can't <span style="font-style:italic;">prove</span> anything!<div class="blogger-post-footer"><img width='1' height='1' src='https://blogger.googleusercontent.com/tracker/15045980-8376309627588068931?l=googletesting.blogspot.com' alt='' /></div>]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-testing/aftermath-of-the-google-ny-event/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>An Ingredients List for Testing &#8211; Part Four</title>
		<link>https://googledata.org/google-testing/an-ingredients-list-for-testing-part-four/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=an-ingredients-list-for-testing-part-four</link>
		<comments>https://googledata.org/google-testing/an-ingredients-list-for-testing-part-four/#comments</comments>
		<pubDate>Fri, 17 Sep 2010 16:43:00 +0000</pubDate>
		<dc:creator><![CDATA[James Whittaker]]></dc:creator>
				<category><![CDATA[Google Testing]]></category>

		<guid isPermaLink="false"></guid>
		<description><![CDATA[By James Whittaker: Ever look at a testing problem and wonder how to solve it? If so, you know what it feels like to lack domain expertise. Sometimes this is user-oriented knowledge. Testing a flight simulator requires knowledge of how to fly a plane. Tes...]]></description>
				<content:encoded><![CDATA[By James Whittaker<br /><br />Ever look at a testing problem and wonder how to solve it? If so, you know what it feels like to lack domain expertise. Sometimes this is user-oriented knowledge. Testing a flight simulator requires knowledge of how to fly a plane. Testing tax preparation software requires knowledge of accounting. Other times the knowledge is more problem-oriented. Testing a mobile operating system means understanding how Wi-Fi and device drivers work. Whenever the bill of materials contains a testing problem that the risk analysis identifies as important, the expertise needed to test it needs to be on the testing team. Hire it, contract it, outsource it. Whatever it takes to ensure that people who know what they are doing and have experience doing it are on staff for the duration of the project. There is no technological substitute for expertise.<br /><br />It doesn't matter how good you think you are at exploratory testing: if you don't understand how something works, find someone who does.<div class="blogger-post-footer"><img width='1' height='1' src='https://blogger.googleusercontent.com/tracker/15045980-251912022662229176?l=googletesting.blogspot.com' alt='' /></div>]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-testing/an-ingredients-list-for-testing-part-four/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Test Open House in New York</title>
		<link>https://googledata.org/google-testing/test-open-house-in-new-york/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=test-open-house-in-new-york</link>
		<comments>https://googledata.org/google-testing/test-open-house-in-new-york/#comments</comments>
		<pubDate>Fri, 10 Sep 2010 16:05:00 +0000</pubDate>
		<dc:creator><![CDATA[James Whittaker]]></dc:creator>
				<category><![CDATA[Google Testing]]></category>

		<guid isPermaLink="false"></guid>
		<description><![CDATA[Google is holding a testing event in our NY office Wednesday, September 15 at 5:30pm. This includes a tour of our local offices and a live talk on how Google does testing by our own James Whittaker. Rumor has it he's using an early version of his GTAC ...]]></description>
				<content:encoded><![CDATA[Google is holding a testing event in our NY office Wednesday, September 15 at 5:30pm. This includes a tour of our local offices and a live talk on how Google does testing by our own James Whittaker. Rumor has it he's using an early version of his GTAC talk. Lots of food, drink and Google giveaways.<div class="blogger-post-footer"><img width='1' height='1' src='https://blogger.googleusercontent.com/tracker/15045980-2324499587977586472?l=googletesting.blogspot.com' alt='' /></div>]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-testing/test-open-house-in-new-york/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>An Ingredients List for Testing &#8211; Part Three</title>
		<link>https://googledata.org/google-testing/an-ingredients-list-for-testing-part-three/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=an-ingredients-list-for-testing-part-three</link>
		<comments>https://googledata.org/google-testing/an-ingredients-list-for-testing-part-three/#comments</comments>
		<pubDate>Fri, 03 Sep 2010 16:26:00 +0000</pubDate>
		<dc:creator><![CDATA[James Whittaker]]></dc:creator>
				<category><![CDATA[Google Testing]]></category>

		<guid isPermaLink="false"></guid>
		<description><![CDATA[By James Whittaker: Possessing a bill of materials means that we understand the overall size of the testing problem. Unfortunately, the size of most testing problems far outstrips any reasonable level of effort to solve them. And not all of the testing s...]]></description>
				<content:encoded><![CDATA[By James Whittaker<br /><br />Possessing a bill of materials means that we understand the overall size of the testing problem. Unfortunately, the size of most testing problems far outstrips any reasonable level of effort to solve them. And not all of the testing surface is equally important. There are certain features that simply require more testing than others. Some prioritization must take place. What components must get tested? What features simply cannot fail? What features make up the user scenarios that simply must work?<br /><br />In our experience it is the unfortunate case that no one really agrees on the answers to these questions. Talk to product planners and you may get a different assessment than if you talk to developers, sales people or executive visionaries. Even users may differ among themselves. It falls to testers to act as user advocates and take all these concerns into account when prioritizing how testing resources will be distributed across the entire testing surface.<br /><br />The term commonly used for this practice is <span style="font-style:italic;">risk analysis</span> and at Google we take information from all the project's stakeholders to come up with overall numerical risk scores for each feature. How do we get all the stakeholders involved? That's actually the easy part. All you need to do is assign numbers and then step back and have everyone tell you how wrong you are. We've found being visibly wrong is the best way to get people involved in the hopes they can influence getting the numbers right! Right now we are collecting this information in spreadsheets. By the time GTAC rolls around, the tool we are using for this should be in a demonstrable form.<div class="blogger-post-footer"><img width='1' height='1' src='https://blogger.googleusercontent.com/tracker/15045980-1340311743111567469?l=googletesting.blogspot.com' alt='' /></div>]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-testing/an-ingredients-list-for-testing-part-three/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>An Ingredients List for Testing &#8211; Part Two</title>
		<link>https://googledata.org/google-testing/an-ingredients-list-for-testing-part-two/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=an-ingredients-list-for-testing-part-two</link>
		<comments>https://googledata.org/google-testing/an-ingredients-list-for-testing-part-two/#comments</comments>
		<pubDate>Fri, 27 Aug 2010 17:23:00 +0000</pubDate>
		<dc:creator><![CDATA[James Whittaker]]></dc:creator>
				<category><![CDATA[Google Testing]]></category>

		<guid isPermaLink="false"></guid>
		<description><![CDATA[By James Whittaker: When are you finished testing? It’s the age-old quality question and one that has never been adequately answered (other than the unhelpful answer of never). I argue it never will be answered until we have a definition of the size of...]]></description>
				<content:encoded><![CDATA[By James Whittaker<br /><br />When are you finished testing? It’s the age-old quality question and one that has never been adequately answered (other than the unhelpful answer of <span style="font-style:italic;">never</span>). I argue it never will be answered until we have a definition of the size of the testing problem. How can you know you are finished if you don’t fully understand the task at hand? <br /><br />Answers that deal with coverage of inputs or coverage of code are unhelpful. Testers can apply every input and cover every line of code in test cases, and still the software can have very serious bugs. In fact, it’s actually likely to have serious bugs because inputs and code cannot be easily associated with what’s important in the software. What we need is a way to identify what parts of the product can be tested, a bill of materials, if you will, and then map our actual testing back to each part so that we can measure progress against the overall testing goal.<br /><br />This bill of materials represents everything that can be tested. We need it in a format that can be compared with actual testing so we know which parts have received enough testing and which parts are suspect. <br /><br />We have a candidate format for this bill of materials we are experimenting with at Google and will be unveiling at GTAC this year.<div class="blogger-post-footer"><img width='1' height='1' src='https://blogger.googleusercontent.com/tracker/15045980-1924308070013217561?l=googletesting.blogspot.com' alt='' /></div>]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-testing/an-ingredients-list-for-testing-part-two/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>An Ingredients List for Testing &#8211; Part One</title>
		<link>https://googledata.org/google-testing/an-ingredients-list-for-testing-part-one/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=an-ingredients-list-for-testing-part-one</link>
		<comments>https://googledata.org/google-testing/an-ingredients-list-for-testing-part-one/#comments</comments>
		<pubDate>Fri, 20 Aug 2010 15:29:00 +0000</pubDate>
		<dc:creator><![CDATA[James Whittaker]]></dc:creator>
				<category><![CDATA[Google Testing]]></category>

		<guid isPermaLink="false"></guid>
		<description><![CDATA[By James Whittaker: Each year, about this time, we say goodbye to our summer interns and bid them success in the upcoming school year. Every year they come knowing very little about testing and leave, hopefully, knowing much more. This is not yet-another...]]></description>
				<content:encoded><![CDATA[By James Whittaker<br /><br />Each year, about this time, we say goodbye to our summer interns and bid them success in the upcoming school year. Every year they come knowing very little about testing and leave, hopefully, knowing much more. This is not yet another plea to universities to teach more testing; instead it is a reflection on how we teach ourselves.<br /><br />I like to experiment with metaphors that help people "get it." From attacks to tools to tours to the apocalypse, I've seen my fair share. This summer, I got a lot of aha moments from various interns and new hires likening testing to cooking. We're chefs with no recipes, just a list of ingredients. We may all end up making a different version of Testing Cake, but we better at least be using the same set of ingredients. <br /><br />What are the ingredients? I'll list them here over the next couple of weeks. Please feel free to add your own, and I hope you don't steal my thunder by getting them in faster than I do. Right now I have a list of seven.<br /><br /><span style="font-weight:bold;">Ingredient 1: Product expertise<br /></span><br />Developers grow trees, testers manage forests. The level of focus of an individual developer should be on the low-level concerns of building reliable and secure components. Developers must maintain intellectual mastery from the UI to low-level APIs and memory usage of the features they code. We don’t need them distracted and overwhelmed with system-wide product expertise duties as well. <br /><br />Testers manage system-wide issues and rarely have deep component knowledge. As a manager of the forest, we can treat any individual tree abstractly. Testers should know the entire landscape, understanding the technologies and components involved but not actually taking part in their construction. This breadth of knowledge and independence of insight is a crucial complement to the developer’s low-level insights because testers must work across components and tie together the work of many developers when they assess overall system quality.<br /><br />Another way to think about this is that developers are the domain experts who understand the problem the software is solving and how it is being solved. Testers are the product experts who focus on the breadth of technologies used across the entire product.<br /><br />Testers should develop this product expertise to the extent that they cannot be stumped when asked questions like "how would I do this?" with their product. If I asked one of my Chrome testers any question about how to do anything with Chrome concerning installation, configuration, extensions, performance, rendering ... anything at all ... I expect an answer right away. An immediate, authoritative and correct answer. I would not expect the same of a developer. If I can stump a tester with such a question, then I have cause for concern. If there is a feature none of us knows about, or knows only incompletely, then we have a feature that might escape testing scrutiny. No, not on our watch!<br /><br />Product expertise is one ingredient that must be liberally used when mixing Testing Cake.<div class="blogger-post-footer"><img width='1' height='1' src='https://blogger.googleusercontent.com/tracker/15045980-452068016091574288?l=googletesting.blogspot.com' alt='' /></div>]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-testing/an-ingredients-list-for-testing-part-one/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>There, but for the grace of testing, go I</title>
		<link>https://googledata.org/google-testing/there-but-for-the-grace-of-testing-go-i/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=there-but-for-the-grace-of-testing-go-i</link>
		<comments>https://googledata.org/google-testing/there-but-for-the-grace-of-testing-go-i/#comments</comments>
		<pubDate>Sat, 17 Jul 2010 22:34:00 +0000</pubDate>
		<dc:creator><![CDATA[James Whittaker]]></dc:creator>
				<category><![CDATA[Google Testing]]></category>

		<guid isPermaLink="false"></guid>
		<description><![CDATA[By James A. WhittakerI've had more than a few emails about "antenna-gate" asking me to comment and suggesting clever, stabbing rebukes to a fallen competitor. I might aim a few of those at my own team in the future, some were genuinely funny, but none ...]]></description>
				<content:encoded><![CDATA[By James A. Whittaker<br /><br />I've had more than a few emails about "antenna-gate" asking me to comment and suggesting clever, stabbing rebukes to a fallen competitor. I might aim a few of those at my own team in the future (some were genuinely funny), but none of them will appear here. Instead I offer first a word of caution and second a reflection that my Mom used to intone whenever disaster occurred around her. It's called "counting your blessings." <br /><br />First, a caution that those of us who live in glass houses really should keep stones at arm's length. The only way anyone can rebuke Apple, without risk of waking up one morning sucking on their own foot, is if they write no software or have no users. Apple does a lot of the former and they enjoy many of the latter. Bugs like this make me sick when they are mine and nervous when they aren't. If any tester in the industry isn't taking stock right now, then they either aren't producing any software or aren't in possession of any users, at least ones they wish to keep. <br /><br />Second, taking stock has made me realize that I enjoy some important blessings that make the infinite task of testing so much more manageable. Indeed, the three blessings I count here are really the reason that testing doesn't fail more often than it does.  <br /><br /><span style="font-weight:bold;">The Blessing of Unit Testing</span><br /><br />I am thankful for early-cycle testing thinning out the bug herd. In late-cycle testing, major bugs are often masked by minor bugs, and too many of the latter can hamper the search for the former. Every bug that requires a bug report means lost time. 
There is the time spent to find the bug; time spent to reproduce and report it; time to investigate its cause and ensure it is not a duplicate; time to fix it, or to argue about whether it should be fixed; time to build the new version and push it to the test lab; time to verify the fix; time to test that the fix introduced no additional bugs. Clearly, the smaller the bug population to begin with, the easier the task becomes. Solid unit testing is a tester's best friend.<br /><br /><span style="font-weight:bold;">The Blessing of Rarity</span><br /><br />I am thankful that the vast majority of bugs that affect entire user populations are generally nuisance-class issues. These are typically bugs concerning awkward UI elements or the occasional misfiring of some feature or another where workarounds and alternatives will suffice until a minor update can be made. Serious bugs tend to have a more localized effect. True recall-class bugs, serious failures that affect large populations of users, are far less common. Testers can take advantage of the fact that not all bugs are equally damaging and prioritize their effort to find bugs in the order of their seriousness. The futility of finding every bug can be replaced by an investigation based on risk.<br /><br />Risk analysis is so important that we've built an internal tool to help guide testers in performing it. Code-named "Testify," this tool streamlines the process of risk analysis, at least the way we do it at Google. We're working on open-sourcing an early prototype in time for GTAC 2010 (I can hear my team cringing now ... "you promised it <span style="font-style:italic;">when</span>?"). <br /><br /><span style="font-weight:bold;">The Blessing of Repetition</span><br /><br />I am thankful that user behavior is highly repetitive. There are features that enjoy heavy usage across user populations and features that are far less popular. Mobile phones are a good example of this. 
The phone is constantly establishing connections to networks. Certain features, like making and receiving calls, texting and so forth, are used more often than taking pictures or searching maps. The popularity of user applications is a matter of hard data, not guesswork. Knowing what users do most often, less often and least often means that testing resources can be applied with a commensurate amount of force and that testing itself can be patterned after actual usage profiles.<br /><br />Testers can gain a great deal from taking the user’s point of view and weaving usage concerns into the software testing process. Focusing on the user ensures that high-impact bugs are found early and software revisions that break key user scenarios are identified quickly and not allowed to persist.<br /><br />Apple may be the company in the news today; who knows who it will be tomorrow. Every company that produces software people care about has either been there or will be there. The job is simply too big for perfection to be an option. But there are key advantages we have that make the job manageable. <br /><br />Put down the stones and make sure that what few blessings we testers possess are being exploited for everything they are worth. Hopefully, your company will be spared, and the next time a company suffers such a bug you won't be the one making excuses. Perhaps you'll be lucky enough to be the one saying, "there but for the grace of testing go I."<div class="blogger-post-footer"><img width='1' height='1' src='https://blogger.googleusercontent.com/tracker/15045980-6967360528368916848?l=googletesting.blogspot.com' alt='' /></div>]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-testing/there-but-for-the-grace-of-testing-go-i/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>GTAC!</title>
		<link>https://googledata.org/google-testing/gtac/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=gtac</link>
		<comments>https://googledata.org/google-testing/gtac/#comments</comments>
		<pubDate>Fri, 30 Apr 2010 14:51:00 +0000</pubDate>
		<dc:creator><![CDATA[James Whittaker]]></dc:creator>
				<category><![CDATA[Google Testing]]></category>

		<guid isPermaLink="false"></guid>
		<description><![CDATA[By James A. WhittakerYes I know, I've been quiet. Seriously heads down shipping products and developing what I think are some pretty cool new testing ideas and tools. Perhaps GTAC will be the chance for you to judge that for yourselves. Perhaps it will...]]></description>
				<content:encoded><![CDATA[By James A. Whittaker<br /><br />Yes, I know, I've been quiet. I've been seriously heads down, shipping products and developing what I think are some pretty cool new testing ideas and tools. Perhaps GTAC will be the chance for you to judge that for yourselves. Perhaps it will be worth the wait. <br /><br />I hope I am invited to speak at GTAC. (Is this an appropriate forum for such lobbying? Should I open it up to a vote on whether I should or should not be there? I am happy not going, as India is a long trip, but this is GTAC we are talking about!) There are a number of things that will be ready either to debut or, if I am lucky, to be open sourced by then. <br /><br />Would you like to see a Chrome extension that displays bug, test case, coverage and other information as an overlay on top of your web app UI? Imagine being able to see bugs at their exact location on the UI, report bugs by simply right-clicking the errant part of the web page, see footprints where test cases (automated and manual, across multiple testers) have been, and lots more useful information. Are you tired of querying databases to see these things and just want your test data as a cellophane wrapper around your web app UI? If you were at STAR this week, you got a preview. But that presentation is already out of date. <br /><br />Would you like to be able to write automated test cases that can control your web app, your browser <span style="font-style:italic;">and </span>the operating system they are running on? Well, if that stack contains Chrome and Chrome OS, you can do it with a new web test framework we are developing. Would you like to do all of this with JavaScript? Sound like magic? Well, I think the Web Test Framework is appropriately named: WTF. <br /><br />Would you like to see a record-and-playback facility that records directly to JavaScript and is actually built into your browser? An R/P tool that handles AJAX and won't get confused by self-updating web pages? 
One that stores recordings directly into a test case management system that is accessible to the world? <br /><br />Would you like to hear about the extensive library of testing tours we have developed and how our manual testing strategy across the web-app/Chrome/Chrome-OS stack is shaping up? <br /><br />These are some of the things that have kept me from this blog. Forgive me. I will report on them here and, with a little pressure directed toward Sujay The Decider, perhaps demo them at GTAC.<div class="blogger-post-footer"><img width='1' height='1' src='https://blogger.googleusercontent.com/tracker/15045980-7392235519031529285?l=googletesting.blogspot.com' alt='' /></div>]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-testing/gtac/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Still Stuck in the 90s</title>
		<link>https://googledata.org/google-testing/still-stuck-in-the-90s/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=still-stuck-in-the-90s</link>
		<comments>https://googledata.org/google-testing/still-stuck-in-the-90s/#comments</comments>
		<pubDate>Tue, 02 Mar 2010 05:09:00 +0000</pubDate>
		<dc:creator><![CDATA[James Whittaker]]></dc:creator>
				<category><![CDATA[Google Testing]]></category>

		<guid isPermaLink="false"></guid>
		<description><![CDATA[By James A. WhittakerFlashback. It's 1990. Chances are you do not own a cell phone. And if you do it weighs more than a full sized laptop does now. You certainly have no iPod. The music in your car comes from the one or two local radio stations that pl...]]></description>
				<content:encoded><![CDATA[By James A. Whittaker<div><br /></div><div>Flashback. It's 1990. Chances are you do not own a cell phone. And if you do, it weighs more than a full-sized laptop does now. You certainly have no iPod. The music in your car comes from the one or two local radio stations that play songs you can tolerate and a glove box full of CDs and cassettes. Yes, I said cassettes ... you know, the ones next to those paper road maps. Music on the go? We carried our boom boxes on our shoulders back then. </div><div><br /></div><div>If you are a news junkie, you get your fix from the newspaper or you wait until 6 ... or 11. Sports? Same. Oh, and I hope you don't like soccer or hockey, because you can't watch that stuff in this country more often than every four years. Go find a phone book if you want to call someone and complain. </div><div><br /></div><div>I could go on, and on, and on, but you get the point. Oh wait, one more: how many of you had an email address in 1990? Be honest. And the people reading this blog are among the most likely to answer that affirmatively. </div><div><br /></div><div>The world is different. The last 20 years have changed the human condition in ways that no other 20-year period can match. Imagine taking a 16-year-old from 1990 and transplanting him or her to a 2010 high school. Culture shock indeed. Imagine transporting a soccer mom, a politician, a university professor... Pick just about any profession and the contrast would be so stark that those 1990 skills would be a debilitating liability. </div><div><br /></div><div>Except one: that of a software tester. A circa-1990 tester would come from a mainframe/terminal world. Or, if they were on the real cutting edge, a locally networked PC. They'd fit into the data center/slim client world with nary a hiccup. They'd know all about testing techniques because input partitioning, boundary cases, load and stress, etc., are still what we do today. Scripting? Yep, they'd be good there too. 
Syntax may have changed a bit, but that wouldn't take our time traveler long to pick up. That GEICO caveman may look funny at the disco, but he has the goods to get the job done. </div><div><br /></div><div>Don't get me wrong, software testing has been full of innovation. We've minted patents and PhD theses. We built tools and automated the crud out of certain types of interfaces. But those interfaces change, and that automation, we find to our distress, is rarely reusable. How much real innovation have we had in this discipline <i>that has actually stood the test of time</i>? I argue that we've thrown most of it away. A disposable two decades. It was too tied to the application, the domain, the technology. With each project we start out basically anew, reinventing the testing wheel over and over. Each year's innovation looks much the same as the year before. 1990 quickly turns into 2010, and we remain stuck in the same old rut. </div><div><br /></div><div>The challenge for the next twenty years will be to make a 2010 tester feel like a complete noob when transported to 2030. Indeed, I think this may be accomplished in far less than 20 years if we all work together. Imagine, for example, testing infrastructure <i>built into the platform. </i>Not enough for you? Imagine writing a single simple script that exercises your app, the browser and the OS at the same time, using the same language. Not enough for you? Imagine building an app and having it automatically download all applicable test suites <i>and execute them on itself</i>. Anyway, what are you working on?</div><div><br /></div><div>Interested? 
Progress reports will be given at the following locations this year:</div><div><br /></div><div>Swiss Testing Day, Zurich, March 17, 2010</div><div><br /></div><div>STAR East, Orlando, May 2010</div><div><br /></div><div>GTAC, TBD, Fall 2010</div><div><br /></div><div>Here's to an interesting future.</div><div><br /></div><div class="blogger-post-footer"><img width='1' height='1' src='https://blogger.googleusercontent.com/tracker/15045980-4616850883211727348?l=googletesting.blogspot.com' alt='' /></div>]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-testing/still-stuck-in-the-90s/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Testing in the Data Center (Manufacturing No More)</title>
		<link>https://googledata.org/google-testing/testing-in-the-data-center-manufacturing-no-more/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=testing-in-the-data-center-manufacturing-no-more</link>
		<comments>https://googledata.org/google-testing/testing-in-the-data-center-manufacturing-no-more/#comments</comments>
		<pubDate>Tue, 09 Feb 2010 23:42:00 +0000</pubDate>
		<dc:creator><![CDATA[James Whittaker]]></dc:creator>
				<category><![CDATA[Google Testing]]></category>

		<guid isPermaLink="false"></guid>
		<description><![CDATA[By James A. WhittakerW. Edwards Deming helped to revolutionize the process of manufacturing automobiles in the 1970s and a decade later the software industry ran with the manufacturing analogy and the result was nearly every waterfall, spiral or agile ...]]></description>
				<content:encoded><![CDATA[By James A. Whittaker<br /><br />W. Edwards Deming helped to revolutionize the process of manufacturing automobiles in the 1970s, and a decade later the software industry ran with the manufacturing analogy; the result was nearly every waterfall, spiral or agile method we have. Some, like TQM, Cleanroom and Six Sigma, are obvious descendants of Deming, while others were just heavily influenced by his thinking. Deming <span style="font-style: italic;">was </span>the man.<br /><br />I repeat, <span style="font-style: italic;">was</span>. My time testing in Google's data center makes it clear that this analogy just doesn't fit anymore. I want a new one. And I want one that helps me as a tester. I want one that better guides my behavior.<br /><br />We just don't write or release software the way we used to. Software isn't so much <span style="font-style: italic;">built </span>as it is <span style="font-style: italic;">grown</span>. Software isn't <span style="font-style: italic;">shipped</span> ... it's simply <span style="font-style: italic;">made available</span> by, often literally, the flip of a switch. This is not your father's software. 21st-century development is a seamless path from innovation to release where every phase of development, including release, is happening all the time. Users are on the inside of the firewall in that respect, and feedback is constant. If a product isn't compelling, we find out much earlier and it dies in the data center. I fancy these dead products serve to enrich the data center, a digital circle of life where new products are built on the bones of the ones that didn't make it.<br /><br />In our father's software and Deming's model we talk about quality control and quality assurance while we play the role of inspector. In contrast, my job seems much more like that of an attending physician. In fact, a medical analogy gives us some interesting parallels for thinking about software testing. 
A physician's hospital is our data center: there is always activity, and many things are happening in parallel. Physicians have patients; we have applications and features. Their medical devices are our infrastructure and tools. I can picture my application's features strewn across the data center in little virtual hospital beds. Over here is the GMail ward, over there is Maps. Search, of course, has a wing of its own, and Ads, well, they all have private rooms.<br /><br />In a hospital, records are important. There are too many patients with specific medical conditions and treatment histories for any physician to keep straight. Imagine walking up to the operating table without examination notes and diagnoses. Imagine operating without a constant stream of real-time health data.<br /><br />Yet as software testers we find ourselves in this situation often. That app lying in our data center has been tested before. It has been treated before. Where are our medical notes?<br /><br />So let's add little clipboards to the virtual data center beds in which our apps lie. Let's add equipment to take vitals and display them for any attending tester to see. Like human patients, apps have a pulse: data runs through code paths like blood through veins. There are important things happening, countable events that yield statistics and indicators and create a medical history for an attending tester to use in whatever procedure they must now perform. The work of prior testers need not be ignored.<br /><br />It's an unsettling aspect of the analogy that I have put developers in the role of creator, but so be it. Like other metaphorical creators before them, they have spawned intrinsically flawed creatures. Security is their cancer, privacy their aging. Software is born broken, and only some things can be fixed. The cancer of security can only be managed. Like actual aging, privacy is a guarantee only young software enjoys. 
Such is the life of a data center app.<br /><br />But it is the monitors and clipboards that intrigue me. What do they say of our digital patients? As an app grows from concept into adolescence, what part of its growth do we monitor? Where is the best place for our probes? How do we document treatment and evaluations? Where do we store the notes about surgeries? What maladies have been treated? Are there problematic organs and recurrent illnesses? The documents and spreadsheets of the last century are inadequate. A patient's records are only useful if they are attached to the patient, up-to-date and in full living color, to be read by whatever attending tester happens to be on call.<br /><br />This is the challenge of the new century of software. It's not a process of get-it-as-reliable-as-possible-before-we-ship. It's health care, cradle-to-grave health care ... prevention, diagnosis, treatment and cure.<br /><br />So slip into your scrubs; it's going to be a long night in the ER.<div class="blogger-post-footer"><img width='1' height='1' src='https://blogger.googleusercontent.com/tracker/15045980-6332621290904877935?l=googletesting.blogspot.com' alt='' /></div>]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-testing/testing-in-the-data-center-manufacturing-no-more/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Interviewing Insights and Test Frameworks</title>
		<link>https://googledata.org/google-testing/interviewing-insights-and-test-frameworks/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=interviewing-insights-and-test-frameworks</link>
		<comments>https://googledata.org/google-testing/interviewing-insights-and-test-frameworks/#comments</comments>
		<pubDate>Wed, 06 Jan 2010 00:19:00 +0000</pubDate>
		<dc:creator><![CDATA[James Whittaker]]></dc:creator>
				<category><![CDATA[Google Testing]]></category>

		<guid isPermaLink="false"></guid>
		<description><![CDATA[By James A. WhittakerGoogle is hiring. We have openings for security testers, test tool developers, automation experts and manual testers. That's right, I said manual testers.As a result of all this interviewing I've been reading a lot of interview fee...]]></description>
				<content:encoded><![CDATA[By James A. Whittaker<br /><br /><span class="byline-author">Google is hiring. We have openings for security testers, test tool developers, automation experts and manual testers. That's right, I said manual testers.<br /><br />As a result of all this interviewing, I've been reading a lot of interview feedback and wanted to pass along some insights about how these applicants approach solving the testing problems we pose in our interviews. I think the patterns I note in this post are interesting insights into the minds of software testers, at least the ones who want to work for Google.<br /><br />One of the things our interviewers like to ask is 'how would you test product <span style="font-style: italic;">xyz</span>?' The answers help us judge a tester's instincts, but after reading many hundreds of these interviews I have noticed marked patterns in how testers approach solving such problems. It's as though testers have a default testing framework built into their thinking that guides them in choosing test cases and defines the way they approach test design.<br /><br />In fact, these built-in frameworks seem to drive a tester's thinking to the extent that when I manage to identify the framework a tester is using, I can predict with a high degree of accuracy how they will answer the interviewers' questions. The framework defines what kind of tester they are. I find this intriguing and wonder if others have similar examples or counterexamples to cite.<br /><br />Here are the frameworks I have seen just in the last two weeks:<br /><br />The <span style="font-weight: bold;">Input Domain Framework </span>treats software as an input-output mechanism. Subscribers to this framework think in terms of sets of inputs, rules about which inputs are more important, and relationships between inputs, input sequences and outputs. This is a common model in random testing, model-based testing and the testing of protocols and APIs. 
An applicant who uses this framework will talk about which inputs they would use to test a specific application and try to justify why those inputs are important.<br /><br />The <span style="font-weight: bold;">Divide and Conquer Framework</span> treats software as a set of features. Subscribers begin by decomposing an app into its features, prioritizing them and then working through that list in order. Often the decomposition is multi-layered, creating a bunch of small testing problems out of one very large one. You don't test the feature so much as you test its constituent parts. An applicant who uses this framework is less concerned with actual test cases and more concerned with reducing the size of the problem to something manageable.<br /><br />The <span style="font-weight: bold;">Fishbowl Framework </span>is a big-picture approach to testing in which we manipulate the application while watching and comparing the results. Put the app in a fishbowl, swirl it around in the water and watch what happens. The emphasis is more on the watching and analyzing than it is on exactly how we manipulate the features. An applicant who uses this framework chooses tests that cause visible output and large state changes.<br /><br />The <span style="font-weight: bold;">Storybook Framework </span>consists of developing specific scenarios and making sure the software does what it is supposed to do when presented with those scenarios. Stories start with the expected path and work outward. They don't always get beyond the expected. This framework tests coherence of behavior more than it probes for subtle errors. Applicants who employ this framework often take a user's point of view and talk about using the application to get real work done.<br /><br />The <span style="font-weight: bold;">Pessimists Framework </span>starts with edge cases. Subscribers test erroneous input, bad data, misconfigured environments and so on. 
This is a common strategy on mature products where the main paths are well trodden. Applicants who use this framework like to assume that the main paths will get tested naturally as part of normal dev use and dog-fooding and that the testing challenge is concentrated on lower probability scenarios. They are quick to take credit for prior testing, assume its rationality and pound on problematic scenarios.<br /><br />There are more and I am taking furious notes to try and make sense of them all. As I get to know the testers who work in my organization, it doesn't take long to see which frameworks they employ and in what order (many are driven by multiple frameworks). Indeed, after studying an applicant's first interview, I can almost always identify the framework they use to answer testing questions and can often predict how they are going to answer the questions other interviewers ask even before I read that far.<br /><br />Now some interesting questions come out of this that I am still looking into. Which of these frameworks is best? Which is best suited to certain types of functionality? Which is better for getting a job at Google? Already patterns are emerging.<br /><br />One thing is for sure, we're interviewing at a rate that will provide me with lots of data on this subject. Contact me if you'd like to participate in this little study!<br /></span><div class="blogger-post-footer"><img width='1' height='1' src='https://blogger.googleusercontent.com/tracker/15045980-6058805237611285876?l=googletesting.blogspot.com' alt='' /></div>]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-testing/interviewing-insights-and-test-frameworks/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Upcoming Webinar</title>
		<link>https://googledata.org/google-testing/upcoming-webinar/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=upcoming-webinar</link>
		<comments>https://googledata.org/google-testing/upcoming-webinar/#comments</comments>
		<pubDate>Tue, 08 Dec 2009 23:50:00 +0000</pubDate>
		<dc:creator><![CDATA[James Whittaker]]></dc:creator>
				<category><![CDATA[Google Testing]]></category>

		<guid isPermaLink="false"></guid>
		<description><![CDATA[By James A. WhittakerIf anyone is interested, I am giving a webinar through uTest. It's open to the uTest community and the public. You can register at https://www2.gotomeeting.com/register/793439307. I'll be giving a talk about how we do testing at Go...]]></description>
				<content:encoded><![CDATA[By James A. Whittaker<div><br /></div><div>If anyone is interested, I am giving a webinar through uTest. It's open to the uTest community and the public. You can register at <a href="https://www2.gotomeeting.com/register/793439307" >https://www2.gotomeeting.com/<wbr>register/793439307</a>. </div><div><br /></div><div>I'll be giving a talk about how we do testing at Google, specifically related to an update on our use of exploratory testing that I introduced at STAR East.</div><div><br /></div><div>Look forward to seeing you online.</div><div><br /></div><div><br /></div><div class="blogger-post-footer"><img width='1' height='1' src='https://blogger.googleusercontent.com/tracker/15045980-2094446656948046906?l=googletesting.blogspot.com' alt='' /></div>]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-testing/upcoming-webinar/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>&quot;If you were a brand new QA manager &#8230;&quot; (cont)</title>
		<link>https://googledata.org/google-testing/if-you-were-a-brand-new-qa-manager-cont/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=if-you-were-a-brand-new-qa-manager-cont</link>
		<comments>https://googledata.org/google-testing/if-you-were-a-brand-new-qa-manager-cont/#comments</comments>
		<pubDate>Fri, 04 Dec 2009 16:11:00 +0000</pubDate>
		<dc:creator><![CDATA[James Whittaker]]></dc:creator>
				<category><![CDATA[Google Testing]]></category>

		<guid isPermaLink="false"></guid>
		<description><![CDATA[By James A. WhittakerMore thoughts:Understand your orgs release process and prioritiesLate cycle pre-release testing is the most nerve racking part of the entire development cycle. Test managers have to strike a balance between doing the right testing ...]]></description>
				<content:encoded><![CDATA[By James A. Whittaker<br /><br /><div>More thoughts:</div><div><br /></div><div><b>Understand your org's release process and priorities</b></div><div>Late-cycle pre-release testing is the most nerve-racking part of the entire development cycle. Test managers have to strike a balance between doing the right testing and ensuring a harmonious release. I suggest attending all the dev meetings, but certainly as release approaches you shouldn't miss a single one. Pay close attention to their worries and concerns. Nightmare scenarios have a tendency to surface late in the process. Add test cases to your verification suite to ensure these scenarios won't happen. </div><div><br /></div><div>The key here is to get late-cycle pre-release testing right without any surprises. Developers can get skittish, so make sure they understand your test plan going into the final push. The trick isn't to defer to development as to how to perform release testing but to make sure they are on board with your plan. I find that at Google, increasing the team's focus on manual testing is wholeheartedly welcomed by the dev team. Find your dev team's comfort zone and strike a balance between doing the right testing and making the final hours/days as wrinkle-free as possible.  </div><div><br /></div><div><b>Question your testing process</b></div><div>Start by reading every test case and reviewing all automation. Can you map these test cases back to the test plan? How many tests do you have per component? Per feature? If a bug is found outside the testing process, did you create a test case for it? Do you have a process to fix or deprecate broken or outdated test cases? </div><div><br /></div><div>As a test manager, the completeness and thoroughness of the set of tests is your job. You may not be writing or running a lot of tests, but you should have them all in your head and be the first to spot gaps. 
This is something a new manager should tackle early and stay on top of at all times. </div><div><br /></div><div><b>Look for ways to innovate</b></div><div>The easiest way to look good in the eyes of developers is to maintain the status quo. Many development managers appreciate a docile and subservient test team. Many of them like a predictable and easily understood testing practice. It's one less thing to worry about (even in the face of obvious inefficiencies, the familiar path is often the most well-worn). </div><div><br /></div><div>As a new manager, it is your job not to let them off so easily! You should make a list of the parts of the process that concern you and the parts that seem overly hard or inefficient. These are the places to apply innovation. Prepare for nervousness from the developer ranks, but do yourself and the industry a favor and place some bets for the long term. </div><div><br /></div><div>There is no advice I have found universally applicable concerning how best to foster innovation. What works for me is to find the stars on your team and make sure they are working on something they can be passionate about. As a manager, this is the single most important thing you can do to increase productivity and foster innovation. </div>]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-testing/if-you-were-a-brand-new-qa-manager-cont/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>&quot;If you were a brand new QA manager &#8230;&quot;</title>
		<link>https://googledata.org/google-testing/if-you-were-a-brand-new-qa-manager/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=if-you-were-a-brand-new-qa-manager</link>
		<comments>https://googledata.org/google-testing/if-you-were-a-brand-new-qa-manager/#comments</comments>
		<pubDate>Wed, 02 Dec 2009 16:59:00 +0000</pubDate>
		<dc:creator><![CDATA[James Whittaker]]></dc:creator>
				<category><![CDATA[Google Testing]]></category>

		<guid isPermaLink="false"></guid>
		<description><![CDATA[By James A. WhittakerI got this question in email this morning from a reader:"I am a test supervisor at --- and was promoted to a QA management position yesterday.  I'm excited and terrified, so I have been thinking about how to organize the thought in...]]></description>
				<content:encoded><![CDATA[By James A. Whittaker<div><br /></div><div>I got this question in email this morning from a reader:<div><br /></div><div>"I am a test supervisor at --- and was promoted to a QA management position yesterday. I'm excited and terrified, so I have been thinking about how to organize the thought in my mind. After attending StarWest and following your blog for a while now, I am very interested in your opinion.</div><br />If you were a brand new QA Manager, and you knew what you know now, what are the top 5-10 things you would focus on?"<div><br /></div><div>I am flattered by the confidence, but in the event it is misplaced, I wanted to answer this question publicly and invite readers to chime in with their own experiences. Besides, I am curious as to other opinions because I live with this same excitement and terror every day and could use a little advice myself. Here are my first couple, and I'll add some more in future posts (unless of course you guys beat me to it).</div><div><br /></div><div><b>Start living with your product, get passionate about it</b></div><div>Drink your product's kool-aid, memorize the sales pitch, understand its competitive advantages, but retain your skepticism. Test/QA managers should be as passionate about the product as dev managers, but we need to temper our passion with <i>proof</i>. Make sure the test team never stops testing the functionality represented by the sales pitch. </div><div><br /></div><div>Furthermore, part of living with your product is being a user yourself. 
I now live without a laptop and exclusively use my Chrome OS Netbook for my day-to-day work. As people see me with it in the hallways, I get to recite its sales pitch many times every day. Great practice. I also get to live with its inadequacies and take note of the things it has yet to master. This is great discussion fodder with devs and other stakeholders and also forces me to consider competitive products. When I can't do something important on my Chrome OS Netbook, I have to use a competing product, and this spawns healthy discussions about how users will perceive our product's downside and how we can truthfully communicate the pros and cons of our product to customers. Every day becomes a deep dive into my product <i>as an actual user</i>. </div><div><br /></div><div>This is a great way to start off on a new product. </div><div><br /></div><div><b>Really focus on the test plan, make it an early priority</b></div><div>If you are taking over an existing role as test manager for an existing product, chances are that a test plan already exists, and chances are that test plan is inadequate. I'm not being unkind to your predecessor here; I am just being truthful. Most test plans are transitory docs. </div><div><br /></div><div>Now let me explain what I mean by that. Testers are quick to complain about inadequate design docs: devs throw together a quick design doc or diagram, but once they start coding, that design stagnates as the code takes on a life of its own. Soon the code does not match the design and the documentation is unreliable. If this is not your experience, congratulations, but I find it far more the norm than design docs that are continually updated. </div><div><br /></div><div>Testers love to complain about this. "How can I test a product without a full description of what the product does?" But don't we often do the same thing with respect to our test plans? 
We throw together a quick test plan, but as we start writing test cases (automated or manual), they take on a life of their own. Soon the test cases diverge from the test plan as we chase new development and our experience develops new testing insight. The test plan has just become like the design docs: a has-been document. </div><div><br /></div><div>You're a new test manager now: make fixing these documents one of your first priorities. You'll get to know your product's functionality, and you'll see holes in the current test infrastructure that will need plugging. Plus, you'll have a basis to communicate with dev managers and show them you are taking quality seriously. Dev managers at Google love a good test plan; it gives them confidence you know what you are doing. </div><div><br /></div><div>Coming up next:</div><div><br /></div><div>Understand your org's release process and priorities</div><div>Question your testing process</div><div>Look for ways to innovate</div><div><br /></div></div>]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-testing/if-you-were-a-brand-new-qa-manager/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Speaking Tonight at SASQAG</title>
		<link>https://googledata.org/google-testing/speaking-tonight-at-sasqag/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=speaking-tonight-at-sasqag</link>
		<comments>https://googledata.org/google-testing/speaking-tonight-at-sasqag/#comments</comments>
		<pubDate>Thu, 19 Nov 2009 18:17:00 +0000</pubDate>
		<dc:creator><![CDATA[James Whittaker]]></dc:creator>
				<category><![CDATA[Google Testing]]></category>

		<guid isPermaLink="false"></guid>
		<description><![CDATA[By James A. WhittakerI am pleased to be speaking tonight at the local (and in my experience one of the finest) QA special interest group, SASQAG. My talk is based on my STAR keynote, but having just released Chrome OS today I am going to be detailing m...]]></description>
				<content:encoded><![CDATA[By James A. Whittaker<div><br /></div><div>I am pleased to be speaking tonight at the local (and in my experience one of the finest) QA special interest group, SASQAG. My talk is based on my STAR keynote, but having just released Chrome OS <i>today</i> I am going to be detailing more of our process for making testing more conscious and deliberate. </div><div><br /></div><div>If you are local and want to attend, go to www.sasqag.org for details. I hope to see you there. </div><div><br /></div>]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-testing/speaking-tonight-at-sasqag/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>The FedEx Tour</title>
		<link>https://googledata.org/google-testing/the-fedex-tour/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=the-fedex-tour</link>
		<comments>https://googledata.org/google-testing/the-fedex-tour/#comments</comments>
		<pubDate>Tue, 20 Oct 2009 22:48:00 +0000</pubDate>
		<dc:creator><![CDATA[James Whittaker]]></dc:creator>
				<category><![CDATA[Google Testing]]></category>

		<guid isPermaLink="false"></guid>
		<description><![CDATA[By Rajat DewanI appreciate James' offer to talk about how I have used the FedEx tour in Mobile Ads. Good timing too as I just found two more priority 0 bugs with the automation that the FedEx tour inspired! It was fun presenting this at STAR and I am p...]]></description>
				<content:encoded><![CDATA[<span class="Apple-style-span"  style="font-family:arial;"><span class="Apple-style-span" style="font-size: small;">By Rajat Dewan</span></span><div><span class="Apple-style-span"  style="font-family:verdana;"><br /></span></div><div><span class="Apple-style-span"   style="  ;font-family:'Times New Roman';font-size:medium;"><div style="margin-top: 6px; margin-right: 6px; margin-bottom: 6px; margin-left: 6px; padding-top: 0px; padding-right: 0px; padding-bottom: 0px; padding-left: 0px; font-family: Verdana; font-size: 10pt; background-color: rgb(255, 255, 255); min-height: 1100px; counter-reset: __goog_page__ 0; line-height: normal; ">I appreciate James' offer to talk about how I have used the FedEx tour in Mobile Ads. Good timing too as I just found two more priority 0 bugs with the automation that the FedEx tour inspired! It was fun presenting this at STAR and I am pleased so many people attended.<br /><div style="margin-top: 0px; margin-bottom: 0px; "><br /></div><div style="margin-top: 0px; margin-bottom: 0px; ">Mobile has been a hard problem space for testing: a humongous browser/phone/capability combination that is changing fast as the underlying technology evolves. Add to this poor tool support for the mobile platform and the rapid evolution of the devices, and you'll understand why I am so interested in advice on how to do better test design. We've literally tried everything, from checking screenshots of Google's properties on mobile phones to treating the phone like a collection of client apps and automating them in the traditional UI button-clicking way.<br /><br />Soon after James joined Google in May 2009, he started introducing the concept of tours, essentially a "structured" form of exploratory testing. Tours gave me a radically new way to look at the testing problem. 
<i>Traditionally, the strategy is simple: focus on the end-user interaction and verify the expected outputs from the system under test. </i>Tours (at least for me) change this formula. They force the tester to focus on what the software does, isolating the different moving parts of software in execution and isolating the different parts of the software at the component (and composition) level. Tours tell me to focus on testing the parts that drive the car, rather than on whether or not the car drives. This is somewhat counterintuitive, I admit; that's why it is so important. The real value of the tours comes from the fact that they guide me in testing those different parts and help me analyze how different capabilities interoperate. Cars will always drive you off the lot; which part will break first is the real question.<br /><br />I think testing a car is a good analogy. As a system it's devilishly complicated, hard to automate, and hard to find the right combination of factors to make it fail. However, testing the dashboard can be automated; so can testing the flow of gasoline from the fuel tank to the engine and from there to the exhaust; so can lots of other capabilities. These automated point solutions can also be combined to test a bigger piece of the whole system. It's exactly what a mechanic does when trying to diagnose a problem: he employs different strategies for testing/checking each mechanical subsystem.<br /><br />At STAR West, I spoke about evolving a good test strategy with the help of tours, specifically the FedEx tour. Briefly, the FedEx tour is about tracking the movement of data and how it gets consumed and transformed by the system. It focuses on a very specific moving part, and, as it turns out, a crucial one for mobile. </div><div style="margin-top: 0px; margin-bottom: 0px; "><br /></div><div style="margin-top: 0px; margin-bottom: 0px; ">James' FedEx tour tells me to identify and track data through my system. 
Identifying it is the easy part: the data comes from the Ads Database and is basically the information a user sees when the ad is rendered. When I followed it through the system, I noted three (and only three) places where the data is used (either manipulated or rendered for display). I found this to be true for all 10 local versions of the Mobile Ads application. The eureka moment for me was realizing that if I validated the data at those three points, I had little else to do in order to verify any specific localized version of an ad. Add all the languages you want; I'll be ready!</div><div style="margin-top: 0px; margin-bottom: 0px; "><br /></div><div style="margin-top: 0px; margin-bottom: 0px; ">I was able to hook verification modules at each of these three data inflection points. This basically meant validating data for the new Click-to-Call Ad parameters and the locale-specific phone number format. I was tracking how the code affects the data at each stage, which also localizes a bug better than conventional means ... I knew exactly where the failure was! To overcome the location dependency, I mocked the GPS location parameters of the phone. As soon as I finished the automation, I ran each ad in our database through each of the language versions, verifying the integrity of the data. The only thing left was to visually verify rendering of the ads on the three platforms, reducing the manual tests to three (one each for Android, iPhone and Palm Pre).<br /></div><div style="margin-top: 0px; margin-bottom: 0px; "><br /></div><div style="margin-top: 0px; margin-bottom: 0px; ">The FedEx tour guided me to build a succinct piece of automation, turning what could have been a huge and error-prone manual test into a reusable piece of automation that will find and localize bugs quickly. We're now looking at applying the FedEx tour across ads and in other client and cloud areas in the company. 
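The approach described above (run the same data validation after each inflection point, so the first failing checkpoint localizes the bug) can be sketched roughly like this. Everything in the sketch is a hypothetical illustration: the stage names, the locale table and the field rules are mine, not the actual Mobile Ads automation.

```python
# Hypothetical sketch of the FedEx-tour idea: the same validator runs after
# every stage that touches the ad data, so the first checkpoint that fails
# tells you exactly which stage corrupted the data.
PHONE_PREFIXES = {"en-US": "+1", "de-DE": "+49"}  # assumed locale rules

def validate(ad, locale):
    """Return a list of problems found in one ad record at one checkpoint."""
    problems = []
    if not ad.get("headline"):
        problems.append("missing headline")
    if not ad.get("phone", "").startswith(PHONE_PREFIXES[locale]):
        problems.append(f"phone not in {locale} format")
    return problems

def track_through_system(ad, locale, stages):
    """Run the ad through each (name, stage) pair, validating after each one."""
    for name, stage in stages:
        ad = stage(ad)
        problems = validate(ad, locale)
        if problems:
            return name, problems  # the failing stage localizes the bug
    return None, []

# Example: a transform stage that accidentally strips the country code.
stages = [
    ("fetch", lambda ad: dict(ad)),
    ("transform", lambda ad: {**ad, "phone": ad["phone"].lstrip("+1")}),
    ("render", lambda ad: ad),
]
ad = {"headline": "Call now", "phone": "+1-555-0100"}
print(track_through_system(ad, "en-US", stages))
```

The payoff is the same as in the account above: once validation hooks exist at each inflection point, adding a locale is just another entry in the rules table, and a failing checkpoint names the stage to debug.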
Hopefully there will be more experience reports from others who have found it useful.</div><br /><div style="margin-top: 0px; margin-bottom: 0px; ">Exploratory Testing ... it's not just for manual testers anymore!</div><div><br /></div></div></span></div>]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-testing/the-fedex-tour/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>STAR West Trip Report</title>
		<link>https://googledata.org/google-testing/star-west-trip-report/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=star-west-trip-report</link>
		<comments>https://googledata.org/google-testing/star-west-trip-report/#comments</comments>
		<pubDate>Mon, 12 Oct 2009 20:08:00 +0000</pubDate>
		<dc:creator><![CDATA[James Whittaker]]></dc:creator>
				<category><![CDATA[Google Testing]]></category>

		<guid isPermaLink="false"></guid>
		<description><![CDATA[By James A. WhittakerI am happy to report that attendance is way up at STAR. My back of the envelope calculations put it at several hundred more than STAR East a mere five months ago. A sure sign of economic recovery; I am surprised the stat hasn't mad...]]></description>
				<content:encoded><![CDATA[By James A. Whittaker<div><br /></div><div>I am happy to report that attendance is way up at STAR. My back of the envelope calculations put it at several hundred more than STAR East a mere five months ago. A sure sign of economic recovery; I am surprised the stat hasn't made it to Obama's resume yet.</div><div><br /></div><div>The Expo was my main disappointment. The vendor exhibits are still in atrophy. I realize the days of Mercury and Rational are over and Empirix's $ix-figure rotating-parts booth is packed away in someone's garage, but there were only two short rows of sedate booths. (The magician was a nice touch though ... wish I could remember what he was selling.) Where have all the big players gone? </div><div><br /></div><div>I gave a tutorial with the arrogant title (Lee Copeland's idea, not mine) "James Whittaker: On Testing." It was listed as sold out (STAR capped the audience at 100), but a couple dozen truants clearly snuck in. Apparently there is a bug in their 'sold out' exception handler and I am a poor door warden. The tutorial is a discussion of problems and trends in testing. I gave it at STAR East, and it was different again this time. It's half a discussion of what we do wrong in testing and half about how to correct those behaviors. As my understanding of these issues evolves, so does this tutorial. If you attended (only a small handful of the 100+ would admit to reading this blog), feel free to post a comment; I promise not to delete any negative ones.</div><div><br /></div><div>I had an amicable hallway conversation with James Bach. His blogger angst at my use of the title 'Exploratory Testing' didn't spill over to a face-to-face discussion. Frankly, I am not surprised. 
I've never claimed the term as my own; I simply took it and made it work in the hands of real testers on real software under real ship pressure. Consultants can coin all the terms they want, but when we practitioners add meat to their pie, why cry foul? Is it not a better reaction to feel happy that there are people actually doing something with the idea? </div><div><br /></div><div>Yet I still made some jabs at the broader consultant community in my keynote. STAR remains full of vendors and people trying to sell ideas instead of results and good engineering practice. I am committing Google and the projects that I lead here to an openness regarding how we do testing and hope to be joined by others. I'd like to see the real practitioners, those who work at financial companies, data centers, ISVs, online retailers, and so forth, come out in larger numbers ... not just as learners and attendees but also as speakers, panelists and active participants. I'm not saying the consultant community has nothing to say; those guys simply need no encouragement to open their mouths. It's the practitioners whom I want to encourage. It's one thing to <i>think</i> really hard about testing; it's another thing to actually put those thoughts into practice. </div><div><br /></div><div>The jabs aside, my keynote was aimed at describing the practice of exploratory testing I helped create at Microsoft, am now employing at Google, and have embodied in my new book. But it was my Google cohort Rajat Dewan who stole the show. After I detailed the Landmark Tour and how we applied it to Chrome, I ran out of time to talk about the FedEx tour. 
The folks at STAR were kind enough to set up an impromptu breakfast presentation for Rajat, and he delivered a 20-minute talk to a standing-room-only crowd (I stopped counting at 150) on how he applied the FedEx tour to Mobile Ads. He showed three bugs the tour helped find and described how he automated the tour itself. (Has anyone coined the term 'automated exploratory testing' yet?)</div><div><br /></div><div>Perhaps he can steal the show again by blogging about his presentation. Rajat?</div><div><br /></div><div>Other highlights: apparently the twitter-verse was alight over my comment about god-the-developer. I don't tweet and I avoid twits at all costs, so I am not sure if people were offended or found it insightful. Comments from tweeters? Also, I've been invited back for the tutorial at STAR East and plan on submitting a track talk on <i>How we test Google Chrome</i>. Let the detailed discussion about real testing, warts and all, begin!</div>]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-testing/star-west-trip-report/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Exploratory Testing &#8230; in print and at STAR</title>
		<link>https://googledata.org/google-testing/exploratory-testing-in-print-and-at-star/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=exploratory-testing-in-print-and-at-star</link>
		<comments>https://googledata.org/google-testing/exploratory-testing-in-print-and-at-star/#comments</comments>
		<pubDate>Mon, 28 Sep 2009 16:13:00 +0000</pubDate>
		<dc:creator><![CDATA[James Whittaker]]></dc:creator>
				<category><![CDATA[Google Testing]]></category>

		<guid isPermaLink="false"></guid>
		<description><![CDATA[By James A. WhittakerWell, my book survived a recession-haunted publishing house and my own change of employer and is now available in print. The subtitle even changed as the techniques, which guided manual testing at Microsoft, were reapplied by Google...]]></description>
				<content:encoded><![CDATA[<a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://4.bp.blogspot.com/_9ywu5aylLpo/SsDiKKelkGI/AAAAAAAAABQ/h8sqM2n_ne8/s1600-h/exploratory+software+testing+(cover).bmp"><img style="float:right; margin:0 0 10px 10px;cursor:pointer; cursor:hand;width: 246px; height: 320px;" src="http://4.bp.blogspot.com/_9ywu5aylLpo/SsDiKKelkGI/AAAAAAAAABQ/h8sqM2n_ne8/s320/exploratory+software+testing+(cover).bmp" border="0" alt="" id="BLOGGER_PHOTO_ID_5386553818574327906" /></a><br />By James A. Whittaker<br /><br />Well, my book survived a recession-haunted publishing house and my own change of employer and is now available in print. The subtitle even changed as the techniques, which guided manual testing at Microsoft, were reapplied by Google engineers as a way to design test automation. <div><br /></div><div>I'll be at STAR next week in Anaheim to talk about exploratory testing, the subject of the book. Accompanying me will be Rajat Dewan of Google, who used the 'FedEx Tour' to reduce a test set from hundreds of manual test cases to exactly 9 automated ones. I hope you'll join us.</div><div><br /></div>]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-testing/exploratory-testing-in-print-and-at-star/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>The Plague of Entropy</title>
		<link>https://googledata.org/google-testing/the-plague-of-entropy/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=the-plague-of-entropy</link>
		<comments>https://googledata.org/google-testing/the-plague-of-entropy/#comments</comments>
		<pubDate>Mon, 14 Sep 2009 16:37:00 +0000</pubDate>
		<dc:creator><![CDATA[James Whittaker]]></dc:creator>
				<category><![CDATA[Google Testing]]></category>

		<guid isPermaLink="false"></guid>
		<description><![CDATA[By James WhittakerMathematically, entropy is a measure of uncertainty. If there are, say, five events, then maximum entropy occurs when those five events are equally likely and minimum entropy when one of those events is certain and the other four imposs...]]></description>
				<content:encoded><![CDATA[By James Whittaker<div><br />Mathematically, entropy is a measure of uncertainty. If there are, say, five events, then maximum entropy occurs when those five events are equally likely, and minimum entropy when one of those events is certain and the other four impossible. </div><div><br /></div><div>The more uncertain events you have to consider, the higher measured entropy climbs. People often think of entropy as a measure of randomness: the more (uncertain) events one must consider, the more random the outcome becomes. </div><div><br /></div><div>Testers introduce entropy into development by adding to the number of things a developer has to do. When developers are writing code, entropy is low. When we submit bugs, we increase entropy. Bugs divert their attention from coding. They must now progress in parallel on creating and fixing features. More bugs mean more parallel tasks and higher entropy. This entropy is one reason that bugs foster more bugs ... the entropic principle ensures it. Entropy creates more entropy! Finally, there is math to show what is intuitively appealing: that prevention beats a cure. </div><div><br /></div><div>However, there is nothing we can do to completely prevent the plague of entropy other than create developers who never err. Since this is unlikely any time soon, we must recognize how and when we are introducing entropy and do what we can to manage it. The more we can do <i>during</i> development, the better. Helping out in code reviews and educating our developers about test plans, user scenarios and execution environments so they can code against them will reduce the number of bugs we have to report. Smoking out bugs early, submitting them in batches and making sure we submit only high-quality bugs by triaging them ourselves will keep their mind on development. 
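As a quick numerical sketch of the definition at the top of this post, here is standard Shannon entropy; the code and numbers are my illustration, not something from the original post:

```python
import math

def entropy(probs):
    """Shannon entropy in bits: H = -sum(p * log2(p)), taking 0 * log(0) as 0."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy([0.2] * 5))          # five equally likely events: log2(5), about 2.32 bits (the maximum)
print(entropy([1.0, 0, 0, 0, 0]))  # one certain event: 0 bits (the minimum)
```

Adding more equally likely events pushes the maximum up (it grows as log2 of the event count), which is the sense in which every extra open bug raises the entropy a developer has to manage.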
Writing good bug reports and quickly regressing fixes will keep their attention where it needs to be. In effect, it maximizes the certainty of the 'development event' and minimizes the number and impact of bugs. Entropy thus tends toward its minimum. </div><div><br /></div><div>We can't banish this plague, but if we can recognize the introduction of entropy into development and understand its inevitable effect on code quality, we can keep it at bay. </div><div><br /></div>]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-testing/the-plague-of-entropy/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>The 7th Plague and Beyond</title>
		<link>https://googledata.org/google-testing/the-7th-plague-and-beyond/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=the-7th-plague-and-beyond</link>
		<comments>https://googledata.org/google-testing/the-7th-plague-and-beyond/#comments</comments>
		<pubDate>Wed, 02 Sep 2009 15:48:00 +0000</pubDate>
		<dc:creator><![CDATA[James Whittaker]]></dc:creator>
				<category><![CDATA[Google Testing]]></category>

		<guid isPermaLink="false"></guid>
		<description><![CDATA[By James WhittakerSorry I haven't followed up on this, let the excuse parade begin: A) My new book just came out and I have spent a lot of time corresponding with readers. B) I have taken on leadership of some new projects including the testing of Chro...]]></description>
				<content:encoded><![CDATA[By James Whittaker<div><br /></div><div>Sorry I haven't followed up on this; let the excuse parade begin: A) My new book just came out and I have spent a lot of time corresponding with readers. B) I have taken on leadership of some new projects, including the testing of Chrome and Chrome OS (yes, you will hear more about these projects right here in the future). C) I've gotten just short of 100 emails suggesting the 7th plague, and that takes time to sort through.</div><div><br /></div><div>This is clearly one plague-ridden industry (and, no, I am not talking about my book!)</div><div><br /></div><div>I've thrown out many of them that deal with a specific organization or person who just doesn't take testing seriously enough. Things like the Plague of Apathy (suggested exactly 17 times!) just don't fit. This isn't an industry plague; it's a personal/group plague. If you don't care about quality, please do us all a favor and get out of the software business. Go screw someone else's industry up; we have enough organic problems to deal with. I also didn't put down the Plague of the Deluded Developer (suggested by various names 22 times) because it dealt with developers that, as a Googler, I no longer have to deal with ... those who think they never write bugs. Our developers know better, and if I find out exactly where they purchased that clue I will forward the link. </div><div><br /></div><div>Here are some of the best. As many of them have multiple suggesters, I have credited the persons who were either first or gave the most thoughtful analysis. Feel free, if you are one of these people, to give further details or clarifications in the comments of this post, as I am sure these summaries do not do them justice. 
</div><div><br /></div><div><b>The Plague of Metrics</b> (Nicole Klein, Curtis Pettit plus 18 others): Metrics change behavior, and once a tester knows how the measurement works, they test to make themselves look good or to say what they want it to say, ignoring other, more important factors. The metric becomes the goal instead of measuring progress. The distaste for metrics in many of these emails was palpable!</div><div><br /></div><div><b>The Plague of Semantics</b> (Chris LeMesurier plus 3 others): We misuse and overuse terms, and people like to assign their own meaning to certain terms. It means that designs and specs are often misunderstood or misinterpreted. This was also called the plague of assumptions by other contributors. </div><div><br /></div><div><b>The Plague of Infinity</b> (Jarod Salamond, Radislav Vasilev and 14 others): The testing problem is so huge it's overwhelming. We spend so much time trying to justify our coverage and explain what we are and are not testing that it takes away from our focus on testing. Every time we take a look at the testing problem we see new risks and new things that need our attention. It randomizes us and stalls our progress. This was also called the plague of endlessness and exhaustion. </div><div><br /></div><div><b>The Plague of Miscommunication </b>(Scott White and 2 others): The language of creation (development) and the language of destruction (testing) are different. Testers write a bug report, the devs don't understand it, and cycles have to be spent explaining and reexplaining. 
A related plague is the lack of communication that causes testers to redo work and tread over the same paths as unit tests, integration tests and even the tests that other testers on the team are performing. This was also called the plague of language (meaning lack of a common one). </div><div><br /></div><div><b>The Plague of Rigidness</b> (<span class="blsp-spelling-error" id="SPELLING_ERROR_9">Roussi</span> <span class="blsp-spelling-error" id="SPELLING_ERROR_10">Roussev</span>, Steven Woody, Michele Smith and 5 others): Sticking to the plan/process/procedure no matter what. Test strategy cannot be bottled in such a manner yet process heavy teams often ignore creativity for the sake of process. We stick with the same stale testing ideas product after product, release after release. This was also called the plague of complacency. <span class="blsp-spelling-error" id="SPELLING_ERROR_11">Roussi</span> suggested a novel twist calling this the success plague where complacency is brought about through success of the product. How can we be wrong when our software was so successful in the market? </div><div><br /></div><div>And I have my own 7<span class="blsp-spelling-error" id="SPELLING_ERROR_12">th</span> Plague that I'll save for the next post. Unless anyone would like to write it for me? It's called the Plague of Entropy. A free book to the person who nails it.</div><div><br /></div><div><br /></div><div><br /></div><div><br /></div><div><br /></div><div><br /></div><div class="blogger-post-footer"><img width='1' height='1' src='https://blogger.googleusercontent.com/tracker/15045980-2441538521820777322?l=googletesting.blogspot.com' alt='' /></div>]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-testing/the-7th-plague-and-beyond/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>The 7th Plague?</title>
		<link>https://googledata.org/google-testing/the-7th-plague/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=the-7th-plague</link>
		<comments>https://googledata.org/google-testing/the-7th-plague/#comments</comments>
		<pubDate>Mon, 10 Aug 2009 16:25:00 +0000</pubDate>
		<dc:creator><![CDATA[James Whittaker]]></dc:creator>
				<category><![CDATA[Google Testing]]></category>

		<guid isPermaLink="false"></guid>
		<description><![CDATA[By James A. Whittaker. Yes, I only posted 6 plagues. Congratulations for catching this purposeful omission! You wouldn't trust a developer who argues "this doesn't need to be tested" or "that function works like so" and you shouldn't trust me when I say ...]]></description>
				<content:encoded><![CDATA[<span class="byline-author">By James A. Whittaker</span><div><br /></div><div>Yes, I only posted 6 plagues. Congratulations for catching this purposeful omission! You wouldn't trust a developer who argues "this doesn't need to be tested" or "that function works like so," and you shouldn't trust me when I say there are 7 plagues. In the world of testing <i>all assumptions</i> must be scrutinized, and it doesn't work until someone, namely a tester, verifies that it does!</div><div><br /></div><div>Clearly this is an alert and educated readership. But why assume even this statement is true? How about another test? Anyone feel like contributing the 7th plague?</div><div><br /></div><div>I've actually received a few via email already, and I have an idea of my own 7th. So email them to me at docjamesw@gmail.com and I'll post a few of the best, with attribution, on this blog. Maybe I can even scare up some Google swag or a copy of my latest book for the best one.</div><div><br /></div><div>First come, first published.</div>]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-testing/the-7th-plague/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>The Plague of Blindness</title>
		<link>https://googledata.org/google-testing/the-plague-of-blindness/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=the-plague-of-blindness</link>
		<comments>https://googledata.org/google-testing/the-plague-of-blindness/#comments</comments>
		<pubDate>Wed, 29 Jul 2009 21:30:00 +0000</pubDate>
		<dc:creator><![CDATA[James Whittaker]]></dc:creator>
				<category><![CDATA[Google Testing]]></category>

		<guid isPermaLink="false"></guid>
		<description><![CDATA[By James A. Whittaker. And now for the last plague in this series. I hope you enjoyed them (the posts ... not the plagues!) Imagine playing a video game blindfolded or even with the heads-up display turned off. You cannot monitor your character's health, y...]]></description>
				<content:encoded><![CDATA[<span class="byline-author">By James A. Whittaker</span><p><br /></p><p>And now for the last plague in this series. I hope you enjoyed them (the posts ... not the plagues!)</p><p><br /></p><p>Imagine playing a video game blindfolded, or even with the heads-up display turned off. You cannot monitor your character's health, and your targeting system is gone. There is no look-ahead radar and no advance warning of any kind. In gaming, the inability to access information about the campaign world is debilitating and a good way to get your character killed.</p><p><br /></p><p>Many aspects of testing software fall into this invisible spectrum. Software itself is invisible. We see it only through the UI; much of what happens does so under the covers and out of our line of sight.
It’s not like building a car, where you can clearly see missing pieces and many engineers can look at the car and get the exact same view of it. There is no arguing whether the car has a bumper installed; it is in plain sight for everyone involved to <i>see</i>. Not so with software, which exists as magnetic fluctuations on storage media. It’s not a helpful visual.</p><p><br /></p><p>Software testing is much like game playing while blindfolded. We can't see bugs; we can't see coverage; we can't see code changes. This information, so valuable to us as testers, is hidden in useless static reports. If someone outfitted us with an actual blindfold, we might not even notice.</p><p><br /></p><p>This blindness concerning our product and its behavior creates some very real problems for the software tester. Which parts of the software have enjoyed the most unit testing? Which parts have changed from one build to the next? Which parts have existing bugs posted against them? What part of the software does a specific test case cover?
Which parts have been tested thoroughly, and which parts have received no attention whatsoever?</p><p><br /></p><p>Our folk remedy for the blindness plague has always been to measure code coverage, API/method coverage, or UI coverage. We pick the things we can see best and measure them, but do they really tell us anything? We’ve been doing it this way for years not because it is insightful, but simply because it is all our blindness will allow us to do. We interact with our application under test a great deal, but we must rely on other, less concrete senses for any feedback about our effort.</p><p><br /></p><p>Software testers could learn a lot from the world of gaming. Turn on your heads-up display and see the information you've been blind to. There's power in information.</p>]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-testing/the-plague-of-blindness/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
	</channel>
</rss>
