<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Google Data &#187; niceredfrog</title>
	<atom:link href="https://googledata.org/author/niceredfrog/feed/" rel="self" type="application/rss+xml" />
	<link>https://googledata.org</link>
	<description>Everything Google: News, Products, Services, Content, Culture</description>
	<lastBuildDate>Fri, 10 Apr 2015 22:28:52 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=4.1.1</generator>
	<item>
		<title>Web app acceptance test survival techniques, Part 3: Musings</title>
		<link>https://googledata.org/google-testing/web-app-acceptance-test-survival-techniques-part-3-musings/</link>
		<comments>https://googledata.org/google-testing/web-app-acceptance-test-survival-techniques-part-3-musings/#comments</comments>
		<pubDate>Thu, 21 May 2009 06:19:00 +0000</pubDate>
		<dc:creator><![CDATA[niceredfrog]]></dc:creator>
				<category><![CDATA[Google Testing]]></category>

		<guid isPermaLink="false"></guid>
		<description><![CDATA[By Julian Harty. Part 1 and Part 2 of this series provided how-tos and practical tips for creating acceptance tests for Web apps. This final post reflects on some of the broader topics for our acceptance tests.Aims and drivers of our testsIn my experien...]]></description>
				<content:encoded><![CDATA[<span class="byline-author">By Julian Harty</span><br /><p><a href="http://googletesting.blogspot.com/2009/04/survival-techniques-for-acceptance.html">Part 1</a> and <a href="http://googletesting.blogspot.com/2009/05/survival-techniques-for-web-app.html">Part 2</a> of this series provided how-tos and practical tips for creating acceptance tests for Web apps. This final post reflects on some of the broader topics for our acceptance tests.</p><p><span style="font-size:150%;"><span style="font-weight: bold;">Aims and drivers of our tests</span></span></p><p>In my experience and that of my colleagues, there are several common drivers and aims for acceptance tests. They should act as ‘<span style="font-weight: bold;">safety rails</span>’ – like crash barriers at the sides of roads – that keep us from straying too far from the right direction. Our tests need to ensure development doesn’t break essential functionality. The tests must also provide <span style="font-weight: bold;">early warning</span>, preferably minutes after relevant changes have been made to the code.</p><p>My advice for developing acceptance tests for Web applications: start simple, keep them simple, and find ways to build and establish trust in your automation code. One of the maxims I use when assessing the value of a test is to think of ways to fool my test into giving erroneous results. Then I decide whether the test is <span style="font-weight: bold;">good enough</span> or whether we need to add safeguards to the test code to make it harder to fool. I’m pragmatic and realise that all my tests are imperfect; I prefer to make tests ‘good enough’ to be useful where essential preconditions are embedded into the test. 
Preconditions should include checking for things that invalidate assumptions for that test (for example, the logged-in account is assumed to have administrative rights) and checking for the appropriate system state (for example, to confirm the user is starting from the correct homepage and has several items in the shopping basket).</p><p>The value of the tests, and their ability to act as safety rails, falls as the proportion of failing tests that are "false positives" rises. Too many false positives, and a team loses trust in their acceptance tests entirely.</p><p>Acceptance tests aren’t a ‘silver bullet.’ They don’t solve all our problems or provide complete confidence in the system being tested (real-life usage generates plenty of humbling experiences). They should be backed up by comprehensive automated unit tests and tests for quality attributes such as performance and security. Typically, unit tests should comprise 70% of our functional tests, integration tests 20%, and acceptance tests the remaining 10%.</p><p>We need to be able to justify the benefits of the automated tests and understand both the return on investment (ROI) and opportunity cost – the time we spend on creating the automated tests is not available to do other things, so we need to ask whether we could spend our time better. Here, the intent is to consider the effects and costs rather than provide detailed calculations; I typically spend a few minutes thinking about these factors as I’m deciding whether to create or modify an automated test. As code spends the vast majority of time in maintenance mode, living on for a long time after active work has ceased, I recommend assessing most costs and benefits over the life of the software. 
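The precondition checks described above can be sketched as guard assertions that run before the test body. This is a minimal illustration, with a hypothetical in-memory Session class standing in for real application state behind a browser test:

```java
import java.util.List;

// Minimal sketch of precondition guards; Session is a hypothetical
// stand-in for the state a real browser-based test would query.
public class PreconditionGuardExample {
    static class Session {
        boolean isAdmin;
        String currentPage;
        List<String> basket;
        Session(boolean isAdmin, String currentPage, List<String> basket) {
            this.isAdmin = isAdmin;
            this.currentPage = currentPage;
            this.basket = basket;
        }
    }

    /** Fails fast if the assumptions behind the test no longer hold. */
    static void checkPreconditions(Session s) {
        if (!s.isAdmin) {
            throw new IllegalStateException("Precondition failed: account lacks admin rights");
        }
        if (!"home".equals(s.currentPage)) {
            throw new IllegalStateException("Precondition failed: not on the homepage");
        }
        if (s.basket.isEmpty()) {
            throw new IllegalStateException("Precondition failed: basket is empty");
        }
    }

    public static void main(String[] args) {
        Session ok = new Session(true, "home", List.of("book", "pen"));
        checkPreconditions(ok); // passes silently when the assumptions hold
        System.out.println("preconditions hold");
    }
}
```

A guard that throws before the test body runs makes an invalidated assumption visible immediately, instead of surfacing later as a confusing assertion failure.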
However, opportunity cost must be considered within the period I’m actively working on the project, as that’s all the time I have available.</p><span style="font-size:150%;"><span style="font-weight: bold;">Test automation challenges</span></span><p>Unlike testing of traditional web sites, where the contents tend not to change once they have been loaded, tests for web applications need to cope with highly dynamic contents that may change several times a second, sometimes in hard-to-predict ways, caused by factors outside our control.</p><p>As web applications are highly dynamic, the tests need to detect relevant changes, wait until the desired behaviour has occurred, and interrogate the application state before the system state changes again. There is a window of opportunity for each test where the system is in an appropriate state to query. The changes can be triggered by many sources, including user input, such as a test script clicking a button; clock-based events, such as a calendar reminder displayed for one minute; and server-initiated changes, such as when a new chat message is received.</p><p>The tests can simply poll the application, trying to detect relevant changes or timeouts. If the test only looks for expected behaviour, it might spend a long time waiting in the event of problems. We can improve the speed and reliability of the tests by checking for problems, such as error messages.</p><p>Browser-based UI tests are relatively heavyweight, particularly if each test has to start from a clean state, such as the login screen. Individual tests can take seconds to execute. While this is much faster than a human could execute a test, it’s much slower than a unit test (which takes milliseconds). There is a trade-off between optimizing tests by reducing the preliminary steps (such as bypassing the need to log in by using an authentication cookie) and maintaining the independence of the tests – the system or the browser may be affected by earlier tests. 
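The polling approach described above can be sketched generically. This illustrative helper (not from the original post) waits for either the expected condition or a detectable error, so a broken page fails fast instead of burning the whole timeout:

```java
import java.util.function.BooleanSupplier;

// Generic poll-until helper: checks for the expected condition AND for
// known error signals, so failures surface quickly rather than after a
// full timeout.
public class PollUntil {
    public static boolean poll(BooleanSupplier success,
                               BooleanSupplier error,
                               long timeoutMillis,
                               long intervalMillis) throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            if (error.getAsBoolean()) {
                return false;              // fail fast on an error message
            }
            if (success.getAsBoolean()) {
                return true;               // desired state reached
            }
            Thread.sleep(intervalMillis);
        }
        return false;                      // timed out
    }
}
```

In a real browser test, `success` might check that the expected chat message has rendered, and `error` might look for the application’s error banner; both would be queries against the page.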
Fast tests make for happier developers, unless the test results prove to be erroneous.</p><p>As with other software, automated tests need ongoing nurturing to retain their utility, especially when the application code is changed. If each test embeds the details of how to obtain information, such as an xpath expression to get the count of unread email, then a change to the UI can affect many tests and require each of those tests to be changed and retested. By applying good software design practices, we can hide the ‘how’ from the rest of our tests. That way, if the application changes, we only need to change how we get the email count in one piece of code, instead of in every test that uses it.</p><p><span style="font-size:150%;"><span style="font-weight: bold;">Practical tests</span></span></p><p>Lots of bugs are discovered by means other than automated testing – they might be reported by users, for example. Once these bugs are fixed, the fixes must be tested. The tests must establish whether the problem has been fixed and, where practical, show that the root cause has been addressed. Since we want to make sure the bug doesn’t resurface unnoticed in future releases, having automated tests for the bug seems sensible. Create the acceptance tests first, and make sure they expose the problem; then fix the bug and run the tests again to ensure the fix works. Antony Marcano is one of the pioneers of acceptance tests for bugs.</p><p>Although this article focuses on acceptance tests, I’d like to encourage you to consider creating smaller tests when practical. Smaller tests are more focused, run significantly faster, and are more likely to be run sooner and more often. We sweep through our acceptance tests from time to time and replace as many as we can with small or medium tests. 
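The encapsulation advice above – keep the ‘how’ in one place – can be sketched like this; the xpath string and the page representation are hypothetical stand-ins for real browser-automation calls:

```java
import java.util.Map;

// Sketch: every test asks MailPage for the unread count; only this class
// knows *how* the count is located, so a UI change is fixed in one place.
public class MailPage {
    // Hypothetical locator; a real version would run this xpath query
    // against the browser via an automation tool.
    private static final String UNREAD_COUNT_XPATH = "//span[@id='unread-count']";

    private final Map<String, String> fakeDom; // stand-in for the browser

    public MailPage(Map<String, String> fakeDom) {
        this.fakeDom = fakeDom;
    }

    /** The one place that knows how to fetch the unread email count. */
    public int getUnreadCount() {
        return Integer.parseInt(fakeDom.get(UNREAD_COUNT_XPATH));
    }
}
```

If the UI changes, only `UNREAD_COUNT_XPATH` (and perhaps `getUnreadCount`) needs editing; the tests that call it are untouched.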
The remaining acceptance tests are more likely to be maintained because we know they’re essential, and the overall execution time is reduced – keeping everyone happy!</p><p><span style="font-size:150%;"><span style="font-weight: bold;">Further information</span></span></p><p>A useful tutorial on xpath<br /><a href="http://www.zvon.org/xxl/XPathTutorial/General/examples.html">http://www.zvon.org/xxl/XPathTutorial/General/examples.html</a></p><p>Google Test Automation Conference (GTAC) 2008: The value of small tests<br /><a href="http://www.youtube.com/watch?v=MpG2i_6nkUg">http://www.youtube.com/watch?v=MpG2i_6nkUg</a></p><p>GTAC 2008: Taming the Beast - How to Test an AJAX Application<br /><a href="http://www.youtube.com/watch?v=5jjrTBFZWgk">http://www.youtube.com/watch?v=5jjrTBFZWgk</a></p><p><a href="http://googletesting.blogspot.com/2009/04/survival-techniques-for-acceptance.html">Part 1</a> of this article contains an additional long list of excellent resources.</p>]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-testing/web-app-acceptance-test-survival-techniques-part-3-musings/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Survival techniques for web app acceptance tests, Part 2: increasing effectiveness and utility</title>
		<link>https://googledata.org/google-testing/survival-techniques-for-web-app-acceptance-tests-part-2-increasing-effectiveness-and-utility/</link>
		<comments>https://googledata.org/google-testing/survival-techniques-for-web-app-acceptance-tests-part-2-increasing-effectiveness-and-utility/#comments</comments>
		<pubDate>Thu, 07 May 2009 05:29:00 +0000</pubDate>
		<dc:creator><![CDATA[niceredfrog]]></dc:creator>
				<category><![CDATA[Google Testing]]></category>

		<guid isPermaLink="false"></guid>
		<description><![CDATA[By Julian Harty Part 1 of this series provided practical how-tos to create acceptance tests. Read on to learn how to make your tests more useful. Increasing the velocity Once we have some automated acceptance tests, they must be run, without delay, as ...]]></description>
				<content:encoded><![CDATA[<span class="byline-author">By Julian Harty</span> <br /><p><a href="http://googletesting.blogspot.com/2009/04/survival-techniques-for-acceptance.html">Part 1</a> of this series provided practical <i>how-tos</i> to create acceptance tests. Read on to learn how to make your tests more useful.</p> <span style="font-size:150%;"><span style="font-weight: bold;">Increasing the velocity</span></span> <p>Once we have some automated acceptance tests, they must be run, without delay, as often as appropriate to answer the concerns of the team. We may want to run a subset of the tests after each change of the code. The process of running tests can be automated and integrated with a source control system that continuously builds the code and runs various automated tests. If the acceptance tests are sufficiently fast and can run unattended, they should be included in the tests run by the continuous build system. One challenge for our acceptance tests at Google is to enable the web browser to run without appearing on screen; our machines don’t typically have a physical or logical GUI. Utilities such as vnc and xvfb can host the web browser and enable us to run our acceptance tests. A useful guide on test automation is the book <span style="color: rgb(0, 0, 255);"><u><a href="http://www.pragprog.com/titles/auto/pragmatic-project-automation">Pragmatic Project Automation</a></u></span>  by Mike Clark.</p>  <p>Fluid writing of test automation code smooths over obstacles, coping with the interface twixt web application and your code. When the application has been designed with testing in mind, hooks exist; keyboard shortcuts are proffered; and debug data is available for the asking. 
Hooks include IDs on key elements such as the search field, enabling tests to identify the correct element quickly, unambiguously, and correctly even as the layout of the UI changes.</p> <span style="font-size:150%;"><span style="font-weight: bold;">Increasing variety and fidelity</span></span><br><br><span style="font-size:130%;"><span style="font-weight: bold;">Varying the tests</span></span> <p>Tests that repeat exactly the same steps using the same parameters tread a well-worn path through the application and may side-step some nearby bugs which we could find by changing a couple of parameters in the tests. Ways to change the tests include using external data sources and using random values for number of repetitions, sleeps, number of items to order, etc. You need to be able to distinguish between tests that fail because they are flaky and those that report valid failures in the software being tested, so make sure the tests record the parameters they used in sufficient detail to enable the test to be re-run consistently and predictably.</p> <span style="font-size:130%;"><span style="font-weight: bold;">Using a variety of web browsers</span></span> <p>Browsers differ between one provider and another and between versions. Your application may be trouble-free in one browser, yet entirely unusable in another. Make sure your automated tests execute in each of the major browsers used by your users; our list typically includes Internet Explorer, Firefox, Safari, Opera, and Chrome. 
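One way to make varied-yet-reproducible test data, as suggested above, is to record the random seed so any failing run can be replayed. A minimal sketch (the seed handling here is illustrative, not from the original post):

```java
import java.util.Random;

// Vary test data from run to run, but record the seed so a failure can
// be reproduced exactly by re-running with the logged value.
public class SeededTestData {
    public final long seed;
    private final Random random;

    /** Pass null for a fresh seed, or a logged seed to replay a failure. */
    public SeededTestData(Long fixedSeed) {
        this.seed = (fixedSeed != null) ? fixedSeed : System.nanoTime();
        this.random = new Random(seed);
        System.err.println("Test data seed = " + seed); // record for re-runs
    }

    public int itemsToOrder() { return 1 + random.nextInt(10); }
    public long sleepMillis() { return random.nextInt(500); }
}
```

A flaky-looking failure can then be re-run with the seed from the log, which makes it far easier to tell a flaky test from a genuine bug.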
Tools such as Selenium RC (<span style="color: rgb(0, 0, 255);"><u><a href="http://seleniumhq.org/">http://seleniumhq.org/</a></u></span>) and WebDriver (<span style="color: rgb(0, 0, 255);"><u><a href="http://code.google.com/p/webdriver/">http://code.google.com/p/webdriver/</a></u></span>) support most of the browsers, and if your tests are designed to run in parallel, you may be able to take advantage of parallel test execution frameworks such as Selenium Grid.</p> <span style="font-size:130%;"><span style="font-weight: bold;">Emulating mobile devices</span></span> <p>Many web applications are now used on mobile phones such as the iPhone or G1. While there are some early versions of WebDriver for these devices, you may find emulating these devices in a desktop browser is sufficient to give you the confidence you need. Firefox’s excellent extensions and profiles make such testing easy to implement. Safari’s development tools can be used to specify the parameters you need, such as which device to emulate. 
Here’s an example of how to configure Firefox in WebDriver to emulate a version 1.1 iPhone.</p><pre><code>private static final String IPHONE_USER_AGENT_V1_1 =
    "Mozilla/5.0 (iPhone; U; CPU like Mac OS X; en) AppleWebKit/420.1 "
    + "(KHTML; like Gecko) Version/3.0 Mobile/3B48b Safari/419.3";

/**
 * Returns a WebDriver instance with settings to emulate
 * an iPhone V1.1.
 */
public static WebDriver createWebDriverForIPhoneV1_1() {
  FirefoxProfile profile = new FirefoxProfile();

  // Blank out headers that would otherwise confuse the web server.
  profile.setPreference("general.appversion.override", "");
  profile.setPreference("general.description.override", "");
  profile.setPreference("general.platform.override", "");
  profile.setPreference("general.vendor.override", "");
  profile.setPreference("general.vendorsub.override", "");

  profile.setPreference("general.appname.override", "iPhone");
  profile.setPreference(
      "general.useragent.override", IPHONE_USER_AGENT_V1_1);

  return new FirefoxDriver(profile);
}</code></pre><p>The user-agent string can be found online in many cases or captured from a tame web server that records the HTTP headers. 
I use <span style="color: rgb(0, 0, 255);"><u><a href="http://www.pycopia.net/webtools/headers">http://www.pycopia.net/webtools/headers</a></u></span>, which even emails the values to me in a format I can easily adapt to use in my test code.</p> <span style="font-size:150%;"><span style="font-weight: bold;">Robust tests</span></span>  <p>Robust tests can continue to operate correctly even when things change in the application being tested or in the environment. Web applications use HTML, so try to add IDs and CSS classes to relevant elements of the application. Although these additions potentially increase the size of the page, they enable easier and more consistent identification, navigation, and selection of the user interface.<br /></p> <p>Try to avoid brittle identifiers, such as xpath expressions that rely on positional data. For example, /div[3]/div[1] becomes unreliable as soon as any of the positions change – and the resulting failures may be hard to diagnose unless the layout change is obvious.</p>  <p>Add guard conditions that assert your assumptions are still accurate. Design the tests to fail if any of the assumptions prove false. If possible, make the tests fail at compile time to provide the earliest possible feedback.</p>  <p>Try to make only positive assertions. For example, if you expect an action to cause an item to be added to a list, assert that after the action the list contains the expected value, not that the list has changed size (because other functionality may affect the size). Also, if it's not something your test is concerned about, don't make assertions about it.</p> <span style="font-size:150%;"><span style="font-weight: bold;">Informative tests</span></span> <p>Help your tests to help others by being informative. 
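The positive-assertion advice above – check that the item itself is present, not that the list’s size changed – can be illustrated with a plain-Java sketch (names are invented for illustration):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of a positive assertion: after an action that should add an
// item, assert the item is present rather than asserting the list's
// size changed (other functionality may legitimately alter the size).
public class PositiveAssertionExample {
    public static void addLabel(List<String> labels, String label) {
        labels.add(label);
        // Imagine background behaviour also touching the list here.
    }

    public static void main(String[] args) {
        List<String> labels = new ArrayList<>();
        addLabel(labels, "Travel");
        // Positive: the expected value is present.
        if (!labels.contains("Travel")) {
            throw new AssertionError("Expected label 'Travel' in " + labels);
        }
        // A size-based assertion here would break as soon as anything
        // else added or removed a label, even though this feature works.
    }
}
```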
Use a combination of meaningful error messages and more detailed logs to help people to tell whether the tests are working as intended and, if problems occur, to figure out what’s going wrong.</p> <span style="font-size:130%;"><span style="font-weight: bold;">Recording evidence</span></span> <p>Taking screenshots of the UI when a problem occurs can help to debug the issue and distinguish mismatches in our assumptions from problems in the application. It’s not an exact science: screenshots are seldom recorded at exactly the same time as the interaction with the application; typically they’re recorded afterwards, and the application may have changed in the interim period, no matter how short that period is.</p>  <p>Debug traces are useful for diagnosing acute problems, and range from simple debug statements like ‘I made it to this point’ to dumps of the entire state of values returned from the application by our automation tool. In comparison, logging is intended for longer-term tracking of behaviour which enables larger-scale thinking, such as enabling a test to be reproduced reliably over time.</p>   <p>Good error messages should say what’s expected and include the actual values being compared. Here are two examples of combinations of tests and assert messages, the second more helpful than the first:<br /></p> <p><span style="font-family:Courier New,monospace;"><span style="font-size:85%;">1. int actualResult = addTwoRandomOddNumbers();</span></span></p> <p style="margin-left: 0.5in;"><span style="font-family:Courier New,monospace;"><span style="font-size:85%;">assertTrue("Something wrong with calculation", actualResult % 2 == 0);</span></span></p>  <p><span style="font-family:Courier New,monospace;"><span style="font-size:85%;">2. 
int actualResult = addTwoRandomOddNumbers(number1, number2);</span></span></p> <p style="margin-left: 0.5in;"><span style="font-family:Courier New,monospace;"><span style="font-size:85%;">assertTrue(String.format("Adding two odd numbers [%d] and [%d] should return an even result. Calculated result = %d", number1, number2, actualResult), actualResult % 2 == 0);</span></span></p> <span style="font-size:150%;"><span style="font-weight: bold;">Bit-rot, the half-life of tests</span></span>   <p>Vint Cerf coined the phrase <span style="color: rgb(0, 0, 255);"><u><a href="http://en.wikipedia.org/wiki/Bit_rot">bit-rot</a></u></span> to reflect the decay of usefulness or availability of software and data stored on computers. In science, half-life is the period taken for radioactivity to reduce by 50%. Similarly, our tests are likely to suffer from bit-rot and will become less useful over time as the system and its use change.<br /></p> <p>The only cure for bit-rot is prevention. Encourage the developers to adopt and own the tests.</p> <span style="font-size:150%;"><span style="font-weight: bold;">Tests for our tests?</span></span>  <p>As our tests get bigger and more complex, let’s add unit tests to help ensure our acceptance tests behave as expected. Mock objects are one practical way to reliably automate the tests, with several good and free frameworks available for common programming languages. I suggest you create unit tests for more involved support functions and for ‘driver’ code, rather than for the tests themselves.</p> <span style="font-size:150%;"><span style="font-weight: bold;">Peripheral topics</span></span>  <p>If you think creating automated tests for a web application is hard, try using the web site with accessibility software such as a screen reader to learn just how inaccessible some of our web applications are! 
Screen readers, like automated tests, need ways to interrogate, interact with, and interpret the contents of web applications. In general, increasing the accessibility of a site can improve testability and vice versa. So while you’re working hard with the team to improve the testability, try to use the site with a screen reader. Here's one example: Fire Vox, a screen reader for Firefox. <span style="color: rgb(0, 0, 255);"><u><a href="http://firevox.clcworld.net/about.html">http://firevox.clcworld.net/about.html</a></u></span></p>  <p>The <a href="http://googletesting.blogspot.com/2009/05/web-app-acceptance-test-survival.html">third and final post</a> of this series will reflect on the aims and challenges of acceptance tests.</p>]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-testing/survival-techniques-for-web-app-acceptance-tests-part-2-increasing-effectiveness-and-utility/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Survival techniques for acceptance tests of web applications (Part 1)</title>
		<link>https://googledata.org/google-testing/survival-techniques-for-acceptance-tests-of-web-applications-part-1/</link>
		<comments>https://googledata.org/google-testing/survival-techniques-for-acceptance-tests-of-web-applications-part-1/#comments</comments>
		<pubDate>Wed, 29 Apr 2009 00:32:00 +0000</pubDate>
		<dc:creator><![CDATA[niceredfrog]]></dc:creator>
				<category><![CDATA[Google Testing]]></category>

		<guid isPermaLink="false"></guid>
		<description><![CDATA[By Julian Harty. Automated tests are often touted as a solution for software testing, and effective automated tests certainly have their place and can deliver vital confidence in the software being tested. However, many tests fail to deliver value, eithe...]]></description>
				<content:encoded><![CDATA[<span class="byline-author">By Julian Harty</span><br /><br />Automated tests are often touted as a solution for software testing, and effective automated tests certainly have their place and can deliver vital confidence in the software being tested. However, many tests fail to deliver value, either now or in the future, and there are plenty of projects whose automated tests are broken, forlorn, and unloved – cluttering up projects and wasting time and resources.<br /><br /><span style="font-size:150%;"><span style="font-weight: bold;">How to create acceptance tests</span></span><br /><br />Tests need to do something useful to survive. Automated tests should help the team to make the next move by providing justified confidence a bug has been fixed, confirming refactored code still works as intended, or demonstrating that new features have been successfully implemented. (See Alistair Cockburn’s discussion, referenced in the “Further Information” section below, on intermediate work products – do they remind? inform? or inspire?) There should be sufficient tests – neither more nor less: more increase the support burden, fewer leave us open to unpleasant surprises in production.<br /><p>Acceptance tests must meet the needs of several groups, including the users and the developers. Long-lived tests must be written in the language of each group, using terms users will recognize and a programming language and style in which the developers are competent.<br /></p><p>We create tests by modelling the purpose of a test from the user’s perspective: send a message, order a book, etc. Each test is decomposed into individual actions: to send a message, a user must be logged in, select the compose message icon, specify one or more recipients, type a minimum of either a subject or a message, then select Send. From this list of actions, create a skeleton in the programming language of choice and create a method name that reflects each action. 
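The decomposition described above – model the user’s goal, then break it into named actions – might produce a skeleton like this (all method and account names are illustrative placeholders to review with users and developers):

```java
import java.util.ArrayList;
import java.util.List;

// Skeleton for the 'send a message' scenario: each user action becomes a
// named method. Bodies just record the step for now; real browser-driving
// code would replace them.
public class SendMessageTest {
    final List<String> steps = new ArrayList<>();

    void login(String user, String password) { steps.add("login"); }
    void selectComposeMessageIcon()          { steps.add("compose"); }
    void specifyRecipient(String address)    { steps.add("recipient"); }
    void typeSubject(String subject)         { steps.add("subject"); }
    void selectSend()                        { steps.add("send"); }

    /** Reads as the user's story; the nitty-gritty lives in the steps. */
    public void testSendMessage() {
        login("testuser", "secret");
        selectComposeMessageIcon();
        specifyRecipient("a.friend@example.com");
        typeSubject("Lunch?");
        selectSend();
    }
}
```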
Show these to both the users and programmers and ask them to tell you what they think each step represents. Now is a great time to refine the names and decide which methods are appropriate: before you’ve invested too much time in the work. If you wait until later, your natural protective instincts will make it harder for you to accept good suggestions and make useful changes.<br /></p><p>For each method, we need to work out how to implement it in code. How could an automated test select the compose message icon? Do alternative ways exist? An understanding of HTML, CSS, and JavaScript will help you if you plan to use browser automation tools. All the visible elements of a web application are reflected in the Document Object Model (DOM) in HTML, and they can be addressed in various ways: the directions from the root of the document to the element using xpath; unique identifiers; or characteristics possessed by the elements, such as class names, attributes, or link text. Some examples of these addressing options are shown in the Navigation Options illustration below. (Notes: navigation using xpath is much slower than using IDs; and IDs should be unique.)</p><br /><p><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://3.bp.blogspot.com/_-SI9VvqJ4Ag/SfiEMlDqVpI/AAAAAAAAAfM/AFOzELlcW9k/s1600-h/Navigation+Options.PNG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 400px; height: 229px;" src="http://3.bp.blogspot.com/_-SI9VvqJ4Ag/SfiEMlDqVpI/AAAAAAAAAfM/AFOzELlcW9k/s400/Navigation+Options.PNG" alt="" id="BLOGGER_PHOTO_ID_5330155510633748114" border="0" /></a><br /></p><p>Some actions can be initiated using JavaScript running in the browser. 
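The addressing trade-off noted above (positional xpath versus unique IDs) can be demonstrated with the JDK’s own XPath engine on a toy document; the element names and IDs are invented for illustration:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;

// Demonstrates why an ID-based address survives layout changes while a
// positional xpath does not.
public class LocatorDemo {
    /** Evaluates the xpath expression against the given XML and returns its string value. */
    public static String findText(String xml, String xpathExpr) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
        XPath xpath = XPathFactory.newInstance().newXPath();
        return xpath.evaluate(xpathExpr, doc);
    }
}
```

On `<body><div>ads</div><div><span id="search">go</span></div></body>`, both `/body/div[2]/span` and `//*[@id='search']` find the span; insert one extra div at the top and the positional path comes back empty while the ID-based one still works.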
For devices such as the iPhone, changes in orientation when the phone is rotated are triggered this way (see <a href="http://developer.apple.com/safari/library/documentation/AppleApplications/Reference/SafariWebContent/HandlingEvents/chapter_7_section_9.html#//apple_ref/doc/uid/TP40006511-SW16">Handling Orientation Events</a> in the Safari Reference Library).<br /></p><p>Typically, automated web application tests use JavaScript, either directly or indirectly, to interact with the web application being tested.<br /></p><p>Utilities such as recording tools can help reduce the effort required to discover how to interact with the web application. The open-source test automation tool Selenium (<a href="http://seleniumhq.org/">http://seleniumhq.org/</a>) includes a simple IDE record and playback tool that runs in the Firefox browser. Recorded scripts can help bootstrap your automated tests. However, don’t be tempted to consider the recorded scripts as automated tests: they’re unlikely to be useful for long. Instead, plan to design and implement your test code properly, using good software design techniques. Read on to learn how to use the PageObject design pattern to design your test code.<br /></p><p>Two of the tools I find most useful are Firebug (<a href="http://getfirebug.com/">http://getfirebug.com/</a>), a Swiss Army knife for the Web Browser, and Wireshark (<a href="http://www.wireshark.org/">http://www.wireshark.org/</a>), a network protocol analysis tool with a distinguished pedigree. Firebug is extremely useful when learning how to interact with a web application or debug mysterious problems with your tests when they seem to be misbehaving. 
I encourage you to persist when learning to use these tools – it took me a while to get used to their foibles, but I wouldn’t be without either of them these days.</p><span style="font-size:150%;"><span style="font-weight: bold;">Homogeneous languages and tools</span></span><br /><br /><p>Several years of experience across multiple project teams have taught us that the tests are more likely to survive when they’re familiar and close to the developers. Use their programming language, put them in their codebase, use their test automation framework (and even their operating system). We need to reduce the effort of maintaining the tests to a minimum. Get the developers to review the automated tests (whether they write them or you do) and actively involve them when designing and implementing the tests.<br /></p><p>Typically, our acceptance tests use the xUnit framework; for example, JUnit for Java projects (see <a href="http://www.junit.org/">http://www.junit.org/</a>). A good source of inspiration for creating effective tests is Gerard Meszaros’ work (see <a href="http://www.xunitpatterns.com/">http://www.xunitpatterns.com/</a>).<br /></p><span style="font-weight: bold;font-size:150%;">Effective test designs</span><br /><br /><p>By using effective test designs, we can make tests easier to implement and maintain. The initial investment is minor compared to the benefits. One of my favourite designs is called Page Objects (see <a href="http://code.google.com/p/webdriver/wiki/PageObjects">PageObjects on the Google Code site</a>). A PageObject represents part or all of a page in a web application – something a user would interact with. A PageObject provides services to your test automation scripts and encapsulates the nitty-gritty details of how these services are performed. By encapsulating the nitty-gritty stuff, many changes to the web application, such as the reordering or renaming of elements, can be reflected in one place in your tests. 
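A PageObject along the lines described above might be sketched like this; the page model is a hypothetical stand-in, since a real PageObject would wrap browser-automation calls:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the PageObject pattern: tests call intent-revealing services;
// only this class knows the page's element details.
public class LoginPage {
    // Stand-in for the browser; a real PageObject would hold a WebDriver.
    private final Map<String, String> fields = new HashMap<>();
    private boolean submitted;

    public LoginPage typeUsername(String name) { fields.put("username", name); return this; }
    public LoginPage typePassword(String pw)   { fields.put("password", pw);   return this; }

    /** Returns where the user lands; a real version would return an InboxPage object. */
    public String submit() {
        submitted = true;
        return "inbox";
    }

    public boolean wasSubmitted() { return submitted; }
}
```

A test then reads as intent: `new LoginPage().typeUsername("test").typePassword("secret").submit()`. If the login form’s element names change, only `LoginPage` changes; every test that uses it is untouched.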
A well-designed PageObject separates the ‘<span style="font-weight: bold;">what</span>’ from the ‘<span style="font-weight: bold;">how</span>’.</p>Another effective test design is based on three simple words: ‘<span style="font-weight: bold;">given</span>’, ‘<span style="font-weight: bold;">when</span>’, and ‘<span style="font-weight: bold;">then</span>’. As a trio they reflect the essential elements of many tests:  given various preconditions and expectations, when such-and-such happens, then I expect a certain result.<br /><p><br /><span style="font-family:courier new;">// Given I have a valid user account and am at the login page,</span><br /><br /><span style="font-family:courier new;">// When I enter the account details and select the Enter button,</span><br /><br /><span style="font-family:courier new;">// Then I expect the inbox to be displayed with the most recent email selected.</span><br /></p><p>The previous code consists of three programming comments that are easy for users to read. The actual programming code is entered immediately below each comment. Programming concepts such as literate programming are intended to make the code almost as readable as the textual comments.<br /></p><p>Isolate things that change from those that don’t. For example, separate user account data from your test code. The separation makes changes easier, faster, and safer to implement, compared to making updates in the code for each test.</p><span style="font-weight: bold;font-size:150;" >Gaining Skills</span><br /><br /><p>Writing automated tests may be easy for some of you. In my case, I started with some simple example tests and tweaked them to suit my needs. I received boosts from working with more experienced practitioners who were able to correct my course and educate me in how to use various tools effectively. I recommend pairing with one of the developers of the software to be tested when you face a new testing requirement. 
Their intimate knowledge of the code and your understanding of the tests can form a potent combination. For instance, by working with one of the developers on a recent project, we were able to implement bi-directional injection of JSON messages and capture the responses from the server to test a key interaction between the server and client that was causing problems in production.<br /></p><p>I encourage you to try out examples, tweak them, experiment, and plunge into writing your first automated tests. Learn about AJAX – it underpins many modern web applications. And learn from more experienced practitioners – I’ve added some links at the end of the article to some of the people I respect who write great acceptance tests, including Antony Marcano and Alan Richardson.<br /></p><p><a href="http://googletesting.blogspot.com/2009/05/survival-techniques-for-web-app.html">Part 2</a> of this series helps you create more specialized tests (for example, to emulate mobile web browsers) and gives advice on how to increase the utility and effectiveness of your tests.<br /><br /><span style="font-weight: bold;font-size:150;" >Further Information</span><br /><br /><span style="font-style: italic;">Intermediate work products</span><br /></p><p>‘The<span style="font-weight: bold;"> intermediate work products</span> have only one real purpose in life: “to help the team make their next move”.’ ‘An intermediate work product might be measured for “<span style="font-weight: bold;">sufficiency</span>” — was it sufficient to remind, inform or inspire? 
Any amount of effort or detail beyond sufficiency is extraneous to the purpose of the team and the purpose of the work product.’  Cooperative game manifesto for software development (Alistair Cockburn)<br /><a href="http://alistair.cockburn.us/Cooperative+game+manifesto+for+software+development">Cooperative game manifesto for software development</a> at http://alistair.cockburn.us.<br /><br /><span style="font-style: italic;">JUnit info</span><br /><br /><span style="font-weight: bold;">JUnit in Action</span>, available from Manning Publications Co. (<a href="http://www.manning.com/tahchiev/">2nd edition, early access</a> or  <a href="http://www.manning.com/massol/">1st edition</a>)<br /><br /><span style="font-weight: bold;">JUnit Recipes</span>, by J. B. Rainsberger with Scott Stirling, available from <a href="http://www.manning.com/rainsberger/">Manning Publications Co.</a><br /><br /><span style="font-style: italic;">Firebug info</span><br /><br /><a href="http://www.evotech.net/blog/2007/06/introduction-to-firebug/">Introduction to Firebug</a> on Estelle Weyl’s blog, "CSS, JavaScript and XHTML Explained"<br /><br /><a href="http://michaelsync.net/category/firebug">Firebug tutorials</a> in the Firebug Archive at Michael Sync's blog<br /><br /><a href="http://code.google.com/support/bin/answer.py?answer=94630&amp;topic=11530">Fun with Firebug Tutorial</a> on the Google Code site<br /><br /><span style="font-style: italic;">WebDriver info<br /><br /></span><a href="http://code.google.com/p/webdriver/">webdriver</a> on the Google Code site<br /><br /><span style="font-style: italic;">AJAX resources</span><br /></p><p><a href="http://bulletproofajax.com/">Bulletproof Ajax</a>—An incredibly good book on how to write good AJAX code. It starts with the basics and builds reliably and clearly from good foundations. 
The DOM manipulation code is relevant for implementing your acceptance tests in tools such as WebDriver.<br /></p><p><a href="http://www.dmcinsights.com/ajax/">Building a web site with Ajax</a> —Again, a book that starts simple and builds a simple application step by step.<br /></p><p><a href="http://www.testingreflections.com/node/view/6704">Acceptance tests are more A+S than T+G</a> (Antony Marcano, in his blog at testingReflections.com)<br />A+S => Activities + Specific<br />T+G => Tasks + General<br /></p><p>Alan Richardson: any and everything. For example, see:<br /><a href="http://www.eviltester.com/index.php/2008/03/09/a-generalised-model-for-user-acceptance-testing-uat/">A generalised model for User Acceptance Testing</a> and<br /><a href="http://www.eviltester.com/index.php/2008/03/08/a-little-abstraction-when-testing-software-with-selenium-rc-and-java/">A little abstraction when testing software with Selenium-RC and Java</a>, both at the  Evil Tester blog<br /></p><p></p><div class="blogger-post-footer"><img width='1' height='1' src='https://blogger.googleusercontent.com/tracker/15045980-3795955204389693791?l=googletesting.blogspot.com' alt='' /></div>]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-testing/survival-techniques-for-acceptance-tests-of-web-applications-part-1/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Announcing Google C++ Mocking Framework</title>
		<link>https://googledata.org/google-testing/announcing-google-c-mocking-framework/</link>
		<comments>https://googledata.org/google-testing/announcing-google-c-mocking-framework/#comments</comments>
		<pubDate>Fri, 12 Dec 2008 00:50:00 +0000</pubDate>
		<dc:creator><![CDATA[niceredfrog]]></dc:creator>
				<category><![CDATA[Google Testing]]></category>

		<guid isPermaLink="false"></guid>
		<description><![CDATA[Posted by Zhanyong Wan, Software EngineerFive months ago we open-sourced Google C++ Testing Framework to help C++ developers write better tests. Enthusiastic users have embraced it and sent in numerous encouraging comments and suggestions, as well as p...]]></description>
				<content:encoded><![CDATA[<p><i>Posted by Zhanyong Wan, Software Engineer</i></p>Five months ago we open-sourced <a href="http://code.google.com/p/googletest/" id="ik0m" title="Google C++ Testing Framework">Google C++ Testing Framework</a> to help C++ developers write better tests. Enthusiastic users have embraced it and sent in numerous encouraging comments and suggestions, as well as patches to make it more useful. It was a truly gratifying experience for us.<br> <br> Today, we are excited to release <a href="http://code.google.com/p/googlemock/" id="q_7q" style="color: rgb(85, 26, 139);" title="Google C++ Mocking Framework">Google C++ Mocking Framework</a>&nbsp;(Google Mock for short)&nbsp;under the&nbsp;<a href="http://www.opensource.org/licenses/bsd-license.php" id="o2nw" title="new BSD license">new BSD license</a>. When used with Google Test, it lets you easily create and use <a href="http://en.wikipedia.org/wiki/Mock_Object" id="ylkf" title="mock objects">mock objects</a>&nbsp;in C++ tests and rapid prototypes. If you aren&#39;t sure what mocks are or why you&#39;ll need them, our <a title="for dummies" href="http://code.google.com/p/googlemock/wiki/ForDummies" id="c9s3">Why Google Mock?</a> article will help explain why this is so exciting, and the Testing on the Toilet episode posted nearby on this blog gives a more light-hearted overview. In short, this technique can greatly improve the design and testability of software systems, as shown in this <a title="OOPSLA paper" href="http://www.jmock.org/oopsla2004.pdf" id="uwu1">OOPSLA paper</a>.<br> <br> We are happily using Google Mock in more than 100 projects at Google. It works on Linux, Windows, and Mac OS X. 
Its benefits include:<br> <ul><li>Simple, declarative syntax for defining mocks  </li><li>Rich set of matchers for validating function arguments  </li><li>  Intuitive syntax for controlling the behavior of a mock  </li><li>  Automatic verification of expectations  </li><li>  Easy extensibility through new user-defined matchers and actions  </li></ul>  Our users inside Google have appreciated that Google Mock is easy and even fun to use, and is an effective tool for improving software quality. We hope you&#39;ll like it too. Interested? Please take a few minutes to read the <a href="http://code.google.com/p/googlemock/w/list" id="x2c0" title="documentation">documentation</a> and <a href="http://code.google.com/p/googlemock/downloads/list" id="jo3t" title="download">download Google Mock</a>. Be warned, though: mocking is addictive, so proceed at your own risk.<br>  <br> And... we&#39;d love to hear from you!&nbsp; If you have any questions or feedback, please meet us on the <a href="http://groups.google.com/group/googlemock" id="nzyu" title="Google Mock Discussion Group">Google Mock Discussion Group</a>. Happy mocking!<br><br><div class="blogger-post-footer"><img width='1' height='1' src='https://blogger.googleusercontent.com/tracker/15045980-44272234430161593?l=googletesting.blogspot.com' alt='' /></div>]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-testing/announcing-google-c-mocking-framework/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Mockers of the (C++) World, Delight!</title>
		<link>https://googledata.org/google-testing/mockers-of-the-c-world-delight/</link>
		<comments>https://googledata.org/google-testing/mockers-of-the-c-world-delight/#comments</comments>
		<pubDate>Thu, 11 Dec 2008 23:55:00 +0000</pubDate>
		<dc:creator><![CDATA[niceredfrog]]></dc:creator>
				<category><![CDATA[Google Testing]]></category>

		<guid isPermaLink="false"></guid>
		<description><![CDATA[by Zhanyong Wan, Software Engineer      Life is unfair. You work every bit as hard as Joe the Java programmer next to you. Yet as a C++ programmer, you don&#39;t get to play with all the fancy programming tools Joe takes for granted.    In particular, ...]]></description>
				<content:encoded><![CDATA[<i>by Zhanyong Wan, Software Engineer</i> <p class="western" style="margin-top: 0.1in; margin-bottom: 0in;">  <br> </p> <p class="western" style="margin-top: 0.04in; margin-bottom: 0in;"> Life is unfair. You work every bit as hard as Joe the Java programmer next to you. Yet as a C++ programmer, you don&#39;t get to play with all the fancy programming tools Joe takes for granted. </p> <p class="western" style="margin-top: 0.1in; margin-bottom: 0in;">  In particular, without a good mocking framework, mock objects in C++ have to be rolled by hand. Boy, is that tedious! (Not to mention how error-prone it is.) Why should you endure this? </p> <p class="western" style="margin-top: 0.1in; margin-bottom: 0in;"> Dread no more. Google Mock is finally here to help! It&#39;s a Google-originated open-source framework for <b><font color="#800000">creating and using C++ mocks</font></b>. Inspired by jMock and EasyMock, Google Mock is <b><font color="#800000">easy</font></b> to use, yet <b><font color="#800000">flexible and extensible</font></b>. All you need to get started is the ability to count from 0 to 10 and use an editor. </p> <p class="western" style="margin-top: 0.1in; margin-bottom: 0in;">Think you can do it? Let&#39;s try this simple example: you have a <b><font face="Courier New, monospace">ShoppingCart</font></b> class that gets the tax rate from a server, and you want to test that it remembers to disconnect from the server even when the server has generated an error. 
It&#39;s easy to write the test using a mock tax server, which implements this interface:</p> <p style="border: 1px solid rgb(128, 128, 128); padding: 0.01in; background: rgb(230, 245, 255) none repeat scroll 0% 50%; -moz-background-clip: -moz-initial; -moz-background-origin: -moz-initial; -moz-background-inline-policy: -moz-initial; margin-left: 0.39in; margin-right: 0.39in; margin-bottom: 0in;">   <font face="Courier New, monospace"><font size="2">class <b>TaxServer</b> {<br>  &nbsp; // Returns the tax rate of a location<br>  &nbsp; // (by postal code) or -1 on error.<br>  &nbsp; virtual double FetchTaxRate(<br>  &nbsp;&nbsp;&nbsp; const string&amp; postal_code) = 0;<br>  &nbsp; virtual void CloseConnection() = 0;<br>  };</font></font> </p> <p class="western" style="background: transparent none repeat scroll 0% 50%; -moz-background-clip: -moz-initial; -moz-background-origin: -moz-initial; -moz-background-inline-policy: -moz-initial; margin-top: 0.1in; margin-bottom: 0in;" align="left">Here&#39;s how you mock it and use the mock server to verify the expected behavior of <b><font face="Courier New, monospace">ShoppingCart</font></b>:  </p> <p style="border: 1px solid rgb(128, 128, 128); padding: 0.01in; background: rgb(230, 255, 230) none repeat scroll 0% 50%; -moz-background-clip: -moz-initial; -moz-background-origin: -moz-initial; -moz-background-inline-policy: -moz-initial; margin-left: 0.39in; margin-right: 0.39in; margin-bottom: 0in;">   <font face="Courier New, monospace"><font size="2">class <b>MockTaxServer</b> : public <b>TaxServer</b> { &nbsp;&nbsp;&nbsp;&nbsp;// #1<br>  <b><font color="#800000">&nbsp;&nbsp;MOCK_METHOD1</font></b>(FetchTaxRate, double(const string&amp;));<br>  <b><font color="#800000">&nbsp;&nbsp;MOCK_METHOD0</font></b>(CloseConnection, void());<br>  };<br>  <br>  <font face="Courier New, monospace">TEST(ShoppingCartTest,&nbsp;<br>  &nbsp;&nbsp;&nbsp; StillCallsCloseIfServerErrorOccurs) {<br>  <b>&nbsp; MockTaxServer</b> mock_taxserver; 
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;// #2<br>  <b><font color="#800000">&nbsp; EXPECT_CALL</font></b>(mock_taxserver, FetchTaxRate(_))<br>  &nbsp;&nbsp;&nbsp; .<b>WillOnce</b>(<b>Return</b>(-1)); &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;// #3<br>  <b><font color="#800000">&nbsp; EXPECT_CALL</font></b>(mock_taxserver, CloseConnection());<br>  <b>&nbsp; ShoppingCart</b> cart(&amp;mock_taxserver); &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;// #4<br>  &nbsp; cart.CalculateTax(); &nbsp;// Calls FetchTaxRate()<br>  &nbsp;&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;// and CloseConnection().<br>  } &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;// #5 </font></font></font></p> <ol><li>  <p class="western" style="background: transparent none repeat scroll 0% 50%; -moz-background-clip: -moz-initial; -moz-background-origin: -moz-initial; -moz-background-inline-policy: -moz-initial; margin-top: 0.02in; margin-bottom: 0in;" align="left">   Derive the mock class from the interface. 
For each virtual method, count how many arguments it has, name the result <i>n</i>, and define it using </font></font><b><font face="Courier New, monospace"><font color="#800000">MOCK_METHOD<i>n</i></font></font></b>, whose arguments are the <b><font color="#800000">name</font></b><font color="#000000"> and </font><b><font color="#800000">type</font></b><font color="#000000"> of the method.</font></font></font>  </p>  </li><li>  <p class="western" style="background: transparent none repeat scroll 0% 50%; -moz-background-clip: -moz-initial; -moz-background-origin: -moz-initial; -moz-background-inline-policy: -moz-initial; margin-top: 0.02in; margin-bottom: 0in;" align="left">  Create an instance of the mock class. It will be used where you would normally use a real object.</font></font>  </p>  </li><li>  <p class="western" style="background: transparent none repeat scroll 0% 50%; -moz-background-clip: -moz-initial; -moz-background-origin: -moz-initial; -moz-background-inline-policy: -moz-initial; margin-top: 0.02in; margin-bottom: 0in;" align="left">   Set expectations on the mock object (How will it be used? What will it do?). For example, the first </font></font></font><b><font face="Courier New, monospace"><font color="#800000">EXPECT_CALL </font></font></b>says that </font></font><b><font face="Courier New, monospace">FetchTaxRate()</font></b> will be called and will return an error. The underscore (_) is a </font><b><font color="#800000">matcher</font></b><font color="#000000"> that says the argument can be anything. Google Mock has many matchers you can use to precisely specify what the argument should be like. 
You can also define your own matcher or use an exact value.</font></font></font></font>  </p>  </li><li>  <p class="western" style="background: transparent none repeat scroll 0% 50%; -moz-background-clip: -moz-initial; -moz-background-origin: -moz-initial; -moz-background-inline-policy: -moz-initial; margin-top: 0.02in; margin-bottom: 0in;" align="left">Exercise code that uses the mock object. You&#39;ll get an error immediately if a mock method is called more times than expected or with the wrong arguments.  </p>  </li><li>  <p class="western" style="background: transparent none repeat scroll 0% 50%; -moz-background-clip: -moz-initial; -moz-background-origin: -moz-initial; -moz-background-inline-policy: -moz-initial; margin-top: 0.02in; margin-bottom: 0in;" align="left">  When the mock object is destroyed, it checks that all expectations on it have been satisfied. </p>  </li></ol> <p class="western" style="margin-top: 0.1in; margin-bottom: 0in;"> You can also use Google Mock for rapid prototyping – and get a better design. To find out more, visit the project homepage at <a href="http://code.google.com/p/googlemock/">http://code.google.com/p/googlemock/</a>. Now, be the first one on your block to use Google Mock and prepare to be envied. Did I say life is unfair?</p> <br>Remember to download this episode and post it in your office!<br><a href="http://code.google.com/testing/TotT-2008-12-11.pdf">Toilet-Friendly Version</a><br><br><div class="blogger-post-footer"><img width='1' height='1' src='https://blogger.googleusercontent.com/tracker/15045980-2871820243373118346?l=googletesting.blogspot.com' alt='' /></div>]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-testing/mockers-of-the-c-world-delight/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>GUI Testing: Don&#8217;t Sleep Without Synchronization</title>
		<link>https://googledata.org/google-testing/gui-testing-dont-sleep-without-synchronization/</link>
		<comments>https://googledata.org/google-testing/gui-testing-dont-sleep-without-synchronization/#comments</comments>
		<pubDate>Tue, 28 Oct 2008 07:54:00 +0000</pubDate>
		<dc:creator><![CDATA[niceredfrog]]></dc:creator>
				<category><![CDATA[Google Testing]]></category>

		<guid isPermaLink="false"></guid>
		<description><![CDATA[Posted by Philip Zembrod, Software Engineer in Test, SwedenSo you're working on TheFinalApp - the ultimate end-user application, with lots of good features and a really neat GUI. You have a team that's keen on testing and a level of unit test coverage ...]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by Philip Zembrod, Software Engineer in Test, Sweden</span><br /><br />So you're working on TheFinalApp - the ultimate end-user application, with lots of good features and a really neat GUI. You have a team that's keen on testing and a level of unit test coverage that others only dream of. The star of the show is your suite of automatic GUI end-to-end tests &mdash; your team doesn't have to manually test every release candidate.<br /><br /><span style="font-weight:bold;">Life would be good if only the GUI tests weren't so flaky.</span> Every now and again, your test case clicks a menu item too early, while the menu is still opening. Or it double-clicks to open a tree node, tries to verify that the node opened too early, then retries, which closes the node (oops). You have tried adding sleep statements, which has helped somewhat, but has also slowed down your tests.<br /><br />Why all this pain? Because <span style="font-weight:bold;">GUIs are not designed to synchronize with other computer programs. They are designed to synchronize with human beings</span>, who are not like computers:<br /><ul><br /><li>Humans act much more slowly. Well-honed GUI test robots drive GUIs at near theoretical maximum speed.</li><br /><li>Humans are much better at observing the GUI, and they react intelligently to what they see.</li><br /><li>Humans extract more meaningful information from a GUI.</li><br /></ul><br />In contrast to testing a server, where you usually find enough methods or messages in the server API to synchronize the testing with the server, <span style="font-weight:bold;">a GUI application usually lacks these means of synchronization</span>. 
As a result, a running automated GUI test often consists of one long sequence of race conditions between the automated test and the application under test.<br /><br />GUI test synchronization boils down to the question: <span style="font-weight:bold;">Is the app under test finished with what it's doing?</span> "What it's doing" may be small, like displaying a combo box, or big, like a business transaction. Whatever "it" is, the test must be able to tell whether "it" is finished. <span style="font-weight:bold;">Maybe you want to test something while "it" is underway</span>, like verify that the browser icon is rotating while a page is loading. Maybe you want to deliberately click the "Submit" button again in the middle of a transaction to verify that nothing bad happens. <span style="font-weight:bold;">But usually, you want to wait until "it" is done</span>.<br /><br /><span style="font-weight:bold;">How to find out whether "it" is done? Ask!</span> Let your test case ask your GUI app. In other words: <span style="font-weight:bold;">provide one or several test hooks suitable for your synchronization needs</span>.<br /><br />The questions to ask depend on the type, platform, and architecture of your application. Here are three questions that  worked for me when dealing with a single-threaded Win32 MFC database app:<br /><br />The first is a question for the OS. The Win32 API provides a function to <span style="font-weight:bold;">wait while a process has pending input events</span>:<br>DWORD WaitForInputIdle(HANDLE hProcess, DWORD dwMilliseconds). Choosing the shortest possible timeout (dwMilliseconds = 1) effectively turns this from a wait-for to a check-if function, so you can explicitly control the waiting loop; for example, to combine several different check functions. 
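<br /><br />Such an explicitly controlled waiting loop might be sketched as follows, in platform-neutral Java rather than Win32 C (the names are invented for illustration; on Win32, one of the check functions would wrap WaitForInputIdle with its 1 ms timeout):

```java
// Sketch of an explicit waiting loop that combines several check-if
// functions, with the only sleep living inside the loop itself.
import java.util.List;
import java.util.function.BooleanSupplier;

public class SyncLoop {
    // Returns true once every check reports "idle", false on timeout.
    static boolean waitUntilIdle(List<BooleanSupplier> checks, long timeoutMillis) {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            boolean allIdle = true;
            for (BooleanSupplier check : checks) {
                if (!check.getAsBoolean()) { allIdle = false; break; }
            }
            if (allIdle) return true;
            try {
                Thread.sleep(10); // the only sleep: a short poll interval
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        long start = System.currentTimeMillis();
        // Stand-in checks: "no pending input" becomes true after ~50 ms.
        BooleanSupplier noPendingInput = () -> System.currentTimeMillis() - start > 50;
        BooleanSupplier messageQueueEmpty = () -> true;
        boolean idle = waitUntilIdle(List.of(noPendingInput, messageQueueEmpty), 1000);
        System.out.println("app idle: " + idle);
    }
}
```
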
Reasoning: <span style="font-weight:bold;">If the GUI app has pending input, it's surely not ready for new input.</span><br /><br />The second question is: <span style="font-weight:bold;">Is the GUI app's message queue empty?</span> I did this with a test hook, in this case a WM_USER message; it could perhaps also be done by calling PeekMessage() in the GUI app's process context via CreateRemoteThread(). Reasoning: <span style="font-weight:bold;">If the GUI app still has messages in its queue, it's not yet ready for new input.</span><br /><br />The third is <span style="font-weight:bold;">more like sending a probe than a question</span>, but again using a test hook. The test framework resets a certain flag in the GUI app (synchronously) and then (asynchronously) posts a WM_USER message into the app's message queue that, upon being processed, sets this flag. Now <span style="font-weight:bold;">the test framework checks periodically</span> (and synchronously again) <span style="font-weight:bold;">to see whether the flag has been set.</span> Once it has, you know the posted message has been processed. Reasoning: <span style="font-weight:bold;">When the posted message (the probe) has been processed, then surely messages and events sent earlier to the GUI app have been processed.</span> Of course, for multi-threaded applications this might be more complex.<br /><br />These three synchronization techniques resulted in <span style="font-weight:bold;">fast and stable test execution, without any test flakiness due to timing issues. All without sleeps,</span> except in the synchronization loop.<br /><br />Applying this idea to different platforms requires finding the right questions to ask and the right way to ask them. <span style="font-weight:bold;">I'd be interested to hear if someone has done something similar</span>, e.g. for an Ajax application. 
A query into the server to check if any XML responses are pending, perhaps?<div class="blogger-post-footer"><img width='1' height='1' src='https://blogger.googleusercontent.com/tracker/15045980-6439752824814475237?l=googletesting.blogspot.com' alt='' /></div>]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-testing/gui-testing-dont-sleep-without-synchronization/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Six Hats of Software Testing</title>
		<link>https://googledata.org/google-testing/six-hats-of-software-testing/</link>
		<comments>https://googledata.org/google-testing/six-hats-of-software-testing/#comments</comments>
		<pubDate>Wed, 01 Oct 2008 08:19:00 +0000</pubDate>
		<dc:creator><![CDATA[niceredfrog]]></dc:creator>
				<category><![CDATA[Google Testing]]></category>

		<guid isPermaLink="false"></guid>
		<description><![CDATA[Posted by Jessica Tomechak, Test Engineering TeamJulian Harty, one of our senior test engineers, is presenting a keynote at the STARWEST conference today (Wednesday, October 1) on Six Thinking Hats for Software Testers. Expanding on the Thinking Hats c...]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by Jessica Tomechak, Test Engineering Team</span><br /><br />Julian Harty, one of our senior test engineers, is presenting a keynote at the <a href="http://www.sqe.com/starwest/Keynotes/Default.aspx">STARWEST</a> conference today (Wednesday, October 1) on Six Thinking Hats for Software Testers. Expanding on the Thinking Hats concept, originated many years ago by Edward De Bono and used by large numbers of people and various types of businesses, Julian's talk will add his experience and views about how Thinking Hats can be used in software testing. At Google, we have found it delivers breakthroughs in the short term and great results in the longer term -- one Googler called it the "universal unblocker."<br /><br />For those of you unable to attend the conference, a video recording will be available (UPDATE: <a href="http://stickyminds.com/Media/Video/Detail.aspx?WebPage=117">Video now available</a>). Julian has also written an article on this topic to appear in Better Software magazine within the next few months.<div class="blogger-post-footer"><img width='1' height='1' src='https://blogger.googleusercontent.com/tracker/15045980-31247577407284468?l=googletesting.blogspot.com' alt='' /></div>]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-testing/six-hats-of-software-testing/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Test first is fun!</title>
		<link>https://googledata.org/google-testing/test-first-is-fun/</link>
		<comments>https://googledata.org/google-testing/test-first-is-fun/#comments</comments>
		<pubDate>Mon, 08 Sep 2008 18:36:00 +0000</pubDate>
		<dc:creator><![CDATA[niceredfrog]]></dc:creator>
				<category><![CDATA[Google Testing]]></category>

		<guid isPermaLink="false"></guid>
		<description><![CDATA[Posted by Philip ZembrodSo the Test-Driven-Development and Extreme-Programming people tell you you should write your tests even before you write the actual code. "Now this is taking things a bit too far," you might think. "To the extreme, even. Why wou...]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by Philip Zembrod</span><br /><br />So the Test-Driven-Development and Extreme-Programming people tell you you should write your tests even before you write the actual code. "Now this is taking things a bit too far," you might think. "To the extreme, even. Why would I want to do this?"<br /><br />In this post, I'll tell you my answer to this question. I now really do want to write my tests first...and here's why!<br /><br />After many years of writing code without using or writing unit tests, I took a colleague's advice and read Kent Beck's "Extreme Programming Explained." I picked "write tests first" as the first XP practice to try out in my daily coding.<br /><br />The practice is: <span style="font-weight:bold;">Write a failing test for each feature</span> you plan to implement. Run the test and see it fail. <span style="font-weight:bold;">Then implement the feature until the test succeeds.</span> Refactor now and begin again.<br /><br />Why write the test first? The obvious reason, I thought, was to make it more likely that tests will get written at all. But I heard the promise that this was not just a way to ensure tests aren't overlooked, but <span style="font-weight:bold;">a way to higher productivity</span>. I tried it, and found that <span style="font-weight:bold;">getting tests written was indeed one of the less important reasons to write tests first!</span><br /><br />Writing tests first <span style="font-weight:bold;">leads you to think about the interface first</span>. Of course, you do that anyway when you write the header file with the C++ class definition or when you write a Java interface before you implement any methods. However, writing a test lets you <span style="font-weight:bold;">focus on how the new interface will be <span style="font-style:italic;">used</span></span> before even writing the interface. 
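<br /><br />For instance, a first test for a small counter class can be written before the class exists, pinning down how its interface will be used (Counter and its methods are invented names for this example):

```java
// A test-first sketch: this test is written, and watched failing,
// before Counter has any code at all. All names are hypothetical.
public class CounterTest {
    public static void main(String[] args) {
        Counter counter = new Counter(); // a new counter starts at zero
        counter.increment();
        counter.increment();
        if (counter.value() != 2) throw new AssertionError("expected 2");
        System.out.println("CounterTest passed");
    }
}

// The minimal implementation, written afterwards, just enough for green:
class Counter {
    private int value = 0;
    void increment() { value++; }
    int value() { return value; }
}
```

<br />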
You could call writing the interface the supply side and writing the test the demand side of the deal. Writing the test first, you set out with the customer's or user's view of the new class.<br /><br />Another way of seeing the same thing is to <span style="font-weight:bold;">regard the test as a coded specification</span>. In the test, you specify what service the new class or feature should provide, and you specify, by example, the syntax with which this service will be requested. In contrast to specifications written in natural language, a specification written into a test contains <span style="font-weight:bold;">a technical safeguard against growing stale: if it does, the test will probably fail</span>.<br /><br />These two aspects of unit tests are enough to make me feel excited about writing them first. <span style="font-weight:bold;">Tests are no longer a necessary chore, but the place and time where I start to design something new. </span>That's what I love to do. How soon can I get started writing my next test?<br /><br />But this is still not the best part: If I write a test first, run it to see it fail (often even fail to compile), and write the code to satisfy the test, then I have everything in place to <span style="font-weight:bold;"><span style="font-style:italic;">see my code running the minute it is written and compiled!</span></span> No more dread of strange behaviour or system crashes the first time I launch the system with my new code! No more laborious navigating through the application to my code's feature! No more wondering: Did my code actually get executed or not?<br /><br />Just a quick run-the-testcase, and I know how my code runs: green - good. Red - not yet good. Read failure message and fix code until green. Debugging sucks, testing rocks, indeed!<br /><br />Of course, all the complex issues of integration and system testing remain. 
Good unit testing gives me a good head start for integration, but I might still be in for unpleasant surprises there.<br /><br />The point I want to make here, though, is about my state of mind when I write new code. For me, <span style="font-weight:bold;">writing new code for complex systems was always accompanied by fear</span>: fear of crashes I'd have to debug, fear of creating bugs I might not discover, fear of the dreary work of searching for bugs I might have created. Fear that took up a considerable amount of my mind space and <span style="font-weight:bold;">slowed me down</span>.<br /><br /><span style="font-weight:bold;">Now, this fear is gone!</span> I happily go about writing my code because I know the tests are already in place. <span style="font-weight:bold;">It will cost me just a few keystrokes to run my finished code, and I will immediately see what it does.</span> Hooray, I wrote a program, and it works, and it's easy to prove it!<br /><br /><span style="font-weight:bold;">It's the same old enthusiasm that I felt more than 20 years ago when I wrote and ran my first programs.</span> Many of you have felt it, too - the joy of inducing some life into this dead piece of hardware through our written word. And now this joy sits in my mind again where fear of crashes was before. You'd better believe that speeds up my coding! Want to give it a try yourself?]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-testing/test-first-is-fun/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Announcing: New Google C++ Testing Framework</title>
		<link>https://googledata.org/google-testing/announcing-new-google-c-testing-framework/</link>
		<comments>https://googledata.org/google-testing/announcing-new-google-c-testing-framework/#comments</comments>
		<pubDate>Fri, 04 Jul 2008 00:02:00 +0000</pubDate>
		<dc:creator><![CDATA[niceredfrog]]></dc:creator>
				<category><![CDATA[Google Testing]]></category>

		<guid isPermaLink="false"></guid>
		<description><![CDATA[Posted by Zhanyong Wan, Software EngineerWe all know the importance of writing automated tests to cover our code. To make it easier for everyone to write good C++ tests, today we have open-sourced Google C++ Testing Framework (Google Test for short), a...]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by Zhanyong Wan, Software Engineer</span><br /><br />We all know the importance of writing automated tests to cover our code. To make it easier for everyone to write good C++ tests, today we have open-sourced <a href="http://code.google.com/p/googletest/" id="fl82" >Google C++ Testing Framework</a> (Google Test for short), a library that thousands of Googlers have been using in our C++ programs.  Highlights of the project include:<br /> <ul id="fl821"><li id="fl822">     Google Test is <i id="fl823">portable</i>: it works on a variety of platforms (Linux, Windows, Mac OS X, and more), with several versions of GCC and MSVC compilers, and with or without exceptions. You can even use it in embedded systems like Windows CE and Symbian. Build tools and test runners for many of these are under active development, with Linux <a title="Autotools" href="http://www.gnu.org/software/autoconf/" id="aq27">Autotools</a> support already in place.<br />   </li><li id="fl824">     It supports both <a href="http://code.google.com/p/googletest/wiki/GoogleTestPrimer#Basic_Concepts" id="fl825" >fatal and nonfatal</a> assertions. The test will continue after a nonfatal failure. This allows more problems to be uncovered and fixed in a single edit-compile-test cycle. </li><li id="fl826">     It provides many <a href="http://code.google.com/p/googletest/wiki/GoogleTestPrimer#Assertions" id="fl827" >assertions</a> for common testing needs, and lets you easily <a href="http://code.google.com/p/googletest/wiki/GoogleTestAdvancedGuide#Predicate_Assertions" id="fl828" >define new assertions</a> for less common cases.   </li><li id="fl829">     On Linux, you can write <a href="http://code.google.com/p/googletest/wiki/GoogleTestAdvancedGuide#Death_Tests" id="fl8210" >death tests</a> to ensure that your code crashes with expected errors.   
</li><li id="fl8211">Because it's based on the popular <a href="http://en.wikipedia.org/wiki/XUnit" id="fl8212" >xUnit</a> architecture, Google Test is easy to learn if you've used any testing framework in this family before.   </li></ul><br />It will take you about 10 minutes to learn <a href="http://code.google.com/p/googletest/wiki/GoogleTestPrimer" id="fl8213" >the basics</a> and get started. Stay tuned to this blog for helpful Google Test information in upcoming Testing on the Toilet episodes.<br /><br />Please send questions and feedback to <a title="googletestframework@googlegroups.com" href="mailto:googletestframework@googlegroups.com" id="bx9o">googletestframework@googlegroups.com</a> (the <a href="http://groups.google.com/group/googletestframework" id="fl8215" >Google Test Discussion Group</a>). See you there!]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-testing/announcing-new-google-c-testing-framework/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>TotT: Extracting Methods to Simplify Testing</title>
		<link>https://googledata.org/google-testing/tott-extracting-methods-to-simplify-testing/</link>
		<comments>https://googledata.org/google-testing/tott-extracting-methods-to-simplify-testing/#comments</comments>
		<pubDate>Wed, 13 Jun 2007 21:47:00 +0000</pubDate>
		<dc:creator><![CDATA[niceredfrog]]></dc:creator>
				<category><![CDATA[Google Testing]]></category>

		<guid isPermaLink="false"></guid>
		<description><![CDATA[When a method is long and complex, it is harder to test. You can make it easier by extracting methods: finding pieces of code in existing, complex methods (or functions) that can be replaced with method calls (or function calls). Consider the following...]]></description>
				<content:encoded><![CDATA[<p>When a method is long and complex, it is harder to test. You can make it easier by <b><span style="color:#800000;">extracting methods</span></b>: finding pieces of code in existing, complex methods (or functions) that can be replaced with method calls (or function calls). Consider the following complicated method:</p><blockquote style="background: rgb(255, 255, 224) none repeat scroll 0% 50%; font-size:small"><pre><br /> def GetTestResults(self):<br />   <span style="color: rgb(0, 0, 255);"># Check if results have been cached.</span><br />   results = cache.get('test_results', None)<br />   if results is None:<br />     <span style="color: rgb(0, 0, 255);"># No results in the cache, so check the database.</span><br />     results = db.FetchResults(SQL_SELECT_TEST_RESULTS)<br />   <span style="color: rgb(0, 0, 255)"># Count passing and failing tests.</span><br />   num_passing = len([r for r in results if r['outcome'] == 'pass'])<br />   num_failing = len(results) - num_passing<br />   return num_passing, num_failing<br /></pre></blockquote><p>This method is difficult to test because it not only relies on a database, but also on a cache. In addition, it performs some post processing of the retrieved results. The first hint that this method could use refactoring is the abundance of comments. Extracting sections of code into well-named methods reduces the original method's complexity. When complexity is reduced, comments often become unnecessary. 
For example, consider the following:</p><blockquote style="background: rgb(255, 255, 224) none repeat scroll 0% 50%; font-size:small;"><pre><br /> def GetTestResults(self):<br />   results = self._GetTestResultsFromCache()<br />   if results is None:<br />     results = self._GetTestResultsFromDatabase()<br />   return self._CountPassFail(results)<br /><br /> def _GetTestResultsFromCache(self):<br />   return cache.get('test_results', None)<br /><br /> def _GetTestResultsFromDatabase(self):<br />   return db.FetchResults(SQL_SELECT_TEST_RESULTS)<br /><br /> def _CountPassFail(self, results):<br />   num_passing = len([r for r in results if r['outcome'] == 'pass'])<br />   num_failing = len(results) - num_passing<br />   return num_passing, num_failing<br /><br /></pre></blockquote> <p>Now, tests can focus on each individual piece of the original method by testing each extracted method. This has the added benefit of making the code more readable and easier to maintain.</p><p>(Note: Method extraction can be done for you automatically in Python by the open-source refactoring browser BicycleRepairMan, and in Java by several IDEs, including IntelliJ IDEA and Eclipse.)</p><p>Remember to download <a href="http://code.google.com/testing/TotT-2007-06-13.pdf">this episode</a> of Testing on the Toilet and post it in your office.<br />&nbsp;</p>]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-testing/tott-extracting-methods-to-simplify-testing/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
	</channel>
</rss>
