<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Google Data &#187; Research @ Google</title>
	<atom:link href="/author/research-google/feed/" rel="self" type="application/rss+xml" />
	<link>https://googledata.org</link>
	<description>Everything Google: News, Products, Services, Content, Culture</description>
	<lastBuildDate>Thu, 19 Mar 2015 12:00:08 +0000</lastBuildDate>
	<language>en-US</language>
		<sy:updatePeriod>hourly</sy:updatePeriod>
		<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.7.5</generator>
	<item>
		<title>HDR+: Low Light and High Dynamic Range photography in the Google Camera App</title>
		<link>https://googledata.org/google-research/hdr-low-light-and-high-dynamic-range-photography-in-the-google-camera-app/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=hdr-low-light-and-high-dynamic-range-photography-in-the-google-camera-app</link>
		<comments>https://googledata.org/google-research/hdr-low-light-and-high-dynamic-range-photography-in-the-google-camera-app/#comments</comments>
		<pubDate>Mon, 27 Oct 2014 13:00:00 +0000</pubDate>
		<dc:creator><![CDATA[Research @ Google]]></dc:creator>
				<category><![CDATA[Google Research]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=7db0b1cc8a1ebd3e4dbc8944f0d92311</guid>
		<description><![CDATA[<span>Posted by Marc Levoy, Google[x] Software Engineering Manager and <a href="http://graphics.stanford.edu/~levoy/">Professor Emeritus, Stanford University</a></span><br /><br />As anybody who has tried to use a smartphone to photograph a dimly lit scene knows, the resulting pictures are often blurry or full of random variations in brightness from pixel to pixel, known as <a href="http://en.wikipedia.org/wiki/Image_noise">image noise</a>. Equally frustrating are smartphone photographs of scenes where there is a large range of brightness levels, such as a family photo backlit by a bright sky. In <a href="http://en.wikipedia.org/wiki/High-dynamic-range_imaging">high dynamic range</a> (HDR) situations like this, photographs will either come out with an overexposed sky (turning it white) or an underexposed family (turning them into silhouettes).<br /><br />HDR+ is a feature in the <a href="https://play.google.com/store/apps/details?id=com.google.android.GoogleCamera&#38;hl=en">Google Camera app</a> for Nexus 5 and Nexus 6 that uses computational photography to help you take better pictures in these common situations. When you press the shutter button, HDR+ actually captures a rapid burst of pictures, then quickly combines them into one. This improves results in both low-light and high dynamic range situations. Below we delve into each case and describe how HDR+ works to produce a better picture. <br /><br /><b>Capturing low-light scenes</b><br /><br />The camera on a smartphone has a small lens, meaning that it doesn't gather much light. If a scene is dimly lit, the resulting photograph will contain image noise. One solution is to lengthen the exposure time - how long the sensor chip collects light. This reduces noise, but since it's hard to hold a smartphone perfectly steady, long exposures have the unwanted side effect of blurring the shot. Devices with <a href="http://en.wikipedia.org/wiki/Image_stabilization">optical image stabilization</a> (OIS) sense this &#8220;camera shake&#8221; and shift the lens rapidly to compensate. This allows longer exposures with less blur, but it can&#8217;t help with really dark scenes.<br /><br />HDR+ addresses this problem by taking a burst of shots with short exposure times, aligning them algorithmically, and replacing each pixel with the average color at that position across all the shots. <a href="http://www.cambridgeincolour.com/tutorials/image-averaging-noise.htm">Averaging multiple shots</a> reduces noise, and using short exposures reduces blur.  HDR+ also begins the alignment process by choosing the sharpest single shot from the burst. Astronomers call this <a href="http://en.wikipedia.org/wiki/Lucky_imaging">lucky imaging</a>, a technique used to reduce the blurring of images caused by Earth's shimmering atmosphere.<br /><table align="center" cellpadding="0" cellspacing="0"><tbody><tr><td><a href="http://3.bp.blogspot.com/-C_pqraCOxmA/VE5ICYqWUGI/AAAAAAAAAag/Kng9uoU-KeY/s1600/HDR%2B1.png"><img border="0" src="http://3.bp.blogspot.com/-C_pqraCOxmA/VE5ICYqWUGI/AAAAAAAAAag/Kng9uoU-KeY/s1600/HDR%2B1.png" height="430" width="640"></a></td></tr><tr><td><i>A low light example is captured at dusk. 
The picture at left was taken with <a href="https://plus.google.com/photos/105100397789364595413/albums/6071336333050111233/6071336378673224498?pid=6071336378673224498&#38;oid=105100397789364595413">HDR+ off</a> and the picture at right with <a href="https://plus.google.com/photos/105100397789364595413/albums/6071336333050111233/6072799427317681538?pid=6072799427317681538&#38;oid=105100397789364595413">HDR+ on</a>. The HDR+ image is brighter, cleaner, and sharper, with much more detail seen in the subject&#8217;s hair and eyelashes. Photos by Florian Kainz</i></td></tr></tbody></table><b>Capturing high dynamic range scenes</b><br /><br />Another limitation of smartphone cameras is that their sensor chips have small pixels. This limits the camera's <a href="http://en.wikipedia.org/wiki/Dynamic_range">dynamic range</a>, which refers to the span between the brightest highlight that doesn't blow out (turn white) and the darkest shadow that doesn't look black. One solution is to capture a sequence of pictures with different exposure times (sometimes called <a href="http://en.wikipedia.org/wiki/Bracketing">bracketing</a>), then align and blend the images together. Unfortunately, bracketing causes parts of the long-exposure image to blow out and parts of the short-exposure image to be noisy. This makes alignment hard, leading to ghosts, double images, and other artifacts.<br /><br />However, bracketing is not actually necessary; one can use the same exposure time in every shot. By using a short exposure HDR+ avoids blowing out highlights, and by combining enough shots it reduces noise in the shadows. This enables the software to boost the brightness of shadows, saving both the subject and the sky, as shown in the example below.  And since all the shots look similar, alignment is robust; you won&#8217;t see ghosts or double images in HDR+ images, as one sometimes sees with other HDR software.<br /><table align="center" cellpadding="0" cellspacing="0"><tbody><tr><td><a href="http://3.bp.blogspot.com/-utUr33f9l_Y/VE1iR-ApMkI/AAAAAAAAAZQ/N2UaHstq3Uw/s1600/HDR%2B2.png"><img border="0" src="http://3.bp.blogspot.com/-utUr33f9l_Y/VE1iR-ApMkI/AAAAAAAAAZQ/N2UaHstq3Uw/s1600/HDR%2B2.png" height="237" width="640"></a></td></tr><tr><td><div><div><i>A classic high dynamic range situation. With <a href="https://plus.google.com/photos/105100397789364595413/albums/6071336333050111233/6071336352812984194?pid=6071336352812984194&#38;oid=105100397789364595413">HDR+ off (left)</a>, the camera exposes for the subjects&#8217; faces, causing the landscape and sky to blow out. With <a href="https://plus.google.com/photos/105100397789364595413/albums/6071336333050111233/6071336355357437778?pid=6071336355357437778&#38;oid=105100397789364595413">HDR+ on (right)</a>, the picture successfully captures the subjects, the landscape, and the sky.&#160;</i><i>Photos by Ryan Geiss</i></div></div></td></tr></tbody></table>Our last example illustrates all three of the problems we&#8217;ve talked about - high dynamic range, low light, and camera shake.  With HDR+ off, a photo of Princeton University Chapel (shown below) taken with Nexus 6 chooses a relatively long 1/12 second exposure. Although optical image stabilization reduces camera shake, this is a long time to hold a camera still, so the image is slightly blurry. Since the scene was very dark, the walls are noisy despite the long exposure. Therefore, strong denoising is applied, causing smearing (below, left inset image). 
Finally, because the scene also has high dynamic range, the window at the end of the nave is blown out (below, right inset image), and the side arches are lost in darkness.<br /><table align="center" cellpadding="0" cellspacing="0"><tbody><tr><td><a href="http://2.bp.blogspot.com/-EQmUwsrM5Lo/VE26fr8w-sI/AAAAAAAAAZ0/nednky5nXMs/s1600/princeton-church-normal-with-inset.jpg"><img border="0" src="http://2.bp.blogspot.com/-EQmUwsrM5Lo/VE26fr8w-sI/AAAAAAAAAZ0/nednky5nXMs/s1600/princeton-church-normal-with-inset.jpg" height="472" width="640"></a></td></tr><tr><td><i>Click <a href="https://plus.google.com/photos/105100397789364595413/albums/6071336333050111233/6072493005547593218?pid=6072493005547593218&#38;oid=105100397789364595413">here</a> to see the full resolution image. Photo by Marc Levoy</i></td></tr></tbody></table>HDR+ mode performs better on all three problems, as seen in the image below: the chandelier at left is cleaner and sharper, the window is no longer blown out, there is more detail in the side arches, and since a burst of shots is captured and the software begins alignment by choosing the sharpest shot in the burst (lucky imaging), the resulting picture is sharp.<br /><table align="center" cellpadding="0" cellspacing="0"><tbody><tr><td><a href="http://1.bp.blogspot.com/-qNiuoM3vy94/VE27Ed4kMKI/AAAAAAAAAZ8/HLzCuWByzSI/s1600/princeton-church-hdrp-with-inset.jpg"><img border="0" src="http://1.bp.blogspot.com/-qNiuoM3vy94/VE27Ed4kMKI/AAAAAAAAAZ8/HLzCuWByzSI/s1600/princeton-church-hdrp-with-inset.jpg" height="473" width="640"></a></td></tr><tr><td><i>Click <a href="https://plus.google.com/photos/105100397789364595413/albums/6071336333050111233/6072493009205599122?pid=6072493009205599122&#38;oid=105100397789364595413">here</a> to see the full resolution image. Photo by Marc Levoy</i></td></tr></tbody></table>Here's an <a href="https://plus.google.com/photos/105100397789364595413/albums/6071336333050111233">album</a> containing these comparisons and others as high-resolution images. For each scene in the album there is a pair of images captured by Nexus 6; the first was taken with HDR+ off, and the second with HDR+ on.<br /><br /><b>Tips on using HDR+</b><br /><br />Capturing a burst in HDR+ mode takes between 1/3 second and 1 second, depending on how dark the scene is.  During this time you'll see a circle animating on the screen (left image below). Try to hold still until it finishes. The combining step also takes time, so if you scroll to the camera roll right after taking the shot, you'll see a thumbnail image and a progress bar (right image below). When the bar reaches 100%, your HDR+ picture is ready.<br /><div><a href="http://4.bp.blogspot.com/-SDzLRc8BdFA/VE1lX7QwTmI/AAAAAAAAAZk/7B1Pes6ULAA/s1600/HDR%2B3.png"><img border="0" src="http://4.bp.blogspot.com/-SDzLRc8BdFA/VE1lX7QwTmI/AAAAAAAAAZk/7B1Pes6ULAA/s1600/HDR%2B3.png" height="180" width="640"></a></div>Should you leave HDR+ mode on? We do. The only times we turn it off are for fast-moving sports, because HDR+ pictures take longer to capture than a single shot, or for scenes that are so dark we need the flash. But before you turn off HDR+ for these action shots or super-dark scenes, give it a try; we think you'll be surprised how well it works!<br /><br />At this time HDR+ is available only on Nexus 5 and Nexus 6, as part of the <a href="https://play.google.com/store/apps/details?id=com.google.android.GoogleCamera&#38;hl=en">Google Camera app</a>. <br /><br />]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by Marc Levoy, Google[x] Software Engineering Manager and <a href="http://graphics.stanford.edu/~levoy/">Professor Emeritus, Stanford University</a></span><br /><br />As anybody who has tried to use a smartphone to photograph a dimly lit scene knows, the resulting pictures are often blurry or full of random variations in brightness from pixel to pixel, known as <a href="http://en.wikipedia.org/wiki/Image_noise">image noise</a>. Equally frustrating are smartphone photographs of scenes where there is a large range of brightness levels, such as a family photo backlit by a bright sky. In <a href="http://en.wikipedia.org/wiki/High-dynamic-range_imaging">high dynamic range</a> (HDR) situations like this, photographs will either come out with an overexposed sky (turning it white) or an underexposed family (turning them into silhouettes).<br /><br />HDR+ is a feature in the <a href="https://play.google.com/store/apps/details?id=com.google.android.GoogleCamera&amp;hl=en">Google Camera app</a> for Nexus 5 and Nexus 6 that uses computational photography to help you take better pictures in these common situations. When you press the shutter button, HDR+ actually captures a rapid burst of pictures, then quickly combines them into one. This improves results in both low-light and high dynamic range situations. Below we delve into each case and describe how HDR+ works to produce a better picture. <br /><br /><b>Capturing low-light scenes</b><br /><br />The camera on a smartphone has a small lens, meaning that it doesn't gather much light. If a scene is dimly lit, the resulting photograph will contain image noise. One solution is to lengthen the exposure time - how long the sensor chip collects light. This reduces noise, but since it's hard to hold a smartphone perfectly steady, long exposures have the unwanted side effect of blurring the shot. Devices with <a href="http://en.wikipedia.org/wiki/Image_stabilization">optical image stabilization</a> (OIS) sense this "camera shake” and shift the lens rapidly to compensate. This allows longer exposures with less blur, but it can’t help with really dark scenes.<br /><br />HDR+ addresses this problem by taking a burst of shots with short exposure times, aligning them algorithmically, and replacing each pixel with the average color at that position across all the shots. <a href="http://www.cambridgeincolour.com/tutorials/image-averaging-noise.htm">Averaging multiple shots</a> reduces noise, and using short exposures reduces blur.  HDR+ also begins the alignment process by choosing the sharpest single shot from the burst. Astronomers call this <a href="http://en.wikipedia.org/wiki/Lucky_imaging">lucky imaging</a>, a technique used to reduce the blurring of images caused by Earth's shimmering atmosphere.<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://3.bp.blogspot.com/-C_pqraCOxmA/VE5ICYqWUGI/AAAAAAAAAag/Kng9uoU-KeY/s1600/HDR%2B1.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="http://3.bp.blogspot.com/-C_pqraCOxmA/VE5ICYqWUGI/AAAAAAAAAag/Kng9uoU-KeY/s1600/HDR%2B1.png" height="430" width="640" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;"><i>A low light example is captured at dusk. 
The picture at left was taken with <a href="https://plus.google.com/photos/105100397789364595413/albums/6071336333050111233/6071336378673224498?pid=6071336378673224498&amp;oid=105100397789364595413">HDR+ off</a> and the picture at right with <a href="https://plus.google.com/photos/105100397789364595413/albums/6071336333050111233/6072799427317681538?pid=6072799427317681538&amp;oid=105100397789364595413">HDR+ on</a>. The HDR+ image is brighter, cleaner, and sharper, with much more detail seen in the subject’s hair and eyelashes. Photos by Florian Kainz</i></td></tr></tbody></table><b>Capturing high dynamic range scenes</b><br /><br />Another limitation of smartphone cameras is that their sensor chips have small pixels. This limits the camera's <a href="http://en.wikipedia.org/wiki/Dynamic_range">dynamic range</a>, which refers to the span between the brightest highlight that doesn't blow out (turn white) and the darkest shadow that doesn't look black. One solution is to capture a sequence of pictures with different exposure times (sometimes called <a href="http://en.wikipedia.org/wiki/Bracketing">bracketing</a>), then align and blend the images together. Unfortunately, bracketing causes parts of the long-exposure image to blow out and parts of the short-exposure image to be noisy. This makes alignment hard, leading to ghosts, double images, and other artifacts.<br /><br />However, bracketing is not actually necessary; one can use the same exposure time in every shot. By using a short exposure HDR+ avoids blowing out highlights, and by combining enough shots it reduces noise in the shadows. This enables the software to boost the brightness of shadows, saving both the subject and the sky, as shown in the example below.  And since all the shots look similar, alignment is robust; you won’t see ghosts or double images in HDR+ images, as one sometimes sees with other HDR software.<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://3.bp.blogspot.com/-utUr33f9l_Y/VE1iR-ApMkI/AAAAAAAAAZQ/N2UaHstq3Uw/s1600/HDR%2B2.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="http://3.bp.blogspot.com/-utUr33f9l_Y/VE1iR-ApMkI/AAAAAAAAAZQ/N2UaHstq3Uw/s1600/HDR%2B2.png" height="237" width="640" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;"><div style="text-align: left;"><div style="text-align: center;"><i>A classic high dynamic range situation. With <a href="https://plus.google.com/photos/105100397789364595413/albums/6071336333050111233/6071336352812984194?pid=6071336352812984194&amp;oid=105100397789364595413">HDR+ off (left)</a>, the camera exposes for the subjects’ faces, causing the landscape and sky to blow out. With <a href="https://plus.google.com/photos/105100397789364595413/albums/6071336333050111233/6071336355357437778?pid=6071336355357437778&amp;oid=105100397789364595413">HDR+ on (right)</a>, the picture successfully captures the subjects, the landscape, and the sky.&nbsp;</i><i>Photos by Ryan Geiss</i></div></div></td></tr></tbody></table>Our last example illustrates all three of the problems we’ve talked about - high dynamic range, low light, and camera shake.  With HDR+ off, a photo of Princeton University Chapel (shown below) taken with Nexus 6 chooses a relatively long 1/12 second exposure. 
Although optical image stabilization reduces camera shake, this is a long time to hold a camera still, so the image is slightly blurry. Since the scene was very dark, the walls are noisy despite the long exposure. Therefore, strong denoising is applied, causing smearing (below, left inset image). Finally, because the scene also has high dynamic range, the window at the end of the nave is blown out (below, right inset image), and the side arches are lost in darkness.<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://2.bp.blogspot.com/-EQmUwsrM5Lo/VE26fr8w-sI/AAAAAAAAAZ0/nednky5nXMs/s1600/princeton-church-normal-with-inset.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="http://2.bp.blogspot.com/-EQmUwsrM5Lo/VE26fr8w-sI/AAAAAAAAAZ0/nednky5nXMs/s1600/princeton-church-normal-with-inset.jpg" height="472" width="640" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;"><i>Click <a href="https://plus.google.com/photos/105100397789364595413/albums/6071336333050111233/6072493005547593218?pid=6072493005547593218&amp;oid=105100397789364595413">here</a> to see the full resolution image. Photo by Marc Levoy</i></td></tr></tbody></table>HDR+ mode performs better on all three problems, as seen in the image below: the chandelier at left is cleaner and sharper, the window is no longer blown out, there is more detail in the side arches, and since a burst of shots is captured and the software begins alignment by choosing the sharpest shot in the burst (lucky imaging), the resulting picture is sharp.<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://1.bp.blogspot.com/-qNiuoM3vy94/VE27Ed4kMKI/AAAAAAAAAZ8/HLzCuWByzSI/s1600/princeton-church-hdrp-with-inset.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="http://1.bp.blogspot.com/-qNiuoM3vy94/VE27Ed4kMKI/AAAAAAAAAZ8/HLzCuWByzSI/s1600/princeton-church-hdrp-with-inset.jpg" height="473" width="640" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;"><i>Click <a href="https://plus.google.com/photos/105100397789364595413/albums/6071336333050111233/6072493009205599122?pid=6072493009205599122&amp;oid=105100397789364595413">here</a> to see the full resolution image. Photo by Marc Levoy</i></td></tr></tbody></table>Here's an <a href="https://plus.google.com/photos/105100397789364595413/albums/6071336333050111233">album</a> containing these comparisons and others as high-resolution images. For each scene in the album there is a pair of images captured by Nexus 6; the first was taken with HDR+ off, and the second with HDR+ on.<br /><br /><b>Tips on using HDR+</b><br /><br />Capturing a burst in HDR+ mode takes between 1/3 second and 1 second, depending on how dark the scene is.  During this time you'll see a circle animating on the screen (left image below). Try to hold still until it finishes. The combining step also takes time, so if you scroll to the camera roll right after taking the shot, you'll see a thumbnail image and a progress bar (right image below). 
When the bar reaches 100%, your HDR+ picture is ready.<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-SDzLRc8BdFA/VE1lX7QwTmI/AAAAAAAAAZk/7B1Pes6ULAA/s1600/HDR%2B3.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-SDzLRc8BdFA/VE1lX7QwTmI/AAAAAAAAAZk/7B1Pes6ULAA/s1600/HDR%2B3.png" height="180" width="640" /></a></div>Should you leave HDR+ mode on? We do. The only times we turn it off are for fast-moving sports, because HDR+ pictures take longer to capture than a single shot, or for scenes that are so dark we need the flash. But before you turn off HDR+ for these action shots or super-dark scenes, give it a try; we think you'll be surprised how well it works!<br /><br />At this time HDR+ is available only on Nexus 5 and Nexus 6, as part of the <a href="https://play.google.com/store/apps/details?id=com.google.android.GoogleCamera&amp;hl=en">Google Camera app</a>. <br /><br />]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-research/hdr-low-light-and-high-dynamic-range-photography-in-the-google-camera-app/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
<enclosure url="" length="" type="" />
		</item>
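	<!--
	The HDR+ post above describes an align-and-average burst pipeline: pick the sharpest
	frame as the reference (lucky imaging), align the other frames to it, average to
	suppress noise, then brighten shadows, which one short exposure leaves dark but
	unclipped. Below is a minimal NumPy sketch of that idea, not Google's HDR+
	implementation: the gradient-based sharpness score, the whole-frame translation
	search, and the gamma shadow lift are all simplifying assumptions (the real system
	aligns and merges per tile, on raw sensor data).

	import numpy as np

	def sharpness(frame):
	    # Proxy for sharpness: mean gradient magnitude; blurrier frames score lower.
	    gy, gx = np.gradient(frame.astype(np.float64))
	    return np.mean(np.hypot(gx, gy))

	def align(reference, frame, max_shift=8):
	    # Brute-force search over whole-frame translations (grayscale frames assumed).
	    best_shift, best_err = (0, 0), np.inf
	    for dy in range(-max_shift, max_shift + 1):
	        for dx in range(-max_shift, max_shift + 1):
	            shifted = np.roll(frame, (dy, dx), axis=(0, 1))
	            err = np.mean((shifted - reference) ** 2)
	            if err < best_err:
	                best_shift, best_err = (dy, dx), err
	    return np.roll(frame, best_shift, axis=(0, 1))

	def merge_burst(frames, shadow_gamma=0.7):
	    # Lucky imaging: the sharpest shot seeds alignment; averaging N aligned
	    # frames reduces noise by roughly sqrt(N).
	    frames = [f.astype(np.float64) for f in frames]
	    reference = max(frames, key=sharpness)
	    merged = np.mean([align(reference, f) for f in frames], axis=0)
	    # A single short exposure for every shot keeps highlights unclipped, so
	    # shadows can then be boosted (a crude gamma lift here) without ghosting.
	    merged = merged / max(merged.max(), 1e-9)
	    return merged ** shadow_gamma

	Using one exposure time for the whole burst, as the post explains, is what makes
	the alignment step robust: every frame looks alike, so no bracketed ghosts appear.
	-->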
		<item>
		<title>Helping teachers teach computer science</title>
		<link>https://googledata.org/google-research/helping-teachers-teach-computer-science/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=helping-teachers-teach-computer-science</link>
		<comments>https://googledata.org/google-research/helping-teachers-teach-computer-science/#comments</comments>
		<pubDate>Fri, 24 Oct 2014 17:00:00 +0000</pubDate>
		<dc:creator><![CDATA[Research @ Google]]></dc:creator>
				<category><![CDATA[Google Research]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=260c17ba1db0f9ecaf4453a8f5a302fb</guid>
		<description><![CDATA[Posted by Karen Parker, Education Program Manager and Jason Ravitz, Education Evaluation Manager<br /><br /><i>(Cross-posted on the <a href="http://googleforeducation.blogspot.com/">Google for Education Blog</a>)</i><br /><br />Since 2009, Google&#8217;s <a href="http://www.cs4hs.com/" target="_blank">CS4HS (Computer Science for High School)</a> grant program has connected more than 12,000 computer science (CS) teachers with skills and resources to teach CS in fun and relevant ways. An estimated 600,000 students have been impacted by the teachers who have completed CS4HS professional development workshops so far. Through annual grants, nearly 230 colleges and universities have hosted professional development workshops worldwide.<br /><br />Grantees use the funds to develop CS curriculum and professional development workshops tailored for local middle and high school teachers. These workshops expose teachers to CS curriculum using real-world applications that spark students&#8217; curiosity. As feedback from those teachers rolls in, we want to share some highlights from what we&#8217;ve learned so far.<br /><br /><b>What went well:</b><br /><ul><li>89% of participants reported they would recommend their workshop to others</li><li>44% more participants reported a &#8220;high&#8221; or &#8220;very high&#8221; knowledge of CS after their workshop vs. before</li><li>More than half of participants said they would use &#8220;most&#8221; or &#8220;all&#8221; of the activities or resources presented during their workshop.</li><li>In 2014 the number of teachers who took part in a CS4HS professional development workshop increased by 50%, primarily due to the funding of multiple <a href="http://www.cs4hs.com/current-programs/" target="_blank">MOOCs</a>.</li></ul><b><br />Ways to make a bigger impact:</b><br /><ul><li>Just 53% of participants said they felt a sense of community among the other workshop participants. Research by <a href="http://books.google.com/books?hl=en&#38;lr=&#38;id=MtBj2sJXHGwC&#38;oi=fnd&#38;pg=PR7&#38;dq=Joyce+%26+Showers,+2002&#38;ots=u-TMdtUI9p&#38;sig=JnqJFu-YUClR54KGl9t3aArxmxo#v=onepage&#38;q=Joyce%20&#38;f=false" target="_blank">Joyce &#38; Showers (2002) and  Wiske, Stone, &#38; Levinson (1993)</a> shows that peer-to-peer professional development, along with ongoing support, helps teachers implement new content, retain skills, and create lasting change. We&#8217;ll explore new ways to build community among participants as we plan future workshops.</li><li>83% of participants reported being Caucasian, which is consistent with the current <a href="http://outlier.uchicago.edu/computerscience/OS4CS/teacher-capacity/" target="_blank">demographics of CS educators</a>. This indicates a need to increase efforts in diversifying the <a href="http://googleforeducation.blogspot.com/2014/10/why-google-cares-about-computer-science.html" target="_blank">CS teacher population</a>.</li><li>Outcome measures show us that the most knowledge gains were among teachers who had no prior experience teaching CS or participating in CS professional development -- a population that made up just 30% of participants. While we see that the workshops are meeting a need, there remains an opportunity to develop materials geared toward more experienced CS teachers while also encouraging more new teachers to participate.</li></ul><br />We know there are many challenges to overcome to improve the state of CS teacher professional development. 
We look forward to sharing new ideas for working in partnership with the CS education community to help address those challenges, in particular by helping more teachers teach computer science.<br /><br /><table align="center" cellpadding="0" cellspacing="0"><tbody><tr><td><a href="http://1.bp.blogspot.com/-YE_NIrdmhkw/VEqE2EFcKhI/AAAAAAAAEeE/0wtUI_D7R90/s1600/image00.jpg"><img border="0" src="http://1.bp.blogspot.com/-YE_NIrdmhkw/VEqE2EFcKhI/AAAAAAAAEeE/0wtUI_D7R90/s1600/image00.jpg" height="188" width="640"></a></td></tr><tr><td><span></span><br /><div dir="ltr"><span><span>At the University of Sydney CS4HS workshop teachers are learning how to teach </span></span></div><span><span>Computer Science without a computer during a CS Unplugged activity.</span></span></td></tr></tbody></table><br />]]></description>
				<content:encoded><![CDATA[Posted by Karen Parker, Education Program Manager and Jason Ravitz, Education Evaluation Manager<br /><br /><i>(Cross-posted on the <a href="http://googleforeducation.blogspot.com/">Google for Education Blog</a>)</i><br /><br />Since 2009, Google’s <a href="http://www.cs4hs.com/" >CS4HS (Computer Science for High School)</a> grant program has connected more than 12,000 computer science (CS) teachers with skills and resources to teach CS in fun and relevant ways. An estimated 600,000 students have been impacted by the teachers who have completed CS4HS professional development workshops so far. Through annual grants, nearly 230 colleges and universities have hosted professional development workshops worldwide.<br /><br />Grantees use the funds to develop CS curriculum and professional development workshops tailored for local middle and high school teachers. These workshops expose teachers to CS curriculum using real-world applications that spark students’ curiosity. As feedback from those teachers rolls in, we want to share some highlights from what we’ve learned so far.<br /><br /><b>What went well:</b><br /><ul><li>89% of participants reported they would recommend their workshop to others</li><li>44% more participants reported a “high” or “very high” knowledge of CS after their workshop vs. before</li><li>More than half of participants said they would use “most” or “all” of the activities or resources presented during their workshop.</li><li>In 2014 the number of teachers who took part in a CS4HS professional development workshop increased by 50%, primarily due to the funding of multiple <a href="http://www.cs4hs.com/current-programs/" >MOOCs</a>.</li></ul><b><br />Ways to make a bigger impact:</b><br /><ul><li>Just 53% of participants said they felt a sense of community among the other workshop participants. Research by <a href="http://books.google.com/books?hl=en&amp;lr=&amp;id=MtBj2sJXHGwC&amp;oi=fnd&amp;pg=PR7&amp;dq=Joyce+%26+Showers,+2002&amp;ots=u-TMdtUI9p&amp;sig=JnqJFu-YUClR54KGl9t3aArxmxo#v=onepage&amp;q=Joyce%20&amp;f=false" >Joyce &amp; Showers (2002) and  Wiske, Stone, &amp; Levinson (1993)</a> shows that peer-to-peer professional development, along with ongoing support, helps teachers implement new content, retain skills, and create lasting change. We’ll explore new ways to build community among participants as we plan future workshops.</li><li>83% of participants reported being Caucasian, which is consistent with the current <a href="http://outlier.uchicago.edu/computerscience/OS4CS/teacher-capacity/" >demographics of CS educators</a>. This indicates a need to increase efforts in diversifying the <a href="http://googleforeducation.blogspot.com/2014/10/why-google-cares-about-computer-science.html" >CS teacher population</a>.</li><li>Outcome measures show us that the most knowledge gains were among teachers who had no prior experience teaching CS or participating in CS professional development -- a population that made up just 30% of participants. While we see that the workshops are meeting a need, there remains an opportunity to develop materials geared toward more experienced CS teachers while also encouraging more new teachers to participate.</li></ul><br />We know there are many challenges to overcome to improve the state of CS teacher professional development. 
We look forward to sharing new ideas for working in partnership with the CS education community to help address those challenges, in particular by helping more teachers teach computer science.<br /><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://1.bp.blogspot.com/-YE_NIrdmhkw/VEqE2EFcKhI/AAAAAAAAEeE/0wtUI_D7R90/s1600/image00.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="http://1.bp.blogspot.com/-YE_NIrdmhkw/VEqE2EFcKhI/AAAAAAAAEeE/0wtUI_D7R90/s1600/image00.jpg" height="188" width="640" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;"><span id="docs-internal-guid-e4b35186-4317-52a7-27f2-49a612fd09d9"></span><br /><div dir="ltr" style="line-height: 1.15; margin-bottom: 0pt; margin-top: 0pt;"><span id="docs-internal-guid-e4b35186-4317-52a7-27f2-49a612fd09d9"><span style="background-color: white; color: #434343; font-family: Arial; font-size: 11px; vertical-align: baseline; white-space: pre-wrap;">At the University of Sydney CS4HS workshop teachers are learning how to teach </span></span></div><span id="docs-internal-guid-e4b35186-4317-52a7-27f2-49a612fd09d9"><span style="background-color: white; color: #434343; font-family: Arial; font-size: 11px; vertical-align: baseline; white-space: pre-wrap;">Computer Science without a computer during a CS Unplugged activity.</span></span></td></tr></tbody></table><br />]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-research/helping-teachers-teach-computer-science/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
<enclosure url="" length="" type="" />
		</item>
		<item>
		<title>Smart Autofill &#8211; Harnessing the Predictive Power of Machine Learning in Google Sheets</title>
		<link>https://googledata.org/google-research/smart-autofill-harnessing-the-predictive-power-of-machine-learning-in-google-sheets/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=smart-autofill-harnessing-the-predictive-power-of-machine-learning-in-google-sheets</link>
		<comments>https://googledata.org/google-research/smart-autofill-harnessing-the-predictive-power-of-machine-learning-in-google-sheets/#comments</comments>
		<pubDate>Mon, 13 Oct 2014 16:00:00 +0000</pubDate>
		<dc:creator><![CDATA[Research @ Google]]></dc:creator>
				<category><![CDATA[Google Research]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=74a6de4798cff7ccb8228a6fde4a21cd</guid>
		<description><![CDATA[<span>Posted by Konstantin Davydov, Software Engineer and Afshin Rostamizadeh, Research Scientist</span><br /><br />Much of Google&#8217;s work on language, speech, translation, and visual processing relies on <a href="http://en.wikipedia.org/wiki/Machine_learning">machine learning</a>, where we construct and apply learning algorithms that make use of labeled data in order to make predictions for new data. What if you could leverage machine learning algorithms to learn patterns in your spreadsheet data, automatically build a model, and infer unknown values?<br /><br />You can now use machine learning to make predictions in Google Sheets with the newly launched <a href="https://chrome.google.com/webstore/detail/smart-autofill/jchhjkpfdfiekidiklgbmpgbjoeiokna?utm_source=permalink">Smart Autofill Add-on</a>. With a single click, Smart Autofill predicts the missing values of a partially filled column in your spreadsheet by using the data of other related columns. Smart Autofill uses the non-missing data to learn patterns and differs from the standard "Auto-fill" feature of Sheets, which attempts to fill in only simple patterns that it already knows (e.g. calendar dates, days of the week, ordered numbers).<br /><br />As an example, in the screenshots below, we give four very simple characteristics of used vehicles (year, number of miles, number of doors, and type: car or truck) as well as the price for some of the vehicles. Since the prices are probably correlated with the characteristics of the vehicle, we can use Smart Autofill to estimate what the missing prices should be. The rows that do contain a price will be used as examples to learn from in order to fill in the rows with a missing price.<br /><br />Smart Autofill uses Google's cloud-based machine learning service&#160;<a href="https://cloud.google.com/prediction/">Prediction API</a>, which trains several linear as well as non-linear classification and regression models. The best model is automatically chosen for your problem by finding the one with the smallest misclassification error (for categorical data) or root-mean-squared error (for numeric data) calculated by using cross-validation on the labeled (non-empty) set of examples.<br /><br />To use Smart Autofill, after following the installation procedure, simply select "Add-ons &#62; Smart Autofill &#62; Start" which will open a sidebar. Select a block of data that includes the column to Autofill and click "Next". Finally, from the selected data, choose a target column to Autofill and click "Start" (Figure 1). 
Now just sit back as Smart Autofill does its work and fills in the missing values (Figure 2).<br /><table align="center" cellpadding="0" cellspacing="0"><tbody><tr><td><a href="http://3.bp.blogspot.com/-zXhmh0rkPT4/VDs9TWrjvAI/AAAAAAAAAYY/BWhT_55KC4A/s1600/image01.jpg"><img border="0" src="http://3.bp.blogspot.com/-zXhmh0rkPT4/VDs9TWrjvAI/AAAAAAAAAYY/BWhT_55KC4A/s1600/image01.jpg" height="400" width="640"></a></td></tr><tr><td><span><span><i>Figure 1: Highlighting the dataset and selecting the target column.</i></span></span></td></tr></tbody></table><table align="center" cellpadding="0" cellspacing="0"><tbody><tr><td><a href="http://2.bp.blogspot.com/-70_3Mza9YU4/VDs9yeHY69I/AAAAAAAAAYg/9jKas2TAMmM/s1600/image00.jpg"><img border="0" src="http://2.bp.blogspot.com/-70_3Mza9YU4/VDs9yeHY69I/AAAAAAAAAYg/9jKas2TAMmM/s1600/image00.jpg" height="400" width="640"></a></td></tr><tr><td><span><span><i>Figure 2: After clicking "Start", a model is trained and applied to automatically fill in the missing values of the target column. Note that the estimated error of the model is reported in the sidebar.</i></span></span></td></tr></tbody></table>An estimate of the error-rate of the model (based on the non-missing data) is shown in the sidebar after the missing values are filled. The accuracy of Smart Autofill (as well as the accuracy of the estimated error) depends on many factors, including the amount and quality of the data provided. While not all datasets will be ideally suited for machine learning, we hope our <a href="https://cloud.google.com/prediction/docs/smart_autofill_add_on">more in-depth tutorial</a> will provide an idea of the range of problems where Smart Autofill can be effective.<br /><br />While the vehicle pricing example is relatively simple (in reality used vehicle prices are a function of more than just four variables), more complex datasets could have many more non-target columns as well as data rows. Also, the target column does not need to be numeric, since Smart Autofill can also predict categorical values (i.e. in the car example the target column value could have contained the categories "expensive", "moderate", "affordable" instead of price). Other illustrative scenarios include:<br /><br /><ul><li>You have a spreadsheet that holds the results of a customer survey, but one of the columns (e.g. "overall satisfaction 1-5") has some missing values. If the other columns of the survey can help indicate overall satisfaction then you can try using Smart Autofill to estimate the missing values.</li><li>You keep a spreadsheet of restaurants that you've visited and their characteristics (type: Italian, ambiance: quiet, cost: $$$, etc.) and whether you enjoyed the restaurant or not. Now you can add the characteristics of new restaurants to your spreadsheet and use Smart Autofill to guess at which ones you might enjoy.</li></ul><br />The example dataset and more detailed tutorial for the add-on can be found <a href="https://cloud.google.com/prediction/docs/smart_autofill_add_on">here</a>. We hope you discover new and useful ways to incorporate the predictive power of machine learning with your data.]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by Konstantin Davydov, Software Engineer and Afshin Rostamizadeh, Research Scientist</span><br /><br />Much of Google’s work on language, speech, translation, and visual processing relies on <a href="http://en.wikipedia.org/wiki/Machine_learning">machine learning</a>, where we construct and apply learning algorithms that make use of labeled data in order to make predictions for new data. What if you could leverage machine learning algorithms to learn patterns in your spreadsheet data, automatically build a model, and infer unknown values?<br /><br />You can now use machine learning to make predictions in Google Sheets with the newly launched <a href="https://chrome.google.com/webstore/detail/smart-autofill/jchhjkpfdfiekidiklgbmpgbjoeiokna?utm_source=permalink">Smart Autofill Add-on</a>. With a single click, Smart Autofill predicts the missing values of a partially filled column in your spreadsheet by using the data of other related columns. Smart Autofill uses the non-missing data to learn patterns and differs from the standard "Auto-fill" feature of Sheets, which attempts to fill in only simple patterns that it already knows (e.g. calendar dates, days of the week, ordered numbers).<br /><br />As an example, in the screenshots below, we give four very simple characteristics of used vehicles (year, number of miles, number of doors, and type: car or truck) as well as the price for some of the vehicles. Since the prices are probably correlated with the characteristics of the vehicle, we can use Smart Autofill to estimate what the missing prices should be. The rows that do contain a price will be used as examples to learn from in order to fill in the rows with a missing price.<br /><br />Smart Autofill uses Google's cloud-based machine learning service&nbsp;<a href="https://cloud.google.com/prediction/">Prediction API</a>, which trains several linear as well as non-linear classification and regression models. The best model is automatically chosen for your problem by finding the one with the smallest misclassification error (for categorical data) or root-mean-squared error (for numeric data) calculated by using cross-validation on the labeled (non-empty) set of examples.<br /><br />To use Smart Autofill, after following the installation procedure, simply select "Add-ons &gt; Smart Autofill &gt; Start" which will open a sidebar. Select a block of data that includes the column to Autofill and click "Next". Finally, from the selected data, choose a target column to Autofill and click "Start" (Figure 1). 
Now just sit back as Smart Autofill does its work and fills in the missing values (Figure 2).<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://3.bp.blogspot.com/-zXhmh0rkPT4/VDs9TWrjvAI/AAAAAAAAAYY/BWhT_55KC4A/s1600/image01.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="http://3.bp.blogspot.com/-zXhmh0rkPT4/VDs9TWrjvAI/AAAAAAAAAYY/BWhT_55KC4A/s1600/image01.jpg" height="400" width="640" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;"><span id="docs-internal-guid-1bade993-0767-e2be-692d-781819466bf2"><span style="font-family: Arial; font-size: 12px; vertical-align: baseline; white-space: pre-wrap;"><i>Figure 1: Highlighting the dataset and selecting the target column.</i></span></span></td></tr></tbody></table><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://2.bp.blogspot.com/-70_3Mza9YU4/VDs9yeHY69I/AAAAAAAAAYg/9jKas2TAMmM/s1600/image00.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="http://2.bp.blogspot.com/-70_3Mza9YU4/VDs9yeHY69I/AAAAAAAAAYg/9jKas2TAMmM/s1600/image00.jpg" height="400" width="640" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;"><span id="docs-internal-guid-1bade993-0769-ebdd-7d07-6449236185d6"><span style="font-family: Arial; font-size: 12px; vertical-align: baseline; white-space: pre-wrap;"><i>Figure 2: After clicking "Start", a model is trained and applied to automatically fill in the missing values of the target column. Note that the estimated error of the model is reported in the sidebar.</i></span></span></td></tr></tbody></table>An estimate of the error-rate of the model (based on the non-missing data) is shown in the sidebar after the missing values are filled. The accuracy of Smart Autofill (as well as the accuracy of the estimated error) depends on many factors, including the amount and quality of the data provided. While not all datasets will be ideally suited for machine learning, we hope our <a href="https://cloud.google.com/prediction/docs/smart_autofill_add_on">more in-depth tutorial</a> will provide an idea of the range of problems where Smart Autofill can be effective.<br /><br />While the vehicle pricing example is relatively simple (in reality used vehicle prices are a function of more than just four variables), more complex datasets could have many more non-target columns as well as data rows. Also, the target column does not need to be numeric, since Smart Autofill can also predict categorical values (i.e. in the car example the target column value could have contained the categories "expensive", "moderate", "affordable" instead of price). Other illustrative scenarios include:<br /><br /><ul><li>You have a spreadsheet that holds the results of a customer survey, but one of the columns (e.g. "overall satisfaction 1-5") has some missing values. If the other columns of the survey can help indicate overall satisfaction then you can try using Smart Autofill to estimate the missing values.</li><li>You keep a spreadsheet of restaurants that you've visited and their characteristics (type: Italian, ambiance: quiet, cost: $$$, etc.) and whether you enjoyed the restaurant or not. 
Now you can add the characteristics of new restaurants to your spreadsheet and use Smart Autofill to guess at which ones you might enjoy.</li></ul><br />The example dataset and more detailed tutorial for the add-on can be found <a href="https://cloud.google.com/prediction/docs/smart_autofill_add_on">here</a>. We hope you discover new and useful ways to incorporate the predictive power of machine learning with your data.]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-research/smart-autofill-harnessing-the-predictive-power-of-machine-learning-in-google-sheets/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
<enclosure url="" length="" type="" />
		</item>
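	<!--
	The Smart Autofill post above describes the selection rule used: train several linear
	and non-linear models on the labeled rows and keep the one with the smallest
	cross-validated error (root-mean-squared error for numeric targets, misclassification
	rate for categorical ones). Here is a minimal sketch of that rule with scikit-learn
	models as stand-ins; the Prediction API's actual model set is not public, so the two
	candidate estimators, cv=5, and the smart_autofill helper name are illustrative
	assumptions. It expects a numeric feature matrix X and a target vector y in which
	NaN marks the rows to fill, with at least a handful of labeled rows.

	import numpy as np
	from sklearn.ensemble import RandomForestRegressor
	from sklearn.linear_model import LinearRegression
	from sklearn.model_selection import cross_val_score

	def smart_autofill(X, y):
	    known = ~np.isnan(y)  # the labeled (non-empty) examples
	    candidates = [LinearRegression(),                      # linear model
	                  RandomForestRegressor(random_state=0)]   # non-linear model

	    def cv_rmse(model):
	        # Cross-validated root-mean-squared error on the labeled rows only.
	        mse = -cross_val_score(model, X[known], y[known],
	                               scoring="neg_mean_squared_error", cv=5).mean()
	        return np.sqrt(mse)

	    best = min(candidates, key=cv_rmse)  # smallest estimated error wins
	    best.fit(X[known], y[known])
	    filled = y.copy()
	    filled[~known] = best.predict(X[~known])  # fill in the missing values
	    return filled

	For a categorical target, the same shape applies with classifiers and
	scoring="accuracy", picking the model with the fewest misclassifications.
	-->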
		<item>
		<title>All the News that&#8217;s Fit to Read: A Study of Social Annotations for News Reading</title>
		<link>https://googledata.org/google-research/all-the-news-thats-fit-to-read-a-study-of-social-annotations-for-news-reading/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=all-the-news-thats-fit-to-read-a-study-of-social-annotations-for-news-reading</link>
		<comments>https://googledata.org/google-research/all-the-news-thats-fit-to-read-a-study-of-social-annotations-for-news-reading/#comments</comments>
		<pubDate>Wed, 08 Oct 2014 16:00:00 +0000</pubDate>
		<dc:creator><![CDATA[Research @ Google]]></dc:creator>
				<category><![CDATA[Google Research]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=fdc1812ac819843cb0ad89edc25f8d97</guid>
		<description><![CDATA[<span>Posted by Chinmay Kulkarni, Stanford University Ph.D. candidate and former Google Intern, and Ed H. Chi, Google Research Scientist</span><br /><br />News is one of the most important parts of our collective information diet, and like any other activity on the Web, online news reading is fast becoming a social experience. Internet users today see recommendations for news from a variety of sources; newspaper websites allow readers to recommend news articles to each other, restaurant review sites present other diners&#8217; recommendations, and now several social networks have integrated social news readers.<br /><br />With news article recommendations and endorsements coming from a combination of computers and algorithms, companies that publish and aggregate content, friends and even complete strangers, how do these explanations (i.e. why the articles are shown to you, which we call &#8220;annotations&#8221;) affect users' selections of what to read? Given the ubiquity of online social annotations in news dissemination, it is surprising how little is known about how users respond to these annotations, and how to offer them to users productively.<br /><br />In <a href="http://research.google.com/pubs/pub41200.html">All the News that&#8217;s Fit to Read: A Study of Social Annotations for News Reading</a>, presented at the <a href="http://chi2013.acm.org/">2013 ACM SIGCHI Conference on Human Factors in Computing Systems</a> and highlighted in the list of <a href="http://googleresearch.blogspot.com/2014/06/influential-papers-for-2013.html">influential Google papers from 2013</a>, we reported on results from two experiments with voluntary participants that suggest that social annotations, which have so far been considered a generic, simple method to increase user engagement, are not simple at all; social annotations vary significantly in their degree of persuasiveness, and their ability to change user engagement.<br /><table align="center" cellpadding="0" cellspacing="0"><tbody><tr><td><a href="http://2.bp.blogspot.com/-7Ooa5-CawNQ/VDQruV4DtaI/AAAAAAAAAYE/HaVEYv_hdE4/s1600/image00.png"><img border="0" src="http://2.bp.blogspot.com/-7Ooa5-CawNQ/VDQruV4DtaI/AAAAAAAAAYE/HaVEYv_hdE4/s1600/image00.png" height="148" width="640"></a></td></tr><tr><td><i>News articles in different annotation conditions</i></td></tr></tbody></table>The first experiment looked at how people use annotations when the content they see is not personalized, and the annotations are not from people in their social network, as is the case when a user is not signed into a particular social network. Participants who signed up for the study were suggested the same set of news articles via annotations from strangers, a computer agent, and a fictional branded company. Additionally, they were told whether or not other participants in the experiment would see their name displayed next to articles they read (i.e. &#8220;Recorded&#8221; or &#8220;Not Recorded&#8221;).<br /><br />Surprisingly, annotations by unknown companies and computers were significantly more persuasive than those by strangers in this &#8220;signed-out&#8221; context. This result implies the potential power of suggestion offered by annotations, even when they&#8217;re conferred by brands or recommendation algorithms previously unknown to the users, and that annotations by computers and companies may be valuable in a signed-out context. 
Furthermore, the experiment showed that with &#8220;recording&#8221; on, the overall number of articles clicked decreased compared to participants <i>without</i> &#8220;recording,&#8221; regardless of the type of annotation, suggesting that subjects were cognizant of how they appear to other users in social reading apps.<br /><br />If annotations by strangers are not as persuasive as those by computers or brands, as the first experiment showed, what about the effects of <i>friend</i> annotations? The second experiment examined the signed-in experience (with Googlers as subjects) and how they reacted to social annotations from friends, investigating whether personalized endorsements help people discover and select what might be more interesting content. <br /><br />Perhaps not entirely surprisingly, results showed that friend annotations are persuasive and improve user satisfaction with news article selections. What&#8217;s interesting is that, in post-experiment interviews, we found that annotations influenced whether participants read articles primarily in three cases: first, when the annotator was above a threshold of social closeness; second, when the annotator had subject expertise related to the news article; and third, when the annotation provided additional context to the recommended article. This suggests that social context and personalized annotation work together to improve user experience overall. <br /><br />Some questions for future research include whether or not highlighting expertise in annotations helps, if the threshold for social proximity can be algorithmically determined, and if aggregating annotations (e.g. &#8220;110 people liked this&#8221;) helps increase engagement. We look forward to further research that enables social recommenders to offer appropriate explanations for why users should pay attention, and reveals more nuances based on the presentation of annotations.]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by Chinmay Kulkarni, Stanford University Ph.D candidate and former Google Intern, and Ed H. Chi, Google Research Scientist</span><br /><br />News is one of the most important parts of our collective information diet, and like any other activity on the Web, online news reading is fast becoming a social experience. Internet users today see recommendations for news from a variety of sources; newspaper websites allow readers to recommend news articles to each other, restaurant review sites present other diners’ recommendations, and now several social networks have integrated social news readers.<br /><br />With news article recommendations and endorsements coming from a combination of computers and algorithms, companies that publish and aggregate content, friends and even complete strangers, how do these explanations (i.e. why the articles are shown to you, which we call “annotations”) affect users' selections of what to read? Given the ubiquity of online social annotations in news dissemination, it is surprising how little is known about how users respond to these annotations, and how to offer them to users productively.<br /><br />In <a href="http://research.google.com/pubs/pub41200.html">All the News that’s Fit to Read: A Study of Social Annotations for News Reading</a>, presented at the <a href="http://chi2013.acm.org/">2013 ACM SIGCHI Conference on Human Factors in Computing Systems</a> and highlighted in the list of <a href="http://googleresearch.blogspot.com/2014/06/influential-papers-for-2013.html">influential Google papers from 2013</a>, we reported on results from two experiments with voluntary participants that suggest that social annotations, which have so far been considered as a generic simple method to increase user engagement, are not simple at all; social annotations vary significantly in their degree of persuasiveness, and their ability to change user engagement.<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://2.bp.blogspot.com/-7Ooa5-CawNQ/VDQruV4DtaI/AAAAAAAAAYE/HaVEYv_hdE4/s1600/image00.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="http://2.bp.blogspot.com/-7Ooa5-CawNQ/VDQruV4DtaI/AAAAAAAAAYE/HaVEYv_hdE4/s1600/image00.png" height="148" width="640" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;"><i>News articles in different annotation conditions</i></td></tr></tbody></table>The first experiment looked at how people use annotations when the content they see is not personalized, and the annotations are not from people in their social network, as is the case when a user is not signed into a particular social network. Participants who signed up for the study were suggested the same set of news articles via annotations from strangers, a computer agent, and a fictional branded company. Additionally, they were told whether or not other participants in the experiment would see their name displayed next to articles they read (i.e. “Recorded” or “Not Recorded”).<br /><br />Surprisingly, annotations by unknown companies and computers were significantly more persuasive than those by strangers in this “signed-out” context. 
This result implies the potential power of suggestion offered by annotations, even when they’re conferred by brands or recommendation algorithms previously unknown to the users, and that annotations by computers and companies may be valuable in a signed-out context. Furthermore, the experiment showed that with “recording” on, the overall number of articles clicked decreased compared to participants <i>without</i> “recording,” regardless of the type of annotation, suggesting that subjects were cognizant of how they appear to other users in social reading apps.<br /><br />If annotations by strangers are not as persuasive as those by computers or brands, as the first experiment showed, what about the effects of <i>friend</i> annotations? The second experiment examined the signed-in experience (with Googlers as subjects) and how they reacted to social annotations from friends, investigating whether personalized endorsements help people discover and select what might be more interesting content. <br /><br />Perhaps not entirely surprisingly, results showed that friend annotations are persuasive and improve user satisfaction with news article selections. What’s interesting is that, in post-experiment interviews, we found that annotations influenced whether participants read articles primarily in three cases: first, when the annotator was above a threshold of social closeness; second, when the annotator had subject expertise related to the news article; and third, when the annotation provided additional context to the recommended article. This suggests that social context and personalized annotation work together to improve user experience overall. <br /><br />Some questions for future research include whether or not highlighting expertise in annotations helps, if the threshold for social proximity can be algorithmically determined, and if aggregating annotations (e.g. “110 people liked this”) helps increase engagement. We look forward to further research that enables social recommenders to offer appropriate explanations for why users should pay attention, and reveals more nuances based on the presentation of annotations.]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-research/all-the-news-thats-fit-to-read-a-study-of-social-annotations-for-news-reading/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
<enclosure url="" length="" type="" />
		</item>
		<item>
		<title>Announcing the Google CS Engagement Small Awards Program</title>
		<link>https://googledata.org/google-research/announcing-the-google-cs-engagement-small-awards-program/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=announcing-the-google-cs-engagement-small-awards-program</link>
		<comments>https://googledata.org/google-research/announcing-the-google-cs-engagement-small-awards-program/#comments</comments>
		<pubDate>Mon, 06 Oct 2014 16:00:00 +0000</pubDate>
		<dc:creator><![CDATA[Research @ Google]]></dc:creator>
				<category><![CDATA[Google Research]]></category>
		<category><![CDATA[education]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=5f18a296cc6b1067da01a8802b8e46d2</guid>
		<description><![CDATA[<span>Posted by Leslie Yeh Johnson, University Relations</span><br /><br />(cross-posted on the <a href="http://googleforeducation.blogspot.com/">Google for Education blog</a>)<br /><br />College students are more interested than ever in studying computer science. There has been an unprecedented increase in enrollment in Computer Science undergraduate programs over the past six years. Harvard University&#8217;s popular introductory CS course CS50 has recently claimed the spot as the <a href="http://www.thecrimson.com/article/2014/9/11/cs50-breaks-enrollment-records/?page=single">most enrolled course on campus</a>. An astounding 50% of <a href="https://www.hmc.edu/about-hmc/2014/05/20/harvey-mudd-graduates-landmark-class/">Harvey Mudd&#8217;s graduates</a> received engineering degrees this year. However, while the overall number of students in introductory computer science courses continues to climb, the number of students who go on to complete undergraduate degrees in this field, particularly among women and under-represented minorities, does not match this increase in individual course enrollment (<a href="http://cra.org/uploads/documents/resources/crndocs/2013-Taulbee-Survey.pdf">2013 Taulbee Survey</a>).<br /><br /><a href="http://cra.org/uploads/documents/resources/crndocs/2013-Taulbee-Survey.pdf">Recent findings</a> show that while students may begin a CS degree program, retaining students after their first year remains an issue. <a href="http://www.ncwit.org/sites/default/files/resources/keypracticesretainingundergraduatescomputing_final.pdf">Research indicates</a> that one of the strongest factors in the retention of students in undergraduate CS degrees is early exposure to engaging courses and course material, such as high-quality assignments that are meaningful and relevant to the student&#8217;s life or classroom activities that encourage student-to-student interaction. When an <a href="http://www.sfgate.com/education/article/Revamped-computer-science-classes-attracting-more-5243026.php">instructor</a> or <a href="http://www.usnews.com/news/stem-solutions/articles/2014/05/05/getting-girls-to-study-stem-its-about-more-than-just-making-science-cool">department</a> embeds these practices into the introductory CS classroom, students remain excited about CS and are more likely to complete their undergraduate CS degree.<br /><br />At Google we believe in the importance of preparing the next generation of computer scientists. To this end, we&#8217;ve created the <a href="http://services.google.com/fh/files/blogs/csengagementsmallgrants.pdf">CS Engagement Small Grants Program</a> to support educators teaching introductory computer science courses in reaching their engagement and retention goals. We&#8217;ll give unrestricted gifts of $5,000 to the selected applicants&#8217; universities, towards the execution of engaging CS1 or CS2 courses in the 2014-2015 school year. We encourage educators who are teaching CS1 and CS2 courses at the post-secondary level to apply to the Google CS Engagement Small Grants Program. Applications will be accepted through <b>November 15, 2014</b> and will be evaluated on an ongoing basis. If you&#8217;re interested in applying, please check out the <a href="http://services.google.com/fh/files/blogs/csengagementsmallgrants.pdf">Call for Proposals</a>.]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by Leslie Yeh Johnson, University Relations</span><br /><br />(cross-posted on the <a href="http://googleforeducation.blogspot.com/">Google for Education blog</a>)<br /><br />College students are more interested than ever in studying computer science. There has been an unprecedented increase in enrollment in Computer Science undergraduate programs over the past six years. Harvard University’s popular introductory CS course CS50 has recently claimed the spot as the <a href="http://www.thecrimson.com/article/2014/9/11/cs50-breaks-enrollment-records/?page=single">most enrolled course on campus</a>. An astounding 50% of <a href="https://www.hmc.edu/about-hmc/2014/05/20/harvey-mudd-graduates-landmark-class/">Harvey Mudd’s graduates</a> received engineering degrees this year. However, while the overall number of students in introductory computer science courses continue to climb, the number of students who go on to complete undergraduate degrees in this field, particularly among women and under-represented minorities, does not match this increase in individual course enrollment (<a href="http://cra.org/uploads/documents/resources/crndocs/2013-Taulbee-Survey.pdf">2013 Taulbee Survey</a>).<br /><br /><a href="http://cra.org/uploads/documents/resources/crndocs/2013-Taulbee-Survey.pdf">Recent findings</a> show that while students may begin a CS degree program, retaining students after their first year remains an issue. <a href="http://www.ncwit.org/sites/default/files/resources/keypracticesretainingundergraduatescomputing_final.pdf">Research indicates</a> that one of the strongest factors in the retention of students in undergraduate CS degrees is early exposure to engaging courses and course material, such as high quality assignments that are meaningful and relevant to the student’s life or classroom activities that encourage student-to-student interaction. When an <a href="http://www.sfgate.com/education/article/Revamped-computer-science-classes-attracting-more-5243026.php">instructor</a> or <a href="http://www.usnews.com/news/stem-solutions/articles/2014/05/05/getting-girls-to-study-stem-its-about-more-than-just-making-science-cool">department</a> imbeds these practices into the introductory CS classroom, students remain excited about CS and are more likely to complete their undergraduate CS degree.<br /><br />At Google we believe in the importance of preparing the next generation of computer scientists. To this end, we’ve created the <a href="http://services.google.com/fh/files/blogs/csengagementsmallgrants.pdf">CS Engagement Small Grants Program</a> to support educators teaching introductory computer science courses in reaching their engagement and retention goals. We’ll give unrestricted gifts of $5,000 to the selected applicants’ universities, towards the execution of engaging CS1 or CS2 courses in the 2014-2015 school year. We encourage educators who are teaching CS1 and CS2 courses at the post-secondary level to apply to the Google CS Engagement Small Grants Program. Applications will be accepted through <b>November 15, 2014</b> and will be evaluated on an ongoing basis. If you’re interested in applying, please check out the <a href="http://services.google.com/fh/files/blogs/csengagementsmallgrants.pdf">Call for Proposals</a>.]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-research/announcing-the-google-cs-engagement-small-awards-program/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
<enclosure url="" length="" type="" />
		</item>
		<item>
		<title>Sudoku, Linear Optimization, and the Ten Cent Diet</title>
		<link>https://googledata.org/google-research/sudoku-linear-optimization-and-the-ten-cent-diet/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=sudoku-linear-optimization-and-the-ten-cent-diet</link>
		<comments>https://googledata.org/google-research/sudoku-linear-optimization-and-the-ten-cent-diet/#comments</comments>
		<pubDate>Tue, 30 Sep 2014 16:00:00 +0000</pubDate>
		<dc:creator><![CDATA[Research @ Google]]></dc:creator>
				<category><![CDATA[Google Research]]></category>
		<category><![CDATA[open source]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=b58e7be6df9f172611d574ac935fb93b</guid>
		<description><![CDATA[<span>Posted by Jon Orwant, Engineering Manager</span><br /><br />(<i>cross-posted on the <a href="http://googleappsdeveloper.blogspot.com/">Google Apps Developer blog</a>, and the <a href="http://googledevelopers.blogspot.com/">Google Developers blog</a></i>)<br /><br />In 1945, future Nobel laureate <a href="http://en.wikipedia.org/wiki/George_Stigler">George Stigler</a> wrote an essay in the Journal of Farm Economics titled <i>The Cost of Subsistence</i> about a seemingly simple problem: how could a soldier be fed for as little money as possible?<br /><br />The &#8220;Stigler Diet&#8221; became a classic problem in the then-new field of <a href="http://en.wikipedia.org/wiki/Linear_programming">linear optimization</a>, which is used today in many areas of science and engineering. Any time you have a set of linear constraints such as &#8220;at least 50 square meters of solar panels&#8221; or &#8220;the amount of paint should equal the amount of primer&#8221; along with a linear goal (e.g., &#8220;minimize cost&#8221; or &#8220;maximize customers served&#8221;), that&#8217;s a linear optimization problem.<br /><br />At Google, our engineers work on plenty of optimization problems. One example is our <a href="http://googleresearch.blogspot.com/2012/05/video-stabilization-on-youtube.html">YouTube video stabilization system</a>, which uses linear optimization to eliminate the shakiness of handheld cameras. A more lighthearted example is in the <a href="https://chrome.google.com/webstore/detail/sudoku-sheets/eagolleeideiojopioiiaadjkneafmen?hl=en">Google&#160;Docs Sudoku add-on</a>, which instantaneously generates and solves Sudoku puzzles inside a Google Sheet, using the <a href="http://scip.zib.de/">SCIP</a> mixed integer programming solver to compute the solution.<br /><div><a href="http://1.bp.blogspot.com/-ENvRyZNNeig/VCMqYoTRPFI/AAAAAAAAAXU/Cvhk4RXnE6M/s1600/image02.png"><img border="0" src="http://1.bp.blogspot.com/-ENvRyZNNeig/VCMqYoTRPFI/AAAAAAAAAXU/Cvhk4RXnE6M/s1600/image02.png" height="382" width="640"></a></div>Today we&#8217;re proud to announce two new ways for everyone to solve linear optimization problems. First, you can now solve linear optimization problems in Google Sheets with the <a href="https://chrome.google.com/webstore/detail/linear-optimization/goadmgmjlkioggkbpbjlakbjjmlhdpen?utm_source=permalink">Linear Optimization add-on</a> written by Google Software Engineer Mihai Amarandei-Stavila. The add-on uses Google Apps Script to send optimization problems to Google servers. The solutions are displayed inside the spreadsheet. For developers who want to create their own applications on top of Google Apps, we also provide an <a href="https://developers.google.com/apps-script/reference/optimization/">API</a> to let you call our linear solver directly.<br /><div><a href="http://2.bp.blogspot.com/-hWVsND7a69w/VCMqiIe7CgI/AAAAAAAAAXc/c4yZu1qIHf8/s1600/image00.png"><img border="0" src="http://2.bp.blogspot.com/-hWVsND7a69w/VCMqiIe7CgI/AAAAAAAAAXc/c4yZu1qIHf8/s1600/image00.png" height="340" width="640"></a></div>Second, we&#8217;re open-sourcing the linear solver underlying the add-on: Glop (the Google Linear Optimization Package), created by <a href="https://plus.google.com/u/0/112941999316165083444/posts?e=-RedirectToSandbox">Bruno de Backer</a> with other members of the Google Optimization team. 
It&#8217;s available as part of the <a href="http://code.google.com/p/or-tools/">or-tools suite</a> and we provide a <a href="http://developers.google.com/optimization/docs/lp">few examples</a> to get you started. On that page, you&#8217;ll find the Glop solution to the Stigler diet problem. (A Google Sheets file that uses Glop and the Linear Optimization add-on to solve the Stigler diet problem is available <a href="https://docs.google.com/spreadsheets/d/1XWJLkAwch5GXAt_7zOFDcg8Wm8Xv29_8PWuuW15qmAE/edit?usp=sharing">here</a>. You&#8217;ll need to <a href="https://chrome.google.com/webstore/detail/linear-optimization/goadmgmjlkioggkbpbjlakbjjmlhdpen?utm_source=permalink">install the add-on first</a>.)<br /><br />Stigler posed his problem as follows: given nine nutrients (calories, protein, Vitamin C, and so on) and 77 candidate foods, find the foods that could sustain soldiers at minimum cost. <br /><br />The <a href="http://en.wikipedia.org/wiki/Simplex_algorithm">Simplex algorithm</a> for linear optimization was two years away from being invented, so Stigler had to do his best, arriving at a diet that cost $39.93 per year (in 1939 dollars), or just over ten cents per day. Even that wasn&#8217;t the cheapest diet. In 1947, Jack Laderman used Simplex, nine calculator-wielding clerks, and 120 person-days to arrive at the optimal solution.<br /><br />Glop&#8217;s Simplex implementation solves the problem in 300 milliseconds. Unfortunately, Stigler didn&#8217;t include taste as a constraint, and so the poor hypothetical soldiers will eat nothing but the following, ever:<br /><br /><ul><li>Enriched wheat flour</li><li>Liver</li><li>Cabbage</li><li>Spinach</li><li>Navy beans</li></ul><br />Is it possible to create an appealing dish out of these five ingredients? Google Chef Anthony Marco took it as a challenge, and we&#8217;re calling the result <i>Foie Lin&#233;aire &#224; la Stigler</i>:<br /><div><a href="http://1.bp.blogspot.com/-MghFagwzwmY/VCMquFVqaAI/AAAAAAAAAXk/J0PDXyYtW9I/s1600/image01.jpg"><img border="0" src="http://1.bp.blogspot.com/-MghFagwzwmY/VCMquFVqaAI/AAAAAAAAAXk/J0PDXyYtW9I/s1600/image01.jpg" height="426" width="640"></a></div>This optimal meal consists of seared calf liver dredged in flour, atop a navy bean pur&#233;e with marinated cabbage and a spinach pesto.<br /><br />Chef Marco reported that the most difficult constraint was making the dish tasty without butter or cream. That said, I had the opportunity to taste our linear optimization solution, and it was delicious.]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by Jon Orwant, Engineering Manager</span><br /><br />(<i>cross-posted on the <a href="http://googleappsdeveloper.blogspot.com/">Google Apps Developer blog</a>, and the <a href="http://googledevelopers.blogspot.com/">Google Developers blog</a></i>)<br /><br />In 1945, future Nobel laureate <a href="http://en.wikipedia.org/wiki/George_Stigler">George Stigler</a> wrote an essay in the Journal of Farm Economics titled <i>The Cost of Subsistence</i> about a seemingly simple problem: how could a soldier be fed for as little money as possible?<br /><br />The “Stigler Diet” became a classic problem in the then-new field of <a href="http://en.wikipedia.org/wiki/Linear_programming">linear optimization</a>, which is used today in many areas of science and engineering. Any time you have a set of linear constraints such as “at least 50 square meters of solar panels” or “the amount of paint should equal the amount of primer” along with a linear goal (e.g., “minimize cost” or “maximize customers served”), that’s a linear optimization problem.<br /><br />At Google, our engineers work on plenty of optimization problems. One example is our <a href="http://googleresearch.blogspot.com/2012/05/video-stabilization-on-youtube.html">YouTube video stabilization system</a>, which uses linear optimization to eliminate the shakiness of handheld cameras. A more lighthearted example is in the <a href="https://chrome.google.com/webstore/detail/sudoku-sheets/eagolleeideiojopioiiaadjkneafmen?hl=en">Google&nbsp;Docs Sudoku add-on</a>, which instantaneously generates and solves Sudoku puzzles inside a Google Sheet, using the <a href="http://scip.zib.de/">SCIP</a> mixed integer programming solver to compute the solution.<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-ENvRyZNNeig/VCMqYoTRPFI/AAAAAAAAAXU/Cvhk4RXnE6M/s1600/image02.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-ENvRyZNNeig/VCMqYoTRPFI/AAAAAAAAAXU/Cvhk4RXnE6M/s1600/image02.png" height="382" width="640" /></a></div>Today we’re proud to announce two new ways for everyone to solve linear optimization problems. First, you can now solve linear optimization problems in Google Sheets with the <a href="https://chrome.google.com/webstore/detail/linear-optimization/goadmgmjlkioggkbpbjlakbjjmlhdpen?utm_source=permalink">Linear Optimization add-on</a> written by Google Software Engineer Mihai Amarandei-Stavila. The add-on uses Google Apps Script to send optimization problems to Google servers. The solutions are displayed inside the spreadsheet. 
For developers who want to create their own applications on top of Google Apps, we also provide an <a href="https://developers.google.com/apps-script/reference/optimization/">API</a> to let you call our linear solver directly.<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-hWVsND7a69w/VCMqiIe7CgI/AAAAAAAAAXc/c4yZu1qIHf8/s1600/image00.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-hWVsND7a69w/VCMqiIe7CgI/AAAAAAAAAXc/c4yZu1qIHf8/s1600/image00.png" height="340" width="640" /></a></div>Second, we’re open-sourcing the linear solver underlying the add-on: Glop (the Google Linear Optimization Package), created by <a href="https://plus.google.com/u/0/112941999316165083444/posts?e=-RedirectToSandbox">Bruno de Backer</a> with other members of the Google Optimization team. It’s available as part of the <a href="http://code.google.com/p/or-tools/">or-tools suite</a> and we provide a <a href="http://developers.google.com/optimization/docs/lp">few examples</a> to get you started. On that page, you’ll find the Glop solution to the Stigler diet problem. (A Google Sheets file that uses Glop and the Linear Optimization add-on to solve the Stigler diet problem is available <a href="https://docs.google.com/spreadsheets/d/1XWJLkAwch5GXAt_7zOFDcg8Wm8Xv29_8PWuuW15qmAE/edit?usp=sharing">here</a>. You’ll need to <a href="https://chrome.google.com/webstore/detail/linear-optimization/goadmgmjlkioggkbpbjlakbjjmlhdpen?utm_source=permalink">install the add-on first</a>.)<br /><br />Stigler posed his problem as follows: given nine nutrients (calories, protein, Vitamin C, and so on) and 77 candidate foods, find the foods that could sustain soldiers at minimum cost. <br /><br />The <a href="http://en.wikipedia.org/wiki/Simplex_algorithm">Simplex algorithm</a> for linear optimization was two years away from being invented, so Stigler had to do his best, arriving at a diet that cost $39.93 per year (in 1939 dollars), or just over ten cents per day. Even that wasn’t the cheapest diet. In 1947, Jack Laderman used Simplex, nine calculator-wielding clerks, and 120 person-days to arrive at the optimal solution.<br /><br />Glop’s Simplex implementation solves the problem in 300 milliseconds. Unfortunately, Stigler didn’t include taste as a constraint, and so the poor hypothetical soldiers will eat nothing but the following, ever:<br /><br /><ul><li>Enriched wheat flour</li><li>Liver</li><li>Cabbage</li><li>Spinach</li><li>Navy beans</li></ul><br />Is it possible to create an appealing dish out of these five ingredients? Google Chef Anthony Marco took it as a challenge, and we’re calling the result <i>Foie Linéaire à la Stigler</i>:<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-MghFagwzwmY/VCMquFVqaAI/AAAAAAAAAXk/J0PDXyYtW9I/s1600/image01.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-MghFagwzwmY/VCMquFVqaAI/AAAAAAAAAXk/J0PDXyYtW9I/s1600/image01.jpg" height="426" width="640" /></a></div>This optimal meal consists of seared calf liver dredged in flour, atop a navy bean purée with marinated cabbage and a spinach pesto.<br /><br />Chef Marco reported that the most difficult constraint was making the dish tasty without butter or cream. That said, I had the opportunity to taste our linear optimization solution, and it was delicious.]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-research/sudoku-linear-optimization-and-the-ten-cent-diet/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
<enclosure url="" length="" type="" />
		</item>
		<item>
		<title>Collaborative Mathematics with SageMathCloud and Google Cloud Platform</title>
		<link>https://googledata.org/google-research/collaborative-mathematics-with-sagemathcloud-and-google-cloud-platform/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=collaborative-mathematics-with-sagemathcloud-and-google-cloud-platform</link>
		<comments>https://googledata.org/google-research/collaborative-mathematics-with-sagemathcloud-and-google-cloud-platform/#comments</comments>
		<pubDate>Mon, 29 Sep 2014 16:00:00 +0000</pubDate>
		<dc:creator><![CDATA[Research @ Google]]></dc:creator>
				<category><![CDATA[Google Research]]></category>
		<category><![CDATA[open source]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=cf4c748e560cfc2141ce7740f6776b1e</guid>
		<description><![CDATA[Posted by Craig Citro, Software Engineer(cross-posted on the Google for Education blog and Google Cloud Platform blog)Modern mathematics research is distinguished by its openness. The notion of "mathematical truth" depends on theorems being published w...]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by Craig Citro, Software Engineer</span><br /><br />(<i>cross-posted on the <a href="http://googleforeducation.blogspot.com/2014/09/collaborative-mathematics-with.html">Google for Education blog</a> and <a href="http://googlecloudplatform.blogspot.com/2014/09/collaborative-mathematics-with.html">Google Cloud Platform blog</a></i>)<br /><br />Modern mathematics research is distinguished by its openness. The notion of "mathematical truth" depends on theorems being published with proof, letting the reader understand how new results build on the old, all the way down to basic mathematical axioms and definitions. These new results become tools to aid further progress.<br /><br />Nowadays, many of these tools come either in the form of software or theorems whose proofs are supported by software. If new tools produce unexpected results, researchers must be able to collaborate and investigate how those results came about. Trusting software tools means being able to inspect and modify their source code. Moreover, open source tools can be modified and extended when research veers in new directions.<br /><br />In an attempt to create an open source tool to satisfy these requirements, University of Washington Professor <a href="http://wstein.org/">William Stein</a> built <a href="http://cloud.sagemath.org/">SageMathCloud</a> (or SMC). SMC is a robust, low-latency web application for collaboratively editing mathematical documents and code. This makes SMC a viable platform for mathematics research, as well as a powerful tool for teaching any mathematically-oriented course. SMC is built on top of standard open-source tools, including <a href="http://www.python.org/">Python</a>, <a href="http://www.latex-project.org/">LaTeX</a>, and <a href="http://www.r-project.org/">R</a>. In 2013, William received a 2013 Google Research Award which provided <a href="https://cloud.google.com/">Google Cloud Platform</a> credits for SMC development. This allowed William to extend SMC to use <a href="https://cloud.google.com/products/compute-engine/">Google Compute Engine</a> as a hosting platform, achieving better scalability and global availability.<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://2.bp.blogspot.com/-7Fs37ZrrBVo/VCWtkO0zJaI/AAAAAAAAAX0/wDHlt2Ru_JE/s1600/image00.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="http://2.bp.blogspot.com/-7Fs37ZrrBVo/VCWtkO0zJaI/AAAAAAAAAX0/wDHlt2Ru_JE/s1600/image00.png" height="418" width="640" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;"><i>SMC allows users to interactively explore 3D graphics with only a browser</i></td></tr></tbody></table>SMC has its roots in 2005, when William started the <a href="http://www.sagemath.org/">Sage</a> project in an attempt to create a viable free and open source alternative to existing closed-source mathematical software. 
Rather than starting from scratch, Sage was built by making the best existing open-source mathematical software work together transparently and filling in any gaps in functionality.<br /><br />During the first few years, Sage grew to have about 75K active users, while the developer community matured with well over 100 contributors to each new Sage release and about 500 developers contributing <a href="http://trac.sagemath.org/">peer-reviewed code</a>.<br /><br />Inspired by Google Docs, William and his students built the first web-based interface to Sage in 2006, called <a href="http://sagenb.org/">The Sage Notebook</a>. However, The Sage Notebook was designed for a small number of users; it worked well for a small group (such as a single class), but soon became difficult to maintain for larger groups, let alone the whole web. <br /><br />As the growth of new users for Sage began to stall in 2010, due largely to installation complexity, William turned his attention to finding ways to expand Sage's availability to a broader audience. Based on his experience teaching his own courses with Sage, and feedback from others doing the same, William began building a new Web-hosted version of Sage that could scale to the next generation of users.<br /><br />The result is <a href="http://cloud.sagemath.org/">SageMathCloud</a>, a highly distributed multi-datacenter application that creates a viable way to do computational mathematics collaboratively online. SMC uses a wide variety of open source tools, from languages (<a href="http://coffeescript.org/">CoffeeScript</a>, <a href="http://nodejs.org/">node.js</a>, and <a href="https://www.python.org/">Python</a>) to infrastructure-level components (especially <a href="http://cassandra.apache.org/">Cassandra</a>, <a href="http://en.wikipedia.org/wiki/ZFS">ZFS</a>, and <a href="https://github.com/bup/bup">bup</a>) and a number of in-browser toolkits (such as <a href="http://codemirror.net/">CodeMirror</a> and <a href="http://threejs.org/">three.js</a>).<br /><br />Latency is critical for collaborative tools: like an online video game, everything in SMC is interactive. The initial versions of SMC were hosted at UW, at which point the distance between Seattle and faraway continents was a significant issue, even for the fastest networks. The global coverage of Google Cloud Platform provides a low-latency connection to SMC users around the world that is both fast and stable. It's not uncommon for long-running research computations to last days, or even weeks -- and here the robustness of Google Compute Engine, with machines live-migrating during maintenance, is crucial. Without it, researchers would often face multiple restarts and delays, or would invest in engineering around the problem, taking time away from the core research.<br /><br />SMC sees use across a number of areas, especially:<br /><br /><ul><li><b>Teaching:</b> any course with a programming or math software component, where you want all your students to be able to use that component without dealing with the installation pain. Also, SMC allows students to easily share files, and even work together in real time. 
There are <a href="https://github.com/sagemath/cloud/wiki/Teaching">dozens of courses</a> using SMC right now.</li></ul><ul><li><b>Collaborative Research:</b> all co-authors of a paper can work together in an SMC project, both writing the paper there and doing research-level computations.</li></ul><br />Since SMC launched in May 2013, more than 20,000 monthly active users have started using Sage through it. We look forward to seeing whether SMC has an impact on the number of active users of Sage, and are excited to learn about the collaborative research and teaching that it makes possible. ]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-research/collaborative-mathematics-with-sagemathcloud-and-google-cloud-platform/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
<enclosure url="" length="" type="" />
		</item>
		<item>
		<title>Introducing Structured Snippets, now a part of Google Web Search</title>
		<link>https://googledata.org/google-research/introducing-structured-snippets-now-a-part-of-google-web-search/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=introducing-structured-snippets-now-a-part-of-google-web-search</link>
		<comments>https://googledata.org/google-research/introducing-structured-snippets-now-a-part-of-google-web-search/#comments</comments>
		<pubDate>Mon, 22 Sep 2014 16:00:00 +0000</pubDate>
		<dc:creator><![CDATA[Research @ Google]]></dc:creator>
				<category><![CDATA[Google Research]]></category>
		<category><![CDATA[search]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=5bdadb255ba329f7dd2b326183319685</guid>
		<description><![CDATA[<span>Posted by Corinna Cortes, Boulos Harb, Afshin Rostamizadeh, Ken Wilder, and Cong Yu, Google Research</span><br /><br />Google Web Search has evolved in recent years with a host of features powered by the <a href="http://www.google.com/insidesearch/features/search/knowledge.html">Knowledge Graph</a> and other data sources to provide users with highly structured and relevant data. Structured Snippets is a new feature that incorporates facts into individual result snippets in Web Search. As seen in the example below, interesting and relevant information is extracted from a page and displayed as part of the snippet for the query &#8220;<i>nikon d7100</i>&#8221;:<br /><div><a href="http://3.bp.blogspot.com/-vjx81mxUNws/VBtJGyNE8uI/AAAAAAAAAW8/vYDmc6TouiY/s1600/image00.png"><img border="0" src="http://3.bp.blogspot.com/-vjx81mxUNws/VBtJGyNE8uI/AAAAAAAAAW8/vYDmc6TouiY/s1600/image00.png" height="154" width="640"></a></div>The WebTables research team has been working to extract and understand tabular data on the Web with the intent to surface particularly relevant data to users. Our data is already used in the <a href="https://plus.google.com/+GoogleDrive/posts/e7qsYrUC9ur">Research Tool found in Google Docs and Slides</a>; Structured Snippets is the latest collaboration between Google Research and the Web Search team employing that data to seamlessly provide the most relevant information to the user. We use machine learning techniques to distinguish data tables on the Web from uninteresting tables, e.g., tables used for formatting web pages. We also have additional algorithms to determine quality and relevance that we use to display up to four highly ranked facts from those data tables. Another example of a structured snippet for the query &#8220;<i>superman</i>&#8221;, this time as it appears on a mobile phone, is shown below:<br /><div><a href="http://3.bp.blogspot.com/-P2A0yBu8x10/VBtJQpaMoUI/AAAAAAAAAXE/FRwK9z7WJv4/s1600/image01.png"><img border="0" src="http://3.bp.blogspot.com/-P2A0yBu8x10/VBtJQpaMoUI/AAAAAAAAAXE/FRwK9z7WJv4/s1600/image01.png"></a></div>Fact quality will vary across results based on page content, and we are continually enhancing the relevance and accuracy of the facts we identify and display. We hope users will find this extra snippet information useful.]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by Corinna Cortes, Boulos Harb, Afshin Rostamizadeh, Ken Wilder, and Cong Yu, Google Research</span><br /><br />Google Web Search has evolved in recent years with a host of features powered by the <a href="http://www.google.com/insidesearch/features/search/knowledge.html">Knowledge Graph</a> and other data sources to provide users with highly structured and relevant data. Structured Snippets is a new feature that incorporates facts into individual result snippets in Web Search. As seen in the example below, interesting and relevant information is extracted from a page and displayed as part of the snippet for the query “<i>nikon d7100</i>”:<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-vjx81mxUNws/VBtJGyNE8uI/AAAAAAAAAW8/vYDmc6TouiY/s1600/image00.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-vjx81mxUNws/VBtJGyNE8uI/AAAAAAAAAW8/vYDmc6TouiY/s1600/image00.png" height="154" width="640" /></a></div>The WebTables research team has been working to extract and understand tabular data on the Web with the intent to surface particularly relevant data to users. Our data is already used in the <a href="https://plus.google.com/+GoogleDrive/posts/e7qsYrUC9ur">Research Tool found in Google Docs and Slides</a>; Structured Snippets is the latest collaboration between Google Research and the Web Search team employing that data to seamlessly provide the most relevant information to the user. We use machine learning techniques to distinguish data tables on the Web from uninteresting tables, e.g., tables used for formatting web pages. We also have additional algorithms to determine quality and relevance that we use to display up to four highly ranked facts from those data tables. Another example of a structured snippet for the query “<i>superman</i>”, this time as it appears on a mobile phone, is shown below:<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-P2A0yBu8x10/VBtJQpaMoUI/AAAAAAAAAXE/FRwK9z7WJv4/s1600/image01.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-P2A0yBu8x10/VBtJQpaMoUI/AAAAAAAAAXE/FRwK9z7WJv4/s1600/image01.png" /></a></div>Fact quality will vary across results based on page content, and we are continually enhancing the relevance and accuracy of the facts we identify and display. We hope users will find this extra snippet information useful.]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-research/introducing-structured-snippets-now-a-part-of-google-web-search/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
<enclosure url="" length="" type="" />
		</item>
		<item>
		<title>Sign in to edx.org with Google (and Facebook, and&#8230;)</title>
		<link>https://googledata.org/google-research/sign-in-to-edx-org-with-google-and-facebook-and/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=sign-in-to-edx-org-with-google-and-facebook-and</link>
		<comments>https://googledata.org/google-research/sign-in-to-edx-org-with-google-and-facebook-and/#comments</comments>
		<pubDate>Thu, 18 Sep 2014 16:00:00 +0000</pubDate>
		<dc:creator><![CDATA[Research @ Google]]></dc:creator>
				<category><![CDATA[Google Research]]></category>
		<category><![CDATA[education]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=3c3931b70409217d38fe6df76c5e655e</guid>
		<description><![CDATA[<span>Posted by John Cox, Software Engineer</span><br /><br />Google is passionate about online education. In addition to our own <a href="https://code.google.com/p/course-builder/">Course Builder</a> project, we&#8217;re also partners with <a href="https://www.edx.org/">edX</a>, a not-for-profit that shares our desire for scalable, quality education for everyone. Their software, <a href="http://code.edx.org/">Open edX</a>, lets people make educational content and deliver it online to anybody, anytime, anywhere. It powers their own site, edx.org, and is also used by companies and universities worldwide.<br /><br />Today we&#8217;re very pleased to announce that you can now sign in to <a href="https://www.edx.org/">edx.org</a> with your Google or Facebook account:<br /><div><a href="http://4.bp.blogspot.com/-JZrKNq9st4I/VBnvt3-4LKI/AAAAAAAAAWs/q2FegkztVIk/s1600/image00.png"><img border="0" src="http://4.bp.blogspot.com/-JZrKNq9st4I/VBnvt3-4LKI/AAAAAAAAAWs/q2FegkztVIk/s1600/image00.png" height="418" width="640"></a></div>Until recently, users who wanted to take advantage of the high-quality content on <a href="https://www.edx.org/">edx.org</a> needed to create a new account first. This is a painful, error-prone process&#8213;really, who wants to worry about yet another password? So we added the ability to use over 60 external authentication providers to Open edX, with support for everything from open standards like <a href="http://openid.net/">OpenID</a> or <a href="http://oauth.net/2/">OAuth 2.0</a>, to custom university single sign-on systems. For their <a href="https://www.edx.org/">edx.org</a> site, edX decided to let users pick between Google, Facebook, and a custom username and password.<br /><br />If you run Open edX, you can also use this feature now. The <a href="https://github.com/edx/edx-platform/tree/master/common/djangoapps/third_party_auth">authentication module</a> is <a href="http://johnmcox.blogspot.com/2014/05/understanding-edx-third-party.html">extensible</a> so you can add any third-party provider you want if your favorite is not yet supported. And the feature is completely <a href="http://johnmcox.blogspot.com/2014/05/getting-started-with-edx-third-party.html">configurable</a>, so you can pick whatever third-party authentication systems are best for your users, including none at all. It&#8217;s totally up to you.<br /><br />By simultaneously increasing user choice, convenience, and security, we hope to make open online education even easier and safer to use, whether people pick Course Builder or Open edX for authoring and delivering courses. We&#8217;re very grateful to our partners at edX for working with us in this exciting field.]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by John Cox, Software Engineer</span><br /><br />Google is passionate about online education. In addition to our own <a href="https://code.google.com/p/course-builder/">Course Builder</a> project, we’re also partners with <a href="https://www.edx.org/">edX</a>, a not-for-profit that shares our desire for scalable, quality education for everyone. Their software, <a href="http://code.edx.org/">Open edX</a>, lets people make educational content and deliver it online to anybody, anytime, anywhere. It powers their own site, edx.org, and is also used by companies and universities worldwide.<br /><br />Today we’re very pleased to announce that you can now sign in to <a href="https://www.edx.org/">edx.org</a> with your Google or Facebook account:<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-JZrKNq9st4I/VBnvt3-4LKI/AAAAAAAAAWs/q2FegkztVIk/s1600/image00.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-JZrKNq9st4I/VBnvt3-4LKI/AAAAAAAAAWs/q2FegkztVIk/s1600/image00.png" height="418" width="640" /></a></div>Until recently, users who wanted to take advantage of the high quality content on <a href="https://www.edx.org/">edx.org</a> needed to create a new account first. This is a painful, error prone process―really, who wants to worry about yet another password? So we added the ability to use over 60 external authentication providers to Open edX, with support for everything from open standards like <a href="http://openid.net/">OpenID</a> or <a href="http://oauth.net/2/">OAuth 2.0</a>, to custom university single sign-on systems. For their <a href="https://www.edx.org/">edx.org</a> site, edX decided to let users pick between Google, Facebook, and a custom username and password.<br /><br />If you run Open edX, you can also use this feature now. The <a href="https://github.com/edx/edx-platform/tree/master/common/djangoapps/third_party_auth">authentication module</a> is <a href="http://johnmcox.blogspot.com/2014/05/understanding-edx-third-party.html">extensible</a> so you can add any third-party provider you want if your favorite is not yet supported. And the feature is completely <a href="http://johnmcox.blogspot.com/2014/05/getting-started-with-edx-third-party.html">configurable</a>, so you can pick whatever third-party authentication systems are best for your users, including none at all. It’s totally up to you.<br /><br />By simultaneously increasing user choice, convenience, and security, we hope to make open online education even easier and safer to use, whether people pick Course Builder or Open edX for authoring and delivering courses. We’re very grateful to our partners at edX for working with us in this exciting field.]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-research/sign-in-to-edx-org-with-google-and-facebook-and/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
<enclosure url="" length="" type="" />
		</item>
		<item>
		<title>Course Builder now supports the Learning Tools Interoperability (LTI) Specification</title>
		<link>https://googledata.org/google-research/course-builder-now-supports-the-learning-tools-interoperability-lti-specification/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=course-builder-now-supports-the-learning-tools-interoperability-lti-specification</link>
		<comments>https://googledata.org/google-research/course-builder-now-supports-the-learning-tools-interoperability-lti-specification/#comments</comments>
		<pubDate>Thu, 11 Sep 2014 17:00:00 +0000</pubDate>
		<dc:creator><![CDATA[Research @ Google]]></dc:creator>
				<category><![CDATA[Google Research]]></category>
		<category><![CDATA[education]]></category>
		<category><![CDATA[open source]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=8901d034bfafb534aa7906d64e1ee97e</guid>
		<description><![CDATA[<span>Posted by John Cox, Software Engineer</span>  <br /><br />Since the release of <a href="https://code.google.com/p/course-builder/">Course Builder</a> two years ago, it has been used by individuals, companies, and universities worldwide to create and deliver online courses on a variety of subjects, helping to show the potential for making education more accessible through open source technology.<br /><br />Today, we&#8217;re excited to announce that Course Builder now supports the <a href="http://www.imsglobal.org/toolsinteroperability2.cfm">Learning Tools Interoperability</a> (LTI) specification. Course Builder can now interoperate with other LTI-compliant systems and online learning platforms, allowing users to interact with high-quality educational content no matter where it lives. This is an important step toward our goal of making educational content available to everyone.<br /><br />If you have LTI-compliant software and would like to serve its content inside Course Builder, you can do so by using Course Builder as an LTI consumer. If you want to serve Course Builder content inside another LTI-compliant system, you can use Course Builder as an LTI provider. You can use either of these features, both, or none&#8212;the choice is entirely up to you.<br /><br />The Course Builder LTI extension module, <a href="https://github.com/google/coursebuilder-lti-module">now available on GitHub</a>, supports LTI version 1.0, and its LTI provider is certified by <a href="http://www.imsglobal.org/">IMS Global</a>, the nonprofit member organization that created the LTI specification. Like Course Builder itself, this module is open source and available under the Apache 2.0 license.<br /><br />As part of our continued commitment to online education, we are also happy to announce we have become an affiliate member of IMS Global. IMS Global shares our desire to provide education online at scale, and we look forward to working with the IMS community on LTI and other online education technologies.]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by John Cox, Software Engineer</span>  <br /><br />Since the release of <a href="https://code.google.com/p/course-builder/">Course Builder</a> two years ago, it has been used by individuals, companies, and universities worldwide to create and deliver online courses on a variety of subjects, helping to show the potential for making education more accessible through open source technology.<br /><br />Today, we’re excited to announce that Course Builder now supports the <a href="http://www.imsglobal.org/toolsinteroperability2.cfm">Learning Tools Interoperability</a> (LTI) specification. Course Builder can now interoperate with other LTI-compliant systems and online learning platforms, allowing users to interact with high-quality educational content no matter where it lives. This is an important step toward our goal of making educational content available to everyone.<br /><br />If you have LTI-compliant software and would like to serve its content inside Course Builder, you can do so by using Course Builder as an LTI consumer. If you want to serve Course Builder content inside another LTI-compliant system, you can use Course Builder as an LTI provider. You can use either of these features, both, or none—the choice is entirely up to you.<br /><br />The Course Builder LTI extension module, <a href="https://github.com/google/coursebuilder-lti-module">now available on Github</a>, supports LTI version 1.0, and its LTI provider is certified by <a href="http://www.imsglobal.org/">IMS Global</a>, the nonprofit member organization that created the LTI specification. Like Course Builder itself, this module is open source and available under the Apache 2.0 license.<br /><br />As part of our continued commitment to online education, we are also happy to announce we have become an affiliate member of IMS Global. IMS Global shares our desire to provide education online at scale, and we look forward to working with the IMS community on LTI and other online education technologies.]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-research/course-builder-now-supports-the-learning-tools-interoperability-lti-specification/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
<enclosure url="" length="" type="" />
		</item>
		<item>
		<title>Building a deeper understanding of images</title>
		<link>https://googledata.org/google-research/building-a-deeper-understanding-of-images/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=building-a-deeper-understanding-of-images</link>
		<comments>https://googledata.org/google-research/building-a-deeper-understanding-of-images/#comments</comments>
		<pubDate>Fri, 05 Sep 2014 16:00:00 +0000</pubDate>
		<dc:creator><![CDATA[Research @ Google]]></dc:creator>
				<category><![CDATA[Google Research]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=99f24fe85f497016f16f1b9d9da6a544</guid>
		<description><![CDATA[<span>Posted by Christian Szegedy, Software Engineer</span><br /><br />The ImageNet large-scale visual recognition challenge (<a href="http://www.image-net.org/challenges/LSVRC/2014/">ILSVRC</a>) is the largest academic challenge in computer vision, held annually to test state-of-the-art technology in image understanding, both in the sense of recognizing objects in images and locating where they are. Participants in the competition include leading academic institutions and industry labs. In 2012 it was won by DNNResearch using the convolutional neural network approach described in the now-seminal <a href="http://www.cs.toronto.edu/~fritz/absps/imagenet.pdf">paper by Krizhevsky et al.</a><sup>[4]</sup><br /><br />In this year&#8217;s challenge, team GoogLeNet (named in homage to <a href="http://yann.lecun.com/exdb/lenet/">LeNet</a>, <a href="http://yann.lecun.com/">Yann LeCun</a>'s influential convolutional network) placed first in the classification and detection (with extra training data) tasks, doubling the quality on both tasks over last year's results. The team participated with an open submission, meaning that the exact details of its approach are shared with the wider computer vision community to foster collaboration and accelerate progress in the field.<br /><div><a href="http://2.bp.blogspot.com/-TDP7iU5MRSo/VAjer5jrPzI/AAAAAAAAAV0/xk3lWXsO2xQ/s1600/ILSVRC2012_val_00021791.PNG"><img border="0" src="http://2.bp.blogspot.com/-TDP7iU5MRSo/VAjer5jrPzI/AAAAAAAAAV0/xk3lWXsO2xQ/s1600/ILSVRC2012_val_00021791.PNG"></a></div>The competition has three tracks: classification, classification with localization, and detection. The <i>classification</i> track measures an algorithm&#8217;s ability to assign correct labels to an image. The <i>classification with localization</i> track is designed to assess how well an algorithm models both the labels of an image and the location of the underlying objects. Finally, the <i>detection challenge</i> is similar, but uses much stricter evaluation criteria. As an additional difficulty, this challenge includes many images with tiny objects that are hard to recognize. Superior performance in the detection challenge requires pushing beyond annotating an image with a &#8220;bag of labels&#8221; -- a model must be able to describe a complex scene by accurately locating and identifying many objects in it. As examples, the images in this post are actual top-scoring inferences of the GoogLeNet detection model on the validation set of the detection challenge.<br /><div><a href="http://2.bp.blogspot.com/-qiLxaXSfcp8/VAjllBsNOhI/AAAAAAAAAWM/fIPPugiUWfQ/s1600/1XNYvfGqsv.png"><img border="0" src="http://2.bp.blogspot.com/-qiLxaXSfcp8/VAjllBsNOhI/AAAAAAAAAWM/fIPPugiUWfQ/s1600/1XNYvfGqsv.png" height="268" width="640"></a></div>This work was a concerted effort by <a href="http://www.cs.unc.edu/~wliu/">Wei Liu</a>, <a href="http://www.eecs.berkeley.edu/~jiayq/">Yangqing Jia</a>, <a href="http://cs.nyu.edu/~sermanet/">Pierre Sermanet</a>, <a href="http://www-personal.umich.edu/~reedscot/">Scott Reed</a>, <a href="http://ai.stanford.edu/~drago/">Drago Anguelov</a>, <a href="http://research.google.com/pubs/DumitruErhan.html">Dumitru Erhan</a>, <a href="http://research.google.com/pubs/AndrewRabinovich.html">Andrew Rabinovich</a> and <a href="http://research.google.com/pubs/ChristianSzegedy.html">myself</a>. 
Two of the team members -- Wei Liu and Scott Reed -- are PhD students who are part of the intern program here at Google, and actively participated in the work leading to the submissions. Without their dedication, the team could not have won the detection challenge.<br /><br />This effort was accomplished by using the <a href="http://research.google.com/pubs/pub40565.html">DistBelief infrastructure</a>, which makes it possible to train neural networks in a distributed manner and rapidly iterate. At the core of the approach is a radically redesigned convolutional network architecture. Its seemingly complex structure (typical incarnations of which consist of over 100 layers with a maximum depth of over 20 parameter layers) is based on two insights: the <a href="http://en.wikipedia.org/wiki/Hebbian_theory">Hebbian principle</a> and <a href="http://en.wikipedia.org/wiki/Scale_invariance">scale invariance</a>. As a consequence of a careful balancing act, the depth and width of the network are both increased significantly at the cost of a modest growth in evaluation time. The resultant architecture leads to a more than 10x reduction in the number of parameters compared to most state of the art vision networks. This reduces overfitting during training and allows our system to perform inference with a low memory footprint.<br /><div><a href="http://2.bp.blogspot.com/-R7rgVNYKYM8/VAjmduy-VqI/AAAAAAAAAWU/TzKcFhfAfXc/s1600/ojbE4A2LeM.png"><img border="0" src="http://2.bp.blogspot.com/-R7rgVNYKYM8/VAjmduy-VqI/AAAAAAAAAWU/TzKcFhfAfXc/s1600/ojbE4A2LeM.png" height="140" width="640"></a></div>For the detection challenge, the improved neural network model is used in the sophisticated <a href="http://arxiv.org/abs/1311.2524">R-CNN detector by Ross Girshick et al.</a><sup>[2]</sup>, with additional proposals coming from the <a href="http://www.cv-foundation.org/openaccess/content_cvpr_2014/papers/Erhan_Scalable_Object_Detection_2014_CVPR_paper.pdf">multibox method</a><sup>[1]</sup>. For the classification challenge entry, <a href="http://arxiv.org/abs/1312.5402">several ideas from the work of Andrew Howard</a><sup>[3]</sup> were incorporated and extended, specifically as they relate to image sampling during training and evaluation. The systems were evaluated both stand-alone and as ensembles (averaging the outputs of up to seven models), and their results were submitted as separate entries for transparency and comparison.<br /><br />These technological advances will enable even better image understanding on our side, and the progress is directly transferable to Google products such as photo search, image search, YouTube, self-driving cars, and any place where it is useful to understand <i>what</i> is in an image as well as <i>where</i> things are.<br /><br /><b><i>References:</i></b><br /><br />[1] Erhan D., Szegedy C., Toshev, A., and Anguelov, D., <a href="http://www.cv-foundation.org/openaccess/content_cvpr_2014/papers/Erhan_Scalable_Object_Detection_2014_CVPR_paper.pdf">"Scalable Object Detection using Deep Neural Networks"</a>, <i>The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014, pp. 2147-2154.</i><br /><br />[2] Girshick, R., Donahue, J., Darrell, T., &#38; Malik, J., <a href="http://arxiv.org/abs/1311.2524">"Rich feature hierarchies for accurate object detection and semantic segmentation"</a>, <i>arXiv preprint arXiv:1311.2524, 2013.</i><br /><br />[3] Howard, A. 
G., <a href="http://arxiv.org/abs/1312.5402">"Some Improvements on Deep Convolutional Neural Network Based Image Classification"</a>, <i>arXiv preprint arXiv:1312.5402, 2013.</i><br /><br />[4] Krizhevsky, A., Sutskever I., and Hinton, G., <a href="http://www.cs.toronto.edu/~fritz/absps/imagenet.pdf">"Imagenet classification with deep convolutional neural networks"</a>,&#160;<i>Advances in neural information processing systems, 2012.</i><br /><br />]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by Christian Szegedy, Software Engineer</span><br /><br />The ImageNet large-scale visual recognition challenge (<a href="http://www.image-net.org/challenges/LSVRC/2014/">ILSVRC</a>) is the largest academic challenge in computer vision, held annually to test state-of-the-art technology in image understanding, both in the sense of recognizing objects in images and locating where they are. Participants in the competition include leading academic institutions and industry labs. In 2012 it was won by DNNResearch using the convolutional neural network approach described in the now-seminal <a href="http://www.cs.toronto.edu/~fritz/absps/imagenet.pdf">paper by Krizhevsky et al.</a><sup>[4]</sup><br /><br />In this year’s challenge, team GoogLeNet (named in homage to <a href="http://yann.lecun.com/exdb/lenet/">LeNet</a>, <a href="http://yann.lecun.com/">Yann LeCun</a>'s influential convolutional network) placed first in the classification and detection (with extra training data) tasks, doubling the quality on both tasks over last year's results. The team participated with an open submission, meaning that the exact details of its approach are shared with the wider computer vision community to foster collaboration and accelerate progress in the field.<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-TDP7iU5MRSo/VAjer5jrPzI/AAAAAAAAAV0/xk3lWXsO2xQ/s1600/ILSVRC2012_val_00021791.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-TDP7iU5MRSo/VAjer5jrPzI/AAAAAAAAAV0/xk3lWXsO2xQ/s1600/ILSVRC2012_val_00021791.PNG" /></a></div>The competition has three tracks: classification, classification with localization, and detection. The <i>classification</i> track measures an algorithm’s ability to assign correct labels to an image. The <i>classification with localization</i> track is designed to assess how well an algorithm models both the labels of an image and the location of the underlying objects. Finally, the <i>detection challenge</i> is similar, but uses much stricter evaluation criteria. As an additional difficulty, this challenge includes a lot of images with tiny objects which are hard to recognize. Superior performance in the detection challenge requires pushing beyond annotating an image with a “bag of labels” -- a model must be able to describe a complex scene by accurately locating and identifying many objects in it. 
As examples, the images in this post are actual top-scoring inferences of the GoogLeNet detection model on the validation set of the detection challenge.<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-qiLxaXSfcp8/VAjllBsNOhI/AAAAAAAAAWM/fIPPugiUWfQ/s1600/1XNYvfGqsv.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-qiLxaXSfcp8/VAjllBsNOhI/AAAAAAAAAWM/fIPPugiUWfQ/s1600/1XNYvfGqsv.png" height="268" width="640" /></a></div>This work was a concerted effort by <a href="http://www.cs.unc.edu/~wliu/">Wei Liu</a>, <a href="http://www.eecs.berkeley.edu/~jiayq/">Yangqing Jia</a>, <a href="http://cs.nyu.edu/~sermanet/">Pierre Sermanet</a>, <a href="http://www-personal.umich.edu/~reedscot/">Scott Reed</a>, <a href="http://ai.stanford.edu/~drago/">Drago Anguelov</a>, <a href="http://research.google.com/pubs/DumitruErhan.html">Dumitru Erhan</a>, <a href="http://research.google.com/pubs/AndrewRabinovich.html">Andrew Rabinovich</a> and <a href="http://research.google.com/pubs/ChristianSzegedy.html">myself</a>. Two of the team members -- Wei Liu and Scott Reed -- are PhD students who are part of the intern program here at Google and actively participated in the work leading to the submissions. Without their dedication, the team could not have won the detection challenge.<br /><br />This effort was accomplished by using the <a href="http://research.google.com/pubs/pub40565.html">DistBelief infrastructure</a>, which makes it possible to train neural networks in a distributed manner and rapidly iterate. At the core of the approach is a radically redesigned convolutional network architecture. Its seemingly complex structure (typical incarnations of which consist of over 100 layers with a maximum depth of over 20 parameter layers) is based on two insights: the <a href="http://en.wikipedia.org/wiki/Hebbian_theory">Hebbian principle</a> and <a href="http://en.wikipedia.org/wiki/Scale_invariance">scale invariance</a>. As a consequence of a careful balancing act, the depth and width of the network are both increased significantly at the cost of a modest growth in evaluation time. The resultant architecture leads to a more than 10x reduction in the number of parameters compared to most state-of-the-art vision networks. This reduces overfitting during training and allows our system to perform inference with a low memory footprint.<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-R7rgVNYKYM8/VAjmduy-VqI/AAAAAAAAAWU/TzKcFhfAfXc/s1600/ojbE4A2LeM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-R7rgVNYKYM8/VAjmduy-VqI/AAAAAAAAAWU/TzKcFhfAfXc/s1600/ojbE4A2LeM.png" height="140" width="640" /></a></div>For the detection challenge, the improved neural network model is used in the sophisticated <a href="http://arxiv.org/abs/1311.2524">R-CNN detector by Ross Girshick et al.</a><sup>[2]</sup>, with additional proposals coming from the <a href="http://www.cv-foundation.org/openaccess/content_cvpr_2014/papers/Erhan_Scalable_Object_Detection_2014_CVPR_paper.pdf">multibox method</a><sup>[1]</sup>. For the classification challenge entry, <a href="http://arxiv.org/abs/1312.5402">several ideas from the work of Andrew Howard</a><sup>[3]</sup> were incorporated and extended, specifically as they relate to image sampling during training and evaluation.
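<br /><br />The ensembling mentioned next -- averaging the outputs of several independently trained models -- can be sketched in a few lines (a toy illustration, not the actual GoogLeNet code):<br /><pre>
# Toy sketch of ensembling by score averaging; not the actual
# GoogLeNet implementation.
import numpy as np

def ensemble_predict(models, image):
    """models: callables mapping an image to a vector of per-class
    scores; returns the class index with the highest mean score."""
    mean_scores = np.mean([model(image) for model in models], axis=0)
    return int(np.argmax(mean_scores))
</pre>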
The systems were evaluated both stand-alone and as ensembles (averaging the outputs of up to seven models), and their results were submitted as separate entries for transparency and comparison.<br /><br />These technological advances will enable even better image understanding on our side, and the progress is directly transferable to Google products such as photo search, image search, YouTube, self-driving cars, and any place where it is useful to understand <i>what</i> is in an image as well as <i>where</i> things are.<br /><br /><b><i>References:</i></b><br /><br />[1] Erhan, D., Szegedy, C., Toshev, A., and Anguelov, D., <a href="http://www.cv-foundation.org/openaccess/content_cvpr_2014/papers/Erhan_Scalable_Object_Detection_2014_CVPR_paper.pdf">"Scalable Object Detection using Deep Neural Networks"</a>, <i>The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014, pp. 2147-2154.</i><br /><br />[2] Girshick, R., Donahue, J., Darrell, T., &amp; Malik, J., <a href="http://arxiv.org/abs/1311.2524">"Rich feature hierarchies for accurate object detection and semantic segmentation"</a>, <i>arXiv preprint arXiv:1311.2524, 2013.</i><br /><br />[3] Howard, A. G., <a href="http://arxiv.org/abs/1312.5402">"Some Improvements on Deep Convolutional Neural Network Based Image Classification"</a>, <i>arXiv preprint arXiv:1312.5402, 2013.</i><br /><br />[4] Krizhevsky, A., Sutskever, I., and Hinton, G., <a href="http://www.cs.toronto.edu/~fritz/absps/imagenet.pdf">"ImageNet classification with deep convolutional neural networks"</a>, <i>Advances in neural information processing systems, 2012.</i><br /><br />]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-research/building-a-deeper-understanding-of-images/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Working Together to Support Computer Science Education</title>
		<link>https://googledata.org/google-research/working-together-to-support-computer-science-education/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=working-together-to-support-computer-science-education</link>
		<comments>https://googledata.org/google-research/working-together-to-support-computer-science-education/#comments</comments>
		<pubDate>Wed, 03 Sep 2014 16:00:00 +0000</pubDate>
		<dc:creator><![CDATA[Research @ Google]]></dc:creator>
				<category><![CDATA[Google Research]]></category>
		<category><![CDATA[education]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=f610e5253cf3ac0fdd2f27758e38acfe</guid>
		<description><![CDATA[<span>Posted by Chris Stephenson, Computer Science Education Program Manager</span><br /><br /><i>(Cross-posted from the <a href="http://googleforeducation.blogspot.com/2014/09/working-together-to-support-computer.html">Google for Education blog</a>)</i><br /><br />Computer Science (CS) education in K-12 is receiving an increasing amount of attention from <a href="http://www.motherjones.com/media/2014/06/computer-science-programming-code-diversity-sexism-education">media</a> and <a href="https://beta.congress.gov/bill/113th-congress/house-bill/2536">policy makers</a>. Education groups have been working for years to build the infrastructure needed to support CS both inside and outside the school environment, including standards development and dissemination, models for teacher professional development, research, resources for educators, and the building of peer-driven and peer-supported communities of learning.<br /><br />At Google, we strive to increase opportunities in CS and be a strong contributor to the community of those seeking to improve CS education through our engagement in research, curriculum resource development and dissemination, professional development of teachers, tools development, and large-scale efforts to engage young women and underrepresented groups in computer science. However, despite these efforts, there are still many challenges to overcome to improve the state of CS education.<br /><br />For example, many people confuse computer science with education technology (the use of computing to support learning in other disciplines) and computer literacy (a very basic understanding of a limited number of computer applications). This confusion leads to the assumption that computer science education is taking place, when in fact in many schools it is not. <br /><br />Women and minorities are still underrepresented in computer science education and in the high tech workplace. In her introduction to Jane Margolis&#8217; <i><a href="http://idea.gseis.ucla.edu/projects/into-the-loop/stuck-in-the-shallow-end">Stuck in the Shallow End: Education, Race, and Computing</a></i>, distinguished scientist Shirley Malcolm refers to computer science as &#8220;privileged knowledge&#8221; to which minority students often have no access. This statement is supported by data from the <a href="http://home.cc.gatech.edu/ice-gt/321">College Board</a> and the <a href="http://www.ncwit.org/resources/ncwit-scorecard-report-status-women-information-technology">National Center for Women and Information Technology</a>. <br /><br />Poverty also has a significant but often ignored impact on access to technology and quality computer science education. At present there are <a href="http://www.nccp.org/topics/childpoverty.html">more than 16 million U.S. children living in poverty</a>; these children are the least likely to have access to computer science knowledge and tools in their schools and homes.<br /><br />There are many organizations and programs focused on CS education that are working hard to address these and other issues. This gives Google a unique opportunity to analyze gaps in existing efforts and apply our resources towards programs that are most needed. In so doing, we hope to help uncover new strategies and create sustainable improvements to CS education.<br /><br />Achieving systemic and sustained change in K-12 CS education is a complex undertaking that requires strategic support that complements both existing formal school programs and extracurricular education. 
Google is proud to be a member of the community committed to making tangible improvements to the state of CS education. In future blog posts, we will introduce you to some of the programs and resources that Google has been working on.]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by Chris Stephenson, Computer Science Education Program Manager</span><br /><br /><i>(Cross-posted from the <a href="http://googleforeducation.blogspot.com/2014/09/working-together-to-support-computer.html">Google for Education blog</a>)</i><br /><br />Computer Science (CS) education in K-12 is receiving an increasing amount of attention from <a href="http://www.motherjones.com/media/2014/06/computer-science-programming-code-diversity-sexism-education">media</a> and <a href="https://beta.congress.gov/bill/113th-congress/house-bill/2536">policy makers</a>. Education groups have been working for years to build the infrastructure needed to support CS both inside and outside the school environment, including standards development and dissemination, models for teacher professional development, research, resources for educators, and the building of peer-driven and peer-supported communities of learning.<br /><br />At Google, we strive to increase opportunities in CS and be a strong contributor to the community of those seeking to improve CS education through our engagement in research, curriculum resource development and dissemination, professional development of teachers, tools development, and large-scale efforts to engage young women and underrepresented groups in computer science. However, despite these efforts, there are still many challenges to overcome to improve the state of CS education.<br /><br />For example, many people confuse computer science with education technology (the use of computing to support learning in other disciplines) and computer literacy (a very basic understanding of a limited number of computer applications). This confusion leads to the assumption that computer science education is taking place, when in fact in many schools it is not. <br /><br />Women and minorities are still underrepresented in computer science education and in the high tech workplace. In her introduction to Jane Margolis’ <i><a href="http://idea.gseis.ucla.edu/projects/into-the-loop/stuck-in-the-shallow-end">Stuck in the Shallow End: Education, Race, and Computing</a></i>, distinguished scientist Shirley Malcolm refers to computer science as “privileged knowledge” to which minority students often have no access. This statement is supported by data from the <a href="http://home.cc.gatech.edu/ice-gt/321">College Board</a> and the <a href="http://www.ncwit.org/resources/ncwit-scorecard-report-status-women-information-technology">National Center for Women and Information Technology</a>. <br /><br />Poverty also has a significant but often ignored impact on access to technology and quality computer science education. At present there are <a href="http://www.nccp.org/topics/childpoverty.html">more than 16 million U.S. children living in poverty</a>; these children are the least likely to have access to computer science knowledge and tools in their schools and homes.<br /><br />There are many organizations and programs focused on CS education that are working hard to address these and other issues. This gives Google a unique opportunity to analyze gaps in existing efforts and apply our resources towards programs that are most needed. 
In so doing, we hope to help uncover new strategies and create sustainable improvements to CS education.<br /><br />Achieving systemic and sustained change in K-12 CS education is a complex undertaking that requires strategic support that complements both existing formal school programs and extracurricular education. Google is proud to be a member of the community committed to making tangible improvements to the state of CS education. In future blog posts, we will introduce you to some of the programs and resources that Google has been working on.]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-research/working-together-to-support-computer-science-education/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Hardware Initiative at Quantum Artificial Intelligence Lab</title>
		<link>https://googledata.org/google-research/hardware-initiative-at-quantum-artificial-intelligence-lab/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=hardware-initiative-at-quantum-artificial-intelligence-lab</link>
		<comments>https://googledata.org/google-research/hardware-initiative-at-quantum-artificial-intelligence-lab/#comments</comments>
		<pubDate>Tue, 02 Sep 2014 18:00:00 +0000</pubDate>
		<dc:creator><![CDATA[Research @ Google]]></dc:creator>
				<category><![CDATA[Google Research]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=3e0020166d5a3d2dddf6959845e509d5</guid>
		<description><![CDATA[<span>Posted by Hartmut Neven, Director of Engineering</span><br /><br />The Quantum Artificial Intelligence team at Google is launching a hardware initiative to design and build new quantum information processors based on superconducting electronics. We are pleased to announce that <a href="http://web.physics.ucsb.edu/~martinisgroup/index.shtml">John Martinis and his team at UC Santa Barbara</a> will join Google in this initiative. John and his group have made great strides in building <a href="http://www.news.ucsb.edu/2014/014074/superconducting-qubit-array-points-way-quantum-computers">superconducting quantum electronic components of very high fidelity</a>. He was recently awarded the <a href="http://www.news.ucsb.edu/2014/014064/and-fritz-london-memorial-prize-goes-to%E2%80%A6">London Prize</a>, recognizing his pioneering advances in quantum control and quantum information processing. With an integrated hardware group, the Quantum AI team will now be able to implement and test new designs for quantum optimization and inference processors based on recent theoretical insights, as well as what we have learned from the <a href="http://en.wikipedia.org/wiki/D-Wave_Systems">D-Wave</a> <a href="http://en.wikipedia.org/wiki/Quantum_annealing">quantum annealing</a> architecture. We will continue to collaborate with D-Wave scientists and to experiment with the &#8220;Vesuvius&#8221; machine at NASA Ames, which will be upgraded to a 1000-qubit &#8220;Washington&#8221; processor.]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by Hartmut Neven, Director of Engineering</span><br /><br />The Quantum Artificial Intelligence team at Google is launching a hardware initiative to design and build new quantum information processors based on superconducting electronics. We are pleased to announce that <a href="http://web.physics.ucsb.edu/~martinisgroup/index.shtml">John Martinis and his team at UC Santa Barbara</a> will join Google in this initiative. John and his group have made great strides in building <a href="http://www.news.ucsb.edu/2014/014074/superconducting-qubit-array-points-way-quantum-computers">superconducting quantum electronic components of very high fidelity</a>. He was recently awarded the <a href="http://www.news.ucsb.edu/2014/014064/and-fritz-london-memorial-prize-goes-to%E2%80%A6">London Prize</a>, recognizing his pioneering advances in quantum control and quantum information processing. With an integrated hardware group, the Quantum AI team will now be able to implement and test new designs for quantum optimization and inference processors based on recent theoretical insights, as well as what we have learned from the <a href="http://en.wikipedia.org/wiki/D-Wave_Systems">D-Wave</a> <a href="http://en.wikipedia.org/wiki/Quantum_annealing">quantum annealing</a> architecture. We will continue to collaborate with D-Wave scientists and to experiment with the “Vesuvius” machine at NASA Ames, which will be upgraded to a 1000-qubit “Washington” processor.]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-research/hardware-initiative-at-quantum-artificial-intelligence-lab/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Teaching machines to read between the lines (and a new corpus with entity salience annotations)</title>
		<link>https://googledata.org/google-research/teaching-machines-to-read-between-the-lines-and-a-new-corpus-with-entity-salience-annotations/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=teaching-machines-to-read-between-the-lines-and-a-new-corpus-with-entity-salience-annotations</link>
		<comments>https://googledata.org/google-research/teaching-machines-to-read-between-the-lines-and-a-new-corpus-with-entity-salience-annotations/#comments</comments>
		<pubDate>Mon, 25 Aug 2014 16:00:00 +0000</pubDate>
		<dc:creator><![CDATA[Research @ Google]]></dc:creator>
				<category><![CDATA[Google Research]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=88b92eab96d33d9276eefa6adb31e208</guid>
		<description><![CDATA[<span>Posted by Dan Gillick, Research Scientist, and Dave Orr, Product Manager</span><br /><br />Language understanding systems are largely trained on freely available data, such as the <a href="http://www.cis.upenn.edu/~treebank/">Penn Treebank</a>, perhaps the most widely used linguistic resource ever created. We have previously released <a href="http://googleresearch.blogspot.com/2013/12/free-language-lessons-for-computers.html">lots of linguistic data</a> ourselves, to contribute to the language understanding community as well as encourage further research into these areas. <br /><br />Now, we&#8217;re releasing a new dataset, based on another great resource: the <a href="http://open.blogs.nytimes.com/2009/01/12/fatten-up-your-corpus/?_php=true&#38;_type=blogs&#38;_r=0">New York Times Annotated Corpus</a>, a set of 1.8 million articles spanning 20 years. 600,000 articles in the NYTimes Corpus have hand-written summaries, and more than 1.5 million of them are tagged with people, places, and organizations mentioned in the article. The Times encourages <a href="http://open.blogs.nytimes.com/2007/10/23/messing-around-with-metadata/">use of the metadata</a> for all kinds of things, and has set up <a href="https://groups.google.com/forum/#!forum/nytnlp">a forum</a> to discuss related research.<br /><br />We recently used this corpus to study a topic called &#8220;entity salience&#8221;. To understand salience, consider: how do you know what a news article or a web page is about? Reading comes pretty easily to people -- we can quickly identify the places or things or people most central to a piece of text. But how might we teach a machine to perform this same task? This problem is a key step towards being able to read and understand an article.<br /><br />One way to approach the problem is to look for words that appear more often than their ordinary rates. For example, if you see the word &#8220;coach&#8221; 5 times in a <a href="http://www.nytimes.com/2014/08/06/sports/basketball/spurs-hire-becky-hammon-as-nbas-first-full-time-female-coach.html">581 word article</a>, and compare that to the usual frequency of &#8220;coach&#8221; -- <a href="https://books.google.com/ngrams/graph?content=coach&#38;year_start=1800&#38;year_end=2008&#38;corpus=15&#38;smoothing=3&#38;share=&#38;direct_url=t1%3B%2Ccoach%3B%2Cc0">more like 5 in 330,000 words</a> -- you have reason to suspect the article has something to do with coaching. The term &#8220;basketball&#8221; is even more extreme, appearing 150,000 times more often than usual. This is the idea of the famous <a href="http://en.wikipedia.org/wiki/Tf%E2%80%93idf">TFIDF</a>, long used to index web pages.<br /><div></div><table align="center" cellpadding="0" cellspacing="0"><tbody><tr><td><a href="http://upload.wikimedia.org/wikipedia/commons/e/ed/Becky_Hammon.jpg"><img border="0" src="http://upload.wikimedia.org/wikipedia/commons/e/ed/Becky_Hammon.jpg" height="640" width="456"></a></td></tr><tr><td>Congratulations to <a href="http://en.wikipedia.org/wiki/Becky_Hammon">Becky Hammon</a>, first female NBA coach! Image via Wikipedia.</td></tr></tbody></table>Term ratios are a start, but we can do better. Search indexing these days is much more involved, using for example the distances between pairs of words on a page to capture their relatedness. Now, with the <a href="http://www.google.com/insidesearch/features/search/knowledge.html">Knowledge Graph</a>, we are beginning to think in terms of entities and relations rather than keywords. 
&#8220;Basketball&#8221; is more than a string of characters; it is a reference to something in the real world, which we already know quite a bit about.<br /><br />Background information about entities ought to help us decide which of them are most salient. After all, an article&#8217;s author assumes her readers have some general understanding of the world, and probably a bit about sports too. Using background knowledge, we might be able to infer that the WNBA is a salient entity in the Becky Hammon article even though it only appears once.<br /><br />To encourage research on leveraging background information, we are releasing a large dataset of annotations to accompany the New York Times Annotated Corpus, including resolved <a href="http://wiki.freebase.com/wiki/Machine_ID">Freebase entity IDs</a> and labels indicating which entities are salient. The salience annotations are determined by automatically aligning entities in the document with entities in accompanying human-written abstracts. Details of the salience annotations and some baseline results are described in our recent paper: <a href="http://static.googleusercontent.com/media/research.google.com/en/us/pubs/archive/42235.pdf">A New Entity Salience Task with Millions of Training Examples</a> (Jesse Dunietz and Dan Gillick).<br /><br />Since our entity resolver works better for named entities like WNBA than for nominals like &#8220;coach&#8221; (this is the notoriously difficult <a href="http://en.wikipedia.org/wiki/Word-sense_disambiguation">word sense disambiguation</a> problem, which we&#8217;ve <a href="http://googleresearch.blogspot.com/2013/03/learning-from-big-data-40-million.html">previously touched on</a>), the annotations are limited to names. <br /><br />Below is sample output for a document. The first line contains the NYT document ID and the headline; each subsequent line includes an entity index, an indicator for salience, the mention count for this entity in the document as determined by our coreference system, the text of the first mention of the entity, the byte offsets (start and end) for the first mention of the entity, and the resolved Freebase MID.<br /><div><a href="http://4.bp.blogspot.com/-smWb9hkBo4k/U_Z0M9uJxeI/AAAAAAAAAVE/AHx1Zq4fKws/s1600/Screen%2BShot%2B2014-08-21%2Bat%2B3.12.53%2BPM.png"><img border="0" src="http://4.bp.blogspot.com/-smWb9hkBo4k/U_Z0M9uJxeI/AAAAAAAAAVE/AHx1Zq4fKws/s1600/Screen%2BShot%2B2014-08-21%2Bat%2B3.12.53%2BPM.png" height="170" width="640"></a></div>Features like mention count and document positioning give reasonable salience predictions. But because they only describe what&#8217;s explicitly in the document, we expect that a system using background information to expose what&#8217;s implicit could give better results.<br /><div></div><br />Download the data directly <a href="https://drive.google.com/open?id=0B6s2fODjrU7Kc2JtTTNVWlY1c1U&#38;authuser=1">from Google Drive</a>, or visit the project home page with more information at <a href="https://code.google.com/p/nyt-salience/">our Google Code site</a>. We look forward to seeing what you come up with!]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by Dan Gillick, Research Scientist, and Dave Orr, Product Manager</span><br /><br />Language understanding systems are largely trained on freely available data, such as the <a href="http://www.cis.upenn.edu/~treebank/">Penn Treebank</a>, perhaps the most widely used linguistic resource ever created. We have previously released <a href="http://googleresearch.blogspot.com/2013/12/free-language-lessons-for-computers.html">lots of linguistic data</a> ourselves, to contribute to the language understanding community as well as encourage further research into these areas. <br /><br />Now, we’re releasing a new dataset, based on another great resource: the <a href="http://open.blogs.nytimes.com/2009/01/12/fatten-up-your-corpus/?_php=true&amp;_type=blogs&amp;_r=0">New York Times Annotated Corpus</a>, a set of 1.8 million articles spanning 20 years. 600,000 articles in the NYTimes Corpus have hand-written summaries, and more than 1.5 million of them are tagged with people, places, and organizations mentioned in the article. The Times encourages <a href="http://open.blogs.nytimes.com/2007/10/23/messing-around-with-metadata/">use of the metadata</a> for all kinds of things, and has set up <a href="https://groups.google.com/forum/#!forum/nytnlp">a forum</a> to discuss related research.<br /><br />We recently used this corpus to study a topic called “entity salience”. To understand salience, consider: how do you know what a news article or a web page is about? Reading comes pretty easily to people -- we can quickly identify the places or things or people most central to a piece of text. But how might we teach a machine to perform this same task? This problem is a key step towards being able to read and understand an article.<br /><br />One way to approach the problem is to look for words that appear more often than their ordinary rates. For example, if you see the word “coach” 5 times in a <a href="http://www.nytimes.com/2014/08/06/sports/basketball/spurs-hire-becky-hammon-as-nbas-first-full-time-female-coach.html">581 word article</a>, and compare that to the usual frequency of “coach” -- <a href="https://books.google.com/ngrams/graph?content=coach&amp;year_start=1800&amp;year_end=2008&amp;corpus=15&amp;smoothing=3&amp;share=&amp;direct_url=t1%3B%2Ccoach%3B%2Cc0">more like 5 in 330,000 words</a> -- you have reason to suspect the article has something to do with coaching. The term “basketball” is even more extreme, appearing 150,000 times more often than usual. This is the idea of the famous <a href="http://en.wikipedia.org/wiki/Tf%E2%80%93idf">TFIDF</a>, long used to index web pages.<br /><div class="separator" style="clear: both; text-align: center;"></div><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://upload.wikimedia.org/wikipedia/commons/e/ed/Becky_Hammon.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="http://upload.wikimedia.org/wikipedia/commons/e/ed/Becky_Hammon.jpg" height="640" width="456" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Congratulations to <a href="http://en.wikipedia.org/wiki/Becky_Hammon">Becky Hammon</a>, first female NBA coach! Image via Wikipedia.</td></tr></tbody></table>Term ratios are a start, but we can do better. 
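<br /><br />Before moving on, the term-ratio arithmetic above can be made concrete (a back-of-the-envelope sketch using the counts quoted in this post, not production indexing code):<br /><pre>
# Back-of-the-envelope term-ratio computation.
def term_ratio(count_in_doc, doc_length, background_rate):
    """How many times more frequent a term is in this document
    than in ordinary text."""
    return (count_in_doc / float(doc_length)) / background_rate

# "coach": 5 occurrences in a 581-word article vs. roughly 5 per
# 330,000 words in ordinary text -> about 570x the usual rate.
print(term_ratio(5, 581, 5.0 / 330000))
</pre>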
Search indexing these days is much more involved, using for example the distances between pairs of words on a page to capture their relatedness. Now, with the <a href="http://www.google.com/insidesearch/features/search/knowledge.html">Knowledge Graph</a>, we are beginning to think in terms of entities and relations rather than keywords. “Basketball” is more than a string of characters; it is a reference to something in the real world, which we already know quite a bit about.<br /><br />Background information about entities ought to help us decide which of them are most salient. After all, an article’s author assumes her readers have some general understanding of the world, and probably a bit about sports too. Using background knowledge, we might be able to infer that the WNBA is a salient entity in the Becky Hammon article even though it only appears once.<br /><br />To encourage research on leveraging background information, we are releasing a large dataset of annotations to accompany the New York Times Annotated Corpus, including resolved <a href="http://wiki.freebase.com/wiki/Machine_ID">Freebase entity IDs</a> and labels indicating which entities are salient. The salience annotations are determined by automatically aligning entities in the document with entities in accompanying human-written abstracts. Details of the salience annotations and some baseline results are described in our recent paper: <a href="http://static.googleusercontent.com/media/research.google.com/en/us/pubs/archive/42235.pdf">A New Entity Salience Task with Millions of Training Examples</a> (Jesse Dunietz and Dan Gillick).<br /><br />Since our entity resolver works better for named entities like WNBA than for nominals like “coach” (this is the notoriously difficult <a href="http://en.wikipedia.org/wiki/Word-sense_disambiguation">word sense disambiguation</a> problem, which we’ve <a href="http://googleresearch.blogspot.com/2013/03/learning-from-big-data-40-million.html">previously touched on</a>), the annotations are limited to names. <br /><br />Below is sample output for a document. The first line contains the NYT document ID and the headline; each subsequent line includes an entity index, an indicator for salience, the mention count for this entity in the document as determined by our coreference system, the text of the first mention of the entity, the byte offsets (start and end) for the first mention of the entity, and the resolved Freebase MID.<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-smWb9hkBo4k/U_Z0M9uJxeI/AAAAAAAAAVE/AHx1Zq4fKws/s1600/Screen%2BShot%2B2014-08-21%2Bat%2B3.12.53%2BPM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-smWb9hkBo4k/U_Z0M9uJxeI/AAAAAAAAAVE/AHx1Zq4fKws/s1600/Screen%2BShot%2B2014-08-21%2Bat%2B3.12.53%2BPM.png" height="170" width="640" /></a></div>Features like mention count and document positioning give reasonable salience predictions. But because they only describe what’s explicitly in the document, we expect that a system using background information to expose what’s implicit could give better results.<br /><div class="separator" style="clear: both; text-align: center;"></div><br />Download the data directly <a href="https://drive.google.com/open?id=0B6s2fODjrU7Kc2JtTTNVWlY1c1U&amp;authuser=1">from Google Drive</a>, or visit the project home page with more information at <a href="https://code.google.com/p/nyt-salience/">our Google Code site</a>. 
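<br /><br />For readers who want to dig into the records described above, here is a minimal parsing sketch. The tab-separated, one-entity-per-line layout is our assumption for illustration; consult the project page for the authoritative format.<br /><pre>
# Sketch of reading one document's annotation records, assuming
# tab-separated fields in the order described in this post.
def parse_document(lines):
    doc_id, headline = lines[0].split("\t", 1)
    entities = []
    for line in lines[1:]:
        index, salient, mentions, mention_text, start, end, mid = \
            line.split("\t")
        entities.append({
            "index": int(index),
            "salient": salient == "1",
            "mention_count": int(mentions),
            "first_mention": mention_text,
            "byte_offsets": (int(start), int(end)),
            "freebase_mid": mid,
        })
    return doc_id, headline, entities
</pre>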
We look forward to seeing what you come up with!]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-research/teaching-machines-to-read-between-the-lines-and-a-new-corpus-with-entity-salience-annotations/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Google Research Awards: Summer 2014</title>
		<link>https://googledata.org/google-research/google-research-awards-summer-2014/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=google-research-awards-summer-2014</link>
		<comments>https://googledata.org/google-research/google-research-awards-summer-2014/#comments</comments>
		<pubDate>Wed, 20 Aug 2014 16:00:00 +0000</pubDate>
		<dc:creator><![CDATA[Research @ Google]]></dc:creator>
				<category><![CDATA[Google Research]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=16ae91984428d73c0032a8c30f23daff</guid>
		<description><![CDATA[<span>posted by Maggie Johnson, Director of Education and University Relations</span><br /><br />We have just completed another round of the <a href="http://research.google.com/university/relations/research_awards.html">Google Research Awards</a>, our biannual open call for proposals on computer science-related topics including systems, machine perception, structured data, robotics, and mobile. Our grants cover tuition for a graduate student and provide both faculty and students the opportunity to work directly with Google researchers and engineers.<br /><br />This round we received 722 proposals, an increase of 5% over last round, covering 44 countries on 6 continents. After expert reviews and committee discussions, we decided to fund 110 projects. The subject areas that received the highest level of support were systems, human-computer interaction, mobile, and machine perception, with 22% of the funding awarded to universities outside the U.S.<br /><br />We introduced three new topics this round, representing important new research areas for Google. Computational neuroscience looks at the information processing properties of the brain and nervous system. One funded proposal will study scene recognition in this context. A second new area is physical interactions with devices. With the introduction of new paradigms such as <a href="http://www.google.com/glass/start/">Google Glass</a>, we can study how such devices expand our processing capabilities. The third new area is online learning at scale, which covers topics such as teacher-student interaction at scale, data-driven adaptive learning, and innovative assessment methods. <br /><br />Congratulations to the well-deserving <a href="http://services.google.com/fh/files/blogs/googlefras-aug2014.pdf">recipients of this round&#8217;s awards</a>. If you are interested in applying for the next round (deadline is October 15), please visit <a href="http://research.google.com/university/relations/research_awards.html">our website</a> for more information.]]></description>
				<content:encoded><![CDATA[<span class="byline-author">posted by Maggie Johnson, Director of Education and University Relations</span><br /><br />We have just completed another round of the <a href="http://research.google.com/university/relations/research_awards.html">Google Research Awards</a>, our biannual open call for proposals on computer science-related topics including systems, machine perception, structured data, robotics, and mobile. Our grants cover tuition for a graduate student and provide both faculty and students the opportunity to work directly with Google researchers and engineers.<br /><br />This round we received 722 proposals, an increase of 5% over last round, covering 44 countries on 6 continents. After expert reviews and committee discussions, we decided to fund 110 projects. The subject areas that received the highest level of support were systems, human-computer interaction, mobile, and machine perception, with 22% of the funding awarded to universities outside the U.S.<br /><br />We introduced three new topics this round, representing important new research areas for Google. Computational neuroscience looks at the information processing properties of the brain and nervous system. One funded proposal will study scene recognition in this context. A second new area is physical interactions with devices. With the introduction of new paradigms such as <a href="http://www.google.com/glass/start/">Google Glass</a>, we can study how such devices expand our processing capabilities. The third new area is online learning at scale, which covers topics such as teacher-student interaction at scale, data-driven adaptive learning, and innovative assessment methods. <br /><br />Congratulations to the well-deserving <a href="http://services.google.com/fh/files/blogs/googlefras-aug2014.pdf">recipients of this round’s awards</a>. If you are interested in applying for the next round (deadline is October 15), please visit <a href="http://research.google.com/university/relations/research_awards.html">our website</a> for more information.]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-research/google-research-awards-summer-2014/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Summer Games: Learn to Program</title>
		<link>https://googledata.org/google-research/summer-games-learn-to-program/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=summer-games-learn-to-program</link>
		<comments>https://googledata.org/google-research/summer-games-learn-to-program/#comments</comments>
		<pubDate>Mon, 11 Aug 2014 16:00:00 +0000</pubDate>
		<dc:creator><![CDATA[Research @ Google]]></dc:creator>
				<category><![CDATA[Google Research]]></category>
		<category><![CDATA[education]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=436d4a47e2da2ac42e9631fb049c231d</guid>
		<description><![CDATA[<span>Posted by Jennifer Vaden Barth, Executive Assistant</span>  <br /><br />Looking for ways to engage your kids in constructive, meaningful learning? We&#8217;ve just launched <a href="https://blockly-games.appspot.com/">Blockly Games</a>, our next extension of Blockly, a web-based graphical programming environment. As part of the generation of new programming environments that provide a more accessible introduction to coding, Blockly Games allows users to create and run programs by arranging blocks with a simple click, drag and drop. <br /><div><a href="http://2.bp.blogspot.com/-zul24LM3bIU/U-jedUgaB8I/AAAAAAAAAUs/sD6FQUtAs8o/s1600/Screen+Shot+2014-08-11+at+8.16.25+AM.png"><img border="0" src="http://2.bp.blogspot.com/-zul24LM3bIU/U-jedUgaB8I/AAAAAAAAAUs/sD6FQUtAs8o/s1600/Screen+Shot+2014-08-11+at+8.16.25+AM.png" height="236" width="400"></a></div>Blockly Games requires little or no typing, making it easier for young or novice programmers to learn core coding principles in an intuitive way. By minimizing the use of syntax, Blockly Games lets users focus on the logic and concepts used by computer scientists, progressing at their own pace as they venture through mazes and more advanced arenas.<br /><br />Blockly was featured during the 2013 <a href="http://csedweek.org/learn">Computer Science Education Week</a>, where people of all ages tried programming for the first time. Blockly is universally accessible, with translations for a number of languages, including German, Vietnamese, Russian and even <a href="http://googleresearch.blogspot.com/2014/04/making-blockly-universally-accessible.html">Klingon</a>.<br /><br />We encourage you and your child to explore Blockly Games, where novice programmers of any age begin to learn together. With Blockly Games, the whole family can learn and master basic computer science concepts.]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by Jennifer Vaden Barth, Executive Assistant</span>  <br /><br />Looking for ways to engage your kids in constructive, meaningful learning? We’ve just launched <a href="https://blockly-games.appspot.com/">Blockly Games</a>, our next extension of Blockly, a web-based graphical programming environment. As part of the generation of new programming environments that provide a more accessible introduction to coding, Blockly Games allows users to create and run programs by arranging blocks with a simple click, drag and drop. <br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-zul24LM3bIU/U-jedUgaB8I/AAAAAAAAAUs/sD6FQUtAs8o/s1600/Screen+Shot+2014-08-11+at+8.16.25+AM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-zul24LM3bIU/U-jedUgaB8I/AAAAAAAAAUs/sD6FQUtAs8o/s1600/Screen+Shot+2014-08-11+at+8.16.25+AM.png" height="236" width="400" /></a></div>Blockly Games requires little or no typing, making it easier for young or novice programmers to learn core coding principles in an intuitive way. By minimizing the use of syntax, Blockly Games lets users focus on the logic and concepts used by computer scientists, progressing at their own pace as they venture through mazes and more advanced arenas.<br /><br />Blockly was featured during the 2013 <a href="http://csedweek.org/learn">Computer Science Education Week</a>, where people of all ages tried programming for the first time. Blockly is universally accessible, with translations for a number of languages, including German, Vietnamese, Russian and even <a href="http://googleresearch.blogspot.com/2014/04/making-blockly-universally-accessible.html">Klingon</a>.<br /><br />We encourage you and your child to explore Blockly Games, where novice programmers of any age begin to learn together. With Blockly Games, the whole family can learn and master basic computer science concepts.]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-research/summer-games-learn-to-program/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Doing Data Science with coLaboratory</title>
		<link>https://googledata.org/google-research/doing-data-science-with-colaboratory/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=doing-data-science-with-colaboratory</link>
		<comments>https://googledata.org/google-research/doing-data-science-with-colaboratory/#comments</comments>
		<pubDate>Fri, 08 Aug 2014 16:00:00 +0000</pubDate>
		<dc:creator><![CDATA[Research @ Google]]></dc:creator>
				<category><![CDATA[Google Research]]></category>
		<category><![CDATA[open source]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=9739fbef9dc73d7745d55c57bd1e647a</guid>
		<description><![CDATA[Posted by Kayur Patel, Kester Tong, Mark Sandler, and Corinna Cortes, Google Research  Building products and making decisions based on data is at the core of what we do at Google. Increasingly common among fields such as journalism and government, this...]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by Kayur Patel, Kester Tong, Mark Sandler, and Corinna Cortes, Google Research</span>  <br /><br />Building products and making decisions based on data is at the core of what we do at Google. Increasingly common among fields such as journalism and government, this data-driven mindset is changing the way traditionally non-technical organizations do work. In order to bring this approach to even more fields, Google Research is excited to be a partner in the <a href="https://github.com/jupyter/colaboratory">coLaboratory project</a>, a new tool for data science and analysis, designed to make collaborating on data easier.<br /><br />Created by Google Research, <a href="https://sites.google.com/site/matthewturk/">Matthew Turk</a> (creator of the <a href="http://yt-project.org/">yt</a> visualization package), and the <a href="http://ipython.org/">IPython</a>/<a href="http://jupyter.org/">Jupyter</a> development team, coLaboratory merges successful open source products with Google technologies, enabling multiple people to collaborate directly through simultaneous access and analysis of data. This provides a big improvement over ad-hoc workflows involving emailing documents back and forth.  <br /><br />Setting up an environment for collaborative data analysis can be a hurdle, as requirements vary among different machines and operating systems, and installation errors can be cryptic. The <a href="https://chrome.google.com/webstore/detail/colaboratory-notebook/pianggobfjcgeihlmfhfgkfalopndooo">coLaboratory Chrome App</a> addresses this hurdle. A single click installs coLaboratory, IPython, and a large set of popular scientific Python libraries (with more on the way). Furthermore, because we use <a href="http://www.chromium.org/nativeclient/pnacl">Portable Native Client (PNaCl)</a>, coLaboratory runs at native speeds and is secure, allowing new users to start working with IPython faster than ever. <br /><br />In addition to ease of installation, coLaboratory enables collaboration between people with different skill sets. One example of this would be interactions between programmers who write complex logic in code and non-programmers who are more familiar with GUIs. As shown below, a programmer writes code (step 1) and then annotates that code with simple markup to create an interactive form (step 2). The programmer can then hide the complexity of the code to show only the form (step 3), which allows a non-programmer to re-run the code by changing the slider and dropdowns in the form (step 4). This interaction allows programmers to write complex logic in code and allows non-programmers to manipulate that logic through simple GUI hooks.<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/--eMZrWU1qBc/U-Pl11iskUI/AAAAAAAAAUc/Nh27EuV4D5w/s1600/new+notebook+image.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/--eMZrWU1qBc/U-Pl11iskUI/AAAAAAAAAUc/Nh27EuV4D5w/s1600/new+notebook+image.png" height="372" width="640" /></a></div>For more information about this project, please see our talks on <a href="https://www.youtube.com/watch?v=JDrhn0-r9Eg&amp;feature=youtu.be&amp;t=9m">collaborative data science</a> and <a href="https://www.youtube.com/watch?v=bITvUUyvUAY">zero dependency python</a>. 
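<br /><br />As an analogy to the form workflow described above, here is how a similar code-to-GUI interaction looks with the interact function from today's IPython/Jupyter widget ecosystem (an illustration of the idea, not coLaboratory's own form markup):<br /><pre>
# Code-to-GUI analogy using ipywidgets; not coLaboratory's own
# form markup. Run inside a notebook to see the rendered sliders.
from ipywidgets import interact

def compound_growth(rate=1.0, years=10):
    """The complex logic lives here; non-programmers only see
    the widgets that interact() renders for each argument."""
    value = 100 * (1 + rate / 100.0) ** years
    print("After %d years: %.2f" % (years, value))

# A slider per argument; the function re-runs on every change.
interact(compound_growth, rate=(0.0, 10.0), years=(1, 30))
</pre>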
In addition to our external partners in the coLaboratory project, we would like to thank everyone at Google who contributed: the Chromium Native Client team, the Google Drive team, the Open Source team, and the Security team.]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-research/doing-data-science-with-colaboratory/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Facilitating Genomics Research with Google Cloud Platform</title>
		<link>https://googledata.org/google-research/facilitating-genomics-research-with-google-cloud-platform/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=facilitating-genomics-research-with-google-cloud-platform</link>
		<comments>https://googledata.org/google-research/facilitating-genomics-research-with-google-cloud-platform/#comments</comments>
		<pubDate>Wed, 30 Jul 2014 16:00:00 +0000</pubDate>
		<dc:creator><![CDATA[Research @ Google]]></dc:creator>
				<category><![CDATA[Google Research]]></category>
		<category><![CDATA[open source]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=0e096bd1947e4605457cd602a46ac723</guid>
		<description><![CDATA[<span>Posted by Paul C. Boutros, Ontario Institute for Cancer Research, Josh Stuart, UC Santa Cruz, Adam Margolin, Oregon Health &#38; Science University; Nicole Deflaux and Jonathan Bingham, Google Cloud Platform and Google Genomics</span><br /><br />Our understanding of the origin and progression of cancer remains in its infancy. However, due to rapid advances in the ability to accurately read and identify (i.e. sequence) the DNA of cancerous cells, the knowledge in this field is growing rapidly. Several <a href="http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2902243/">comprehensive</a> <a href="http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2671642/">sequencing</a> <a href="http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3776390/">studies</a> have shown that alterations of single base pairs within the DNA, known as <a href="http://en.wikipedia.org/wiki/Single_Nucleotide_Polymorphism">Single Nucleotide Variants</a> (SNVs), or duplications, deletions and rearrangements of larger segments of the genome, known as <a href="http://en.wikipedia.org/wiki/Structural_variation">Structural Variations</a> (SVs), are the <a href="https://en.wikipedia.org/wiki/BCR/ABL">primary causes of cancer</a> and can influence what drugs will be effective against an individual tumor.<br /><br />However, one of the major roadblocks hampering progress is the lack of accurate methods for interpreting genome sequence data. Due to the sheer volume of genomics data (the entire genome of just one person produces more than 100 gigabytes of raw data!), the ability to precisely localize a genomic alteration (SNV or SV) and resolve its association with cancer remains a considerable research challenge. Furthermore, preliminary benchmark studies conducted by the International Cancer Genome Consortium (ICGC) and The Cancer Genome Atlas (TCGA) have discovered that different mutation calling software run on the same data can result in detection of different sets of mutations. Clearly, optimization and standardization of mutation detection methods are a prerequisite for realizing personalized medicine applications based on a patient&#8217;s own genome.<br /><br />The ICGC and TCGA are working to address this issue through an open community-based collaborative competition, run in conjunction with leading research institutions: the <a href="http://oicr.on.ca/">Ontario Institute for Cancer Research</a>, <a href="http://www.ucsc.edu/">University of California Santa Cruz</a>, <a href="http://www.sagebase.org/">Sage Bionetworks</a>, <a href="http://www.the-dream-project.org/">IBM-DREAM</a>, and <a href="http://www.ohsu.edu/xd/">Oregon Health and Sciences University</a>. Together, they are running the <a href="https://www.synapse.org/#!Synapse:syn312572/wiki/">DREAM Somatic Mutation Calling Challenge</a>, in which researchers from across the world &#8220;compete&#8221; to find the most accurate SNV and SV detection algorithms. 
By creating a living benchmark for mutation detection, the DREAM Challenge aims to improve standard methods for identifying cancer-associated mutations and rearrangements in tumor and normal samples from <a href="http://en.wikipedia.org/wiki/Whole_genome_sequencing">whole-genome sequencing</a> data.<br /><br />Given Google&#8217;s recent partnership with the <a href="http://googleresearch.blogspot.com/2014/02/google-joins-global-alliance-for.html">Global Alliance for Genomics and Health</a>, we are excited to provide cloud computing resources on Google Cloud Platform for competitors in the DREAM Challenge, enabling scientists who do not have ready access to large local computer clusters to participate with open access to contest data as well as credits that can be used for Google Compute Engine virtual machines. By leveraging the power of cloud technologies for genomics computing, contestants have access to powerful computational resources and a platform that allows the sharing of data. We hope to democratize research, foster the open access of data, and spur collaboration.<br /><br />In addition to the core Google Cloud Platform infrastructure, the Google Genomics team has implemented a <a href="https://developers.google.com/genomics/">simple web-based API</a> to store, process, explore, and share genomic data at scale. We have made the Challenge datasets available through the Google Genomics API. The challenge includes both simulated tumor data for which the correct answers are known and real tumor data for which the correct answers are not known.<br /><table align="center" cellpadding="0" cellspacing="0"><tbody><tr><td><a href="http://3.bp.blogspot.com/-v2MxuA7Zrpk/U9f-Quy89wI/AAAAAAAAATo/an_BBBrf3sg/s1600/Screen+Shot+2014-06-17+at+8.43.13+AM.png"><img border="0" src="http://3.bp.blogspot.com/-v2MxuA7Zrpk/U9f-Quy89wI/AAAAAAAAATo/an_BBBrf3sg/s1600/Screen+Shot+2014-06-17+at+8.43.13+AM.png" height="403" width="640"></a></td></tr><tr><td><a href="http://gabrowse.appspot.com/#backend=GOOGLE&#38;readsetId=CPHG3MzoCRCd7Zmq0KTSjOEB&#38;location=18:66211523">Genomics API Browser</a> showing a particular cancer variant position (highlighted) in dataset <i>in silico #1</i> that was missed by many challenge participants.</td></tr></tbody></table>Although submissions for the simulated data can be scored immediately, the winners on the real tumor data will not immediately be known when the challenge closes. This is a consequence of the fact that current DNA sequencing technology does not provide 100% accurate data, which adds to the complexity of the problem these algorithms are attempting to tackle. Therefore, to identify the winners, researchers must turn to alternative laboratory technologies to verify whether a particular mutation found in the sequencing data is actually (or at least likely to be) true. As such, additional data will be collected after the Challenge is complete in order to determine the winner. The organizers will re-sequence DNA from the cells of the real tumor using an independent sequencing technology (Ion Torrent), specifically examining regions overlapping the positions of the cancer mutations submitted by the contest participants.<br /><br />As an analogy, a "scratched magnifying glass" is used to examine the genome the first time around. The second time around, a "stronger magnifying glass with scratches in different places" is used to look at the specific locations in the genome reported by the challenge participants. 
By combining the data collected through those two different "magnifying glasses" and comparing the result against the cancer mutations submitted by the contest participants, the organizers will determine the winner.<br /><br />We believe we are at the beginning of a transformation in medicine and basic research, driven by advances in genome sequencing and computing at scale. With the DREAM Challenge, we are all excited to help bring researchers around the world together to focus on this particular cancer research problem. To learn more about how to participate in the challenge, <a href="https://www.synapse.org/#!Synapse:syn312572/wiki/60903">register here</a>.]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by Paul C. Boutros, Ontario Institute for Cancer Research, Josh Stuart, UC Santa Cruz, Adam Margolin, Oregon Health &amp; Science University; Nicole Deflaux and Jonathan Bingham, Google Cloud Platform and Google Genomics</span><br /><br />Our understanding of the origin and progression of cancer remains in its infancy. However, due to rapid advances in the ability to accurately read and identify (i.e. sequence) the DNA of cancerous cells, the knowledge in this field is growing rapidly. Several <a href="http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2902243/">comprehensive</a> <a href="http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2671642/">sequencing</a> <a href="http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3776390/">studies</a> have shown that alterations of single base pairs within the DNA, known as <a href="http://en.wikipedia.org/wiki/Single_Nucleotide_Polymorphism">Single Nucleotide Variants</a> (SNVs), or duplications, deletions and rearrangements of larger segments of the genome, known as <a href="http://en.wikipedia.org/wiki/Structural_variation">Structural Variations</a> (SVs), are the <a href="https://en.wikipedia.org/wiki/BCR/ABL">primary causes of cancer</a> and can influence what drugs will be effective against an individual tumor.<br /><br />However, one of the major roadblocks hampering progress is the lack of accurate methods for interpreting genome sequence data. Due to the sheer volume of genomics data (the entire genome of just one person produces more than 100 gigabytes of raw data!), the ability to precisely localize a genomic alteration (SNV or SV) and resolve its association with cancer remains a considerable research challenge. Furthermore, preliminary benchmark studies conducted by the International Cancer Genome Consortium (ICGC) and The Cancer Genome Atlas (TCGA) have discovered that different mutation calling software run on the same data can result in detection of different sets of mutations. Clearly, optimization and standardization of mutation detection methods are a prerequisite for realizing personalized medicine applications based on a patient’s own genome.<br /><br />The ICGC and TCGA are working to address this issue through an open community-based collaborative competition, run in conjunction with leading research institutions: the <a href="http://oicr.on.ca/">Ontario Institute for Cancer Research</a>, <a href="http://www.ucsc.edu/">University of California Santa Cruz</a>, <a href="http://www.sagebase.org/">Sage Bionetworks</a>, <a href="http://www.the-dream-project.org/">IBM-DREAM</a>, and <a href="http://www.ohsu.edu/xd/">Oregon Health and Sciences University</a>. Together, they are running the <a href="https://www.synapse.org/#!Synapse:syn312572/wiki/">DREAM Somatic Mutation Calling Challenge</a>, in which researchers from across the world “compete” to find the most accurate SNV and SV detection algorithms. 
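<br /><br />To see why such a benchmark matters, consider a toy comparison of two callers' outputs, with variants reduced to (chromosome, position, alternate base) tuples. The coordinates below are made up for illustration; real SNV and especially SV comparison is far subtler:<br /><pre>
# Toy illustration of caller disagreement on the same tumor data.
# Coordinates are made up; real variant comparison is far subtler.
calls_a = {("18", 66211523, "T"), ("7", 55259515, "G")}
calls_b = {("18", 66211523, "T"), ("2", 29443695, "A")}

concordant = calls_a & calls_b  # reported by both callers
only_a = calls_a - calls_b      # reported by caller A alone
only_b = calls_b - calls_a      # reported by caller B alone
print(len(concordant), len(only_a), len(only_b))  # 1 1 1
</pre>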
By creating a living benchmark for mutation detection, the DREAM Challenge aims to improve standard methods for identifying cancer-associated mutations and rearrangements in tumor and normal samples from <a href="http://en.wikipedia.org/wiki/Whole_genome_sequencing">whole-genome sequencing</a> data.<br /><br />Given Google’s recent partnership with the <a href="http://googleresearch.blogspot.com/2014/02/google-joins-global-alliance-for.html">Global Alliance for Genomics and Health</a>, we are excited to provide cloud computing resources on Google Cloud Platform for competitors in the DREAM Challenge, enabling scientists who do not have ready access to large local computer clusters to participate, with open access to contest data as well as credits for Google Compute Engine virtual machines. By leveraging the power of cloud technologies for genomics computing, contestants have access to powerful computational resources and a platform for sharing data. We hope to democratize research, foster open access to data, and spur collaboration.<br /><br />In addition to the core Google Cloud Platform infrastructure, the Google Genomics team has implemented a <a href="https://developers.google.com/genomics/">simple web-based API</a> to store, process, explore, and share genomic data at scale. We have made the Challenge datasets available through the Google Genomics API. The challenge includes both simulated tumor data for which the correct answers are known and real tumor data for which the correct answers are not known.
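<br /><br />To give a flavor of what the API looks like in practice, here is a minimal sketch of how a participant might fetch the reads covering one of the challenge variant positions. This is not code from the challenge itself: the v1beta reads/search endpoint, the API-key mechanics, and the response field names are assumptions based on the public Genomics API of the time, while the readset ID and position come from the browser caption below.<br /><pre>
# Minimal sketch (assumptions noted above): fetch reads overlapping a
# variant position in challenge dataset "in silico #1".
import requests

SEARCH_URL = "https://www.googleapis.com/genomics/v1beta/reads/search"
query = {
    "readsetIds": ["CPHG3MzoCRCd7Zmq0KTSjOEB"],  # challenge readset (see caption)
    "sequenceName": "18",                        # chromosome 18
    "sequenceStart": 66211500,
    "sequenceEnd": 66211550,                     # window around position 66211523
}
resp = requests.post(SEARCH_URL, params={"key": "YOUR_API_KEY"}, json=query)
resp.raise_for_status()
for read in resp.json().get("reads", []):
    # Field names here are assumptions about the v1beta read resource.
    print(read.get("name"), read.get("position"), read.get("originalBases"))
</pre>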
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://3.bp.blogspot.com/-v2MxuA7Zrpk/U9f-Quy89wI/AAAAAAAAATo/an_BBBrf3sg/s1600/Screen+Shot+2014-06-17+at+8.43.13+AM.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="http://3.bp.blogspot.com/-v2MxuA7Zrpk/U9f-Quy89wI/AAAAAAAAATo/an_BBBrf3sg/s1600/Screen+Shot+2014-06-17+at+8.43.13+AM.png" height="403" width="640" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;"><a href="http://gabrowse.appspot.com/#backend=GOOGLE&amp;readsetId=CPHG3MzoCRCd7Zmq0KTSjOEB&amp;location=18:66211523">Genomics API Browser</a> showing a particular cancer variant position (highlighted) in dataset <i>in silico #1</i> that was missed by many challenge participants.</td></tr></tbody></table>Although submissions for the simulated data can be scored immediately, the winners on the real tumor data will not immediately be known when the challenge closes. This is because current DNA sequencing technology does not provide 100% accurate data, which adds to the complexity of the problem these algorithms are attempting to tackle. Therefore, to identify the winners, researchers must turn to alternative laboratory technologies to verify whether a particular mutation found in the sequencing data is real (or at least likely to be). As such, additional data will be collected after the Challenge is complete in order to determine the winner. The organizers will re-sequence DNA from the cells of the real tumor using an independent sequencing technology (Ion Torrent), specifically examining regions overlapping the positions of the cancer mutations submitted by the contest participants.<br /><br />As an analogy, a "scratched magnifying glass" is used to examine the genome the first time around. The second time around, a "stronger magnifying glass with scratches in different places" is used to look at the specific locations in the genome reported by the challenge participants. By combining the data collected by those two different "magnifying glasses" and comparing the result against the cancer mutations submitted by the contest participants, the organizers will determine the winner.<br /><br />We believe we are at the beginning of a transformation in medicine and basic research, driven by advances in genome sequencing and computing at scale. With the DREAM Challenge, we are excited to help bring together researchers from around the world to focus on this particular cancer research problem. To learn more about how to participate in the challenge, <a href="https://www.synapse.org/#!Synapse:syn312572/wiki/60903">register here</a>.]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-research/facilitating-genomics-research-with-google-cloud-platform/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Focus Areas for Policy &amp; Standards Research Proposals</title>
		<link>https://googledata.org/google-research/focus-areas-for-policy-standards-research-proposals/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=focus-areas-for-policy-standards-research-proposals</link>
		<comments>https://googledata.org/google-research/focus-areas-for-policy-standards-research-proposals/#comments</comments>
		<pubDate>Thu, 24 Jul 2014 16:00:00 +0000</pubDate>
		<dc:creator><![CDATA[Research @ Google]]></dc:creator>
				<category><![CDATA[Google Research]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=34f7a37d8bb44bfcc4d39b0663c96a28</guid>
		<description><![CDATA[<span>Posted by Vint Cerf, VP &#38; Chief Internet Evangelist</span><br /><br />Twice a year, Google&#8217;s <a href="http://research.google.com/university/relations/research_awards.html">Faculty Research Awards</a> program seeks and reviews proposals in 23 research areas, assigning to each area a group of experienced Googlers who assess and deliberate over which proposals we should and can fund. With each call for proposals, we receive a wide array of research ideas in fields that fall within the realm of Internet policy.<br /><br />We would like to share with you the areas of Internet policy in which we are particularly eager to see progress and to stimulate further research:<br /><ul><li><b>Accessibility:</b> Google is committed to supporting research that generates insights about what helps make technology a usable reality for everyone, regardless of cognitive, physical, sensory, or other form of impairment.</li><li><b>Access:</b> What policies help bring open, robust, competitive and affordable Internet access to everyone in the world? What are the economic and social impacts of improved Internet access? In particular, what are the emerging impacts of gigabit access networks?</li><li><b>Intellectual property (IP) in the digital era:</b> The growth of digital industries has meant that IP law is an increasingly important policy tool governing innovation and economic growth. We would like to better understand how IP legislation can enable new technologies, and what effect different national or regional IP regimes have on innovation, such as the effect of patent litigation on invention, and how copyright exceptions affect the creation of online technologies.</li><li><b>Freedom of Expression:</b> As an advocate of freedom of expression on the Internet, Google is interested in research that produces insights into how discourse and expression in the global online (public) sphere happen, and how stakeholders can best allow freedom of expression, balance it with other rights, and resolve conflicts of interest or disputes.</li><li><b>Internet Governance:</b> The Internet is a universal space that many expect to remain open, free, and borderless. Multiple stakeholders (internet companies, governments and civil society) work together to design the governance practices and institutions to maintain order and innovation in the global Internet ecosystem.  We are interested in supporting top researchers who analyze and contribute insights into which practices and institutional structures work and which don&#8217;t.</li><li><b>Open Standards and Interoperability:</b> Open standards and interoperability of services are at the core of the Internet&#8217;s successful international propagation and usefulness. Google is interested in research that contributes analysis and best practices for standardization and interoperability. Among these topics we see resource management, access control and authorities for the Internet of Things, as well as questions regarding convergence and security. 
Also, cloud computing and storage could benefit from open standards that enable interoperability.</li></ul>Additionally, there are several important research areas like <b>Privacy</b>, <b>Economics and market algorithms</b>, and <b>Security</b>, which have a significant policy component but are dealt with as research topics distinct from policy &#38; standards.<br /><br />Researchers who are interested in applying for a Faculty Research Award can do so twice a year following the instructions laid out on the Google <a href="http://research.google.com/university/relations/research_awards.html">Faculty Research Awards website</a>. Additional information about Internet Policy research support from Google, including the <a href="https://www.google.com/policyfellowship/index.html">Google Policy Fellowship</a> program, can be found in the <a href="http://googlepolicyeurope.blogspot.com/2014/03/apply-now-to-explore-our-digital-policy.html">recent post on the Google Europe Blog</a>.<br /><br />We look forward to your proposals.]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by Vint Cerf, VP &amp; Chief Internet Evangelist</span><br /><br />Twice a year, Google’s <a href="http://research.google.com/university/relations/research_awards.html">Faculty Research Awards</a> program seeks and reviews proposals in 23 research areas, assigning to each area a group of experienced Googlers who assess and deliberate over which proposals we should and can fund. With each call for proposals, we receive a wide array of research ideas in fields that fall within the realm of Internet policy.<br /><br />We would like to share with you the areas of Internet policy in which we are particularly eager to see progress and to stimulate further research:<br /><ul><li><b>Accessibility:</b> Google is committed to supporting research that generates insights about what helps make technology a usable reality for everyone, regardless of cognitive, physical, sensory, or other form of impairment.</li><li><b>Access:</b> What policies help bring open, robust, competitive and affordable Internet access to everyone in the world? What are the economic and social impacts of improved Internet access? In particular, what are the emerging impacts of gigabit access networks?</li><li><b>Intellectual property (IP) in the digital era:</b> The growth of digital industries has meant that IP law is an increasingly important policy tool governing innovation and economic growth. We would like to better understand how IP legislation can enable new technologies, and what effect different national or regional IP regimes have on innovation, such as the effect of patent litigation on invention, and how copyright exceptions affect the creation of online technologies.</li><li><b>Freedom of Expression:</b> As an advocate of freedom of expression on the Internet, Google is interested in research that produces insights into how discourse and expression in the global online (public) sphere happen, and how stakeholders can best allow freedom of expression, balance it with other rights, and resolve conflicts of interest or disputes.</li><li><b>Internet Governance:</b> The Internet is a universal space that many expect to remain open, free, and borderless. Multiple stakeholders (internet companies, governments and civil society) work together to design the governance practices and institutions to maintain order and innovation in the global Internet ecosystem.  We are interested in supporting top researchers who analyze and contribute insights into which practices and institutional structures work and which don’t.</li><li><b>Open Standards and Interoperability:</b> Open standards and interoperability of services are at the core of the Internet’s successful international propagation and usefulness. Google is interested in research that contributes analysis and best practices for standardization and interoperability. Among these topics we see resource management, access control and authorities for the Internet of Things, as well as questions regarding convergence and security. 
Also, cloud computing and storage could benefit from open standards that enable interoperability.</li></ul>Additionally, there are several important research areas like <b>Privacy</b>, <b>Economics and market algorithms</b>, and <b>Security</b>, which have a significant policy component but are dealt with as research topics distinct from policy &amp; standards.<br /><br />Researchers who are interested in applying for a Faculty Research Award can do so twice a year following the instructions laid out on the Google <a href="http://research.google.com/university/relations/research_awards.html">Faculty Research Awards website</a>. Additional information about Internet Policy research support from Google, including the <a href="https://www.google.com/policyfellowship/index.html">Google Policy Fellowship</a> program, can be found in the <a href="http://googlepolicyeurope.blogspot.com/2014/03/apply-now-to-explore-our-digital-policy.html">recent post on the Google Europe Blog</a>.<br /><br />We look forward to your proposals.]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-research/focus-areas-for-policy-standards-research-proposals/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Academics and the Little Box Challenge</title>
		<link>https://googledata.org/google-research/academics-and-the-little-box-challenge/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=academics-and-the-little-box-challenge</link>
		<comments>https://googledata.org/google-research/academics-and-the-little-box-challenge/#comments</comments>
		<pubDate>Tue, 22 Jul 2014 14:00:00 +0000</pubDate>
		<dc:creator><![CDATA[Research @ Google]]></dc:creator>
				<category><![CDATA[Google Research]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=4f22f031e435c156fc921b97772a4569</guid>
		<description><![CDATA[<span>Posted by Maggie Johnson, Director of Education and University Relations</span><br /><br /><div><a href="http://1.bp.blogspot.com/-qV0Cazd_QF8/U82mucXsHFI/AAAAAAAAATY/fzkr4mnJZJg/s1600/HiResLBCGoogleIEEELogo-UR.jpg"><img border="0" src="http://1.bp.blogspot.com/-qV0Cazd_QF8/U82mucXsHFI/AAAAAAAAATY/fzkr4mnJZJg/s1600/HiResLBCGoogleIEEELogo-UR.jpg" height="251" width="400"></a></div><span></span><span></span><br />Think shrink! Min it to win it! Smaller is baller! That's what the <a href="http://www.littleboxchallenge.com/">Little Box Challenge</a> is all about: developing a high power density inverter. It&#8217;s a competition presented by Google and the Institute of Electrical and Electronics Engineers Power Electronics Society (IEEE PELS) -- not only a grand engineering challenge, but your chance to make a big impact on the future of renewables and electricity.<br /><br />With the rise of solar photovoltaic panels, electric vehicles (EV) and large format batteries, we&#8217;ve seen a resurgence in the <a href="http://en.wikipedia.org/wiki/War_of_Currents">over-a-century-long feud</a> between Thomas Edison&#8217;s direct current (DC) and Nikola Tesla&#8217;s alternating current (AC). The electric grid and most higher power household and commercial devices use AC; batteries, photovoltaics, and electric vehicles work in DC. So the power electronics that convert between the two -- rectifiers (AC-&#62;DC), and inverters (DC-&#62;AC) -- are also gaining increased prominence, as well as the DC/DC and AC/AC converters that switch between different voltages or frequencies.<br /><br />While different flavors of these devices have been around for well over a century, some of them are starting to show their age and limitations versus newer technologies. For example, conventional string inverters have power densities around 0.5-3 Watts/Inch<sup>3</sup>, and microinverters around 5 Watts/Inch<sup>3</sup> -- but lithium ion batteries can now get 4-10 Watt Hours/Inch<sup>3</sup>. So for a 1-2 hour battery pack, your inverter could end up being bigger than your battery -- a lot to carry around.<br /><br />Some recent advances may change what&#8217;s possible in power electronics. For example, <a href="http://en.wikipedia.org/wiki/Wide-bandgap_semiconductor">Wide-bandgap</a> (WBG) semiconductors -- such as gallium-nitride (GaN) and silicon-carbide (SiC) -- not only enable higher power densities than conventional silicon-based devices do, but can also convert between DC and AC at higher temperatures, using higher switching frequencies, and with greater efficiency.<br /><br />But even WBG materials and other new technologies for power electronics run into limits on the power density of inverters. Photovoltaic power and batteries suffer when they see oscillations on their power output and thus require some form of energy storage -- electrolytic capacitors store that energy and bridge the power differential between the DC input and the AC output, but that makes the devices much larger. Household and consumer devices also need to add filters to prevent electromagnetic interference, so that&#8217;s even more bulk.<br /><br />When it comes to shrinking these devices, inverters may have the most potential. And because inverters are so common in household applications, we hope The Little Box Challenge may lead to improvements not only in power density, but also in reliability, efficiency, safety, and cost. 
Furthermore, it is our hope that some of these advances can also improve the other types of power electronics listed above. If these devices can be made very small, reliable and inexpensive, we could see all kinds of useful applications in the electric grid, consumer devices and beyond, maybe including some we have yet to imagine.<br /><br />To recognize the role academics have played in pushing the frontier of new technologies, Google has taken a couple of special steps to help them participate:<br /><br /><ul><li>Research at Google will provide unrestricted gifts to academics pursuing the prize. This funding can be used for research equipment and to support students. Visit the <a href="https://research.google.com/university/relations/littlebox.html">Little Box Challenge awards for academics</a> page for more info -- <b>proposals are due September 30, 2014.</b></li></ul><ul><li>Academics often have trouble getting the latest technology from device manufacturers to tinker on. So Google has reached out to a number of WBG manufacturers who&#8217;ve put up dedicated pages detailing their devices. Check out the <a href="http://www.littleboxchallenge.com/">Little Box Challenge</a> site to get started.</li></ul><br />We hope you&#8217;ll consider entering, and please tell your colleagues, professors, students and dreamers -- you can print and post <a href="http://storage.googleapis.com/lbc/Little_Box_Challenge_Posters.pdf">these posters</a> on your campus to spread the word.]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by Maggie Johnson, Director of Education and University Relations</span><br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-qV0Cazd_QF8/U82mucXsHFI/AAAAAAAAATY/fzkr4mnJZJg/s1600/HiResLBCGoogleIEEELogo-UR.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-qV0Cazd_QF8/U82mucXsHFI/AAAAAAAAATY/fzkr4mnJZJg/s1600/HiResLBCGoogleIEEELogo-UR.jpg" height="251" width="400" /></a></div><br />Think shrink! Min it to win it! Smaller is baller! That's what the <a href="http://www.littleboxchallenge.com/">Little Box Challenge</a> is all about: developing a high power density inverter. It’s a competition presented by Google and the Institute of Electrical and Electronics Engineers Power Electronics Society (IEEE PELS) -- not only a grand engineering challenge, but your chance to make a big impact on the future of renewables and electricity.<br /><br />With the rise of solar photovoltaic panels, electric vehicles (EV) and large format batteries, we’ve seen a resurgence in the <a href="http://en.wikipedia.org/wiki/War_of_Currents">over-a-century-long feud</a> between Thomas Edison’s direct current (DC) and Nikola Tesla’s alternating current (AC). The electric grid and most higher power household and commercial devices use AC; batteries, photovoltaics, and electric vehicles work in DC. So the power electronics that convert between the two -- rectifiers (AC-&gt;DC), and inverters (DC-&gt;AC) -- are also gaining increased prominence, as well as the DC/DC and AC/AC converters that switch between different voltages or frequencies.<br /><br />While different flavors of these devices have been around for well over a century, some of them are starting to show their age and limitations versus newer technologies. For example, conventional string inverters have power densities around 0.5-3 Watts/Inch<sup>3</sup>, and microinverters around 5 Watts/Inch<sup>3</sup> -- but lithium ion batteries can now get 4-10 Watt Hours/Inch<sup>3</sup>. So for a 1-2 hour battery pack, your inverter could end up being bigger than your battery -- a lot to carry around.
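<br /><br />To make that concrete, here is a back-of-the-envelope comparison using the densities quoted above (the 2 kW, one-hour system is a hypothetical example, not from the contest rules):<br /><pre>
# Rough arithmetic only: box sizes for a hypothetical 2 kW system with
# one hour of storage, using the densities quoted above.
inverter_power_w = 2000.0     # 2 kW inverter
battery_energy_wh = 2000.0    # 2 kWh, i.e. one hour at 2 kW

inverter_volume_in3 = inverter_power_w / 5.0   # ~5 W/in^3 microinverter: 400 in^3
battery_volume_in3 = battery_energy_wh / 7.0   # 7 Wh/in^3 mid-range Li-ion: ~286 in^3

print(inverter_volume_in3, battery_volume_in3) # the inverter is the bigger box
</pre>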
<br /><br />Some recent advances may change what’s possible in power electronics. For example, <a href="http://en.wikipedia.org/wiki/Wide-bandgap_semiconductor">Wide-bandgap</a> (WBG) semiconductors -- such as gallium-nitride (GaN) and silicon-carbide (SiC) -- not only enable higher power densities than conventional silicon-based devices do, but can also convert between DC and AC at higher temperatures, using higher switching frequencies, and with greater efficiency.<br /><br />But even WBG materials and other new technologies for power electronics run into limits on the power density of inverters. Photovoltaic power and batteries suffer when they see oscillations in their power output and thus require some form of energy storage -- electrolytic capacitors store that energy and bridge the power differential between the DC input and the AC output, but that makes the devices much larger. Household and consumer devices also need to add filters to prevent electromagnetic interference, so that’s even more bulk.<br /><br />When it comes to shrinking these devices, inverters may have the most potential. And because inverters are so common in household applications, we hope The Little Box Challenge may lead to improvements not only in power density, but also in reliability, efficiency, safety, and cost. Furthermore, it is our hope that some of these advances can also improve the other types of power electronics listed above. If these devices can be made very small, reliable and inexpensive, we could see all kinds of useful applications in the electric grid, consumer devices and beyond, maybe including some we have yet to imagine.<br /><br />To recognize the role academics have played in pushing the frontier of new technologies, Google has taken a couple of special steps to help them participate:<br /><br /><ul><li>Research at Google will provide unrestricted gifts to academics pursuing the prize. This funding can be used for research equipment and to support students. Visit the <a href="https://research.google.com/university/relations/littlebox.html">Little Box Challenge awards for academics</a> page for more info -- <b>proposals are due September 30, 2014.</b></li></ul><ul><li>Academics often have trouble getting the latest technology from device manufacturers to tinker on. So Google has reached out to a number of WBG manufacturers who’ve put up dedicated pages detailing their devices. Check out the <a href="http://www.littleboxchallenge.com/">Little Box Challenge</a> site to get started.</li></ul><br />We hope you’ll consider entering, and please tell your colleagues, professors, students and dreamers -- you can print and post <a href="http://storage.googleapis.com/lbc/Little_Box_Challenge_Posters.pdf">these posters</a> on your campus to spread the word.]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-research/academics-and-the-little-box-challenge/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Simple is better &#8211; Making your web forms easy to use pays off</title>
		<link>https://googledata.org/google-research/simple-is-better-making-your-web-forms-easy-to-use-pays-off/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=simple-is-better-making-your-web-forms-easy-to-use-pays-off</link>
		<comments>https://googledata.org/google-research/simple-is-better-making-your-web-forms-easy-to-use-pays-off/#comments</comments>
		<pubDate>Mon, 14 Jul 2014 17:00:00 +0000</pubDate>
		<dc:creator><![CDATA[Research @ Google]]></dc:creator>
				<category><![CDATA[Google Research]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=faffd714ee6f6b69d75010d41f3c76df</guid>
		<description><![CDATA[<br /><span>Posted by Javier Bargas-Avila and Mirjam Seckler, User Experience Research at Google</span>  <br /><br />Imagine yourself filling out a long and cumbersome form on a website to register for a service. After several minutes of filling out fields, coming up with a password, and handling captchas, you click the submit button, only to see your form filled with red error messages. Suddenly the &#8220;close tab&#8221; button seems much more tempting than before. <br /><br />Despite the rapid evolution of the Internet, web forms, with their limited and unilateral way of interaction, remain one of the core barriers between users and website owners. Any kind of obstacle or difficulty in filling in online forms can lead to increased frustration for the user, resulting in drop-outs and information loss.<br /><br />In 2010, a set of <a href="http://sciyo.com/articles/show/title/simple-but-crucial-user-interfaces-in-the-world-wide-web-introducing-20-guidelines-for-usable-web-fo">20 guidelines to optimize web forms</a> was published by researchers from the University of Basel in Switzerland, including best practices aimed at improving web forms and reducing frustration, errors and drop-outs. For instance, guideline no. 13 states that if answers are required in a specific format, the imposed rule should be communicated in advance; and no. 15 states that forms should never clear already completed fields after an error occurs. <br /><br />To investigate the impact of applying these rules, we conducted a study and presented our results at <a href="http://chi2014.acm.org/">CHI 2014</a>:&#160;<a href="http://dl.acm.org/citation.cfm?id=2557265">Designing usable web forms: empirical evaluation of web form improvement guidelines</a>. In the study, we examined a sample of high traffic online forms and rated them based on how well they followed the form guidelines outlined by the 2010 publication. We then selected three different online forms of varying quality (low, medium and high), and improved them by applying the guidelines, with the high quality form needing less modification than the medium and low quality forms. We then tested both the original and improved forms extensively with 65 participants in a controlled lab environment.<br /><br />In our study, the modified forms showed significant improvements over the original forms: less time needed to complete a form, more successful first-trial submissions and higher user satisfaction. As expected, the impact was highest when the original form was of low quality, but even high quality forms showed improved metrics.<br /><br />Furthermore, user interviews with participants in the study revealed which guidelines were most impactful in improving the forms:<br /><br /><ul><li>Format specifications (e.g., requiring a minimum password length) should be stated in the form, prior to submission. The application of this guideline had a large positive impact on user performance and subjective user ratings, and was also mentioned frequently in user interviews.</li><li>Error messages must be placed next to the erroneous field and designed so that users are easily able to fix the problem. 
Doing this reduced form-filling time and increased subjective ratings.</li><li>Users most frequently mentioned that being able to tell apart optional and mandatory fields was key.</li></ul><br /><table cellpadding="0" cellspacing="0"><tbody><tr><td><a href="http://4.bp.blogspot.com/-cOjbbaM43Tc/U8QA6Srnj6I/AAAAAAAAASQ/uuKzFQP4atI/s1600/Example1.jpg"><img border="0" src="http://4.bp.blogspot.com/-cOjbbaM43Tc/U8QA6Srnj6I/AAAAAAAAASQ/uuKzFQP4atI/s1600/Example1.jpg" height="130" width="400"></a></td></tr><tr><td>Example Guideline: State format specification in advance</td></tr></tbody></table><table align="center" cellpadding="0" cellspacing="0"><tbody><tr><td><a href="http://4.bp.blogspot.com/-V0EVzaThJvg/U8QA6JUbTfI/AAAAAAAAASU/Q09ovzV5Iko/s1600/Example2.jpg"><img border="0" src="http://4.bp.blogspot.com/-V0EVzaThJvg/U8QA6JUbTfI/AAAAAAAAASU/Q09ovzV5Iko/s1600/Example2.jpg" height="130" width="400"></a></td></tr><tr><td>Example Guideline: Place error messages next to erroneous fields</td></tr></tbody></table><table cellpadding="0" cellspacing="0"><tbody><tr><td><a href="http://3.bp.blogspot.com/-HiX5RNGO-gI/U8QA6JlG1oI/AAAAAAAAASc/wtke4EzDonc/s1600/Example3.jpg"><img border="0" src="http://3.bp.blogspot.com/-HiX5RNGO-gI/U8QA6JlG1oI/AAAAAAAAASc/wtke4EzDonc/s1600/Example3.jpg" height="296" width="320"></a></td></tr><tr><td>Example Guideline: Distinguish optional and mandatory fields</td></tr></tbody></table>Putting field labels above rather than adjacent to the fields in the form also led to improvements in the way users scanned the form. Using&#160;<a href="http://en.wikipedia.org/wiki/Eye_tracking">eye-tracking technology</a>, our study shows that users needed fewer&#160;<a href="http://en.wikipedia.org/wiki/Fixation_(visual)">fixations</a>, less fixation time and fewer&#160;<a href="http://en.wikipedia.org/wiki/Saccade">saccades</a>&#160;before submitting the form for the first time.<br /><br /><table align="center" cellpadding="0" cellspacing="0"><tbody><tr><td><img alt="Figure 4.png" src="https://lh4.googleusercontent.com/n-E6moZ0QeaT8yewnxaPhrllsejo3oK9XnqKlEX1hi8VLa-iLvzXBrsS-bLbTIZd09ulq_Ly3aB-0bvXMR1EqRbLjwT1gaSskT2qINAzME76V9wSXv3nX3nC2Ah_56W_LA"></td></tr><tr><td>Scan path for an original and improved form</td></tr></tbody></table>From our study, we conclude that optimizing online forms is well worth the resource investment. With easy-to-implement actions, you can improve your forms, increase the number of successful transactions, and end up with more satisfied users. Google is currently working on implementing these findings on our own forms.<br /><br />We wish to thank our co-authors at the University of Basel, Switzerland for their collaboration in this work: Silvia Heinz, Klaus Opwis and Alexandre Tuch.]]></description>
				<content:encoded><![CDATA[<br /><span class="byline-author">Posted by Javier Bargas-Avila and Mirjam Seckler, User Experience Research at Google</span>  <br /><br />Imagine yourself filling out a long and cumbersome form on a website to register for a service. After several minutes of filling out fields, coming up with a password, and handling captchas, you click the submit button, only to see your form filled with red error messages. Suddenly the “close tab” button seems much more tempting than before. <br /><br />Despite the rapid evolution of the Internet, web forms, with their limited and unilateral way of interaction, remain one of the core barriers between users and website owners. Any kind of obstacle or difficulty in filling in online forms can lead to increased frustration for the user, resulting in drop-outs and information loss.<br /><br />In 2010, a set of <a href="http://sciyo.com/articles/show/title/simple-but-crucial-user-interfaces-in-the-world-wide-web-introducing-20-guidelines-for-usable-web-fo">20 guidelines to optimize web forms</a> was published by researchers from the University of Basel in Switzerland, including best practices aimed at improving web forms and reducing frustration, errors and drop-outs. For instance, guideline no. 13 states that if answers are required in a specific format, the imposed rule should be communicated in advance; and no. 15 states that forms should never clear already completed fields after an error occurs. <br /><br />To investigate the impact of applying these rules, we conducted a study and presented our results at <a href="http://chi2014.acm.org/">CHI 2014</a>:&nbsp;<a href="http://dl.acm.org/citation.cfm?id=2557265">Designing usable web forms: empirical evaluation of web form improvement guidelines</a>. In the study, we examined a sample of high traffic online forms and rated them based on how well they followed the form guidelines outlined by the 2010 publication. We then selected three different online forms of varying quality (low, medium and high), and improved them by applying the guidelines, with the high quality form needing less modification than the medium and low quality forms. We then tested both the original and improved forms extensively with 65 participants in a controlled lab environment.<br /><br />In our study, the modified forms showed significant improvements over the original forms: less time needed to complete a form, more successful first-trial submissions and higher user satisfaction. As expected, the impact was highest when the original form was of low quality, but even high quality forms showed improved metrics.<br /><br />Furthermore, user interviews with participants in the study revealed which guidelines were most impactful in improving the forms:<br /><br /><ul><li>Format specifications (e.g., requiring a minimum password length) should be stated in the form, prior to submission. The application of this guideline had a large positive impact on user performance and subjective user ratings, and was also mentioned frequently in user interviews.</li><li>Error messages must be placed next to the erroneous field and designed so that users are easily able to fix the problem. 
Doing this reduced form-filling time and increased subjective ratings.</li><li>Users most frequently mentioned that being able to tell apart optional and mandatory fields was key.</li></ul><br /><table cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://4.bp.blogspot.com/-cOjbbaM43Tc/U8QA6Srnj6I/AAAAAAAAASQ/uuKzFQP4atI/s1600/Example1.jpg" imageanchor="1" style="clear: left; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" src="http://4.bp.blogspot.com/-cOjbbaM43Tc/U8QA6Srnj6I/AAAAAAAAASQ/uuKzFQP4atI/s1600/Example1.jpg" height="130" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Example Guideline: State format specification in advance</td></tr></tbody></table><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://4.bp.blogspot.com/-V0EVzaThJvg/U8QA6JUbTfI/AAAAAAAAASU/Q09ovzV5Iko/s1600/Example2.jpg" imageanchor="1" style="clear: left; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" src="http://4.bp.blogspot.com/-V0EVzaThJvg/U8QA6JUbTfI/AAAAAAAAASU/Q09ovzV5Iko/s1600/Example2.jpg" height="130" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Example Guideline: Place error messages next to erroneous fields</td></tr></tbody></table><table cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://3.bp.blogspot.com/-HiX5RNGO-gI/U8QA6JlG1oI/AAAAAAAAASc/wtke4EzDonc/s1600/Example3.jpg" imageanchor="1" style="clear: left; display: inline !important; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" src="http://3.bp.blogspot.com/-HiX5RNGO-gI/U8QA6JlG1oI/AAAAAAAAASc/wtke4EzDonc/s1600/Example3.jpg" height="296" width="320" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Example Guideline: Distinguish optional and mandatory fields</td></tr></tbody></table>
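The per-field pattern behind these guidelines can be sketched in a few lines. This is a hypothetical illustration, not code from the study: rules (including format hints and whether a field is mandatory) are declared up front so they can be shown before submission, and errors are keyed by field name so each message can be rendered next to the offending input without clearing the rest of the form.<br /><pre>
# Hypothetical sketch of per-field validation, not code from the study.
# Rules are declared up front (hints can be shown before submission) and
# errors are keyed by field (messages can be placed next to each input).
RULES = {
    "password": {"required": True, "min_len": 8, "hint": "At least 8 characters."},
    "nickname": {"required": False, "min_len": 0, "hint": "Optional."},
}

def validate(form):
    errors = {}
    for field, rule in RULES.items():
        value = form.get(field, "")
        if rule["required"] and not value:
            errors[field] = "This field is mandatory."
        elif value and rule["min_len"] &gt; len(value):
            errors[field] = rule["hint"]
    return errors  # render each message beside its field; keep filled values

print(validate({"password": "abc"}))  # {'password': 'At least 8 characters.'}
</pre>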
Putting field labels above rather than adjacent to the fields in the form also led to improvements in the way users scanned the form. Using&nbsp;<a href="http://en.wikipedia.org/wiki/Eye_tracking">eye-tracking technology</a>, our study shows that users needed fewer&nbsp;<a href="http://en.wikipedia.org/wiki/Fixation_(visual)">fixations</a>, less fixation time and fewer&nbsp;<a href="http://en.wikipedia.org/wiki/Saccade">saccades</a>&nbsp;before submitting the form for the first time.<br /><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><img alt="Figure 4.png" src="https://lh4.googleusercontent.com/n-E6moZ0QeaT8yewnxaPhrllsejo3oK9XnqKlEX1hi8VLa-iLvzXBrsS-bLbTIZd09ulq_Ly3aB-0bvXMR1EqRbLjwT1gaSskT2qINAzME76V9wSXv3nX3nC2Ah_56W_LA" style="border: none; margin-left: auto; margin-right: auto;" /></td></tr><tr><td class="tr-caption" style="text-align: center;">Scan path for an original and improved form</td></tr></tbody></table>From our study, we conclude that optimizing online forms is well worth the resource investment. With easy-to-implement actions, you can improve your forms, increase the number of successful transactions, and end up with more satisfied users. Google is currently working on implementing these findings on our own forms.<br /><br />We wish to thank our co-authors at the University of Basel, Switzerland for their collaboration in this work: Silvia Heinz, Klaus Opwis and Alexandre Tuch.]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-research/simple-is-better-making-your-web-forms-easy-to-use-pays-off/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Influential Papers for 2013</title>
		<link>https://googledata.org/google-research/influential-papers-for-2013/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=influential-papers-for-2013</link>
		<comments>https://googledata.org/google-research/influential-papers-for-2013/#comments</comments>
		<pubDate>Mon, 30 Jun 2014 17:00:00 +0000</pubDate>
		<dc:creator><![CDATA[Research @ Google]]></dc:creator>
				<category><![CDATA[Google Research]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=c78f519e7d837c4cd65f7d3a34fe2ecf</guid>
		<description><![CDATA[<span>Posted by Corinna Cortes and Alfred Spector, Google Research</span>  <br /><br />Googlers across the company actively engage with the scientific community by publishing technical papers, contributing open-source packages, working on standards, introducing new APIs and tools, giving talks and presentations, participating in ongoing technical debates, and much more. Our <a href="http://research.google.com/pubs/papers.html">publications</a> offer technical and algorithmic advances, feature aspects we learn as we develop novel products and services, and shed light on some of the technical challenges we face at Google. Below are some of the especially influential papers co-authored by Googlers in 2013.  In the coming weeks we will be offering a more in-depth look at some of these publications.<br /><br /><b>Algorithms</b><br /><b><br /></b> <a href="http://static.googleusercontent.com/media/research.google.com/en/us/pubs/archive/41870.pdf">Online Matching and Ad Allocation</a>, <i>by Aranyak Mehta      [Foundations and Trends in Theoretical Computer Science]</i><br />Matching is a classic problem with a rich history and a significant impact, both on the theory of algorithms and in practice. There has recently been a surge of interest in the online version of the matching problem, due to its application in the domain of Internet advertising. The theory of online matching and allocation has played a critical role in the design of algorithms for ad allocation. This monograph provides a survey of the key problems and algorithmic techniques in this area, and provides a glimpse into their practical impact.<br /><br /><b>Computer Vision</b><br /><b><br /></b> <a href="http://static.googleusercontent.com/media/research.google.com/en/us/pubs/archive/40814.pdf">Fast, Accurate Detection of 100,000 Object Classes on a Single Machine</a>, <i>by Thomas Dean, Mark Ruzon, Mark Segal, Jonathon Shlens, Sudheendra Vijayanarasimhan, Jay Yagnik    [Proceedings of IEEE Conference on Computer Vision and Pattern Recognition]</i><br />In this paper, we show how to use hash table lookups to replace the dot products in a convolutional filter bank with the number of lookups independent of the number of filters. We apply the technique to evaluate 100,000 deformable-part models requiring over a million (part) filters on multiple scales of a target image in less than 20 seconds using a single multi-core processor with 20GB of RAM.<br /><br /><b>Distributed Systems</b><br /><b><br /></b> <a href="http://research.google.com/pubs/pub41318.html">Photon: Fault-tolerant and Scalable Joining of Continuous Data Streams</a>, <i>by Rajagopal Ananthanarayanan, Venkatesh Basker, Sumit Das, Ashish Gupta, Haifeng Jiang, Tianhao Qiu, Alexey Reznichenko, Deomid Ryabkov, Manpreet Singh, Shivakumar Venkataraman  [SIGMOD]</i><br />In this paper, we talk about Photon, a geographically distributed system for joining multiple continuously flowing streams of data in real-time with high scalability and low latency. The streams may be unordered or delayed. Photon fully tolerates infrastructure degradation and datacenter-level outages without any manual intervention while joining every event exactly once. 
Photon is currently deployed in production, processing millions of events per minute at peak with an average end-to-end latency of less than 10 seconds.<br /><br /><a href="http://research.google.com/pubs/pub41684.html">Omega: flexible, scalable schedulers for large compute clusters</a>, <i>by Malte Schwarzkopf, Andy Konwinski, Michael Abd-El-Malek, John Wilkes     [SIGOPS European Conference on Computer Systems (EuroSys)]</i><br />Omega addresses the need for increasing scale and speed in cluster schedulers using parallelism, shared state, and lock-free optimistic concurrency control. The paper presents a taxonomy of design approaches and evaluates Omega using simulations driven by Google production workloads.<br /><br /><b>Human-Computer Interaction</b><br /><b><br /></b> <a href="http://static.googleusercontent.com/media/research.google.com/en/us/pubs/archive/41645.pdf">FFitts Law: Modeling Finger Touch with Fitts' Law</a>, <i>by Xiaojun Bi, Yang Li, Shumin Zhai     [Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI 2013)]</i><br />Fitts&#8217; law is a cornerstone of graphical user interface research and evaluation. It can precisely predict cursor movement time given an on-screen target&#8217;s location and size. In the era of finger-touch based mobile computing, the conventional form of Fitts&#8217; law loses its power because targets are often smaller than the finger width. Google researchers Xiaojun Bi, Yang Li, and Shumin Zhai devised finger Fitts&#8217; law (FFitts law) to address this fundamental problem. <br /><br /><b>Information Retrieval</b><br /><b><br /></b> <a href="http://research.google.com/pubs/pub40815.html">Top-k Publish-Subscribe for Social Annotation of News</a>, <i>by Alexander Shraer, Maxim Gurevich, Marcus Fontoura, Vanja Josifovski     [Proceedings of the 39th International Conference on Very Large Data Bases]</i><br />The paper describes how scalable, low latency content-based publish-subscribe systems can be implemented using inverted indices and modified top-k document retrieval algorithms. The feasibility of this approach is demonstrated in the application of annotating news articles with social updates (such as Google+ posts or tweets). This application is cast as publish-subscribe, where news articles are treated as subscriptions (continuous queries) and social updates as published items with a high update frequency.<br /><br /><b>Machine Learning</b><br /><b><br /></b> <a href="http://research.google.com/pubs/pub41159.html">Ad Click Prediction: a View from the Trenches</a>, <i>by H. Brendan McMahan, Gary Holt, D. Sculley, Michael Young, Dietmar Ebner, Julian Grady, Lan Nie, Todd Phillips, Eugene Davydov, Daniel Golovin, Sharat Chikkerur, Dan Liu, Martin Wattenberg, Arnar Mar Hrafnkelsson, Tom Boulos, Jeremy Kubica [KDD]</i><br />How should one go about making predictions in extremely large scale production systems?  We provide a case study for ad click prediction, and illustrate best practices for combining rigorous theory with careful engineering and evaluation.  
The paper contains a mix of novel algorithms, practical approaches, and some surprising negative results.<br /><br /><a href="http://static.googleusercontent.com/media/research.google.com/en/us/pubs/archive/42029.pdf">Learning kernels using local Rademacher complexity</a>, <i>by Corinna Cortes, Marius Kloft, Mehryar Mohri     [Advances in Neural Information Processing Systems (NIPS 2013)]</i><br />This paper shows how the notion of local Rademacher complexity, which leads to sharp learning guarantees, can be used to derive algorithms for the important problem of learning kernels. It also reports the results of several experiments with these algorithms, which yield performance improvements in some challenging tasks.<br /><br /><a href="http://word2vec.googlecode.com/">Efficient Estimation of Word Representations in Vector Space</a>, <i>by Tomas Mikolov, Kai Chen, Greg S. Corrado, Jeffrey Dean     [ICLR Workshop 2013]</i><br />We describe a simple and speedy method for training vector representations of words.  The resulting vectors naturally capture the semantics and syntax of word use, such that simple analogies can be solved with vector arithmetic.  For example, the vector difference between 'man' and 'woman' is approximately equal to the difference between 'king' and 'queen', and vector displacements between any given country's name and its capital are aligned.  We provide an open source implementation as well as pre-trained vector representations at http://word2vec.googlecode.com.
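<br /><br />As a hedged illustration of that vector arithmetic (not code from the paper; it assumes the third-party gensim library and a hypothetical local copy of the pre-trained GoogleNews vectors):<br /><pre>
# Illustrative only: solve "man is to king as woman is to ?" using
# pre-trained word2vec vectors loaded through the gensim library.
from gensim.models import KeyedVectors

vectors = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True)  # hypothetical local path

# vector('king') - vector('man') + vector('woman') lands nearest 'queen'
print(vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=1))
</pre>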
<br /><br /><a href="http://research.google.com/pubs/pub40813.html">Large-Scale Learning with Less RAM via Randomization</a>, <i>by Daniel Golovin, D. Sculley, H. Brendan McMahan, Michael Young     [Proceedings of the 30th International Conference on Machine Learning (ICML)]</i><br />We show how a simple technique -- using limited precision coefficients and randomized rounding -- can dramatically reduce the RAM needed to train models with online convex optimization methods such as stochastic gradient descent.  In addition to demonstrating excellent empirical performance, we provide strong theoretical guarantees. <br /><br /><b>Machine Translation</b><br /><b><br /></b> <a href="http://research.google.com/pubs/pub41651.html">Source-Side Classifier Preordering for Machine Translation</a>, <i>by Uri Lerner, Slav Petrov      [Proc. of EMNLP '13]</i><br />When translating from one language to another, it is important not only to choose the correct translation for each word, but also to put the words in the correct word order. In this paper we present a novel approach that uses a syntactic parser and a feature-rich classifier to perform long-distance reordering. We demonstrate significant improvements over alternative approaches on a large number of language pairs.<br /><br /><b>Natural Language Processing</b><br /><b><br /></b> <a href="http://static.googleusercontent.com/media/research.google.com/en/us/pubs/archive/40758.pdf">Token and Type Constraints for Cross-Lingual Part-of-Speech Tagging</a>, <i>by Oscar Tackstrom, Dipanjan Das, Slav Petrov, Ryan McDonald, Joakim Nivre    [Transactions of the Association for Computational Linguistics (TACL '13)]</i><br />Knowing the parts of speech (verb, noun, etc.) of words is important for many natural language processing applications, such as information extraction and machine translation. Constructing part-of-speech taggers typically requires large amounts of manually annotated data, which is missing in many languages and domains. In this paper, we introduce a method that instead relies on a combination of incomplete annotations projected from English with incomplete crowdsourced dictionaries in each target language. The result is a 25 percent error reduction compared to the previous state of the art.<br /><br /><a href="http://static.googleusercontent.com/media/research.google.com/en//pubs/archive/41377.pdf">Universal Dependency Annotation for Multilingual Parsing</a>, <i>by Ryan McDonald, Joakim Nivre, Yoav Goldberg, Yvonne Quirmbach-Brundage, Dipanjan Das, Kuzman Ganchev, Keith Hall, Slav Petrov, Hao Zhang, Oscar Tackstrom, Claudia Bedini, Nuria Bertomeu Castello, Jungmee Lee [Association for Computational Linguistics]</i><br />This paper discusses a public release of syntactic dependency treebanks (https://code.google.com/p/uni-dep-tb/). Syntactic treebanks are manually annotated data sets containing full syntactic analysis for a large number of sentences (http://en.wikipedia.org/wiki/Dependency_grammar). Unlike other syntactic treebanks, the universal data set tries to normalize syntactic phenomena across languages where it can, to produce a harmonized set of multilingual data. Such a resource will help large-scale multilingual text analysis and evaluation.<br /><br /><b>Networks</b><br /><b><br /></b> <a href="http://static.googleusercontent.com/media/research.google.com/en/us/pubs/archive/41761.pdf">B4: Experience with a Globally Deployed Software Defined WAN</a>, <i>by Sushant Jain, Alok Kumar, Subhasree Mandal, Joon Ong, Leon Poutievski, Arjun Singh, Subbaiah Venkata, Jim Wanderer, Junlan Zhou, Min Zhu, Jonathan Zolla, Urs H&#246;lzle, Stephen Stuart, Amin Vahdat [Proceedings of the ACM SIGCOMM Conference]</i><br />This paper presents the motivation, design, and evaluation of B4, a Software Defined WAN for our data center to data center connectivity. We present our approach to separating the network&#8217;s control plane from the data plane to enable rapid deployment of new network control services. Our first such service, centralized traffic engineering, allocates bandwidth among competing services based on application priority, dynamically shifting communication patterns, and prevailing failure conditions.<br /><br /><b>Policy</b><br /><b><br /></b> <a href="http://static.googleusercontent.com/media/research.google.com/en/us/pubs/archive/42544.pdf">When the Cloud Goes Local: The Global Problem with Data Localization</a>, <i>by Patrick Ryan, Sarah Falvey, Ronak Merchant     [IEEE Computer]</i><br />Ongoing efforts to legally define cloud computing and regulate separate parts of the Internet are unlikely to address underlying concerns about data security and privacy. Data localization initiatives, led primarily by European countries, could actually bring the cloud to the ground and make the Internet less secure.<br /><br /><b>Robotics</b><br /><b><br /></b> <a href="http://research.google.com/pubs/pub41434.html">Cloud-based robot grasping with the Google object recognition engine</a>, <i>by Ben Kehoe, Akihiro Matsukawa, Sal Candido, James Kuffner, Ken Goldberg    [IEEE Int&#8217;l Conf. on Robotics and Automation]</i><br />What if robots were not limited by onboard computation, algorithms did not need to be implemented on every class of robot, and model improvements from sensor data could be shared across many robots? With wireless networking and rapidly expanding cloud computing resources, this possibility is rapidly becoming reality. 
We present a system architecture, implemented prototype, and initial experimental data for a cloud-based robot grasping system that incorporates a Willow Garage PR2 robot with onboard color and depth cameras, Google&#8217;s proprietary object recognition engine, the Point Cloud Library (PCL) for pose estimation, Columbia University&#8217;s GraspIt! toolkit and OpenRAVE for 3D grasping, and our prior approach to sampling-based grasp analysis to address uncertainty in pose. <br /><br /><b>Security, Cryptography, and Privacy</b><br /><b><br /></b> <a href="http://research.google.com/pubs/pub41323.html">Alice in Warningland: A Large-Scale Field Study of Browser Security Warning Effectiveness</a>, <i>by Devdatta Akhawe, Adrienne Porter Felt      [USENIX Security Symposium]</i><br />Browsers show security warnings to keep users safe. How well do these warnings work? We empirically assess the effectiveness of browser security warnings, using more than 25 million warning impressions from Google Chrome and Mozilla Firefox.<br /><br /><b>Social Systems</b><br /><b><br /></b> <a href="http://www.tomkins-family.com/static/papers/src/WSF+13.pdf">Arrival and departure dynamics in Social Networks</a>, <i>by Shaomei Wu, Atish Das Sarma, Alex Fabrikant, Silvio Lattanzi, Andrew Tomkins    [WSDM]</i><br />In this paper, we consider the natural arrival and departure of users in a social network, and show that the dynamics of arrival, which have been studied in some depth, are quite different from the dynamics of departure, which are not as well studied.  We show unexpected properties of a node's local neighborhood that are predictive of departure.  We also suggest that, globally, nodes at the fringe are more likely to depart, and subsequent departures are correlated among neighboring nodes in tightly-knit communities.<br /><br /><a href="http://research.google.com/pubs/pub41200.html">All the news that's fit to read: a study of social annotations for news reading</a>, <i>by Chinmay Kulkarni, Ed H. Chi      [In Proc. of CHI2013]</i><br />As news reading becomes more social, how do different types of annotations affect people's selection of news articles? This crowdsourcing experiment shows that strangers' opinions, unsurprisingly, have no persuasive effect, while, surprisingly, unknown branded companies still do.  What works best are friend annotations, which help users decide what to read and provide social context that improves engagement.<br /><br /><b>Software Engineering</b><br /><b><br /></b> <a href="http://static.googleusercontent.com/media/research.google.com/en/us/pubs/archive/41145.pdf">Does Bug Prediction Support Human Developers? Findings from a Google Case Study</a>, <i>by Chris Lewis, Zhongpeng Lin, Caitlin Sadowski, Xiaoyan Zhu, Rong Ou, E. James Whitehead Jr.    [International Conference on Software Engineering (ICSE)]</i><br />"Does Bug Prediction Support Human Developers?" was a study that investigated whether software engineers changed their code review habits when presented with information about where bug-prone code might be lurking. Much to our surprise we found out that developer behavior didn't change at all! 
We went on to suggest features that bug prediction algorithms need in order to fit with developer workflows, which will hopefully result in more supportive algorithms being developed in the future.<br /><br /><b>Speech Processing</b><br /><b><br /></b> <a href="http://research.google.com/pubs/archive/40837.pdf">Statistical Parametric Speech Synthesis Using Deep Neural Networks</a>, <i>by Heiga Zen, Andrew Senior, Mike Schuster     [Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP)]</i><br />Conventional approaches to statistical parametric speech synthesis use decision tree-clustered context-dependent hidden Markov models (HMMs) to represent probability densities of speech given text. This paper examines an alternative scheme in which the mapping from an input text to its acoustic realization is modeled by a deep neural network (DNN).  Experimental results show that DNN-based speech synthesizers can produce more natural-sounding speech than conventional HMM-based ones using similar model sizes.<br /><br /><a href="http://static.googleusercontent.com/media/research.google.com/en/us/pubs/archive/41176.pdf">Accurate and Compact Large Vocabulary Speech Recognition on Mobile Devices</a>, <i>by Xin Lei, Andrew Senior, Alexander Gruenstein, Jeffrey Sorensen     [Interspeech]</i><br />In this paper we describe the neural network-based speech recognition system that runs in real time on Android phones. With the neural network acoustic model replacing the previous Gaussian mixture model and a compressed language model using on-the-fly rescoring, the word-error-rate is reduced by 27% while the storage requirement is reduced by 63%.<br /><br /><b>Statistics</b><br /><b><br /></b> <a href="http://research.google.com/pubs/pub40700.html">Pay by the Bit: An Information-Theoretic Metric for Collective Human Judgment</a>, <i>by Tamsyn P. Waterhouse      [Proc CSCW]</i><br />There's a lot of confusion around quality control in crowdsourcing.  For the broad problem subtype we call collective judgment, I discovered that information theory provides a natural and elegant metric for the value of contributors' work, in the form of the mutual information between their judgments and the questions' answers, each treated as a random variable.<br /><br /><b>Structured Data Management</b><br /><b><br /></b> <a href="http://research.google.com/pubs/pub41344.html">F1: A Distributed SQL Database That Scales</a>, <i>by Jeff Shute, Radek Vingralek, Bart Samwel, Ben Handy, Chad Whipkey, Eric Rollins, Mircea Oancea, Kyle Littlefield, David Menestrina, Stephan Ellner, John Cieslewicz, Ian Rae, Traian Stancescu, Himani Apte [VLDB]</i><br />In recent years, conventional wisdom has been that when you need a highly scalable, high throughput data store, the only viable options are NoSQL key/value stores, and you need to work around the lack of transactional consistency, indexes, and SQL.  F1 is a hybrid database we built that combines the strengths of traditional relational databases with the scalability of NoSQL systems, showing it's not necessary to compromise on database functionality to achieve scalability and high availability.  The paper describes the F1 system, how we use Spanner underneath, and how we've designed schema and applications to hide the increased commit latency inherent in distributed commit protocols.<br /><br />]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by Corinna Cortes and Alfred Spector, Google Research</span>  <br /><br />Googlers across the company actively engage with the scientific community by publishing technical papers, contributing open-source packages, working on standards, introducing new APIs and tools, giving talks and presentations, participating in ongoing technical debates, and much more. Our <a href="http://research.google.com/pubs/papers.html">publications</a> offer technical and algorithmic advances, feature aspects we learn as we develop novel products and services, and shed light on some of the technical challenges we face at Google. Below are some of the especially influential papers co-authored by Googlers in 2013.  In the coming weeks we will be offering a more in-depth look at some of these publications.<br /><br /><b>Algorithms</b><br /><b><br /></b> <a href="http://static.googleusercontent.com/media/research.google.com/en/us/pubs/archive/41870.pdf">Online Matching and Ad Allocation</a>, <i>by Aranyak Mehta      [Foundations and Trends in Theoretical Computer Science]</i><br />Matching is a classic problem with a rich history and a significant impact, both on the theory of algorithms and in practice. There has recently been a surge of interest in the online version of the matching problem, due to its application in the domain of Internet advertising. The theory of online matching and allocation has played a critical role in the design of algorithms for ad allocation. This monograph provides a survey of the key problems and algorithmic techniques in this area, and offers a glimpse into their practical impact.<br /><br /><b>Computer Vision</b><br /><b><br /></b> <a href="http://static.googleusercontent.com/media/research.google.com/en/us/pubs/archive/40814.pdf">Fast, Accurate Detection of 100,000 Object Classes on a Single Machine</a>, <i>by Thomas Dean, Mark Ruzon, Mark Segal, Jonathon Shlens, Sudheendra Vijayanarasimhan, Jay Yagnik    [Proceedings of IEEE Conference on Computer Vision and Pattern Recognition]</i><br />In this paper, we show how to use hash table lookups to replace the dot products in a convolutional filter bank, with the number of lookups independent of the number of filters. We apply the technique to evaluate 100,000 deformable-part models requiring over a million (part) filters on multiple scales of a target image in less than 20 seconds using a single multi-core processor with 20GB of RAM.<br /><br /><b>Distributed Systems</b><br /><b><br /></b> <a href="http://research.google.com/pubs/pub41318.html">Photon: Fault-tolerant and Scalable Joining of Continuous Data Streams</a>, <i>by Rajagopal Ananthanarayanan, Venkatesh Basker, Sumit Das, Ashish Gupta, Haifeng Jiang, Tianhao Qiu, Alexey Reznichenko, Deomid Ryabkov, Manpreet Singh, Shivakumar Venkataraman  [SIGMOD]</i><br />In this paper, we present Photon, a geographically distributed system for joining multiple continuously flowing streams of data in real time with high scalability and low latency. The streams may be unordered or delayed. Photon fully tolerates infrastructure degradation and datacenter-level outages without any manual intervention while joining every event exactly once. 
Photon is currently deployed in production, processing millions of events per minute at peak with an average end-to-end latency of less than 10 seconds.<br /><br /><a href="http://research.google.com/pubs/pub41684.html">Omega: flexible, scalable schedulers for large compute clusters</a>, <i>by Malte Schwarzkopf, Andy Konwinski, Michael Abd-El-Malek, John Wilkes     [SIGOPS European Conference on Computer Systems (EuroSys)]</i><br />Omega addresses the need for increasing scale and speed in cluster schedulers using parallelism, shared state, and lock-free optimistic concurrency control. The paper presents a taxonomy of design approaches and evaluates Omega using simulations driven by Google production workloads.<br /><br /><b>Human-Computer Interaction</b><br /><b><br /></b> <a href="http://static.googleusercontent.com/media/research.google.com/en/us/pubs/archive/41645.pdf">FFitts Law: Modeling Finger Touch with Fitts' Law</a>, <i>by Xiaojun Bi, Yang Li, Shumin Zhai     [Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI 2013)]</i><br />Fitts’ law is a cornerstone of graphical user interface research and evaluation. It can precisely predict cursor movement time given an on-screen target’s location and size. In the era of finger-touch-based mobile computing, the conventional form of Fitts’ law loses its power because targets are often smaller than the finger width.  Google researchers Xiaojun Bi, Yang Li, and Shumin Zhai devised finger Fitts’ law (FFitts law) to address this fundamental problem. <br /><br /><b>Information Retrieval</b><br /><b><br /></b> <a href="http://research.google.com/pubs/pub40815.html">Top-k Publish-Subscribe for Social Annotation of News</a>, <i>by Alexander Shraer, Maxim Gurevich, Marcus Fontoura, Vanja Josifovski     [Proceedings of the 39th International Conference on Very Large Data Bases]</i><br />The paper describes how scalable, low-latency content-based publish-subscribe systems can be implemented using inverted indices and modified top-k document retrieval algorithms. The feasibility of this approach is demonstrated in the application of annotating news articles with social updates (such as Google+ posts or tweets). This application is cast as publish-subscribe, where news articles are treated as subscriptions (continuous queries) and social updates as published items with a high update frequency.<br /><br /><b>Machine Learning</b><br /><b><br /></b> <a href="http://research.google.com/pubs/pub41159.html">Ad Click Prediction: a View from the Trenches</a>, <i>by H. Brendan McMahan, Gary Holt, D. Sculley, Michael Young, Dietmar Ebner, Julian Grady, Lan Nie, Todd Phillips, Eugene Davydov, Daniel Golovin, Sharat Chikkerur, Dan Liu, Martin Wattenberg, Arnar Mar Hrafnkelsson, Tom Boulos, Jeremy Kubica [KDD]</i><br />How should one go about making predictions in extremely large-scale production systems?  We provide a case study for ad click prediction, and illustrate best practices for combining rigorous theory with careful engineering and evaluation.  
The paper contains a mix of novel algorithms, practical approaches, and some surprising negative results.<br /><br /><a href="http://static.googleusercontent.com/media/research.google.com/en/us/pubs/archive/42029.pdf">Learning kernels using local rademacher complexity</a>, <i>by Corinna Cortes, Marius Kloft, Mehryar Mohri     [Advances in Neural Information Processing Systems (NIPS 2013)]</i><br />This paper shows how the notion of local Rademacher complexity, which leads to sharp learning guarantees, can be used to derive algorithms for the important problem of learning kernels. It also reports the results of several experiments with these algorithms, which yield performance improvements in some challenging tasks.<br /><br /><a href="http://word2vec.googlecode.com/">Efficient Estimation of Word Representations in Vector Space</a>, <i>by Tomas Mikolov, Kai Chen, Greg S. Corrado, Jeffrey Dean     [ICLR Workshop 2013]</i><br />We describe a simple and speedy method for training vector representations of words.  The resulting vectors naturally capture the semantics and syntax of word use, such that simple analogies can be solved with vector arithmetic.  For example, the vector difference between 'man' and 'woman' is approximately equal to the difference between 'king' and 'queen', and vector displacements between any given country's name and its capital are aligned.  We provide an open-source implementation as well as pre-trained vector representations at http://word2vec.googlecode.com<br /><br /><a href="http://research.google.com/pubs/pub40813.html">Large-Scale Learning with Less RAM via Randomization</a>, <i>by Daniel Golovin, D. Sculley, H. Brendan McMahan, Michael Young     [Proceedings of the 30th International Conference on Machine Learning (ICML)]</i><br />We show how a simple technique -- using limited precision coefficients and randomized rounding -- can dramatically reduce the RAM needed to train models with online convex optimization methods such as stochastic gradient descent.  In addition to demonstrating excellent empirical performance, we provide strong theoretical guarantees. <br /><br /><b>Machine Translation</b><br /><b><br /></b> <a href="http://research.google.com/pubs/pub41651.html">Source-Side Classifier Preordering for Machine Translation</a>, <i>by Uri Lerner, Slav Petrov      [Proc. of EMNLP '13]</i><br />When translating from one language to another, it is important not only to choose the correct translation for each word, but also to put the words in the correct order. In this paper we present a novel approach that uses a syntactic parser and a feature-rich classifier to perform long-distance reordering. We demonstrate significant improvements over alternative approaches on a large number of language pairs.<br /><br /><b>Natural Language Processing</b><br /><b><br /></b> <a href="http://static.googleusercontent.com/media/research.google.com/en/us/pubs/archive/40758.pdf">Token and Type Constraints for Cross-Lingual Part-of-Speech Tagging</a>, <i>by Oscar Tackstrom, Dipanjan Das, Slav Petrov, Ryan McDonald, Joakim Nivre    [Transactions of the Association for Computational Linguistics (TACL '13)]</i><br />Knowing the parts of speech (verb, noun, etc.) of words is important for many natural language processing applications, such as information extraction and machine translation. Constructing part-of-speech taggers typically requires large amounts of manually annotated data, which is missing in many languages and domains. 
In this paper, we introduce a method that instead relies on a combination of incomplete annotations projected from English with incomplete crowdsourced dictionaries in each target language. The result is a 25 percent error reduction compared to the previous state of the art.<br /><br /><a href="http://static.googleusercontent.com/media/research.google.com/en//pubs/archive/41377.pdf">Universal Dependency Annotation for Multilingual Parsing</a>, <i>by Ryan McDonald, Joakim Nivre, Yoav Goldberg, Yvonne Quirmbach-Brundage, Dipanjan Das, Kuzman Ganchev, Keith Hall, Slav Petrov, Hao Zhang, Oscar Tackstrom, Claudia Bedini, Nuria Bertomeu Castello, Jungmee Lee [Association for Computational Linguistics]</i><br />This paper discusses a public release of syntactic dependency treebanks (https://code.google.com/p/uni-dep-tb/). Syntactic treebanks are manually annotated data sets containing full syntactic analysis for a large number of sentences (http://en.wikipedia.org/wiki/Dependency_grammar). Unlike other syntactic treebanks, the universal data set normalizes syntactic phenomena across languages where possible, producing a harmonized set of multilingual data. Such a resource will help large-scale multilingual text analysis and evaluation.<br /><br /><b>Networks</b><br /><b><br /></b> <a href="http://static.googleusercontent.com/media/research.google.com/en/us/pubs/archive/41761.pdf">B4: Experience with a Globally Deployed Software Defined WAN</a>, <i>by Sushant Jain, Alok Kumar, Subhasree Mandal, Joon Ong, Leon Poutievski, Arjun Singh, Subbaiah Venkata, Jim Wanderer, Junlan Zhou, Min Zhu, Jonathan Zolla, Urs Hölzle, Stephen Stuart, Amin Vahdat [Proceedings of the ACM SIGCOMM Conference]</i><br />This paper presents the motivation, design, and evaluation of B4, a Software Defined WAN for our data center to data center connectivity. We present our approach to separating the network’s control plane from the data plane to enable rapid deployment of new network control services. Our first such service, centralized traffic engineering, allocates bandwidth among competing services based on application priority, dynamically shifting communication patterns, and prevailing failure conditions.<br /><br /><b>Policy</b><br /><b><br /></b> <a href="http://static.googleusercontent.com/media/research.google.com/en/us/pubs/archive/42544.pdf">When the Cloud Goes Local: The Global Problem with Data Localization</a>, <i>by Patrick Ryan, Sarah Falvey, Ronak Merchant     [IEEE Computer]</i><br />Ongoing efforts to legally define cloud computing and regulate separate parts of the Internet are unlikely to address underlying concerns about data security and privacy. Data localization initiatives, led primarily by European countries, could actually bring the cloud to the ground and make the Internet less secure.<br /><br /><b>Robotics</b><br /><b><br /></b> <a href="http://research.google.com/pubs/pub41434.html">Cloud-based robot grasping with the google object recognition engine</a>, <i>by Ben Kehoe, Akihiro Matsukawa, Sal Candido, James Kuffner, Ken Goldberg    [IEEE Int’l Conf. on Robotics and Automation]</i><br />What if robots were not limited by onboard computation, algorithms did not need to be implemented on every class of robot, and model improvements from sensor data could be shared across many robots? With wireless networking and rapidly expanding cloud computing resources, this possibility is becoming reality. 
We present a system architecture, implemented prototype, and initial experimental data for a cloud-based robot grasping system that incorporates a Willow Garage PR2 robot with onboard color and depth cameras, Google’s proprietary object recognition engine, the Point Cloud Library (PCL) for pose estimation, Columbia University’s GraspIt! toolkit and OpenRAVE for 3D grasping, and our prior approach to sampling-based grasp analysis to address uncertainty in pose. <br /><br /><b>Security, Cryptography, and Privacy</b><br /><b><br /></b> <a href="http://research.google.com/pubs/pub41323.html">Alice in Warningland: A Large-Scale Field Study of Browser Security Warning Effectiveness</a>, <i>by Devdatta Akhawe, Adrienne Porter Felt      [USENIX Security Symposium]</i><br />Browsers show security warnings to keep users safe. How well do these warnings work? We empirically assess the effectiveness of browser security warnings, using more than 25 million warning impressions from Google Chrome and Mozilla Firefox.<br /><br /><b>Social Systems</b><br /><b><br /></b> <a href="http://www.tomkins-family.com/static/papers/src/WSF+13.pdf">Arrival and departure dynamics in Social Networks</a>, <i>by Shaomei Wu, Atish Das Sarma, Alex Fabrikant, Silvio Lattanzi, Andrew Tomkins    [WSDM]</i><br />In this paper, we consider the natural arrival and departure of users in a social network, and show that the dynamics of arrival, which have been studied in some depth, are quite different from the dynamics of departure, which are not as well studied.  We show unexpected properties of a node's local neighborhood that are predictive of departure.  We also suggest that, globally, nodes at the fringe are more likely to depart, and subsequent departures are correlated among neighboring nodes in tightly-knit communities.<br /><br /><a href="http://research.google.com/pubs/pub41200.html">All the news that's fit to read: a study of social annotations for news reading</a>, <i>by Chinmay Kulkarni, Ed H. Chi      [In Proc. of CHI2013]</i><br />As news reading becomes more social, how do different types of annotations affect people's selection of news articles? This crowdsourcing experiment shows that strangers' opinions, unsurprisingly, have no persuasive effect, while, surprisingly, annotations from unknown branded companies still do.  What works best are friend annotations, which help users decide what to read and provide social context that improves engagement.<br /><br /><b>Software Engineering</b><br /><b><br /></b> <a href="http://static.googleusercontent.com/media/research.google.com/en/us/pubs/archive/41145.pdf">Does Bug Prediction Support Human Developers? Findings from a Google Case Study</a>, <i>by Chris Lewis, Zhongpeng Lin, Caitlin Sadowski, Xiaoyan Zhu, Rong Ou, E. James Whitehead Jr.    [International Conference on Software Engineering (ICSE)]</i><br />"Does Bug Prediction Support Human Developers?" was a study that investigated whether software engineers changed their code review habits when presented with information about where bug-prone code might be lurking. Much to our surprise, we found that developer behavior didn't change at all! 
We went on to suggest features that bug prediction algorithms need in order to fit with developer workflows, which will hopefully result in more supportive algorithms being developed in the future.<br /><br /><b>Speech Processing</b><br /><b><br /></b> <a href="http://research.google.com/pubs/archive/40837.pdf">Statistical Parametric Speech Synthesis Using Deep Neural Networks</a>, <i>by Heiga Zen, Andrew Senior, Mike Schuster     [Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP)]</i><br />Conventional approaches to statistical parametric speech synthesis use decision tree-clustered context-dependent hidden Markov models (HMMs) to represent probability densities of speech given text. This paper examines an alternative scheme in which the mapping from an input text to its acoustic realization is modeled by a deep neural network (DNN).  Experimental results show that DNN-based speech synthesizers can produce more natural-sounding speech than conventional HMM-based ones using similar model sizes.<br /><br /><a href="http://static.googleusercontent.com/media/research.google.com/en/us/pubs/archive/41176.pdf">Accurate and Compact Large Vocabulary Speech Recognition on Mobile Devices</a>, <i>by Xin Lei, Andrew Senior, Alexander Gruenstein, Jeffrey Sorensen     [Interspeech]</i><br />In this paper we describe a neural network-based speech recognition system that runs in real time on Android phones. With the neural network acoustic model replacing the previous Gaussian mixture model and a compressed language model using on-the-fly rescoring, the word error rate is reduced by 27% while the storage requirement is reduced by 63%.<br /><br /><b>Statistics</b><br /><b><br /></b> <a href="http://research.google.com/pubs/pub40700.html">Pay by the Bit: An Information-Theoretic Metric for Collective Human Judgment</a>, <i>by Tamsyn P. Waterhouse      [Proc CSCW]</i><br />There's a lot of confusion around quality control in crowdsourcing.  For the broad problem subtype we call collective judgment, I discovered that information theory provides a natural and elegant metric for the value of contributors' work, in the form of the mutual information between their judgments and the questions' answers, each treated as a random variable.<br /><br /><b>Structured Data Management</b><br /><b><br /></b> <a href="http://research.google.com/pubs/pub41344.html">F1: A Distributed SQL Database That Scales</a>, <i>by Jeff Shute, Radek Vingralek, Bart Samwel, Ben Handy, Chad Whipkey, Eric Rollins, Mircea Oancea, Kyle Littlefield, David Menestrina, Stephan Ellner, John Cieslewicz, Ian Rae, Traian Stancescu, Himani Apte [VLDB]</i><br />In recent years, conventional wisdom has been that when you need a highly scalable, high-throughput data store, the only viable options are NoSQL key/value stores, and you need to work around the lack of transactional consistency, indexes, and SQL.  F1 is a hybrid database we built that combines the strengths of traditional relational databases with the scalability of NoSQL systems, showing it's not necessary to compromise on database functionality to achieve scalability and high availability.  The paper describes the F1 system, how we use Spanner underneath, and how we've designed schema and applications to hide the increased commit latency inherent in distributed commit protocols.<br /><br />]]></content:encoded>
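A toy sketch of the word2vec analogy arithmetic summarized above ('king' - 'man' + 'woman' ≈ 'queen'). The 2-D vectors here are invented for illustration only; real use would load the pre-trained vectors linked in the summary, which have hundreds of dimensions.

```python
# A minimal numpy sketch of solving analogies with vector arithmetic.
import numpy as np

vectors = {  # hypothetical embeddings; real models use 100-1000 dimensions
    "king":  np.array([0.9, 0.8]),
    "queen": np.array([0.9, 0.2]),
    "man":   np.array([0.5, 0.7]),
    "woman": np.array([0.5, 0.1]),
}

def analogy(a, b, c):
    """Return the vocabulary word closest (by cosine similarity) to
    vec(b) - vec(a) + vec(c), excluding the three input words."""
    target = vectors[b] - vectors[a] + vectors[c]
    def cos(u, v):
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))
    candidates = {w: v for w, v in vectors.items() if w not in {a, b, c}}
    return max(candidates, key=lambda w: cos(candidates[w], target))

print(analogy("man", "king", "woman"))  # -> "queen" with these toy vectors
```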
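The "Large-Scale Learning with Less RAM" summary names its core trick: limited-precision coefficients with randomized rounding. Below is a minimal sketch of unbiased stochastic rounding to a fixed-point grid, assuming a uniform grid of size `step` rather than the paper's exact encoding.

```python
# A sketch of randomized (stochastic) rounding: store a coefficient on a
# coarse grid while keeping it correct in expectation.
import random

def stochastic_round(x, step):
    """Round x to a multiple of `step`, up or down with probability chosen
    so the result is unbiased: E[stochastic_round(x, step)] == x."""
    lo = (x // step) * step
    frac = (x - lo) / step          # how far x sits above the lower grid point
    return lo + step if random.random() < frac else lo

# Averaging many rounded copies recovers the original value in expectation.
x, step = 0.3172, 1 / 16            # e.g., store coefficients on a 2^-4 grid
samples = [stochastic_round(x, step) for _ in range(100_000)]
print(sum(samples) / len(samples))  # ~0.3172
```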
			<wfw:commentRss>https://googledata.org/google-research/influential-papers-for-2013/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
<enclosure url="" length="" type="" />
		</item>
		<item>
		<title>2014 Google PhD Fellowships: Supporting the Future of Computer Science</title>
		<link>https://googledata.org/google-research/2014-google-phd-fellowships-supporting-the-future-of-computer-science/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=2014-google-phd-fellowships-supporting-the-future-of-computer-science</link>
		<comments>https://googledata.org/google-research/2014-google-phd-fellowships-supporting-the-future-of-computer-science/#comments</comments>
		<pubDate>Wed, 18 Jun 2014 16:00:00 +0000</pubDate>
		<dc:creator><![CDATA[Research @ Google]]></dc:creator>
				<category><![CDATA[Google Research]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=cc174df6c9c016cb8ec1308bbf567d4e</guid>
		<description><![CDATA[<span>Posted by David Harper, Google University Relations &#38; Beate List, Google Research Programs<br /></span><br /><br />Nurturing and maintaining strong relations with the academic community is a top priority at Google. Today, we&#8217;re announcing the <a href="http://services.google.com/fh/files/blogs/2014googlephdfellowshiprecipients.pdf">2014 Google PhD Fellowship recipients</a>. These students, recognized for their incredible creativity, knowledge and skills, represent some of the most outstanding graduate researchers in computer science across the globe. We&#8217;re excited to support them, and we extend our warmest congratulations.<br /><br />The Google PhD Fellowship program supports PhD students in computer science or closely related fields and reflects our commitment to building strong relations with the global academic community. Now in its sixth year, the program covers North America, Europe, China, India and Australia. To date we&#8217;ve awarded 193 Fellowships in 72 universities across 17 countries.<br /><br />As we welcome the 2014 PhD Fellows, we hear from two past recipients, <a href="http://googleforstudents.blogspot.co.at/2014/03/getting-to-know-phd.html">Cynthia Liem</a> and <a href="http://googleforstudents.blogspot.ch/2014/06/getting-to-know-phd-ian-goodfellow.html">Ian Goodfellow</a>. Cynthia studies at the <a href="http://www.ewi.tudelft.nl/en">Delft University of Technology</a>, and was awarded a Fellowship in Multimedia. Ian is about to complete his PhD at the <a href="http://en.diro.umontreal.ca/home/">Universit&#233; de Montr&#233;al in Qu&#233;bec</a>, and was awarded a Fellowship in Deep Learning. Recently interviewed on the Google Student blog, they expressed their views on how the Fellowship affected their careers. <br /><br />Cynthia has combined her dual passions of music and computing to pursue a PhD in music information retrieval. She speaks about the fellowship and her links with Google: <br /><br /><i>&#8220;Through the Google European Doctoral Fellowship, I was assigned a Google mentor who works on topics related to my PhD interests. In my case, this was <a href="http://research.google.com/pubs/author39086.html">Dr. Douglas Eck</a> in Mountain View, who is part of Google Research and leads a team focusing on music recommendation. Doug has been encouraging me in several of my academic activities, most notably the initiation of the <a href="http://mmc.tudelft.nl/content/need-music-information-retrieval-user-centered-and-multimodal-strategies">ACM MIRUM Workshop</a>, which managed to successfully bring music retrieval into the spotlight of the prestigious ACM Multimedia conference.&#8221;</i><br /><br />Ian is about to start as a research scientist on <a href="http://research.google.com/pubs/jeff.html">Jeff Dean</a>&#8217;s deep learning infrastructure team.  He was also an intern at Google, and contributed to the development of a neural network capable of <a href="http://www.technologyreview.com/view/523326/how-google-cracked-house-number-identification-in-street-view/">transcribing the address numbers on houses from Google Street View photos</a>.  
He describes the connection between this intern project and his PhD study supported by the Fellowship: <br /><br /><i>&#8220;The project I worked on during my internship was the basis for a <a href="http://research.google.com/pubs/pub42241.html">publication</a> at the <a href="https://sites.google.com/site/representationlearning2014/">International Conference on Learning Representations</a> &#8230;. my advisor let me include this paper in my PhD thesis since there was a close connection to the subject area.&#8230; I can show that some of the work developed early in the thesis has had a real impact.&#8221;</i><br /><br />We&#8217;re proud to have supported Cynthia, Ian, and all the other recipients of the Google PhD Fellowship. We continue to look forward to working with, and learning from, the academic community with great excitement and high expectations.]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by David Harper, Google University Relations &amp; Beate List, Google Research Programs<br /></span><br /><br />Nurturing and maintaining strong relations with the academic community is a top priority at Google. Today, we’re announcing the <a href="http://services.google.com/fh/files/blogs/2014googlephdfellowshiprecipients.pdf">2014 Google PhD Fellowship recipients</a>. These students, recognized for their incredible creativity, knowledge and skills, represent some of the most outstanding graduate researchers in computer science across the globe. We’re excited to support them, and we extend our warmest congratulations.<br /><br />The Google PhD Fellowship program supports PhD students in computer science or closely related fields and reflects our commitment to building strong relations with the global academic community. Now in its sixth year, the program covers North America, Europe, China, India and Australia. To date we’ve awarded 193 Fellowships in 72 universities across 17 countries.<br /><br />As we welcome the 2014 PhD Fellows, we hear from two past recipients, <a href="http://googleforstudents.blogspot.co.at/2014/03/getting-to-know-phd.html">Cynthia Liem</a> and <a href="http://googleforstudents.blogspot.ch/2014/06/getting-to-know-phd-ian-goodfellow.html">Ian Goodfellow</a>. Cynthia studies at the <a href="http://www.ewi.tudelft.nl/en">Delft University of Technology</a>, and was awarded a Fellowship in Multimedia. Ian is about to complete his PhD at the <a href="http://en.diro.umontreal.ca/home/">Université de Montréal in Québec</a>, and was awarded a Fellowship in Deep Learning. Recently interviewed on the Google Student blog, they expressed their views on how the Fellowship affected their careers. <br /><br />Cynthia has combined her dual passions of music and computing to pursue a PhD in music information retrieval. She speaks about the fellowship and her links with Google: <br /><br /><i>“Through the Google European Doctoral Fellowship, I was assigned a Google mentor who works on topics related to my PhD interests. In my case, this was <a href="http://research.google.com/pubs/author39086.html">Dr. Douglas Eck</a> in Mountain View, who is part of Google Research and leads a team focusing on music recommendation. Doug has been encouraging me in several of my academic activities, most notably the initiation of the <a href="http://mmc.tudelft.nl/content/need-music-information-retrieval-user-centered-and-multimodal-strategies">ACM MIRUM Workshop</a>, which managed to successfully bring music retrieval into the spotlight of the prestigious ACM Multimedia conference.”</i><br /><br />Ian is about to start as a research scientist on <a href="http://research.google.com/pubs/jeff.html">Jeff Dean</a>’s deep learning infrastructure team.  He was also an intern at Google, and contributed to the development of a neural network capable of <a href="http://www.technologyreview.com/view/523326/how-google-cracked-house-number-identification-in-street-view/">transcribing the address numbers on houses from Google Street View photos</a>.  He describes the connection between this intern project and his PhD study supported by the Fellowship: <br /><br /><i>“The project I worked on during my internship was the basis for a <a href="http://research.google.com/pubs/pub42241.html">publication</a> at the <a href="https://sites.google.com/site/representationlearning2014/">International Conference on Learning Representations</a> …. 
my advisor let me include this paper in my PhD thesis since there was a close connection to the subject area.… I can show that some of the work developed early in the thesis has had a real impact.”</i><br /><br />We’re proud to have supported Cynthia, Ian, and all the other recipients of the Google PhD Fellowship. We continue to look forward to working with, and learning from, the academic community with great excitement and high expectations.]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-research/2014-google-phd-fellowships-supporting-the-future-of-computer-science/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
<enclosure url="" length="" type="" />
		</item>
		<item>
		<title>A skill-based approach to creating open online courses</title>
		<link>https://googledata.org/google-research/a-skill-based-approach-to-creating-open-online-courses/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=a-skill-based-approach-to-creating-open-online-courses</link>
		<comments>https://googledata.org/google-research/a-skill-based-approach-to-creating-open-online-courses/#comments</comments>
		<pubDate>Tue, 27 May 2014 17:00:00 +0000</pubDate>
		<dc:creator><![CDATA[Research @ Google]]></dc:creator>
				<category><![CDATA[Google Research]]></category>
		<category><![CDATA[education]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=691e02e70a73bf08e85713f99df49b27</guid>
		<description><![CDATA[<span>Posted by Sean Lip, Software Engineer, Open Online Education</span><br /><br />Google has offered a number of open online courses in the past two years, and some of our recent <a href="http://googleresearch.blogspot.com/2014/03/making-sense-of-mooc-data.html">research</a> highlights the importance of having effective and relevant activities in these courses. Over the past decade, the Open Learning Initiative (OLI) at <a href="http://oli.cmu.edu/">Carnegie Mellon</a>, and now at <a href="http://oli.stanford.edu/">Stanford</a>, has successfully offered free open online courses that are centered around goal-directed activities that provide students with targeted feedback on their work. To improve understanding of how to design online courses around effective activities, Google and OLI recently collaborated on a <a href="http://static.squarespace.com/static/5330c47be4b03ea35b2645a8/t/5361cb4be4b0c9d8aaa7d3be/1398917963378/AnApproachtoSkillMappinginOnlineCourses04302014.pdf">white paper</a> that outlines the skill-based approach that OLI uses to create its courses.<br /><br />OLI courses are organized around a set of learning objectives that identify what students should be able to do by the time they have completed a course module. These learning objectives are broken down into skills, and individual activities in the course are aimed at developing students&#8217; mastery of these skills. A typical activity from the <a href="https://oli.cmu.edu/courses/free-open/engineering-statics-course-details/">Engineering Statics course</a> is shown below:<br /><br /><div><a href="http://1.bp.blogspot.com/-zwq1JIWMA5s/U4TD8H86EDI/AAAAAAAAARo/5JqyEMtAoF8/s1600/statics.png"><img border="0" src="http://1.bp.blogspot.com/-zwq1JIWMA5s/U4TD8H86EDI/AAAAAAAAARo/5JqyEMtAoF8/s400/statics.png"></a></div><br />During the course, students&#8217; attempts at questions related to a particular skill are then fed as inputs into a probabilistic model which treats the degrees of mastery for each skill as mathematically independent variables. This model estimates how likely a student is to have mastered individual skills, and its output can help instructors determine which students are struggling and intervene appropriately, as well as inform the design of future versions of the same course. The paper also outlines the advantages and limitations of the existing system, which could be useful starting points for further research. <br /><br />We hope that this white paper provides useful insight for creators of online courses and course platforms, and that it stimulates further discussion about how to help people learn online more effectively.]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by Sean Lip, Software Engineer, Open Online Education</span><br /><br />Google has offered a number of open online courses in the past two years, and some of our recent <a href="http://googleresearch.blogspot.com/2014/03/making-sense-of-mooc-data.html">research</a> highlights the importance of having effective and relevant activities in these courses. Over the past decade, the Open Learning Initiative (OLI) at <a href="http://oli.cmu.edu/">Carnegie Mellon</a>, and now at <a href="http://oli.stanford.edu/">Stanford</a>, has successfully offered free open online courses that are centered around goal-directed activities that provide students with targeted feedback on their work. To improve understanding of how to design online courses around effective activities, Google and OLI recently collaborated on a <a href="http://static.squarespace.com/static/5330c47be4b03ea35b2645a8/t/5361cb4be4b0c9d8aaa7d3be/1398917963378/AnApproachtoSkillMappinginOnlineCourses04302014.pdf">white paper</a> that outlines the skill-based approach that OLI uses to create its courses.<br /><br />OLI courses are organized around a set of learning objectives that identify what students should be able to do by the time they have completed a course module. These learning objectives are broken down into skills, and individual activities in the course are aimed at developing students’ mastery of these skills. A typical activity from the <a href="https://oli.cmu.edu/courses/free-open/engineering-statics-course-details/">Engineering Statics course</a> is shown below:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-zwq1JIWMA5s/U4TD8H86EDI/AAAAAAAAARo/5JqyEMtAoF8/s1600/statics.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-zwq1JIWMA5s/U4TD8H86EDI/AAAAAAAAARo/5JqyEMtAoF8/s400/statics.png" /></a></div><br />During the course, students’ attempts at questions related to a particular skill are then fed as inputs into a probabilistic model which treats the degrees of mastery for each skill as mathematically independent variables. This model estimates how likely a student is to have mastered individual skills, and its output can help instructors determine which students are struggling and intervene appropriately, as well as inform the design of future versions of the same course. The paper also outlines the advantages and limitations of the existing system, which could be useful starting points for further research. <br /><br />We hope that this white paper provides useful insight for creators of online courses and course platforms, and that it stimulates further discussion about how to help people learn online more effectively.]]></content:encoded>
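A minimal sketch of the independence assumption described above, estimating each skill's mastery separately from the student's attempts at questions tagged with that skill. The Beta-Bernoulli model, prior values, and skill names are assumptions of this sketch, not the white paper's actual statistical model.

```python
# Per-skill mastery estimates under an independence assumption: each skill is
# a separate Beta-Bernoulli model updated from that skill's attempts only.
from collections import defaultdict

ALPHA, BETA = 1.0, 1.0  # uniform prior over the mastery probability

def mastery_estimates(attempts):
    """attempts: list of (skill, correct) pairs -> posterior mean per skill."""
    counts = defaultdict(lambda: [0, 0])  # skill -> [successes, failures]
    for skill, correct in attempts:
        counts[skill][0 if correct else 1] += 1
    return {
        skill: (s + ALPHA) / (s + f + ALPHA + BETA)
        for skill, (s, f) in counts.items()
    }

print(mastery_estimates([("free-body-diagrams", True),
                         ("free-body-diagrams", True),
                         ("moment-of-a-force", False)]))
# -> {'free-body-diagrams': 0.75, 'moment-of-a-force': 0.333...}
```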
			<wfw:commentRss>https://googledata.org/google-research/a-skill-based-approach-to-creating-open-online-courses/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
<enclosure url="" length="" type="" />
		</item>
		<item>
		<title>A Billion Words: Because today&#8217;s language modeling standard should be higher</title>
		<link>https://googledata.org/google-research/a-billion-words-because-todays-language-modeling-standard-should-be-higher/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=a-billion-words-because-todays-language-modeling-standard-should-be-higher</link>
		<comments>https://googledata.org/google-research/a-billion-words-because-todays-language-modeling-standard-should-be-higher/#comments</comments>
		<pubDate>Wed, 30 Apr 2014 16:00:00 +0000</pubDate>
		<dc:creator><![CDATA[Research @ Google]]></dc:creator>
				<category><![CDATA[Google Research]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=4e4273d93031b48cd12a69113d17b111</guid>
		<description><![CDATA[<span>Posted by Dave Orr, Product Manager, and Ciprian Chelba, Research Scientist</span><br /><br />Language is chock full of ambiguity, and it can turn up in surprising places. Many words are hard to tell apart without context: most Americans <a href="http://languagelog.ldc.upenn.edu/nll/?p=8697">pronounce &#8220;ladder&#8221; and &#8220;latter&#8221; identically</a>, for instance. Keyboard inputs on mobile devices have a similar problem, especially for <a href="https://play.google.com/store/apps/details?id=com.google.android.inputmethod.latin">IME keyboards</a>. For example, the input patterns for &#8220;Yankees&#8221; and &#8220;takes&#8221; look very similar:<br /><table align="center" cellpadding="0" cellspacing="0"><tbody><tr><td><a href="http://4.bp.blogspot.com/-9dKSAtM4ACA/U2EWtiF1qII/AAAAAAAAARY/wVlql4Ave1k/s1600/ime_keyboard.png"><img border="0" src="http://4.bp.blogspot.com/-9dKSAtM4ACA/U2EWtiF1qII/AAAAAAAAARY/wVlql4Ave1k/s1600/ime_keyboard.png" height="342" width="400"></a></td></tr><tr><td><span><span>Photo credit: Kurt Partridge</span></span></td></tr></tbody></table><br />But in this context -- the previous two words, &#8220;New York&#8221; -- &#8220;Yankees&#8221; is much more likely. <br /><br />One key way computers use context is with <a href="http://en.wikipedia.org/wiki/Language_model">language models</a>.  These are used for predictive keyboards, but also speech recognition, machine translation, spelling correction, query suggestions, and so on. Often those are specialized: word order for queries versus web pages can be very different. Either way, having an accurate language model with wide coverage drives the quality of all these applications.<br /><br />Due to interactions between components, one thing that can be tricky when evaluating the quality of such complex systems is error attribution. Good engineering practice is to evaluate the quality of each module separately, including the language model. We believe that the field could benefit from a large, standard set with benchmarks for easy comparison and experiments with new modeling techniques. <br /><br />To that end, <a href="http://www.statmt.org/lm-benchmark/">we are releasing scripts</a> that convert a set of public data into a language modeling benchmark of over a billion words, with standardized training and test splits, described in an <a href="http://arxiv.org/abs/1312.3005">arXiv paper</a>. Along with the scripts, we&#8217;re releasing the processed data, including the training and test sets, in one convenient location. This will make it much easier for the research community to quickly reproduce results, and we hope will speed up progress on these tasks.<br /><br />The benchmark scripts and data are freely available, and can be found here: <a href="http://www.statmt.org/lm-benchmark/">http://www.statmt.org/lm-benchmark/</a><br /><br />The field needs a new and better standard benchmark. Currently, researchers report results on data sets of their choice, and results are very hard to reproduce because there is no standard preprocessing. We hope that this will solve both those problems, and become the standard benchmark for language modeling experiments. 
As more researchers use the new benchmark, comparisons will be easier and more accurate, and progress will be faster.<br /><br />For all the researchers out there, try out this model, run your experiments, and let us know how it goes -- or publish, and we&#8217;ll enjoy finding your results at conferences and in journals.]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by Dave Orr, Product Manager, and Ciprian Chelba, Research Scientist</span><br /><br />Language is chock full of ambiguity, and it can turn up in surprising places. Many words are hard to tell apart without context: most Americans <a href="http://languagelog.ldc.upenn.edu/nll/?p=8697">pronounce “ladder” and “latter” identically</a>, for instance. Keyboard inputs on mobile devices have a similar problem, especially for <a href="https://play.google.com/store/apps/details?id=com.google.android.inputmethod.latin">IME keyboards</a>. For example, the input patterns for “Yankees” and “takes” look very similar:<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://4.bp.blogspot.com/-9dKSAtM4ACA/U2EWtiF1qII/AAAAAAAAARY/wVlql4Ave1k/s1600/ime_keyboard.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="http://4.bp.blogspot.com/-9dKSAtM4ACA/U2EWtiF1qII/AAAAAAAAARY/wVlql4Ave1k/s1600/ime_keyboard.png" height="342" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: right;"><span style="text-align: start;"><span style="font-size: xx-small;">Photo credit: Kurt Partridge</span></span></td></tr></tbody></table><br />But in this context -- the previous two words, “New York” -- “Yankees” is much more likely. <br /><br />One key way computers use context is with <a href="http://en.wikipedia.org/wiki/Language_model">language models</a>.  These are used for predictive keyboards, but also speech recognition, machine translation, spelling correction, query suggestions, and so on. Often those are specialized: word order for queries versus web pages can be very different. Either way, having an accurate language model with wide coverage drives the quality of all these applications.<br /><br />Due to interactions between components, one thing that can be tricky when evaluating the quality of such complex systems is error attribution. Good engineering practice is to evaluate the quality of each module separately, including the language model. We believe that the field could benefit from a large, standard set with benchmarks for easy comparison and experiments with new modeling techniques. <br /><br />To that end, <a href="http://www.statmt.org/lm-benchmark/">we are releasing scripts</a> that convert a set of public data into a language modeling benchmark of over a billion words, with standardized training and test splits, described in an <a href="http://arxiv.org/abs/1312.3005">arXiv paper</a>. Along with the scripts, we’re releasing the processed data, including the training and test sets, in one convenient location. This will make it much easier for the research community to quickly reproduce results, and we hope will speed up progress on these tasks.<br /><br />The benchmark scripts and data are freely available, and can be found here: <a href="http://www.statmt.org/lm-benchmark/">http://www.statmt.org/lm-benchmark/</a><br /><br />The field needs a new and better standard benchmark. Currently, researchers report results on data sets of their choice, and results are very hard to reproduce because there is no standard preprocessing. We hope that this will solve both those problems, and become the standard benchmark for language modeling experiments. 
As more researchers use the new benchmark, comparisons will be easier and more accurate, and progress will be faster.<br /><br />For all the researchers out there, try out this model, run your experiments, and let us know how it goes -- or publish, and we’ll enjoy finding your results at conferences and in journals.]]></content:encoded>
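A toy illustration of the standardized evaluation such a benchmark enables: train a language model on one split and report perplexity on the other. The bigram model with add-one smoothing and the tiny corpus below are invented for illustration; the benchmark's published baselines are far stronger n-gram and neural models.

```python
# Train a toy bigram language model on a "training split" and score a
# "test split" by perplexity, the standard language modeling metric.
from collections import Counter
from math import exp, log

def train_bigram(sentences):
    unigrams, bigrams = Counter(), Counter()
    for s in sentences:
        tokens = ["<s>"] + s.split() + ["</s>"]
        unigrams.update(tokens[:-1])                 # contexts
        bigrams.update(zip(tokens[:-1], tokens[1:])) # (context, word) pairs
    return unigrams, bigrams

def perplexity(sentences, unigrams, bigrams):
    vocab = len(unigrams) + 1   # +1 for unseen words (add-one smoothing)
    log_prob, n = 0.0, 0
    for s in sentences:
        tokens = ["<s>"] + s.split() + ["</s>"]
        for prev, word in zip(tokens[:-1], tokens[1:]):
            p = (bigrams[(prev, word)] + 1) / (unigrams[prev] + vocab)
            log_prob += log(p)
            n += 1
    return exp(-log_prob / n)   # lower is better

train = ["new york yankees", "new york city"]
test = ["new york yankees"]
print(perplexity(test, *train_bigram(train)))
```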
			<wfw:commentRss>https://googledata.org/google-research/a-billion-words-because-todays-language-modeling-standard-should-be-higher/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
<enclosure url="" length="" type="" />
		</item>
		<item>
		<title>Lens Blur in the new Google Camera app</title>
		<link>https://googledata.org/google-research/lens-blur-in-the-new-google-camera-app/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=lens-blur-in-the-new-google-camera-app</link>
		<comments>https://googledata.org/google-research/lens-blur-in-the-new-google-camera-app/#comments</comments>
		<pubDate>Wed, 16 Apr 2014 17:42:00 +0000</pubDate>
		<dc:creator><![CDATA[Research @ Google]]></dc:creator>
				<category><![CDATA[Google Research]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=678294af7d363943b6a8c2366e9993ca</guid>
		<description><![CDATA[<span>Posted by Carlos Hern&#225;ndez, Software Engineer</span><br /><br />One of the biggest advantages of <a href="http://en.wikipedia.org/wiki/Digital_single-lens_reflex_camera">SLR cameras</a> over camera phones is the ability to achieve shallow depth of field and <i>bokeh</i> effects. Shallow depth of field makes the object of interest "pop" by bringing the foreground into focus and de-emphasizing the background. Achieving this optical effect has traditionally required a big lens and aperture, and therefore hasn&#8217;t been possible using the camera on your mobile phone or tablet.<br /><br />That all changes with <i>Lens Blur,</i> a new mode in the <a href="https://play.google.com/store/apps/details?id=com.google.android.GoogleCamera">Google Camera</a>&#160;app. It lets you take a photo with a shallow depth of field using just your Android phone or tablet.  Unlike a regular photo, Lens Blur lets you change the point or level of focus <i><b>after</b></i> the photo is taken.  You can choose to make any object come into focus simply by tapping on it in the image.  By changing the depth-of-field slider, you can simulate different aperture sizes, to achieve bokeh effects ranging from subtle to surreal (e.g., <a href="http://en.wikipedia.org/wiki/Miniature_faking">tilt-shift</a>).  The new image is rendered instantly, allowing you to see your changes in real time.<br /><br /><div><a href="http://4.bp.blogspot.com/-U6zxXHMNqAY/U03ZkNg1AoI/AAAAAAAAAQ0/SwrfQIAx-3U/s1600/image1.png"><img border="0" src="http://4.bp.blogspot.com/-U6zxXHMNqAY/U03ZkNg1AoI/AAAAAAAAAQ0/SwrfQIAx-3U/s1600/image1.png"></a></div>Lens Blur replaces the need for a large optical system with algorithms that <b><i>simulate</i></b> a larger lens and aperture.  Instead of capturing a single photo, you move the camera in an upward sweep to capture a whole series of frames.  From these photos, Lens Blur uses computer vision algorithms to create a 3D model of the world, estimating the <a href="https://developers.google.com/depthmap-metadata/">depth</a> (distance) to every point in the scene. Here&#8217;s an example -- on the left is a raw input photo, in the middle is a &#8220;depth map&#8221; where darker things are close and lighter things are far away, and on the right is the result blurred by distance:<br /><br /><div><a href="http://4.bp.blogspot.com/-89KApjt0Qr0/U03aCxvyQFI/AAAAAAAAAQ8/18BFoFaOuIE/s1600/image2.png"><img border="0" src="http://4.bp.blogspot.com/-89KApjt0Qr0/U03aCxvyQFI/AAAAAAAAAQ8/18BFoFaOuIE/s1600/image2.png"></a></div>Here&#8217;s how we do it.  First, we pick out visual features in the scene and track them over time, across the series of images. Using computer vision algorithms known as Structure-from-Motion (SfM) and <a href="https://code.google.com/p/ceres-solver/">bundle adjustment</a>, we compute the camera&#8217;s 3D position and orientation and the 3D positions of all those image features throughout the series.<br /><br />Once we&#8217;ve got the 3D pose of each photo, we compute the depth of each pixel in the reference photo using <a href="http://vision.middlebury.edu/mview/">Multi-View Stereo</a> (MVS) algorithms. MVS works the way human stereo vision does: given the location of the same object in two different images, we can <a href="http://en.wikipedia.org/wiki/Triangulation">triangulate</a> the 3D position of the object and compute the distance to it.  How do we figure out which pixel in one image corresponds to a pixel in another image? 
MVS measures how similar they are -- on mobile devices, one particularly simple and efficient way is computing the Sum of Absolute Differences (SAD) of the RGB colors of the two pixels.<br /><br />Now it&#8217;s an optimization problem: we try to build a depth map where all the corresponding pixels are most similar to each other. But that&#8217;s typically not a well-posed optimization problem -- you can get the same similarity score for different depth maps. To address this ambiguity, the optimization also incorporates assumptions about the 3D geometry of a scene, called a &#8220;prior&#8221; that favors reasonable solutions. For example, you can often assume two pixels near each other are at a similar depth. Finally, we use <a href="http://en.wikipedia.org/wiki/Markov_random_field">Markov Random Field</a> inference methods to solve the optimization problem.<br /><br />Having computed the depth map, we can re-render the photo, blurring pixels by differing amounts depending on the pixel&#8217;s depth, aperture and location relative to the focal plane. The focal plane determines which pixels to blur, with the amount of blur increasing proportionally with the distance of each pixel to that focal plane. This is all achieved by simulating a physical lens using the <a href="http://en.wikipedia.org/wiki/Thin_lens">thin lens </a>approximation. <br /><br /><div><a href="http://4.bp.blogspot.com/-bZJNDZGLS_U/U03bQE2VzKI/AAAAAAAAARI/yQwRovcDWRQ/s1600/image3.png"><img border="0" src="http://4.bp.blogspot.com/-bZJNDZGLS_U/U03bQE2VzKI/AAAAAAAAARI/yQwRovcDWRQ/s1600/image3.png"></a></div>The algorithms used to create the 3D photo run entirely on the mobile device, and are closely related to the computer vision algorithms used in 3D mapping features like Google Maps <a href="http://google-latlong.blogspot.com/2012/04/visit-global-landmarks-with-photo-tours.html">Photo Tours</a> and <a href="http://www.google.com/earth/">Google Earth</a>. We hope you have fun with your <i>bokeh</i> experiments!]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by Carlos Hernández, Software Engineer</span><br /><br />One of the biggest advantages of <a href="http://en.wikipedia.org/wiki/Digital_single-lens_reflex_camera">SLR cameras</a> over camera phones is the ability to achieve shallow depth of field and <i>bokeh</i> effects. Shallow depth of field makes the object of interest "pop" by bringing the foreground into focus and de-emphasizing the background. Achieving this optical effect has traditionally required a big lens and aperture, and therefore hasn’t been possible using the camera on your mobile phone or tablet.<br /><br />That all changes with <i>Lens Blur,</i> a new mode in the <a href="https://play.google.com/store/apps/details?id=com.google.android.GoogleCamera">Google Camera</a>&nbsp;app. It lets you take a photo with a shallow depth of field using just your Android phone or tablet.  Unlike a regular photo, Lens Blur lets you change the point or level of focus <i><b>after</b></i> the photo is taken.  You can choose to make any object come into focus simply by tapping on it in the image.  By changing the depth-of-field slider, you can simulate different aperture sizes, to achieve bokeh effects ranging from subtle to surreal (e.g., <a href="http://en.wikipedia.org/wiki/Miniature_faking">tilt-shift</a>).  The new image is rendered instantly, allowing you to see your changes in real time.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-U6zxXHMNqAY/U03ZkNg1AoI/AAAAAAAAAQ0/SwrfQIAx-3U/s1600/image1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-U6zxXHMNqAY/U03ZkNg1AoI/AAAAAAAAAQ0/SwrfQIAx-3U/s1600/image1.png" /></a></div>Lens Blur replaces the need for a large optical system with algorithms that <b><i>simulate</i></b> a larger lens and aperture.  Instead of capturing a single photo, you move the camera in an upward sweep to capture a whole series of frames.  From these photos, Lens Blur uses computer vision algorithms to create a 3D model of the world, estimating the <a href="https://developers.google.com/depthmap-metadata/">depth</a> (distance) to every point in the scene. Here’s an example -- on the left is a raw input photo, in the middle is a “depth map” where darker things are close and lighter things are far away, and on the right is the result blurred by distance:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-89KApjt0Qr0/U03aCxvyQFI/AAAAAAAAAQ8/18BFoFaOuIE/s1600/image2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-89KApjt0Qr0/U03aCxvyQFI/AAAAAAAAAQ8/18BFoFaOuIE/s1600/image2.png" /></a></div>Here’s how we do it.  First, we pick out visual features in the scene and track them over time, across the series of images. Using computer vision algorithms known as Structure-from-Motion (SfM) and <a href="https://code.google.com/p/ceres-solver/">bundle adjustment</a>, we compute the camera’s 3D position and orientation and the 3D positions of all those image features throughout the series.<br /><br />Once we’ve got the 3D pose of each photo, we compute the depth of each pixel in the reference photo using <a href="http://vision.middlebury.edu/mview/">Multi-View Stereo</a> (MVS) algorithms. 
MVS works the way human stereo vision does: given the location of the same object in two different images, we can <a href="http://en.wikipedia.org/wiki/Triangulation">triangulate</a> the 3D position of the object and compute the distance to it.  How do we figure out which pixel in one image corresponds to a pixel in another image? MVS measures how similar they are -- on mobile devices, one particularly simple and efficient way is computing the Sum of Absolute Differences (SAD) of the RGB colors of the two pixels.<br /><br />Now it’s an optimization problem: we try to build a depth map where all the corresponding pixels are most similar to each other. But that’s typically not a well-posed optimization problem -- you can get the same similarity score for different depth maps. To address this ambiguity, the optimization also incorporates assumptions about the 3D geometry of a scene, called a “prior” that favors reasonable solutions. For example, you can often assume two pixels near each other are at a similar depth. Finally, we use <a href="http://en.wikipedia.org/wiki/Markov_random_field">Markov Random Field</a> inference methods to solve the optimization problem.<br /><br />Having computed the depth map, we can re-render the photo, blurring pixels by differing amounts depending on the pixel’s depth, aperture and location relative to the focal plane. The focal plane determines which pixels to blur, with the amount of blur increasing proportionally with the distance of each pixel to that focal plane. This is all achieved by simulating a physical lens using the <a href="http://en.wikipedia.org/wiki/Thin_lens">thin lens </a>approximation. <br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-bZJNDZGLS_U/U03bQE2VzKI/AAAAAAAAARI/yQwRovcDWRQ/s1600/image3.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-bZJNDZGLS_U/U03bQE2VzKI/AAAAAAAAARI/yQwRovcDWRQ/s1600/image3.png" /></a></div>The algorithms used to create the 3D photo run entirely on the mobile device, and are closely related to the computer vision algorithms used in 3D mapping features like Google Maps <a href="http://google-latlong.blogspot.com/2012/04/visit-global-landmarks-with-photo-tours.html">Photo Tours</a> and <a href="http://www.google.com/earth/">Google Earth</a>. We hope you have fun with your <i>bokeh</i> experiments!]]></content:encoded>
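A small numpy sketch of the Sum of Absolute Differences (SAD) matching cost mentioned in the Lens Blur post above: the candidate patch with the lowest SAD identifies the best depth hypothesis. The patch shapes, depth values, and helper names are invented for illustration and are not Lens Blur's actual pipeline.

```python
# SAD matching cost for multi-view stereo: compare a reference patch against
# one candidate patch per hypothesized depth and keep the cheapest match.
import numpy as np

def sad(patch_a, patch_b):
    """Sum of absolute RGB differences between two equally sized patches."""
    return float(np.abs(patch_a.astype(np.int32) - patch_b.astype(np.int32)).sum())

def best_depth(ref_patch, candidate_patches, depths):
    """Pick the depth whose candidate patch best matches the reference."""
    costs = [sad(ref_patch, p) for p in candidate_patches]
    return depths[int(np.argmin(costs))]

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (5, 5, 3))                          # 5x5 RGB patch
noisy_match = np.clip(ref + rng.integers(-3, 4, ref.shape), 0, 255)
decoy = rng.integers(0, 256, (5, 5, 3))
print(best_depth(ref, [decoy, noisy_match], depths=[1.0, 2.0]))  # -> 2.0
```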
			<wfw:commentRss>https://googledata.org/google-research/lens-blur-in-the-new-google-camera-app/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
<enclosure url="" length="" type="" />
		</item>
		<item>
		<title>Sawasdeee ka Voice Search</title>
		<link>https://googledata.org/google-research/sawasdeee-ka-voice-search/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=sawasdeee-ka-voice-search</link>
		<comments>https://googledata.org/google-research/sawasdeee-ka-voice-search/#comments</comments>
		<pubDate>Wed, 02 Apr 2014 16:00:00 +0000</pubDate>
		<dc:creator><![CDATA[Research @ Google]]></dc:creator>
				<category><![CDATA[Google Research]]></category>
		<category><![CDATA[search]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=34755143894514b88b53f21dd426e569</guid>
		<description><![CDATA[<span>Posted by Keith Hall and Richard Sproat, Staff Research Scientists, Speech</span><br /><br /><br />Typing on mobile devices can be difficult, especially when you're on the go. Google Voice Search gives you a fast, easy, and natural way to search by speaking your queries instead of typing them. In Thailand, Voice Search has been one of the most requested services, so we&#8217;re excited to now offer users there the ability to speak queries in Thai, adding to over 75 languages and accents in which you can talk to Google.<br /><br />To power Voice Search, we teach computers to understand the sounds and words that build spoken language. We trained our speech recognizer to understand Thai by collecting speech samples from hundreds of volunteers in Bangkok, which enabled us to build this recognizer in just a fraction of the time it took to build other models. Our helpers are asked to read popular queries in their native tongue, in a variety of acoustic conditions such as in restaurants, out on busy streets, and inside cars.<br /><br />Each new language for voice recognition requires our research team to tackle new challenges, and Thai was no exception.<br /><ul><li>Segmentation is a major challenge in Thai, as the Thai script has no spaces between words, so it is harder to know where a word begins and ends. Therefore, we created a Thai segmenter to help our system recognize words better. For example, &#3605;&#3634;&#3585;&#3621;&#3617; can be segmented as &#3605;&#3634;&#3585; &#3621;&#3617; or &#3605;&#3634; &#3585;&#3621;&#3617;.  We collected a large corpus of text and asked Thai speakers to manually annotate plausible segmentations.  We then trained a sequence segmenter on this data, allowing it to generalize beyond the annotated data.</li><li>Numbers are an important part of any language: if the string &#8220;87&#8221; appears on a web page, we need to know how people would say it.  As with over 40 other languages, we included a number grammar for Thai that tells the system &#8220;87&#8221; is read as &#3649;&#3611;&#3604;&#3626;&#3636;&#3610;&#3648;&#3592;&#3655;&#3604;.</li><li>Thai users often mix English words, such as brand or artist names, into both spoken and written Thai, which adds complexity to our acoustic models, lexicon models, and segmentation models. We addressed this by introducing &#8216;code switching&#8217;, which allows Voice Search to recognize when different languages are being spoken interchangeably and adjust phonetic transliteration accordingly.</li><li>Thai users frequently leave out accents and tone markers when they search (e.g., &#3650;&#3609;&#3658;&#3605;&#3610;&#3640;&#3585; instead of &#3650;&#3609;&#3657;&#3605;&#3610;&#3640;&#3658;&#3585;, or &#3627;&#3617;&#3641;&#3627;&#3618;&#3629;&#3591; instead of &#3627;&#3617;&#3641;&#3627;&#3618;&#3655;&#3629;&#3591;), so we created a special algorithm to restore accents and tones in the search results we provide, ensuring that Thai users see properly formatted text in the majority of cases.</li></ul><br />We&#8217;re particularly excited that Voice Search can help people find locally relevant information, ranging from travel directions to the nearest restaurant, without having to type long phrases in Thai.<br /><br />Voice Search is available for Android devices running Jelly Bean and above. 
It will be available for older Android releases and iOS users soon.<br /><br /><div><a href="http://1.bp.blogspot.com/-qFebw6hd2oE/UzxTDkliaHI/AAAAAAAAAQM/aQ5is7OxclY/s1600/thai.jpg"><img border="0" src="http://1.bp.blogspot.com/-qFebw6hd2oE/UzxTDkliaHI/AAAAAAAAAQM/aQ5is7OxclY/s1600/thai.jpg"></a><a href="http://2.bp.blogspot.com/-1JAnpEsUTXQ/UzxTDhubxgI/AAAAAAAAAQI/eGaHaORlUZs/s1600/thai2.png"><img border="0" src="http://2.bp.blogspot.com/-1JAnpEsUTXQ/UzxTDhubxgI/AAAAAAAAAQI/eGaHaORlUZs/s1600/thai2.png"></a></div><div><br /></div>]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by Keith Hall and Richard Sproat, Staff Research Scientists, Speech</span><br /><br /><br />Typing on mobile devices can be difficult, especially when you're on the go. Google Voice Search gives you a fast, easy, and natural way to search by speaking your queries instead of typing them. In Thailand, Voice Search has been one of the most requested services, so we’re excited to now offer users there the ability to speak queries in Thai, adding to over 75 languages and accents in which you can talk to Google.<br /><br />To power Voice Search, we teach computers to understand the sounds and words that build spoken language. We trained our speech recognizer to understand Thai by collecting speech samples from hundreds of volunteers in Bangkok, which enabled us to build this recognizer in just a fraction of the time it took to build other models. Our helpers are asked to read popular queries in their native tongue, in a variety of acoustic conditions such as in restaurants, out on busy streets, and inside cars.<br /><br />Each new language for voice recognition often requires our research team to tackle new challenges, including Thai.<br /><ul><li>Segmentation is a major challenge in Thai, as the Thai script has no spaces between words, so it is harder to know when a word begins and ends. Therefore, we created a Thai segmenter to help our system recognize words better. For example: ตากลม can be segmented to ตาก ลม or ตา กลม.  We collected a large corpus of text and asked Thai speakers to manually annotate plausible segmentations.  We then trained a sequence segmenter on this data allowing it to generalize beyond the annotated data.</li><li>Numbers are an important part of any language: the string “87” appears on a web page and we need to know how people would say that.  As with over 40 other languages, we included a number grammar for Thai, that tells you that “87” would be read as แปดสิบเจ็ด.</li><li>Thai users often mix English words with Thai, such as brand or artist names, in both spoken and written Thai which adds complexity to our acoustic models, lexicon models, and segmentation models. We addressed this by introducing ‘code switching’, which allows Voice Search to recognize when different languages are being spoken interchangeably and adjust phonetic transliteration accordingly.</li><li>Many Thai users frequently leave out accents and tone markers when they search (eg โน๊ตบุก instead of โน้ตบุ๊ก OR หมูหยอง instead of หมูหย็อง) so we had to create a special algorithm to ensure accents and tones were restored in search results provided and our Thai users would see properly formatted text in the majority of cases.</li></ul><br />We’re particularly excited that Voice Search can help people find locally relevant information, ranging from travel directions to the nearest restaurant, without having to type long phrases in Thai.<br /><br />Voice Search is available for Android devices running Jelly Bean and above. 
<br />We’re particularly excited that Voice Search can help people find locally relevant information, ranging from travel directions to the nearest restaurant, without having to type long phrases in Thai.<br /><br />Voice Search is available for Android devices running Jelly Bean and above. It will be available for older Android releases and iOS users soon.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-qFebw6hd2oE/UzxTDkliaHI/AAAAAAAAAQM/aQ5is7OxclY/s1600/thai.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-qFebw6hd2oE/UzxTDkliaHI/AAAAAAAAAQM/aQ5is7OxclY/s1600/thai.jpg" /></a><a href="http://2.bp.blogspot.com/-1JAnpEsUTXQ/UzxTDhubxgI/AAAAAAAAAQI/eGaHaORlUZs/s1600/thai2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-1JAnpEsUTXQ/UzxTDhubxgI/AAAAAAAAAQI/eGaHaORlUZs/s1600/thai2.png" /></a></div><div style="text-align: center;"><br /></div>]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-research/sawasdeee-ka-voice-search/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Making Blockly Universally Accessible</title>
		<link>https://googledata.org/google-research/making-blockly-universally-accessible/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=making-blockly-universally-accessible</link>
		<comments>https://googledata.org/google-research/making-blockly-universally-accessible/#comments</comments>
		<pubDate>Tue, 01 Apr 2014 15:39:00 +0000</pubDate>
		<dc:creator><![CDATA[Research @ Google]]></dc:creator>
				<category><![CDATA[Google Research]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=78cba03c78251dc4107679e7dca1f1c9</guid>
		<description><![CDATA[<span>Posted by Neil Fraser, Chief Interplanetary Liaison</span><br /><span><br /></span>We work hard to make our products accessible to people everywhere, in every culture. Today we&#8217;re expanding our outreach efforts to support a traditionally underserved community -- those who call themselves "tlhIngan."<br /><br />Google's Blockly programming environment is used in K-12 classrooms around the world to teach programming. But the world is not enough. Students on <a href="http://en.memory-alpha.org/wiki/Qo'noS">Qo'noS</a> have had difficulty learning to code because most of the teaching tools aren't available in their native language. Additionally, many existing tools are too fragile for their pedagogical approach.  As a result, Klingons have found it challenging to enter computer science. This is reflected in the fact that less than 2% of Google engineers are Klingon.<br /><br />Today we launch a full Klingon translation of Blockly. It incorporates Klingon cultural norms to facilitate learning in this unique population:<br /><br /><ul><li>Blockly has no syntax errors.  This reduces frustration, and reduces the number of computers thrown through bulkheads.</li><li>Variables are untyped.  Type errors can too easily be perceived as a challenge to the honor of a student's family (and we&#8217;ve seen where that ends).</li><li>Debugging and bug reports have been omitted; our research indicates that, in the event of a bug, Klingon users prefer the entire program to just blow up.</li></ul><br />Get a little keyboard dirt under your fingernails. Learn that although <a href="http://en.memory-alpha.org/wiki/Serpent_worm">ghargh</a> is delicious, code structure should not resemble it. And above all, be proud that <a href="http://hol.kag.org/page/sayings">tlhIngan maH</a>. Qapla'!<br /><br />You can try out the demo <a href="https://blockly-demo.appspot.com/static/apps/code/index.html?lang=tlh#ortpyd">here</a> or get involved <a href="https://code.google.com/p/blockly/wiki/Klingon">here</a>.<br /><br />]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by Neil Fraser, Chief Interplanetary Liaison</span><br /><span class="byline-author"><br /></span>We work hard to make our products accessible to people everywhere, in every culture. Today we’re expanding our outreach efforts to support a traditionally underserved community -- those who call themselves "tlhIngan."<br /><br />Google's Blockly programming environment is used in K-12 classrooms around the world to teach programming. But the world is not enough. Students on <a href="http://en.memory-alpha.org/wiki/Qo'noS">Qo'noS</a> have had difficulty learning to code because most of the teaching tools aren't available in their native language. Additionally, many existing tools are too fragile for their pedagogical approach.  As a result, Klingons have found it challenging to enter computer science. This is reflected in the fact that less than 2% of Google engineers are Klingon.<br /><br />Today we launch a full translation of Blockly in Klingon. It incorporates Klingon cultural norms to facilitate learning in this unique population:<br /><br /><ul><li>Blockly has no syntax errors.  This reduces frustration, and reduces the number of computers thrown through bulkheads.</li><li>Variables are untyped.  Type errors can too easily be perceived as a challenge to the honor of a student's family (and we’ve seen where that ends).</li><li>Debugging and bug reports have been omitted, our research indicates that in the event of a bug, they prefer the entire program to just blow up.</li></ul><br />Get a little keyboard dirt under your fingernails. Learn that although <a href="http://en.memory-alpha.org/wiki/Serpent_worm">ghargh</a> is delicious, code structure should not resemble it. And above all, be proud that <a href="http://hol.kag.org/page/sayings">tlhIngan maH</a>. Qapla'!<br /><br />You can try out the demo <a href="https://blockly-demo.appspot.com/static/apps/code/index.html?lang=tlh#ortpyd">here</a> or get involved <a href="https://code.google.com/p/blockly/wiki/Klingon">here</a>.<br /><br />]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-research/making-blockly-universally-accessible/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Celebrating the First Set of Google Geo Education Awardees and Announcing Round Two</title>
		<link>https://googledata.org/google-research/celebrating-the-first-set-of-google-geo-education-awardees-and-announcing-round-two/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=celebrating-the-first-set-of-google-geo-education-awardees-and-announcing-round-two</link>
		<comments>https://googledata.org/google-research/celebrating-the-first-set-of-google-geo-education-awardees-and-announcing-round-two/#comments</comments>
		<pubDate>Mon, 31 Mar 2014 14:13:00 +0000</pubDate>
		<dc:creator><![CDATA[Research @ Google]]></dc:creator>
				<category><![CDATA[Google Research]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=e2d7ed84bd774419c50fe02fd2b3d6a4</guid>
		<description><![CDATA[<span>Posted by Dave Thau, Senior Developer Advocate</span><br /><br />Google's GeoEDU Outreach program is excited to announce the opening of the second round of our Geo Education Awards, aimed at supporting qualifying educational institutions that are creating content and curricula for their mapping, remote sensing, or GIS initiatives.<br /><br />If you are an educator in these areas, we encourage you to <a href="https://docs.google.com/document/d/1zf_NT0R2Qsa7IchNDPgwfYQQex8XvgZF8SePDIe8NGU/edit?usp=sharing">apply for an award</a>.  To celebrate the first round of awardees, and give a sense of the kind of work we have supported in the past, here are brief descriptions of some of our previous awards.<br /><br /><b>Nicholas Clinton, Tsinghua University<br />Development of online <a href="https://drive.google.com/file/d/0B3H1GYZLzLKCb1JKb2xON0hUcG8/edit?usp=sharing">remote sensing course</a> content using Google Earth Engine</b><br />Nick is building 10 labs for an introductory remote sensing class. Topics include studying electromagnetic radiation, image processing, time series analysis, and change detection.  The labs are being taught currently, and materials will be made available when the course has been completed.  From Lab 6:<br /><table align="center" cellpadding="0" cellspacing="0"><tbody><tr><td><img alt="truecolor.png" height="235" src="https://lh3.googleusercontent.com/DnhlrRqfnOSf2pWNUOvcuI6BYA78i0_QWP82mx0kUb6F7YorGUm6-lTvu3cxgYULxs9rjsqsJuCY7lk_s_xvTrtO-uZwR24Zq8q5tCyWG2WFFlkOTk3Ug9ff3MVGVw" width="400"></td></tr><tr><td>Let's look at some imagery in Earth Engine. &#160;Search for the place 'Mountain View, CA, USA.' &#160;What the heck is all that stuff!? &#160;We are looking at this scene because of the diverse mix of things on the Earth surface.</td></tr></tbody></table><table align="center" cellpadding="0" cellspacing="0"><tbody><tr><td><img alt="ndvi.png" height="254" src="https://lh5.googleusercontent.com/wkLAEQJkWQ46DMRwjXvsMxyzTU6eQeiMjbjtdyTUUPD-UKENOclSspSyTPaKDrYEOpgru5jbxTEMsGMYzW4ljSg4mTjFDtORXiyEUlgHu818dMNCnBv0HXUdDHVP9g" width="400"></td></tr><tr><td>Add the Landsat 8 32-day EVI composite. &#160;What do you observe? &#160;Recall that the more vegetative cover, the higher the index. &#160;It looks like the "greenest" targets in this scene are golf courses.</td></tr></tbody></table><table align="center" cellpadding="0" cellspacing="0"><tbody><tr><td><img alt="ndwi.png" border="0" height="254" src="https://lh6.googleusercontent.com/17au7XsZlrZAhxfbaWIWrim3giy0CPDcpW8TtnuAMI-ud621xTZR-L8TdlqN0fmw9eCvmAGN0po43Xu-9JiA6LoZPC3wtZjwyYM10XXy2ID4QQUr3UFt28t1RpXgiw" width="400"></td></tr><tr><td>Let's say we don't really care about vegetation (not true, of course!), but we do care about water. &#160;Let's see if the water indices can help us decipher our Mountain View mystery scene.</td></tr></tbody></table><br /><b>Dana Tomlin, University of Pennsylvania<br />Geospatial Programming: Child's Play</b><br />Dana is creating documentation, lesson plans, sample scripts, and homework assignments for each week in a 13-week, university-level course on geospatial programming.  The course uses the Python computer programming language to utilize, customize, and extend the capabilities of three geographic information systems: Google&#8217;s Earth Engine, ESRI&#8217;s ArcGIS, and the open-source QGIS.  <br /><br /><b>Declan G. 
De Paor, Old Dominion University<br />A Modular Approach to Introducing Google Mapping Technologies into Geoscience Curricula Worldwide</b><br />Declan's award supports senior student Chloe Constants who is helping design Google Maps Engine and Google Earth Engine modules for existing geoscience coursework, primarily focused on volcanic and tectonic hazards, and digital mapping. Declan and Chloe will present the modules at faculty development workshops in person and online. They see GME/GEE as a terrific way to offer authentic undergraduate research experiences to non-traditional geoscience students.<br /><br /><b>Mary Elizabeth Killilea, New York University<br />Google Geospatial Tools in a Global Classroom: &#8220;Where the City Meets the Sea: Studies in Coastal Urban Environments&#8221;</b><br />Mary and the Global Technology Services team at NYU are developing a land-cover change lab using Google Earth Engine. NYU has campuses around the world, so their labs are written to be used globally. In fact, students on four campuses around the globe are currently collecting and sharing data for the lab.  Students at their sites analyze their local cities, but do so in a global context.<br /><br /><table align="center" cellpadding="0" cellspacing="0"><tbody><tr><td><img alt="DataCollection.jpg" height="300" src="https://lh6.googleusercontent.com/TCW-ItEBZ-PguA34cKil3gEA9XuH5LXpyl9QP0aj3j6fl1sZcYYWdIY8R-7kb8YAUB2in7H3GOD8a4Y35wzxxp3ijg2HjlhxEKxse_8PXjcMm3JOSlW5A6YZmrMFrw" width="400"></td></tr><tr><td>One group of students used Android mobile devices to collect land use data in New York's Battery Park.</td></tr></tbody></table><table align="center" cellpadding="0" cellspacing="0"><tbody><tr><td><img alt="AbuDhabiLocations.jpg" height="239" src="https://lh6.googleusercontent.com/7y9YHfhU2qed45NxcVXrOj_0Vhj_2nxJyKXpiY1BvbHoKoG9Pw7PD3MZaG-Wc0GaGbA4-DiLTcT2yBaZ_UsJJb56ujHY8QFVTKWW1IoL4_Ua27ohbSvqU_46mRaHwA" width="400"></td></tr><tr><td>While others in the same course collected these points in Abu Dhabi.  Upon collection, the observations were automatically uploaded, mapped, and shared.</td></tr></tbody></table><br /><b>Scott Nowicki and Chris Edwards, University of Nevada at Las Vegas<br />Advanced Manipulation and Visualization of Remote Sensing Datasets with Google Earth Engine</b><br />Scott and Chris are taking biology, geoscience, and social science students on a field trip to collect geological data, and are generating screencast tutorials to show how these data can be queried, downloaded, calibrated, manipulated and interpreted using free tools including Google Earth Engine. These tutorials may be freely incorporated into any geospatial course, and all the field site data and analyses will be publicly released and published, giving a full description of what features are available to investigate, and how best to interpret both the remote sensing datasets and ground truth activities.<br /><br /><b>Steven Whitmeyer and Shelley Whitmeyer, James Madison University<br />Using Google Earth to Model Geologic Change Through Time</b><br />Steven and Shelley are building exercises for introductory geoscience courses focusing on coastal change and glacial landform change.  These exercises incorporate targets and goals of the Next Generation Science Standards.  They are also developing tools to create new tectonic reconstructions of how continents and tectonic plates have moved since Pangaea breakup. 
Some of the current animations are available <a href="http://csmres.jmu.edu/Geollab/Whitmeyer/GeoEduCurriculum/India_Animation.kmz">here</a> and <a href="http://csmres.jmu.edu/Geollab/Whitmeyer/GeoEduCurriculum/Africa-India.kmz">here</a>.<br /><br />We hope this overview of previous award recipients gives you a sense of the range of educational activities our GeoEDU awards are supporting.  If you are working on innovative geospatial education projects, we invite you to <a href="https://docs.google.com/document/d/1zf_NT0R2Qsa7IchNDPgwfYQQex8XvgZF8SePDIe8NGU/edit?usp=sharing">apply for a GeoEDU award</a>.]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by Dave Thau, Senior Developer Advocate</span><br /><br />Google's GeoEDU Outreach program is excited to announce the opening of the second round of our Geo Education Awards, aimed at supporting qualifying educational institutions who are creating content and curricula for their mapping, remote sensing, or GIS initiatives.<br /><br />If you are an educator in these areas, we encourage you to <a href="https://docs.google.com/document/d/1zf_NT0R2Qsa7IchNDPgwfYQQex8XvgZF8SePDIe8NGU/edit?usp=sharing">apply for an award</a>.  To celebrate the first round of awardees, and give a sense of the kind of work we have supported in the past, here are brief descriptions of some of our previous awards.<br /><br /><b>Nicholas Clinton, Tsinghua University<br />Development of online <a href="https://drive.google.com/file/d/0B3H1GYZLzLKCb1JKb2xON0hUcG8/edit?usp=sharing">remote sensing course</a> content using Google Earth Engine</b><br />Nick is building 10 labs for an introductory remote sensing class. Topics include studying electromagnetic radiation, image processing, time series analysis, and change detection.  The labs are being taught currently, and materials will be made available when the course has been completed.  From Lab 6:<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><img alt="truecolor.png" height="235" src="https://lh3.googleusercontent.com/DnhlrRqfnOSf2pWNUOvcuI6BYA78i0_QWP82mx0kUb6F7YorGUm6-lTvu3cxgYULxs9rjsqsJuCY7lk_s_xvTrtO-uZwR24Zq8q5tCyWG2WFFlkOTk3Ug9ff3MVGVw" style="-webkit-transform: rotate(0rad); border: none; margin-left: auto; margin-right: auto;" width="400" /></td></tr><tr><td class="tr-caption" style="text-align: center;">Let's look at some imagery in Earth Engine. &nbsp;Search for the place 'Mountain View, CA, USA.' &nbsp;What the heck is all that stuff!? &nbsp;We are looking at this scene because of the diverse mix of things on the Earth surface.</td></tr></tbody></table><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><img alt="ndvi.png" height="254" src="https://lh5.googleusercontent.com/wkLAEQJkWQ46DMRwjXvsMxyzTU6eQeiMjbjtdyTUUPD-UKENOclSspSyTPaKDrYEOpgru5jbxTEMsGMYzW4ljSg4mTjFDtORXiyEUlgHu818dMNCnBv0HXUdDHVP9g" style="-webkit-transform: rotate(0rad); border: none; margin-left: auto; margin-right: auto;" width="400" /></td></tr><tr><td class="tr-caption" style="text-align: center;">Add the Landsat 8 32-day EVI composite. &nbsp;What do you observe? &nbsp;Recall that the more vegetative cover the higher the index. 
&nbsp;It looks like the "greenest" targets in this scene are golf courses.</td></tr></tbody></table><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><img alt="ndwi.png" border="0" height="254" src="https://lh6.googleusercontent.com/17au7XsZlrZAhxfbaWIWrim3giy0CPDcpW8TtnuAMI-ud621xTZR-L8TdlqN0fmw9eCvmAGN0po43Xu-9JiA6LoZPC3wtZjwyYM10XXy2ID4QQUr3UFt28t1RpXgiw" style="-webkit-transform: rotate(0rad); border: none; margin-left: auto; margin-right: auto;" width="400" /></td></tr><tr><td class="tr-caption" style="text-align: center;">Let's say we don't really care about vegetation (not true, of course!), but we do care about water. &nbsp;Let's see if the water indices can help us decipher our Mountain View mystery scene.</td></tr></tbody></table><br /><b>Dana Tomlin, University of Pennsylvania<br />Geospatial Programming: Child's Play</b><br />Dana is creating documentation, lesson plans, sample scripts, and homework assignments for each week in a 13-week, university-level course on geospatial programming.  The course uses the Python computer programming language to utilize, customize, and extend the capabilities of three geographic information systems: Google’s Earth Engine, ESRI’s ArcGIS, and the open-source QGIS.  <br /><br /><b>Declan G. De Paor, Old Dominion University<br />A Modular Approach to Introducing Google Mapping Technologies into Geoscience Curricula Worldwide</b><br />Declan's award supports senior student Chloe Constants who is helping design Google Maps Engine and Google Earth Engine modules for existing geoscience coursework, primarily focused on volcanic and tectonic hazards, and digital mapping. Declan and Chloe will present the modules at faculty development workshops in person and online. They see GME/GEE as a terrific way to offer authentic undergraduate research experiences to non-traditional geoscience students.<br /><br /><b>Mary Elizabeth Killilea, New York University<br />Google Geospatial Tools in a Global Classroom: “Where the City Meets the Sea: Studies in Coastal Urban Environments”</b><br />Mary and the Global Technology Services team at NYU are developing a land-cover change lab using Google Earth Engine. NYU has campuses around the world, so their labs are written to be used globally. In fact, students on four campuses around the globe are currently collecting and sharing data for the lab.  
Students at their sites analyze their local cities, but do so in a global context.<br /><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><img alt="DataCollection.jpg" height="300" src="https://lh6.googleusercontent.com/TCW-ItEBZ-PguA34cKil3gEA9XuH5LXpyl9QP0aj3j6fl1sZcYYWdIY8R-7kb8YAUB2in7H3GOD8a4Y35wzxxp3ijg2HjlhxEKxse_8PXjcMm3JOSlW5A6YZmrMFrw" style="-webkit-transform: rotate(0rad); border: none; margin-left: auto; margin-right: auto;" width="400" /></td></tr><tr><td class="tr-caption" style="text-align: center;">One group of students used Android mobile devices to collect land use data in New York's Battery Park.</td></tr></tbody></table><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><img alt="AbuDhabiLocations.jpg" height="239" src="https://lh6.googleusercontent.com/7y9YHfhU2qed45NxcVXrOj_0Vhj_2nxJyKXpiY1BvbHoKoG9Pw7PD3MZaG-Wc0GaGbA4-DiLTcT2yBaZ_UsJJb56ujHY8QFVTKWW1IoL4_Ua27ohbSvqU_46mRaHwA" style="-webkit-transform: rotate(0rad); border: none; margin-left: auto; margin-right: auto;" width="400" /></td></tr><tr><td class="tr-caption" style="text-align: center;">While others in the same course collected these points in Abu Dhabi.  Upon collection, the observations were automatically uploaded, mapped, and shared.</td></tr></tbody></table><br /><b>Scott Nowicki and Chris Edwards, University of Nevada at Las Vegas<br />Advanced Manipulation and Visualization of Remote Sensing Datasets with Google Earth Engine</b><br />Scott and Chris are taking biology, geoscience, and social science students on a field trip to collect geological data, and are generating screencast tutorials to show how these data can be queried, downloaded, calibrated, manipulated and interpreted using free tools including Google Earth Engine. These tutorials may be freely incorporated into any geospatial course, and all the field site data and analyses will be publicly released and published, giving a full description of what features are available to investigate, and how best to interpret both the remote sensing datasets and ground truth activities.<br /><br /><b>Steven Whitmeyer and Shelley Whitmeyer, James Madison University<br />Using Google Earth to Model Geologic Change Through Time</b><br />Steven and Shelley are building exercises for introductory geoscience courses focusing on coastal change and glacial landform change.  These exercises incorporate targets and goals of the Next Generation Science Standards.  They are also developing tools to create new tectonic reconstructions of how continents and tectonic plates have moved since Pangaea breakup. Some of the current animations are available <a href="http://csmres.jmu.edu/Geollab/Whitmeyer/GeoEduCurriculum/India_Animation.kmz">here</a> and <a href="http://csmres.jmu.edu/Geollab/Whitmeyer/GeoEduCurriculum/Africa-India.kmz">here</a>.<br /><br />We hope this overview of previous award recipients gives you a sense of the range of educational activities our GeoEDU awards are supporting.  If you are working on innovative geospatial education projects, we invite you to <a href="https://docs.google.com/document/d/1zf_NT0R2Qsa7IchNDPgwfYQQex8XvgZF8SePDIe8NGU/edit?usp=sharing">apply for a GeoEDU award</a>.]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-research/celebrating-the-first-set-of-google-geo-education-awardees-and-announcing-round-two/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Making Sense of MOOC Data</title>
		<link>https://googledata.org/google-research/making-sense-of-mooc-data/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=making-sense-of-mooc-data</link>
		<comments>https://googledata.org/google-research/making-sense-of-mooc-data/#comments</comments>
		<pubDate>Thu, 27 Mar 2014 16:00:00 +0000</pubDate>
		<dc:creator><![CDATA[Research @ Google]]></dc:creator>
				<category><![CDATA[Google Research]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=6d48aa2a5623abaeeef85e2bc32e65b1</guid>
		<description><![CDATA[<span>Posted by Julia Wilkowski, Staff Instructional Designer</span><br /><br />In order to further evolve the open education system and online platforms, Google&#8217;s course design and development teams continually experiment with massive open online courses. Recently, at the Association for Computing Machinery&#8217;s <a href="http://learningatscale.acm.org/index.html">Learning@Scale conference</a> in Atlanta, GA, several members of our team presented findings about our online courses. Our research focuses on <a href="http://static.googleusercontent.com/media/research.google.com/en/us/pubs/archive/41929.pdf">learners&#8217; goals and activities</a> as well as <a href="http://static.googleusercontent.com/media/research.google.com/en/us/pubs/archive/41928.pdf">self-evaluation as an assessment tool</a>. In this post, I will present highlights from our research as well as how we&#8217;ve applied this research to our current course, <a href="https://datasense.withgoogle.com/">Making Sense of Data</a>.<br /><br />Google&#8217;s five online courses over the past two years have provided an opportunity for us to identify learning trends and refine instructional design. As we <a href="http://googleresearch.blogspot.com/2013/09/a-comparison-of-five-google-online.html">posted previously</a>, learners register for online courses for a variety of reasons. During registration, we ask learners to identify their primary goal for taking the class. We found that just over half (52.5%) of 41,000 registrants intended to complete the <a href="https://mapping.withgoogle.com/">Mapping with Google course</a>; the other half aimed to learn portions of the curriculum without earning a certificate. Next we measured how well participants achieved those goals by observing various interaction behaviors in the course, such as watching videos, viewing text lessons, and completing activities. We found that 42.4% of 21,000 active learners (who did something in the course other than register) achieved the goals they selected during registration. Similarly, for our <a href="https://webaccessibility.withgoogle.com/">Introduction to Web Accessibility course</a>, we found that 56.1% of 4,993 registrants intended to complete the course. Based on their interactions with course materials, we measured that 49.5% of 1,037 active learners achieved their goals. <br /><br />Although imperfect, these numbers are more accurate measures of course success than completion rates. Because students come to the course for many different reasons, course designers should make it easier for learners to meet a variety of objectives. Since many participants in online courses may just want to learn a few new things, we can help them by releasing all course content at the outset of the course and enabling them to search for specific topics of interest. We are exploring other ways of personalizing courses to help learners achieve individual goals.<br /><br />Our research also indicates that learners who complete activities are more likely to complete the course than peers who completed no activities. Activities include auto-graded multiple-choice or short-answer questions that encourage learners to practice skills from the course and receive instant feedback. In the Mapping with Google course, learners who completed at least sixty percent of course activities were much more likely to submit final projects than peers who finished fewer activities. 
This leads us to believe that as course designers, we should be paying more attention to creating effective, relevant activities than to focusing so heavily on course content. We hypothesize that learners also use activities&#8217; instant feedback to help them determine whether they should spend time reviewing the associated content. In this scenario, we believe that learners could benefit from experiencing activities before course content. <br /><br />As technological solutions for assessing qualitative work are still evolving, an active area of our research involves self-evaluation. We are also intrigued by <a href="http://hci.stanford.edu/publications/2012/SelfAssessmentInDesignSpring2012TR.pdf">previous research</a> showing the links between <a href="http://www.cogsci.ucsd.edu/~mboyle/COGS1/readings/Klemmer-COGS1-Peer%20and%20self%20assesment%20in%20massive%20online%20classes.pdf">self-evaluation</a> and <a href="http://www.cfa.harvard.edu/sed/staff/Sadler/articles/Sadler%20and%20Good%20EA.pdf">enhanced metacognition</a>. In several courses, we have asked learners to submit projects aligned with course objectives, calibrate themselves by evaluating sample work, then apply a rubric to assess their own work. Course staff graded a random sample of project submissions, then compared the learners&#8217; scores with course staff&#8217;s scores. In general, we found a moderate agreement on Advanced Power Searching (APS) case studies (55.1% within 1 point of each other on a 16-point scale), with an increased agreement on the Mapping projects (71.6% within 2 points of each other on a 27-point scale). We also observed that students submitted high quality projects overall, with course staff scoring 73% of APS assignments a B (80%) or above; similarly, course staff evaluated 94% of Mapping projects as a B or above. <br /><br />What changed between the two courses that allowed for higher agreement on the mapping course? The most important change seems to be more objective criteria for the mapping project rubric. We also believe that we haven&#8217;t given enough weight to teaching learners how to evaluate their own work. We plan to keep experimenting with self-evaluation in future courses.<br /><br /><div><a href="http://2.bp.blogspot.com/-FCbS3DHnfX0/UzNlsm0liPI/AAAAAAAAAP4/hNwOQ-gJCQA/s1600/datacoursescreenshotformoocpost.png"><img border="0" src="http://2.bp.blogspot.com/-FCbS3DHnfX0/UzNlsm0liPI/AAAAAAAAAP4/hNwOQ-gJCQA/s1600/datacoursescreenshotformoocpost.png" height="417" width="640"></a></div><br />Since we are dedicated to experimenting with courses, we have not only applied these findings to the <a href="https://datasense.withgoogle.com/">Making Sense of Data</a> course, but we have also chosen to experiment with new open-source software and tools. 
We&#8217;re exploring the following aspects of online education in this class: <br /><br /><ul><li>Placing activities before content</li><li>Reduced use of videos</li><li>Final project that includes self-reflection without scores</li><li>New open-source technologies, including authoring the course using edX studio and importing it into <a href="https://code.google.com/p/course-builder/">cbX</a> (running on Google&#8217;s AppEngine platform) as well as <a href="http://google-opensource.blogspot.com/2014/02/oppia-tool-for-interactive-learning.html">Oppia</a> explorations</li></ul><br />We hope that our research and the open-source technologies we&#8217;re using will inspire educators and researchers to continue to evolve the next generation of online learning platforms.]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by Julia Wilkowski, Staff Instructional Designer</span><br /><br />In order to further evolve the open education system and online platforms, Google’s course design and development teams continually experiment with massive, open online courses. Recently, at the Association for Computing Machinery’s recent <a href="http://learningatscale.acm.org/index.html">Learning@Scale conference</a> in Atlanta, GA, several members of our team presented findings about our online courses. Our research focuses on <a href="http://static.googleusercontent.com/media/research.google.com/en/us/pubs/archive/41929.pdf">learners’ goals and activities</a> as well as <a href="http://static.googleusercontent.com/media/research.google.com/en/us/pubs/archive/41928.pdf">self-evaluation as an assessment tool</a>. In this post, I will present highlights from our research as well as how we’ve applied this research to our current course, <a href="https://datasense.withgoogle.com/">Making Sense of Data</a>.<br /><br />Google’s five online courses over the past two years have provided an opportunity for us to identify learning trends and refine instructional design. As we <a href="http://googleresearch.blogspot.com/2013/09/a-comparison-of-five-google-online.html">posted previously</a>, learners register for online courses for a variety of reasons. During registration, we ask learners to identify their primary goal for taking the class. We found that just over half (52.5%) of 41,000 registrants intended to complete the <a href="https://mapping.withgoogle.com/">Mapping with Google course</a>; the other half aimed to learn portions of the curriculum without earning a certificate. Next we measured how well participants achieved those goals by observing various interaction behaviors in the course, such as watching videos, viewing text lessons, and activity completion. We found that 42.4% of 21,000 active learners (who did something in the course other than register) achieved the goals they selected during registration. Similarly, for our <a href="https://webaccessibility.withgoogle.com/">Introduction to Web Accessibility course</a>, we found that 56.1% of 4,993 registrants intended to complete the course. Based on their interactions with course materials, we measured that 49.5% of 1,037 active learners achieved their goals. <br /><br />Although imperfect, these numbers are more accurate measures of course success than completion rates. Because students come to the course for many different reasons, course designers should make it easier for learners to meet a variety of objectives. Since many participants in online courses may just want to learn a few new things, we can help them by releasing all course content at the outset of the course and enabling them to search for specific topics of interest. We are exploring other ways of personalizing courses to help learners achieve individual goals.<br /><br />Our research also indicates that learners who complete activities are more likely to complete the course than peers who completed no activities. Activities include auto-graded multiple-choice or short-answer questions that encourage learners to practice skills from the course and receive instant feedback. In the Mapping with Google course, learners who completed at least sixty percent of course activities were much more likely to submit final projects than peers who finished fewer activities. 
<br />What changed between the two courses that allowed for higher agreement on the mapping course? The most important change seems to be more objective criteria for the mapping project rubric. We also believe that we haven’t given enough weight to teaching learners how to evaluate their own work. We plan to keep experimenting with self-evaluation in future courses.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-FCbS3DHnfX0/UzNlsm0liPI/AAAAAAAAAP4/hNwOQ-gJCQA/s1600/datacoursescreenshotformoocpost.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-FCbS3DHnfX0/UzNlsm0liPI/AAAAAAAAAP4/hNwOQ-gJCQA/s1600/datacoursescreenshotformoocpost.png" height="417" width="640" /></a></div><br />Since we are dedicated to experimenting with courses, we have not only applied these findings to the <a href="https://datasense.withgoogle.com/">Making Sense of Data</a> course, but we have also chosen to experiment with new open-source software and tools. 
We’re exploring the following aspects of online education in this class: <br /><br /><ul><li>Placing activities before content</li><li>Reduced use of videos</li><li>Final project that includes self-reflection without scores</li><li>New open-source technologies, including authoring the course using edX studio and importing it into <a href="https://code.google.com/p/course-builder/">cbX</a> (running on Google’s AppEngine platform) as well as <a href="http://google-opensource.blogspot.com/2014/02/oppia-tool-for-interactive-learning.html">Oppia</a> explorations</li></ul><br />We hope that our research and the open-source technologies we’re using will inspire educators and researchers to continue to evolve the next generation of online learning platforms.]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-research/making-sense-of-mooc-data/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Berkeley Earth Maps Powered by Google Maps Engine now available in the Google Maps Gallery</title>
		<link>https://googledata.org/google-research/berkeley-earth-maps-powered-by-google-maps-engine-now-available-in-the-google-maps-gallery/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=berkeley-earth-maps-powered-by-google-maps-engine-now-available-in-the-google-maps-gallery</link>
		<comments>https://googledata.org/google-research/berkeley-earth-maps-powered-by-google-maps-engine-now-available-in-the-google-maps-gallery/#comments</comments>
		<pubDate>Thu, 20 Mar 2014 16:15:00 +0000</pubDate>
		<dc:creator><![CDATA[Research @ Google]]></dc:creator>
				<category><![CDATA[Google Research]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=8e2bd4b951d7e4eb91354242eda7ece7</guid>
		<description><![CDATA[<span>Posted by Dr. Robert Rohde, Berkeley Earth</span><br /><br />Google Maps is a familiar and versatile tool for exploring the world, but adding new data on top of Google Maps has traditionally required expending effort for both data management and website scripting. Google recently expanded <a href="https://mapsengine.google.com/admin/">Google Maps Engine</a> and debuted an updated <a href="http://maps.google.com/gallery/?hl=en">Google Maps Gallery</a>.  These tools aim to make it easier for users and organizations to integrate their geographic data with Google Maps and share it with the world.  At <a href="http://www.berkeleyearth.org/">Berkeley Earth</a> we had an early opportunity to work with these new tools.<br /><br />The use of Google Maps Engine eliminates the need for users to run their own map-serving Web servers.  Maps Engine also handles mundane mapping tasks, such as automatically converting georeferenced image files into beautiful map layers that can be viewed in Google Maps, no programming required.  <br /><br /><div><span>Annual average land-surface temperature during the period 1951-1980 as estimated by <a href="http://www.berkeleyearth.org/">Berkeley Earth</a>.</span></div><br />Similarly, one can take tables of location data and map them onto a Google Map using geographic markers and popup message boxes that make it easy to explore georeferenced information. <br /><br /><div><a href="http://4.bp.blogspot.com/-r-APFxAMoIA/UysfUBkPVZI/AAAAAAAAAPg/vIO0Zh4AaIw/s1600/image01.png"><img border="0" src="http://4.bp.blogspot.com/-r-APFxAMoIA/UysfUBkPVZI/AAAAAAAAAPg/vIO0Zh4AaIw/s320/image01.png"></a><a href="http://2.bp.blogspot.com/-zMb3nuY3B-I/UysfUt5CZ8I/AAAAAAAAAPk/EGUkN1cklgw/s1600/temperature_station.tiff"><img border="0" src="http://2.bp.blogspot.com/-zMb3nuY3B-I/UysfUt5CZ8I/AAAAAAAAAPk/EGUkN1cklgw/s320/temperature_station.tiff"></a></div><div><span>Map of the more than 40,000 temperature stations used by the Berkeley Earth analysis.&#160;</span><span>On the left is part of the original table of data.  On the right is its representation in Google Maps Engine.</span></div><br />When mapping locations, the new Maps Engine tools allow users to upload their own geographic markers or choose from Google&#8217;s many selections; the geographic marker icons used in the temperature station map above were uploaded by us. Alternatively, we could have used one of the stock icons provided by Maps Engine. In addition, users can customize the content and appearance of the popup message boxes by using HTML.  If the georeferenced data can be linked to the web addresses of existing online content, one can also incorporate images or outgoing links within the message boxes, helping the user find more information about the content presented in the map.<br /><br />The ease of putting image layers into the new Maps Engine has allowed Berkeley Earth to create and share many <a href="http://maps.google.com/gallery/publisher?pub=Berkeley+Earth&#38;hl=en">scalable maps of climate and weather information</a> that are fun to explore.  Incorporating these maps in our website and posting them on the Google Maps Gallery provides the public with a new tool to help locate local weather stations, learn about local climate, and download various kinds of weather and climate data.  <br /><br />Now, anyone can easily learn about both the weather in their city and the climate of the entire globe from a single, simple interface.  
Google Maps Engine and the new Maps Gallery have allowed us to bring the story of climate to a broad audience in a way that can be easily understood.]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by Dr. Robert Rohde, Berkeley Earth</span><br /><br />Google Maps is a familiar and versatile tool for exploring the world, but adding new data on top of Google Maps has traditionally required expending effort for both data management and website scripting. Google recently expanded <a href="https://mapsengine.google.com/admin/">Google Maps Engine</a> and debuted an updated <a href="http://maps.google.com/gallery/?hl=en">Google Maps Gallery</a>.  These tools aim to make it easier for users and organizations to integrate their geographic data with Google Maps and share it with the world.  At <a href="http://www.berkeleyearth.org/">Berkeley Earth</a> we had an early opportunity to work with these new tools.<br /><br />The use of Google Maps Engine eliminates the need for users to run their own map-serving Web servers.  Maps Engine also handles mundane mapping tasks, such as automatically converting  georeferenced image files into beautiful map layers that can be viewed in Google Maps, no programming required.  <br /><div style="text-align: center;"><br /></div><div style="text-align: center;"><iframe height="370px" src="https://mapsengine.google.com/gallery/mapviewer?id=zttWLnOPAlrs.kQBKSYw5ok5U&amp;hl=en&amp;ui=e" width="525px"></iframe><br /></div><div style="text-align: center;"><span style="font-size: x-small;">Annual average land-surface temperature during the period 1951-1980 as estimated by <a href="http://www.berkeleyearth.org/">Berkeley Earth</a>.</span></div><br />Similarly, one can take tables of location data and map them onto a Google Map using geographic markers and popup message boxes that make it easy to explore georeferenced information. <br /><br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-r-APFxAMoIA/UysfUBkPVZI/AAAAAAAAAPg/vIO0Zh4AaIw/s1600/image01.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-r-APFxAMoIA/UysfUBkPVZI/AAAAAAAAAPg/vIO0Zh4AaIw/s320/image01.png" /></a><a href="http://2.bp.blogspot.com/-zMb3nuY3B-I/UysfUt5CZ8I/AAAAAAAAAPk/EGUkN1cklgw/s1600/temperature_station.tiff" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-zMb3nuY3B-I/UysfUt5CZ8I/AAAAAAAAAPk/EGUkN1cklgw/s320/temperature_station.tiff" /></a></div><div style="text-align: center;"><span style="font-size: x-small;">Map of the more than 40,000 temperature stations used by the Berkeley Earth analysis.&nbsp;</span><span style="font-size: x-small;">On the left is part of the original table of data.  On the right is its representation in Google Maps Engine.</span></div><br />When mapping locations, the new Maps Engine tools allows users to upload their own geographic markers or chose from Google’s many selections; the geographic marker icons used in the temperature station map above were uploaded by us. Alternatively, we could have used one of the stock icons provided by Maps Engine. In addition, users can customize the content and appearance of the popup message boxes by using HTML.  
If the georeferenced data can be linked to the web addresses of existing online content, one can also incorporate images or outgoing links within the message boxes, helping the user find more information about the content presented in the map.<br /><br />The ease of putting image layers into the new Maps Engine has allowed Berkeley Earth to create and share many <a href="http://maps.google.com/gallery/publisher?pub=Berkeley+Earth&amp;hl=en">scalable maps of climate and weather information</a> that are fun to explore.  Incorporating these maps in our website and posting them on the Google Maps Gallery provides the public with a new tool to help locate local weather stations, learn about local climate, and download various kinds of weather and climate data.  <br /><br />Now, anyone can easily learn about both the weather in their city and the climate of the entire globe from a single, simple interface.  Google Maps Engine and the new Maps Gallery have allowed us to bring the story of climate to a broad audience in a way that can be easily understood.]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-research/berkeley-earth-maps-powered-by-google-maps-engine-now-available-in-the-google-maps-gallery/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Computer Science Education Recharged!</title>
		<link>https://googledata.org/google-research/computer-science-education-recharged/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=computer-science-education-recharged</link>
		<comments>https://googledata.org/google-research/computer-science-education-recharged/#comments</comments>
		<pubDate>Tue, 11 Mar 2014 16:00:00 +0000</pubDate>
		<dc:creator><![CDATA[Research @ Google]]></dc:creator>
				<category><![CDATA[Google Research]]></category>
		<category><![CDATA[education]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=b09ce8e69beacedab412466f92fe3dbd</guid>
		<description><![CDATA[<span>Posted by Maggie Johnson, Director of Education and University Relations</span><br /><span><br /></span> A few days ago, I attended the annual <a href="http://sigcse.org/">SIGCSE</a> (Special Interest Group, Computer Science Education) conference in Atlanta, GA. Google has been a platinum sponsor of SIGCSE for many years now, and the conference provides an opportunity for thousands of CS educators to come together, share ideas and engage in the resurgence of activity and interest in CS education. <br /><br />Seven years ago, the number of CS majors at the undergraduate level hit an all-time low; the number of students taking the Advanced Placement CS exam fell 15% between 2001 and 2007, and the number of college freshmen intending to major in CS plummeted more than 70% during the same period. This was a concern for CS educators, as advancing U.S. students' understanding of the principles and practices of computing is critical to developing a globally competitive workforce for the 21st century. <br /><br />Since 2007, though, many significant things have happened. First, a commission of ten secondary and higher education faculty came together to design a new Advanced Placement CS course called <a href="http://www.csprinciples.org/">CS Principles</a>. This reinvention of AP CS not only introduces students to programming, but also gives them an understanding of the fundamental concepts of computing, its breadth of application and its potential for transforming the world. Additionally, since 2007 the Computer Science Teachers Association (<a href="http://csta.acm.org/">CSTA</a>), a community that plays a key role in professional development, CS standards definition (another critical stake in the ground), and scaling of the new AP CS, has grown to 16,000+ members. <br /><br />Finally, late last year, <a href="http://code.org/">code.org</a> launched <a href="http://csedweek.org/">Hour of Code</a> with over 29 million students participating -- an unprecedented scale in CS education. This event raised awareness and provided enormous opportunity for follow-on with teachers and students who realized that coding is not only accessible, but fun. Their next step is to scale <a href="http://www.exploringcs.org/">Exploring Computer Science</a> this fall to 30 school districts (and counting), including some of the biggest districts in the country, in addition to developing K-5 and middle school curriculum. <br /><br />Last week at SIGCSE, Google had an opportunity to present two new programs and a transition of an existing program:<br /><br /><ul><li><a href="http://www.csfirst.net/">CS First</a> is a pilot program in South Carolina introducing students to CS in a social, collaborative after-school environment. The focus is on raising awareness and helping students understand their potential in the field.</li><li>Engage CS Edu will provide curriculum resources for introductory CS1/CS2 courses that are engaging to both women and men.</li><li><a href="http://www.cs4hs.com/">CS4HS</a> continues to experiment this year with online professional development opportunities for teachers. We still support face-to-face CS4HS workshops, but given the success of our MOOC experiments last year, we&#8217;d like to continue to see how we might scale to more and more teachers.</li></ul><br />The growth in awareness and activity in CS education over the past two years has been amazing and it continues to grow rapidly, thanks to the hard work of many. 
Google is proud to work with the many organizations in CS education to support and scale their work, through programs and funding. We strive to develop new programs where there are gaps, utilizing our technical infrastructure, our experience with scale, and a deep understanding of the potential of CS to transform the world in positive ways. This has been core to Google&#8217;s philosophy since we started 16 years ago.<br /><br /><br /><br />]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by Maggie Johnson, Director of Education and University Relations</span><br /><span class="byline-author"><br /></span> A few days ago, I attended the annual <a href="http://sigcse.org/">SIGCSE</a> (Special Interest Group on Computer Science Education) conference in Atlanta, GA. Google has been a platinum sponsor of SIGCSE for many years now, and the conference provides an opportunity for thousands of CS educators to come together, share ideas and engage in the resurgence of activity and interest in CS education. <br /><br />Seven years ago, the number of CS majors at the undergraduate level hit an all-time low; the number of students taking the Advanced Placement CS exam fell 15% between 2001 and 2007, and the number of college freshmen intending to major in CS plummeted more than 70% during the same period. This was a concern for CS educators, as advancing U.S. students' understanding of the principles and practices of computing is critical to developing a globally competitive workforce for the 21st century. <br /><br />Since 2007, though, many significant things have happened. First, a commission of ten secondary and higher education faculty came together to design a new Advanced Placement CS course called <a href="http://www.csprinciples.org/">CS Principles</a>. This reinvention of AP CS not only introduces students to programming, but also gives them an understanding of the fundamental concepts of computing, its breadth of application and its potential for transforming the world. Additionally, since 2007 the Computer Science Teachers Association (<a href="http://csta.acm.org/">CSTA</a>), a community that plays a key role in professional development, CS standards definition (another critical stake in the ground), and scaling of the new AP CS, has grown to 16,000+ members. <br /><br />Finally, late last year, <a href="http://code.org/">code.org</a> launched <a href="http://csedweek.org/">Hour of Code</a> with over 29 million students participating, an unprecedented scale in CS education. This event raised awareness and provided enormous opportunity for follow-on with teachers and students who realized that coding is not only accessible, but fun. Their next step is to scale <a href="http://www.exploringcs.org/">Exploring Computer Science</a> this fall to 30 school districts (and counting), including some of the biggest districts in the country, in addition to developing K-5 and middle school curricula. <br /><br />Last week at SIGCSE, Google had an opportunity to present two new programs and a transition of an existing program:<br /><br /><ul><li><a href="http://www.csfirst.net/">CS First</a> is a pilot program in South Carolina introducing students to CS in a social, collaborative after-school environment. The focus is on raising awareness and helping students understand their potential in the field.</li><li>Engage CS Edu will provide curriculum resources for introductory CS1/CS2 courses that are engaging to both women and men.</li><li><a href="http://www.cs4hs.com/">CS4HS</a> continues to experiment this year with online professional development opportunities for teachers. We still support face-to-face CS4HS workshops, but given the success of our MOOC experiments last year, we’d like to keep exploring how we might scale to more teachers.</li></ul><br />The growth in awareness and activity in CS education over the past two years has been amazing and it continues to grow rapidly, thanks to the hard work of many. Google is proud to work with the many organizations in CS education to support and scale their work through programs and funding. We strive to develop new programs where there are gaps, using our technical infrastructure, our experience with scale, and a deep understanding of the potential of CS to transform the world in positive ways. This has been core to Google’s philosophy since we started 16 years ago.<br /><br /><br /><br />]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-research/computer-science-education-recharged/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
<enclosure url="" length="" type="" />
		</item>
		<item>
		<title>Google joins the Global Alliance for Genomics and Health</title>
		<link>https://googledata.org/google-research/google-joins-the-global-alliance-for-genomics-and-health/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=google-joins-the-global-alliance-for-genomics-and-health</link>
		<comments>https://googledata.org/google-research/google-joins-the-global-alliance-for-genomics-and-health/#comments</comments>
		<pubDate>Thu, 27 Feb 2014 18:10:00 +0000</pubDate>
		<dc:creator><![CDATA[Research @ Google]]></dc:creator>
				<category><![CDATA[Google Research]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=385796431d33467b1fe118d8f801d63e</guid>
		<description><![CDATA[<span>Posted by Jonathan Bingham, Product Manager</span><br /><br />Generating research data is easier than ever before, but interpreting and analyzing it is still hard, and getting harder as the volume increases. This is especially true of genomics. Sequencing the whole genome of a single person produces more than 100 gigabytes of raw data, and a million genomes will add up to more than 100 petabytes. In 2003, the <a href="http://www.genome.gov/10001772">Human Genome Project</a> was completed after 15 years and $3 billion. Today, it takes closer to one day and $1,000 to sequence a human genome.<br /><br />This abundance of new information carries great potential for research and human health -- and requires new standards, policies and technology. That&#8217;s why Google has joined the <a href="http://oicr.on.ca/oicr-programs-and-platforms/global-alliance-genomics-and-health-ga4gh">Global Alliance for Genomics and Health</a>.  The Alliance is an international effort to develop harmonized approaches to enable responsible, secure, and effective sharing of genomic and clinical information in the cloud with the research and healthcare communities, meeting the highest standards of ethics and privacy.  Members of the Global Alliance include leading technology, healthcare, research, and disease advocacy organizations from around the world. <br /><br />To contribute to the genomics community and help meet the data-intensive needs of the life sciences, we are introducing:<br /><br /><ul><li>a proposal for a <a href="https://developers.google.com/genomics/">simple web-based API</a> to import, process, store, and search genomic data at scale</li><li>a preview implementation of the API built on Google&#8217;s cloud infrastructure, including sample data from public datasets like the <a href="http://www.1000genomes.org/">1000 Genomes Project</a></li><li>a collection of in-progress <a href="https://github.com/GoogleCloudPlatform/genomics-tools">open-source sample projects</a> built around the common API</li></ul><br /><div><b>Interoperability: One API, Many Apps</b></div><table align="center" cellpadding="0" cellspacing="0"><tbody><tr><td><a href="http://1.bp.blogspot.com/-qzbipJ4Ru5k/Uw9-dunEnbI/AAAAAAAAAPQ/UgkaXq9zLtY/s1600/genomics-api-diagram.png"><img border="0" src="http://1.bp.blogspot.com/-qzbipJ4Ru5k/Uw9-dunEnbI/AAAAAAAAAPQ/UgkaXq9zLtY/s1600/genomics-api-diagram.png" height="262" width="400"></a></td></tr><tr><td><span><span>Any of the apps at the top (one graphical, one command-line, and one for batch processing) can work with information in any of the repositories at the bottom (one using cloud-based storage and one using local files). As the ecosystem grows, all developers and researchers benefit from each individual developer&#8217;s work.</span></span></td></tr></tbody></table><br />With these first steps, it is our goal to support the global research community in bringing the vision of the Global Alliance for Genomics and Health to fruition. Imagine the impact if researchers everywhere had larger sample sizes to distinguish between people who become sick and those who remain healthy, between patients who respond to treatment and those whose condition worsens, between pathogens that cause outbreaks and those that are harmless. 
Imagine if they could test biological hypotheses in seconds instead of days, without owning a supercomputer.<br /><br />We are honored to be part of the community, working together to refine the technology and evolve the ecosystem, and aligning with appropriate standards as they arise.<br /><br /><b>How you can be involved</b><br /><br />To request access to <a href="https://developers.google.com/genomics">the API</a> for your research, please <a href="https://docs.google.com/a/google.com/forms/d/1V2GsNYJeHWeMI1Nijuaqz94T0eln01O68aAq1c9FeEk/viewform">fill out this simple form</a> to tell us about yourself and your research interests, and we will let you know when we&#8217;re ready to work with more partners.<br /><br />Together with the members of the Global Alliance for Genomics and Health, we believe we are at the beginning of a transformation in medicine and basic research, driven by advances in genome sequencing and huge-scale computing. We invite you to <a href="mailto:google-genomics-contact@googlegroups.com">contact us</a> and share your ideas about how to bring data science and life science together.]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by Jonathan Bingham, Product Manager</span><br /><br />Generating research data is easier than ever before, but interpreting and analyzing it is still hard, and getting harder as the volume increases. This is especially true of genomics. Sequencing the whole genome of a single person produces more than 100 gigabytes of raw data, and a million genomes will add up to more than 100 petabytes. In 2003, the <a href="http://www.genome.gov/10001772">Human Genome Project</a> completed after 15 years and $3 billion. Today, it takes closer to one day and $1,000 to sequence a human genome.<br /><br />This abundance of new information carries great potential for research and human health -- and requires new standards, policies and technology. That’s why Google has joined the <a href="http://oicr.on.ca/oicr-programs-and-platforms/global-alliance-genomics-and-health-ga4gh">Global Alliance for Genomics and Health</a>.  The Alliance is an international effort to develop harmonized approaches to enable responsible, secure, and effective sharing of genomic and clinical information in the cloud with the research and healthcare communities, meeting the highest standards of ethics and privacy.  Members of the Global Alliance include leading technology, healthcare, research, and disease advocacy organizations from around the world. <br /><br />To contribute to the genomics community and help meet the data-intensive needs of the life sciences, we are introducing:<br /><br /><ul><li>a proposal for a <a href="https://developers.google.com/genomics/">simple web-based API</a> to import, process, store, and search genomic data at scale</li><li>a preview implementation of the API built on Google’s cloud infrastructure, including sample data from public datasets like the <a href="http://www.1000genomes.org/">1,000 Genomes Project</a></li><li>a collection of in-progress <a href="https://github.com/GoogleCloudPlatform/genomics-tools">open-source sample projects</a> built around the common API</li></ul><br /><div style="text-align: center;"><b>Interoperability: One API, Many Apps</b></div><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://1.bp.blogspot.com/-qzbipJ4Ru5k/Uw9-dunEnbI/AAAAAAAAAPQ/UgkaXq9zLtY/s1600/genomics-api-diagram.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="http://1.bp.blogspot.com/-qzbipJ4Ru5k/Uw9-dunEnbI/AAAAAAAAAPQ/UgkaXq9zLtY/s1600/genomics-api-diagram.png" height="262" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;"><span style="text-align: start;"><span style="font-size: x-small;">Any of the apps at the top (one graphical, one command-line, and one for batch processing) can work with information in any of the repositories at the bottom (one using cloud-based storage and one using local files). As the ecosystem grows, all developers and researchers benefit from each individual developer’s work.</span></span></td></tr></tbody></table><br />With these first steps, it is our goal to support the global research community in bringing the vision of the Global Alliance for Genomics and Health to fruition. 
Imagine the impact if researchers everywhere had larger sample sizes to distinguish between people who become sick and those who remain healthy, between patients who respond to treatment and those whose condition worsens, between pathogens that cause outbreaks and those that are harmless. Imagine if they could test biological hypotheses in seconds instead of days, without owning a supercomputer.<br /><br />We are honored to be part of the community, working together to refine the technology and evolve the ecosystem, and aligning with appropriate standards as they arise.<br /><br /><b>How you can be involved</b><br /><br />To request access to <a href="https://developers.google.com/genomics">the API</a> for your research, please <a href="https://docs.google.com/a/google.com/forms/d/1V2GsNYJeHWeMI1Nijuaqz94T0eln01O68aAq1c9FeEk/viewform">fill out this simple form</a> to tell us about yourself and your research interests, and we will let you know when we’re ready to work with more partners.<br /><br />Together with the members of the Global Alliance for Genomics and Health, we believe we are at the beginning of a transformation in medicine and basic research, driven by advances in genome sequencing and huge-scale computing. We invite you to <a href="mailto:google-genomics-contact@googlegroups.com">contact us</a> and share your ideas about how to bring data science and life science together.]]></content:encoded>
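<!-- A minimal, hypothetical sketch of how a client might call the proposed variant-search API described above. The base URL, resource path, payload field names, and dataset ID are illustrative assumptions, not the published interface; see https://developers.google.com/genomics for the actual documentation.

import json
import urllib.request

API_BASE = "https://genomics.example.com/v1"  # placeholder, not the real endpoint

def search_variants(dataset_id, contig, start, end):
    """POST a variant-search request and return the decoded JSON response."""
    payload = json.dumps({
        "datasetId": dataset_id,   # assumed field name
        "contig": contig,          # e.g. "17"
        "startPosition": start,
        "endPosition": end,
    }).encode("utf-8")
    req = urllib.request.Request(
        API_BASE + "/variants/search",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Example (hypothetical dataset and coordinates):
# variants = search_variants("1000genomes", "17", 41196311, 41277500)
-->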
			<wfw:commentRss>https://googledata.org/google-research/google-joins-the-global-alliance-for-genomics-and-health/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
<enclosure url="" length="" type="" />
		</item>
		<item>
		<title>Making Sense of Data with Google</title>
		<link>https://googledata.org/google-research/making-sense-of-data-with-google-2/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=making-sense-of-data-with-google-2</link>
		<comments>https://googledata.org/google-research/making-sense-of-data-with-google-2/#comments</comments>
		<pubDate>Tue, 25 Feb 2014 22:15:00 +0000</pubDate>
		<dc:creator><![CDATA[Research @ Google]]></dc:creator>
				<category><![CDATA[Google Research]]></category>
		<category><![CDATA[education]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=d452d3d293be17bf8d15b70100eae56a</guid>
		<description><![CDATA[<span>Posted by John Atwood, Program Manager</span><br /><span><br /></span> <i>In September 2013, Google announced it was joining forces with <a href="https://www.edx.org/">edX</a> to contribute to its open source platform, Open edX. Since then we&#8217;ve been working together to expand this open education ecosystem. We&#8217;re pleased to announce our first online course built using Open edX. <a href="https://datasense.withgoogle.com/">Making Sense of Data</a> showcases the collaborative technology of Google and edX using <a href="https://code.google.com/p/course-builder/">cbX</a> to run Open edX courses on Google App Engine.</i><br /><br />The world is filled with information; learning to make sense of it all helps us gain perspective and make decisions. We&#8217;re pleased to share tools and techniques to structure, visualize, and analyze information in our latest self-paced online course: Making Sense of Data.<br /><br />Making Sense of Data is intended for anybody who works with data on a daily basis, such as students, teachers, journalists, and small business owners, and who wants to learn more about how to apply that information to practical problems. Participants will learn about the data process, create and use Fusion Tables (an experimental tool), and look for patterns and relationships in data. Knowledge of statistics or experience with programming is not required.<br /><br />Like past courses, participants engage with course material through a combination of video and text lessons, activities, and projects.  In this course, we will also introduce some new features that help create a more engaging participant experience. For example, participants will be able to access instant hangouts and live chats from the course web page for quick help or for direct feedback. As with all of our MOOCs, you&#8217;ll learn from Google experts and collaborate with participants worldwide. You&#8217;ll also have the opportunity to complete a final project and apply the skills you&#8217;ve learned to earn a certificate.<br /><br />Making Sense of Data runs from <b>March 18 - April 4, 2014</b>. Visit <a href="http://g.co/datasense">g.co/datasense</a> to learn more and register today. We look forward to seeing you make sense of all the information out there!]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by John Atwood, Program Manager</span><br /><span class="byline-author"><br /></span> <i>In September 2013, Google announced joining forces with <a href="https://www.edx.org/">edX</a> to contribute to their open source platform, Open edX. Since then we’ve been working together to expand this open education ecosystem. We’re pleased to announce our first online course built using Open edX. <a href="https://datasense.withgoogle.com/">Making Sense of Data</a> showcases the collaborative technology of Google and edX using <a href="https://code.google.com/p/course-builder/">cbX</a> to run Open edX courses on Google App Engine.</i><br /><br />The world is filled with lots of information; learning to make sense of it all helps us to gain perspective and make decisions. We’re pleased to share tools and techniques to structure, visualize, and analyze information in our latest self-paced, online course: Making Sense of Data.<br /><br />Making Sense of Data is intended for anybody who works with data on a daily basis, such as students, teachers, journalists, and small business owners, and who wants to learn more about how to apply that information to practical problems. Participants will learn about the data process, create and use Fusion Tables (an experimental tool), and look for patterns and relationships in data. Knowledge of statistics or experience with programming is not required.<br /><br />Like past courses, participants engage with course material through a combination of video and text lessons, activities, and projects.  In this course, we will also introduce some new features that help create a more engaging participant experience. For example, participants will be able to access instant hangouts and live chats from the course web page for quick help or for direct feedback. As with all of our MOOCs, you’ll learn from Google experts and collaborate with participants worldwide. You’ll also have the opportunity to complete a final project and apply the skills you’ve learned to earn a certificate.<br /><br />Making Sense of Data runs from <b>March 18 - April 4, 2014</b>. Visit <a href="http://g.co/datasense">g.co/datasense</a> to learn more and register today. We look forward to seeing you make sense of all the information out there!]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-research/making-sense-of-data-with-google-2/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
<enclosure url="" length="" type="" />
		</item>
		<item>
		<title>Monitoring the World&#8217;s Forests with Global Forest Watch</title>
		<link>https://googledata.org/google-research/monitoring-the-worlds-forests-with-global-forest-watch-2/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=monitoring-the-worlds-forests-with-global-forest-watch-2</link>
		<comments>https://googledata.org/google-research/monitoring-the-worlds-forests-with-global-forest-watch-2/#comments</comments>
		<pubDate>Thu, 20 Feb 2014 16:30:00 +0000</pubDate>
		<dc:creator><![CDATA[Research @ Google]]></dc:creator>
				<category><![CDATA[Google Research]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=4305f8520cc26e4b7e9930933555991a</guid>
		<description><![CDATA[<span>Posted by Crystal Davis, Director of Global Forest Watch, the World Resources Institute, and Dave Thau, Developer Advocate, Google Earth Engine</span><br /><span><br /></span><span><i><a href="http://google-latlong.blogspot.com/2014/02/monitoring-worlds-forests-with-global.html">Cross-posted at the Google Lat Long Blog</a></i></span><br /><br />By the time we find out about deforestation, it&#8217;s usually too late to take action. <br /><br />Scientists have been studying forests for centuries, chronicling the vital importance of these ecosystems for human society. But most of us still lack timely and reliable information about where, when, and why forests are disappearing. <br /><br />This is about to change with the launch of <a href="http://www.globalforestwatch.org/">Global Forest Watch</a>&#8212;an online forest monitoring system created by the World Resources Institute, Google and a group of more than 40 partners. Global Forest Watch uses technologies including <a href="http://earthengine.google.org/">Google Earth Engine</a> and <a href="http://mapsengine.google.com/">Google Maps Engine</a> to map the world&#8217;s forests with satellite imagery, detect changes in forest cover in near-real-time, and make this information freely available to anyone with Internet access.<br /><br />By accessing the most current and reliable information, everyone can learn what&#8217;s happening in forests around the world. Now that we have the ability to peer into forests, a number of telling stories are beginning to emerge.<br /><br /><b><a href="http://www.globalforestwatch.org/map">Global forest loss far exceeds forest gain</a></b><br /><table align="center" cellpadding="0" cellspacing="0"><tbody><tr><td><a href="http://3.bp.blogspot.com/-akVsK8srKPs/UwYlcLAqcZI/AAAAAAAAAOo/MIA6NAaOAuY/s1600/Global+Forest+Loss.png"><img border="0" src="http://3.bp.blogspot.com/-akVsK8srKPs/UwYlcLAqcZI/AAAAAAAAAOo/MIA6NAaOAuY/s1600/Global+Forest+Loss.png" height="371" width="640"></a></td></tr><tr><td><span><span>Pink = tree cover loss</span></span></td></tr></tbody></table><table align="center" cellpadding="0" cellspacing="0"><tbody><tr><td><a href="http://2.bp.blogspot.com/-eNO-eA_fG6o/UwYlerppgeI/AAAAAAAAAOw/3hjljpGyQIc/s1600/Global+Forest+Loss+2.png"><img border="0" src="http://2.bp.blogspot.com/-eNO-eA_fG6o/UwYlerppgeI/AAAAAAAAAOw/3hjljpGyQIc/s1600/Global+Forest+Loss+2.png" height="372" width="640"></a></td></tr><tr><td><span><span>Blue = Tree cover gain</span></span></td></tr></tbody></table><br />According to <a href="http://googleresearch.blogspot.com/2013/11/the-first-detailed-maps-of-global.html">data</a> from the University of Maryland and Google, the world lost more than 500 million acres of forest between 2000 and 2012. That&#8217;s the equivalent of losing 50 soccer fields&#8217; worth of forests every minute of every day for the past 13 years! 
By contrast, only 0.8 million square kilometers have regrown or been planted or restored during the same period.<br /><br /><br /><b><a href="http://www.globalforestwatch.org/map/6/31.67/-86.62/ALL">The United States&#8217; most heavily forested region is made up of production forests</a></b><br /><table align="center" cellpadding="0" cellspacing="0"><tbody><tr><td><a href="http://2.bp.blogspot.com/-RP0BPZJdZZA/UwYmYrSSTqI/AAAAAAAAAO4/RLXkQBCZt6M/s1600/US+Forest+Region.png"><img border="0" src="http://2.bp.blogspot.com/-RP0BPZJdZZA/UwYmYrSSTqI/AAAAAAAAAO4/RLXkQBCZt6M/s1600/US+Forest+Region.png" height="474" width="640"></a></td></tr><tr><td><span><span>Pink = tree cover loss     Blue = Tree cover gain</span></span></td></tr></tbody></table><br />The Southern United States is home to the nation&#8217;s most heavily forested region, making up 29 percent of the total U.S. forest land. Interestingly, the majority of this region is &#8220;production forests.&#8221; The mosaic of loss (pink) and gain (blue) in the above map shows how forests throughout this region are used as crops &#8211; grown and harvested in five-year cycles to produce timber or wood pulp for paper production.<br /><br />This practice of &#8220;intensive forestry&#8221; is used all over the world to provide valuable commodities and bolster regional and national economies. WRI <a href="http://www.wri.org/publication/southern-forests-future">analysis</a> suggests that if managers of production forests embrace a &#8220;<a href="http://www.wri.org/blog/promoting-development-protecting-environment">multiple ecosystem services strategy</a>&#8221;, they will be able to generate additional benefits such as biodiversity, carbon storage, and water filtration.<br /><br /><br /><b><a href="http://www.globalforestwatch.org/map/10/-10.99/-61.17/ALL/">Forests are protected in Brazil&#8217;s indigenous territories</a></b><br /><table align="center" cellpadding="0" cellspacing="0"><tbody><tr><td><a href="http://4.bp.blogspot.com/-RnvpijE9H9w/UwYnbLPQFII/AAAAAAAAAPA/L_FsB3K8rW4/s1600/Forests+in+Brazil.png"><img border="0" src="http://4.bp.blogspot.com/-RnvpijE9H9w/UwYnbLPQFII/AAAAAAAAAPA/L_FsB3K8rW4/s1600/Forests+in+Brazil.png" height="476" width="640"></a></td></tr><tr><td><span><span>Pink = tree cover loss     Dark green = forest      Light green = Degraded land or pastures</span></span></td></tr></tbody></table>The traditional territory of Brazil's Surui tribe is an island of green surrounded by lands that have been significantly degraded and deforested over the past 10+ years.  Indigenous communities often rely on forests for their livelihoods and cultural heritage and therefore have a strong incentive to manage forests sustainably. However, many indigenous communities struggle to protect their lands against encroachment by illegal loggers. This encroachment can be seen in Global Forest Watch using annual data from the University of Maryland and Google, or monthly alerts from <a href="http://www.imazon.org.br/">Imazon</a>, a Brazilian NGO and GFW partner.<br /><br /><br /><b><a href="http://www.globalforestwatch.org/map">Make Your Own Forest Map</a></b><br /><b><u><br /></u></b>Previously, the data required to make these maps was difficult to obtain and interpret, and most people lacked the resources necessary to access, view, and analyze the information. With Global Forest Watch, this data is now open to anyone with Internet access. We encourage you to visit Global Forest Watch and <a href="http://www.globalforestwatch.org/map">make your own forest map</a>. 
There are many stories to tell about what is happening to forests around the world&#8212;and your stories can lead to action to protect these special and threatened places. What story will you tell?]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by Crystal Davis, Director of Global Forest Watch, the World Resources Institute, and Dave Thau, Developer Advocate, Google Earth Engine</span><br /><span class="byline-author"><br /></span><span class="byline-author"><i><a href="http://google-latlong.blogspot.com/2014/02/monitoring-worlds-forests-with-global.html">Cross-posted at the Google Lat Long Blog</a></i></span><br /><br />By the time we find out about deforestation, it’s usually too late to take action. <br /><br />Scientists have been studying forests for centuries, chronicling the vital importance of these ecosystems for human society. But most of us still lack timely and reliable information about where, when, and why forests are disappearing. <br /><br />This is about to change with the launch of <a href="http://www.globalforestwatch.org/">Global Forest Watch</a>—an online forest monitoring system created by the World Resources Institute, Google and a group of more than 40 partners. Global Forest Watch uses technologies including <a href="http://earthengine.google.org/">Google Earth Engine</a> and <a href="http://mapsengine.google.com/">Google Maps Engine</a> to map the world’s forests with satellite imagery, detect changes in forest cover in near-real-time, and make this information freely available to anyone with Internet access.<br /><br />By accessing the most current and reliable information, everyone can learn what’s happening in forests around the world. Now that we have the ability to peer into forests, a number of telling stories are beginning to emerge.<br /><br /><b><a href="http://www.globalforestwatch.org/map">Global forest loss far exceeds forest gain</a></b><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://3.bp.blogspot.com/-akVsK8srKPs/UwYlcLAqcZI/AAAAAAAAAOo/MIA6NAaOAuY/s1600/Global+Forest+Loss.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="http://3.bp.blogspot.com/-akVsK8srKPs/UwYlcLAqcZI/AAAAAAAAAOo/MIA6NAaOAuY/s1600/Global+Forest+Loss.png" height="371" width="640" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;"><span id="docs-internal-guid-663acd92-5007-bc67-afd8-b2b61661fdd4"><span style="background-color: white; font-family: Arial; font-size: 12px; vertical-align: baseline; white-space: pre-wrap;">Pink = tree cover loss</span></span></td></tr></tbody></table><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://2.bp.blogspot.com/-eNO-eA_fG6o/UwYlerppgeI/AAAAAAAAAOw/3hjljpGyQIc/s1600/Global+Forest+Loss+2.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="http://2.bp.blogspot.com/-eNO-eA_fG6o/UwYlerppgeI/AAAAAAAAAOw/3hjljpGyQIc/s1600/Global+Forest+Loss+2.png" height="372" width="640" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;"><span id="docs-internal-guid-663acd92-5007-ec52-7db1-d263934f8e71"><span style="background-color: white; font-family: Arial; font-size: 12px; vertical-align: baseline; white-space: pre-wrap;">Blue = Tree cover gain</span></span></td></tr></tbody></table><br />According to <a href="http://googleresearch.blogspot.com/2013/11/the-first-detailed-maps-of-global.html">data</a> from the 
University of Maryland and Google, the world lost more than 500 million acres of forest between 2000 and 2012. That’s the equivalent of losing 50 soccer fields’ worth of forests every minute of every day for the past 13 years! By contrast, only 0.8 million square kilometers have regrown or been planted or restored during the same period.<br /><br /><br /><b><a href="http://www.globalforestwatch.org/map/6/31.67/-86.62/ALL">The United States’ most heavily forested region is made up of production forests</a></b><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://2.bp.blogspot.com/-RP0BPZJdZZA/UwYmYrSSTqI/AAAAAAAAAO4/RLXkQBCZt6M/s1600/US+Forest+Region.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="http://2.bp.blogspot.com/-RP0BPZJdZZA/UwYmYrSSTqI/AAAAAAAAAO4/RLXkQBCZt6M/s1600/US+Forest+Region.png" height="474" width="640" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;"><span id="docs-internal-guid-663acd92-5006-67d5-c38e-866feaef95f9"><span style="background-color: white; font-family: Arial; font-size: 12px; vertical-align: baseline; white-space: pre-wrap;">Pink = tree cover loss     Blue = Tree cover gain</span></span></td></tr></tbody></table><br />The Southern United States is home to the nation’s most heavily forested region, making up 29 percent of the total U.S. forest land. Interestingly, the majority of this region is “production forests.” The mosaic of loss (pink) and gain (blue) in the above map shows how forests throughout this region are used as crops – grown and harvested in five-year cycles to produce timber or wood pulp for paper production.<br /><br />This practice of “intensive forestry” is used all over the world to provide valuable commodities and bolster regional and national economies. 
WRI <a href="http://www.wri.org/publication/southern-forests-future">analysis</a> suggests that if managers of production forests embrace a “<a href="http://www.wri.org/blog/promoting-development-protecting-environment">multiple ecosystem services strategy</a>”, they will be able to generate additional benefits such as biodiversity, carbon storage, and water filtration.<br /><br /><br /><b><a href="http://www.globalforestwatch.org/map/10/-10.99/-61.17/ALL/">Forests are protected in Brazil’s indigenous territories</a></b><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://4.bp.blogspot.com/-RnvpijE9H9w/UwYnbLPQFII/AAAAAAAAAPA/L_FsB3K8rW4/s1600/Forests+in+Brazil.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="http://4.bp.blogspot.com/-RnvpijE9H9w/UwYnbLPQFII/AAAAAAAAAPA/L_FsB3K8rW4/s1600/Forests+in+Brazil.png" height="476" width="640" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;"><span id="docs-internal-guid-663acd92-500a-58e8-a21f-09a33408f787"><span style="background-color: white; font-family: Arial; font-size: 12px; vertical-align: baseline; white-space: pre-wrap;">Pink = tree cover loss     Dark green = forest      Light green = Degraded land or pastures</span></span></td></tr></tbody></table>The traditional territory of Brazil's Surui tribe is an island of green surrounded by lands that have been significantly degraded and deforested over the past 10+ years.  Indigenous communities often rely on forests for their livelihoods and cultural heritage and therefore have a strong incentive to manage forests sustainably. However, many indigenous communities struggle to protect their lands against encroachment by illegal loggers. This encroachment can be seen in Global Forest Watch using annual data from the University of Maryland and Google, or monthly alerts from <a href="http://www.imazon.org.br/">Imazon</a>, a Brazilian NGO and GFW partner.<br /><br /><br /><b><a href="http://www.globalforestwatch.org/map">Make Your Own Forest Map</a></b><br /><b><u><br /></u></b>Previously, the data required to make these maps was difficult to obtain and interpret, and most people lacked the resources necessary to access, view, and analyze the information. With Global Forest Watch, this data is now open to anyone with Internet access. We encourage you to visit Global Forest Watch and <a href="http://www.globalforestwatch.org/map">make your own forest map</a>. There are many stories to tell about what is happening to forests around the world—and your stories can lead to action to protect these special and threatened places. What story will you tell?]]></content:encoded>
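<!-- For readers curious how such figures can be computed, here is a minimal Google Earth Engine sketch (Python API) that sums tree cover loss area over a region using the University of Maryland data. The asset ID, band name, and region are assumptions for illustration, based on the public Hansen Global Forest Change dataset.

import ee

ee.Initialize()

# Hansen/UMD Global Forest Change image (assumed asset ID).
gfc = ee.Image("UMD/hansen/global_forest_change_2013")
loss = gfc.select("loss")  # binary mask: 1 where tree cover was lost, 2000 to 2012

# Convert the mask to area (square meters per pixel), then sum over a region.
loss_area = loss.multiply(ee.Image.pixelArea())
region = ee.Geometry.Rectangle([-63.0, -12.0, -60.0, -10.0])  # e.g. part of Rondonia, Brazil

stats = loss_area.reduceRegion(
    reducer=ee.Reducer.sum(),
    geometry=region,
    scale=30,        # native Landsat resolution, about 30 m
    maxPixels=1e9,
)
print("tree cover loss (sq km):", stats.getInfo()["loss"] / 1e6)
-->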
			<wfw:commentRss>https://googledata.org/google-research/monitoring-the-worlds-forests-with-global-forest-watch-2/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
<enclosure url="" length="" type="" />
		</item>
		<item>
		<title>Google Award Program stimulates Journalism and CS collaboration</title>
		<link>https://googledata.org/google-research/google-award-program-stimulates-journalism-and-cs-collaboration/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=google-award-program-stimulates-journalism-and-cs-collaboration</link>
		<comments>https://googledata.org/google-research/google-award-program-stimulates-journalism-and-cs-collaboration/#comments</comments>
		<pubDate>Wed, 19 Feb 2014 18:00:00 +0000</pubDate>
		<dc:creator><![CDATA[Research @ Google]]></dc:creator>
				<category><![CDATA[Google Research]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=5ec7bd6bc13680c7e7fbf9d702fa83d7</guid>
		<description><![CDATA[<span>Posted by Krishna Bharat, Distinguished Research Scientist</span><br /><br />Last fall, Google invited academic researchers to participate in a Computational Journalism awards program focused on the intersection of Computer Science and Journalism. We solicited proposals for original research projects relevant to today&#8217;s fast-evolving news industry.  <br /><br />As technology continues to shape and be shaped by the media landscape, applicants were asked to rethink traditional models and roles in the ecosystem, and reimagine the lifecycle of the news story in the online world. We encouraged them to develop innovative tools and open source software that could benefit readers and be game-changers for reporters and publishers. Each award includes funding of $60,000 in cash and $20,000 in computing credits on Google&#8217;s Cloud Platform.<br /><br />We congratulate the recipients of these awards, whose projects are described below, and look forward to the results of their research. Stay tuned for updates on their progress.<br /><br /><a href="http://infolab.northwestern.edu/people/larry-birnbaum/"><b>Larry Birnbaum</b></a>, Professor of Electrical Engineering and Computer Science, and Journalism, Northwestern University<br /><i>Project</i>: Thematic Characterization of News Stories<br />This project aims to develop computational methods for identifying abstract themes or "angles" in news stories, e.g., seeing a story as an instance of "pulling yourself up by your bootstraps," or as a "David vs. Goliath" story. In collaboration with journalism and computer science students, we will develop applications utilizing these methods in the creation, distribution, and consumption of news content.  <br /><br /><b><a href="http://prof.irfanessa.com/">Irfan Essa</a></b>, Professor, Georgia Institute of Technology<br /><i>Project</i>: Tracing Reuse in Political Language<br />Our goal in this project is to research, and then develop, a data-mining tool that allows an online researcher to find and trace language reuse. By language reuse, we specifically mean: can language in a current text be traced back to some other text or script? The technical innovation in this project is aimed at (1) identifying linguistic reuse in documents as well as other material that can be converted to text, including political speeches and videos. Another innovation will be (2) tracing how linguistic reuse spreads through the web and online social networks.<br /><br /><b><a href="http://www.journalism.columbia.edu/profile/365">Susan McGregor</a></b>, Assistant Director, Tow Center for Digital Journalism, Columbia Journalism School<br /><i>Project</i>: InfoScribe<br /><a href="http://infoscribe.tumblr.com/">InfoScribe</a> is a collaborative web platform that lets citizens participate in investigative journalism projects by digitizing select data from scanned document sets uploaded by journalists. One of InfoScribe's primary research goals is to explore how community participation in journalistic activities can help improve their accuracy, transparency and impact. Additionally, InfoScribe seeks to build and expand upon understandings of how computer vision and statistical inference can be most efficiently combined with human effort in the completion of complex tasks. 
<br /><br /><b><a href="http://presnick.people.si.umich.edu/">Paul Resnick</a></b>, Professor, University of Michigan School of Information <br /><i>Project</i>: RumorLens<br /><a href="http://rumorlens.org/">RumorLens</a> is a tool that will aid journalists in finding posts that spread or correct a particular rumor on Twitter, by exploring the size of the audiences that those posts have reached. In the collection phase, the user provides one or a few exemplar tweets and then manually classifies a few hundred others as spreading the rumor, correcting it, or labeling it as unrelated. This enables automatic retrieval and classification of remaining tweets, which are then presented in an interactive visualization that shows audience sizes.<br /><br /><b><a href="http://www.jomc.unc.edu/faculty-staff-journalism-faculty/thornburg-ryan">Ryan Thornburg</a></b>, Associate Professor, School of Journalism and Mass Communication, University of North Carolina at Chapel Hill<br /><i>Project</i>: Public Records Dashboard for Small Newsrooms<br />Building on our Knight News Challenge effort to bring data-driven journalism to readers of rural newspaper websites, we are developing an internal newsroom tool that will alert reporters and editors to potential story tips found in public data. Our project aims to lower the cost of finding, in public data sets, stories that shine light in dark places, hold powerful people accountable, and explain our increasingly complex and interconnected world. (The public-facing site for the data acquisition element of the project is at <a href="http://open-nc.org/">http://open-nc.org</a>.)]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by Krishna Bharat, Distinguished Research Scientist</span><br /><br />Last fall, Google invited academic researchers to participate in a Computational Journalism awards program focused on the intersection of Computer Science and Journalism. We solicited proposals for original research projects relevant to today’s fast evolving news industry.  <br /><br />As technology continues to shape and be shaped by the media landscape, applicants were asked to rethink traditional models and roles in the ecosystem, and reimagine the lifecycle of the news story in the online world. We encouraged them to develop innovative tools and open source software that could benefit readers and be game-changers for reporters and publishers. Each award includes funding of $60,000 in cash and $20,000 in computing credits on Google’s Cloud Platform.<br /><br />We congratulate the recipients of these awards, whose projects are described below, and look forward to the results of their research. Stay tuned for updates on their progress.<br /><br /><a href="http://infolab.northwestern.edu/people/larry-birnbaum/"><b>Larry Birnbaum</b></a>, Professor of Electrical Engineering and Computer Science, and Journalism, Northwestern University<br /><i>Project</i>: Thematic Characterization of News Stories<br />This project aims to develop computational methods for identifying abstract themes or "angles" in news stories, e.g., seeing a story as an instance of "pulling yourself up by your bootstraps," or as a "David vs. Goliath" story. In collaboration with journalism and computer science students, we will develop applications utilizing these methods in the creation, distribution, and consumption of news content.  <br /><br /><b><a href="http://prof.irfanessa.com/">Irfan Essa</a></b>, Professor, Georgia Institute of Technology<br /><i>Project</i>: Tracing Reuse in Political Language<br />Our goal in this project is to research, and then develop a data-mining tool that allows an online researcher to find and trace language reuse. By language reuse, we specifically mean: Can we find if in a current text some language was used that can be traced back to some other text or script. The technical innovation in this project is aimed at (1) identifying linguistic reuse in documents as well as other forms of material, which can be converted to text, and therefore includes political speeches and videos. Another innovation will be in (2) how linguistic reuse can be traced through the web and online social networks.<br /><br /><b><a href="http://www.journalism.columbia.edu/profile/365">Susan McGregor</a></b>, Assistant Director, Tow Center for Digital Journalism, Columbia Journalism School<br /><i>Project</i>: InfoScribe<br /><a href="http://infoscribe.tumblr.com/">InfoScribe</a> is a collaborative web platform that lets citizens participate in investigative journalism projects by digitizing select data from scanned document sets uploaded by journalists. One of InfoScribe's primary research goals is to explore how community participation in journalistic activities can help improve their accuracy, transparency and impact. Additionally, InfoScribe seeks to build and expand upon understandings of how computer vision and statistical inference can be most efficiently combined with human effort in the completion of complex tasks. 
<br /><br /><b><a href="http://presnick.people.si.umich.edu/">Paul Resnick</a></b>, Professor, University of Michigan School of Information <br /><i>Project</i>: RumorLens<br /><a href="http://rumorlens.org/">RumorLens</a> is a tool that will aid journalists in finding posts that spread or correct a particular rumor on Twitter, by exploring the size of the audiences that those posts have reached. In the collection phase, the user provides one or a few exemplar tweets and then manually classifies a few hundred others as spreading the rumor, correcting it, or labeling it as unrelated. This enables automatic retrieval and classification of remaining tweets, which are then presented in an interactive visualization that shows audience sizes.<br /><br /><b><a href="http://www.jomc.unc.edu/faculty-staff-journalism-faculty/thornburg-ryan">Ryan Thornburg</a></b>, Associate Professor, School of Journalism and Mass Communication, University of North Carolina at Chapel Hill<br />Project: Public Records Dashboard for Small Newsrooms<br />Building off our Knight News Challenge effort to bring data-driven journalism to readers of rural newspaper websites, we are developing an internal newsroom tool that will alert reporters and editors to potential story tips found in public data. Our project aims to lower the cost of finding in public data sets stories that shine light in dark places, hold powerful people accountable, and explain our increasingly complex and interconnected world. (Public facing site for the data acquisition element of the project at <a href="http://open-nc.org/">http://open-nc.org</a>)]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-research/google-award-program-stimulates-journalism-and-cs-collaboration/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
<enclosure url="" length="" type="" />
		</item>
		<item>
		<title>Google Research Awards: Winter 2014</title>
		<link>https://googledata.org/google-research/google-research-awards-winter-2014/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=google-research-awards-winter-2014</link>
		<comments>https://googledata.org/google-research/google-research-awards-winter-2014/#comments</comments>
		<pubDate>Tue, 18 Feb 2014 18:20:00 +0000</pubDate>
		<dc:creator><![CDATA[Research @ Google]]></dc:creator>
				<category><![CDATA[Google Research]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=6d8055c1bfccd3c951dc3546d8bdac24</guid>
		<description><![CDATA[<span>Posted by Maggie Johnson, Director of Education &#38; University Relations</span><br /><br />We have just completed another round of the <a href="http://research.google.com/university/relations/research_awards.html">Google Research Awards</a>, our twice-yearly open call for proposals on computer science-related topics including robotics, natural language processing, systems, policy, and mobile. Our grants cover tuition for a graduate student and provide both faculty and students the opportunity to work directly with Google researchers and engineers.<br /><br />This round we received 691 proposals, an increase of 19% over last round, covering 46 countries on 6 continents. After expert reviews and committee discussions, we decided to fund 115 projects. The subject areas that received the highest level of support were human-computer interaction, systems, and machine learning, with 25% of the funding awarded to universities outside the U.S.<br /><br />We set a new record this round with over 2,000 reviews by 650 reviewers. Each proposal is reviewed by internal committees who provide feedback on merit and relevance.  In many cases, the committees include some of the foremost experts in the world. All committee members are volunteers who spend a significant amount of time making the Research Award program happen twice a year.  <br /><br />Congratulations to the well-deserving <a href="http://services.google.com/fh/files/blogs/research-awards-winter2014.pdf">recipients of this round&#8217;s awards</a>. If you are interested in applying for the next round (deadline is April 15), please visit <a href="http://research.google.com/university/relations/research_awards.html">our website</a> for more information.]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by Maggie Johnson, Director of Education &amp; University Relations</span><br /><br />We have just completed another round of the <a href="http://research.google.com/university/relations/research_awards.html">Google Research Awards</a>, our biannual open call for proposals on computer science-related topics including robotics, natural language processing, systems, policy, and mobile. Our grants cover tuition for a graduate student and provide both faculty and students the opportunity to work directly with Google researchers and engineers.<br /><br />This round we received 691 proposals, an increase of 19% over last round, covering 46 countries on 6 continents. After expert reviews and committee discussions, we decided to fund 115 projects. The subject areas that received the highest level of support were human-computer interaction, systems, and machine learning, with 25% of the funding awarded to universities outside the U.S.<br /><br />We set a new record this round with over 2000 reviews done by 650 reviewers. Each proposal is reviewed by internal committees who provide feedback on merit and relevance.  In many cases, the committees include some of the foremost experts in the world. All committee members are volunteers who spend a significant amount of time making the Research Award program happen twice a year.  <br /><br />Congratulations to the well-deserving <a href="http://services.google.com/fh/files/blogs/research-awards-winter2014.pdf">recipients of this round’s awards</a>. If you are interested in applying for the next round (deadline is April 15), please visit <a href="http://research.google.com/university/relations/research_awards.html">our website</a> for more information.]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-research/google-research-awards-winter-2014/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
<enclosure url="" length="" type="" />
		</item>
		<item>
		<title>Explore the history of Pop &#8212; and Punk, Jazz, and Folk &#8212; with the Music Timeline</title>
		<link>https://googledata.org/google-research/explore-the-history-of-pop-and-punk-jazz-and-folk-with-the-music-timeline/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=explore-the-history-of-pop-and-punk-jazz-and-folk-with-the-music-timeline</link>
		<comments>https://googledata.org/google-research/explore-the-history-of-pop-and-punk-jazz-and-folk-with-the-music-timeline/#comments</comments>
		<pubDate>Thu, 16 Jan 2014 08:30:00 +0000</pubDate>
		<dc:creator><![CDATA[Research @ Google]]></dc:creator>
				<category><![CDATA[Google Research]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=82411485c04f0dc67f76b77899a3ab87</guid>
		<description><![CDATA[<span>Posted by Alison Cichowlas and Tony Lam, Google Research</span><br /><br />With newspapers, blogs, and websites wrapping up their Top Ten album lists for 2013, we thought we'd take a look at some favorite music from a bit further back as well.<br /><br />Today, we&#8217;re releasing a visualization to show which music has stood the test of time, and how genres and artists have risen and fallen in popularity. The <a href="https://research.google.com/bigpicture/music">Music Timeline</a> uses aggregated data from Google Play Music to show the changes in music genres over the decades. <br /><br /><div><a href="http://2.bp.blogspot.com/-iRKa7-BJrvo/Utg1f0ExXMI/AAAAAAAAAN4/5O7frNEi8N0/s1600/1-musictimelineoverview.png"><img border="0" src="http://2.bp.blogspot.com/-iRKa7-BJrvo/Utg1f0ExXMI/AAAAAAAAAN4/5O7frNEi8N0/s1600/1-musictimelineoverview.png" height="196" width="640"></a></div><br />This visualization shows which albums from past eras are still in our music libraries today.<br /><br />Dig deeper into the chart to highlight key artists in each genre and read their stories, and navigate to Google Play to hear their music. For example, by clicking on the <a href="https://research.google.com/bigpicture/music/#METAL">Metal</a> stripe, we can see the handoff from <a href="https://research.google.com/bigpicture/music/#METAL%2CCLASSIC_METAL">Classic Metal</a> to <a href="https://research.google.com/bigpicture/music/#METAL,HAIR_METAL">Hair Metal</a> to <a href="https://research.google.com/bigpicture/music/#METAL,ALT_METAL">Alt Metal</a> within the growth of the overall genre, as well as some of the most popular artists that made up each subgenre.<br /><br /><div><a href="http://4.bp.blogspot.com/-BHtu4ekzPI0/Utg1gFcVVVI/AAAAAAAAAOM/1W0nywo3PNA/s1600/2-hairmetal-cropped.png"><img border="0" src="http://4.bp.blogspot.com/-BHtu4ekzPI0/Utg1gFcVVVI/AAAAAAAAAOM/1W0nywo3PNA/s1600/2-hairmetal-cropped.png" height="348" width="640"></a></div><br />The overall shape of each major genre shows when it hit the scene and when it retreated -- for example, <a href="https://research.google.com/bigpicture/music/#R_B_SOUL">R&#38;B</a> has a long history of resurgences, but <a href="https://research.google.com/bigpicture/music/#DANCE_ELECTRONIC">Electronica</a> is a strictly recent phenomenon.<br /><br /><div><a href="http://3.bp.blogspot.com/-NEfMZ8Rj6fo/Utg1gCnH0JI/AAAAAAAAAOQ/HBqD2tPF0pE/s1600/3-small_multiples.png"><img border="0" src="http://3.bp.blogspot.com/-NEfMZ8Rj6fo/Utg1gCnH0JI/AAAAAAAAAOQ/HBqD2tPF0pE/s1600/3-small_multiples.png" height="264" width="640"></a></div><br />Delve into changes in the vocabulary used to name artists and their work -- funk may be over as a genre, but as a band or album name it seems to be timeless.<br /><br /><div><a href="http://2.bp.blogspot.com/-BUZd8LTHiHk/Utg1ghFOrxI/AAAAAAAAAOU/wNu_voQnINs/s1600/4-funktimeline.png"><img border="0" src="http://2.bp.blogspot.com/-BUZd8LTHiHk/Utg1ghFOrxI/AAAAAAAAAOU/wNu_voQnINs/s1600/4-funktimeline.png" height="286" width="640"></a></div><br />Or search for a particular artist to see the trajectory of their career -- contrast U2&#8217;s long-running reinvention and re-emergence from the &#8216;80s up to today, versus a one-hit wonder like Los del R&#237;o&#8217;s 1995 Macarena.<br /><br /><div><a href="http://4.bp.blogspot.com/-9aSMj6ZBpZ8/Utg1g5itAoI/AAAAAAAAAOc/otLQkkh9yus/s1600/5-comparison.png"><img border="0" src="http://4.bp.blogspot.com/-9aSMj6ZBpZ8/Utg1g5itAoI/AAAAAAAAAOc/otLQkkh9yus/s1600/5-comparison.png" 
height="600" width="640"></a></div><br />For more explorations of how information visualization can make complex data accessible, useful, and even fun, check out the Big Picture project page at <a href="http://research.google.com/bigpicture">research.google.com/bigpicture</a>. We expect this is the first of many collaborations with the <a href="http://research.google.com/pubs/ArtificialIntelligenceandMachineLearning.html">Music Recommendations and Discovery team</a>.]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by Alison Cichowlas and Tony Lam, Google Research</span><br /><br />With newspapers, blogs, and websites wrapping up their Top Ten album lists for 2013, we thought we'd take a look at some favorite music a bit further in the past as well.<br /><br />Today, we’re releasing a visualization to show which music has stood the test of time, and how genres and artists have risen and fallen in popularity. The <a href="https://research.google.com/bigpicture/music">Music Timeline</a> uses aggregated data from Google Play Music to show the changes in music genres over the decades. <br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-iRKa7-BJrvo/Utg1f0ExXMI/AAAAAAAAAN4/5O7frNEi8N0/s1600/1-musictimelineoverview.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-iRKa7-BJrvo/Utg1f0ExXMI/AAAAAAAAAN4/5O7frNEi8N0/s1600/1-musictimelineoverview.png" height="196" width="640" /></a></div><br />This visualization shows which albums from past eras are still in our music libraries today.<br /><br />Dig deeper into the chart to highlight key artists in each genre and read their stories, and navigate to Google Play to hear their music. For example, by clicking on the <a href="https://research.google.com/bigpicture/music/#METAL">Metal</a> stripe, we can see the handoff from <a href="https://research.google.com/bigpicture/music/#METAL%2CCLASSIC_METAL">Classic Metal</a> to <a href="https://research.google.com/bigpicture/music/#METAL,HAIR_METAL">Hair Metal</a> to <a href="https://research.google.com/bigpicture/music/#METAL,ALT_METAL">Alt Metal</a> within the growth of the overall genre, as well as some of the most popular artists that composed each subgenre.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-BHtu4ekzPI0/Utg1gFcVVVI/AAAAAAAAAOM/1W0nywo3PNA/s1600/2-hairmetal-cropped.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-BHtu4ekzPI0/Utg1gFcVVVI/AAAAAAAAAOM/1W0nywo3PNA/s1600/2-hairmetal-cropped.png" height="348" width="640" /></a></div><br />The overall shape of each major genre shows when it hit the scene and when it retreated -- for example, <a href="https://research.google.com/bigpicture/music/#R_B_SOUL">R&amp;B</a> has a long history of resurgences, but <a href="https://research.google.com/bigpicture/music/#DANCE_ELECTRONIC">Electronica</a> is a strictly recent phenomenon.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-NEfMZ8Rj6fo/Utg1gCnH0JI/AAAAAAAAAOQ/HBqD2tPF0pE/s1600/3-small_multiples.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-NEfMZ8Rj6fo/Utg1gCnH0JI/AAAAAAAAAOQ/HBqD2tPF0pE/s1600/3-small_multiples.png" height="264" width="640" /></a></div><br />Delve into changes in the vocabulary used to name artists and their work -- funk may be over as a genre, but as a band or album name it seems to be timeless.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-BUZd8LTHiHk/Utg1ghFOrxI/AAAAAAAAAOU/wNu_voQnINs/s1600/4-funktimeline.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-BUZd8LTHiHk/Utg1ghFOrxI/AAAAAAAAAOU/wNu_voQnINs/s1600/4-funktimeline.png" height="286" 
width="640" /></a></div><br />Or search for a particular artist to see the trajectory of their career -- contrast U2’s long-running reinvention and re-emergence from the ‘80s up to today, versus a one hit wonder like Los del Río’s 1995 Macarena.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-9aSMj6ZBpZ8/Utg1g5itAoI/AAAAAAAAAOc/otLQkkh9yus/s1600/5-comparison.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-9aSMj6ZBpZ8/Utg1g5itAoI/AAAAAAAAAOc/otLQkkh9yus/s1600/5-comparison.png" height="600" width="640" /></a></div><br />For more explorations of how information visualization can make complex data accessible, useful, and even fun, check out the Big Picture project page at <a href="http://research.google.com/bigpicture">research.google.com/bigpicture</a>. We expect this is the first of many collaborations with the <a href="http://research.google.com/pubs/ArtificialIntelligenceandMachineLearning.html">Music Recommendations and Discovery team</a>.]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-research/explore-the-history-of-pop-and-punk-jazz-and-folk-with-the-music-timeline/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
<enclosure url="" length="" type="" />
		</item>
		<item>
		<title>Piloting after school clubs to ignite interest in Computer Science</title>
		<link>https://googledata.org/google-research/piloting-after-school-clubs-to-ignite-interest-in-computer-science/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=piloting-after-school-clubs-to-ignite-interest-in-computer-science</link>
		<comments>https://googledata.org/google-research/piloting-after-school-clubs-to-ignite-interest-in-computer-science/#comments</comments>
		<pubDate>Wed, 15 Jan 2014 17:15:00 +0000</pubDate>
		<dc:creator><![CDATA[Research @ Google]]></dc:creator>
				<category><![CDATA[Google Research]]></category>
		<category><![CDATA[education]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=6a78137bf429102c43d90c9107d845fe</guid>
		<description><![CDATA[<span>Posted by JamieSue Goodman, Program Lead, CS First</span><br /><br />In July, Google launched a new pilot program, <a href="https://sites.google.com/site/carolinacsfirst/">CS First</a>, located in our <a href="http://www.google.com/about/datacenters/inside/locations/berkeley-county/index.html">South Carolina data center</a>, to ignite interest in computer science (CS) for K-12 students, especially underrepresented minorities and girls.<br /><br />Working collaboratively with South Carolina Lowcountry school systems and teachers, we are creating an after-school program to achieve the following goals for students:<br /><ul><li>Have a positive attitude toward CS, have no fear of computers, and have the confidence and curiosity to jump into a new computing experience</li><li>Understand that coding is used in a diverse set of jobs/hobbies</li><li>Be willing to persevere and have a debugging mindset</li><li>Collaborate - coding is social!</li><li>Learn and use basic coding concepts such as object creation, conditionals, variables, loops, and input/output</li></ul>With these goals in mind, we began pilot programs in Berkeley, Charleston, and Dorchester counties, exposing students, with a focus on underrepresented minorities and girls, to the most promising existing content and tools. We then observed the clubs, collected survey data, and iterated to improve the content and tools that worked best.<br /><br />To date, CS First has run 31 after-school programs for 4th-12th grades, reaching more than 450 students. Of those students, 53% were girls, and 66% qualify for free or reduced lunch. Each after-school program lasted four weeks and consisted of eight 60-90 minute lessons using a variety of tools such as <a href="http://appinventor.mit.edu/explore/">App Inventor</a>, <a href="http://scratch.mit.edu/">Scratch</a>, <a href="http://learn.code.org/s/1/level/2">Blockly</a>, <a href="http://www.codecademy.com/tracks/python">Python in Codecademy</a>, Scheme in <a href="http://www.bootstrapworld.org/materials/Spring2014/">Bootstrap</a>, as well as various physical gadgets like <a href="http://www.finchrobot.com/">Finch Robots</a>, <a href="http://littlebits.cc/">Little Bits</a>, <a href="http://www.gosphero.com/2b/">Sphero</a> and <a href="http://www.raspberrypi.org/">Raspberry Pi</a>. <br /><br />The <a href="http://googleresearch.blogspot.com/2013/08/computer-science-teaching-fellows.html">Google Computer Science Teaching Fellows</a> led the CS First after-school clubs and studied student engagement and preferences. The following major findings are helping to shape our programs as we iterate:<br /><ul><li>Students were most engaged when they had a creative outlet or when there was a big "wow-factor". They were less engaged with tools that were strongly scaffolded for learning and did not produce a creative result. In this initial set of pilots, students were most engaged with Scratch and App Inventor.</li><li>Club culture is as important as the tool choice for engagement levels, and therefore we continue to intentionally create a social club environment.</li><li>Scaffolding and forward momentum are especially important in creative design-oriented tools. Students sometimes became distracted by the design aspects of Scratch and would often not code unless club structure included accountability.</li></ul>We are collecting qualitative observations about CS First, as well as surveying our students and teacher sponsors. 
Our November programs had the following survey results (sample size of 73 students):<br /><ul><li>"I can create things with computer science": Students agreed with this statement 25% more after the program.</li><li>"If I get stuck on a computer problem, I know how I might fix it": An increase of 22% after the program.</li><li>"Do you like programming?": The number of &#8220;yes&#8221; responses increased 29%.</li><li>"Do you think computer science is cool?": The number of &#8220;yes&#8221; responses increased 26%.</li><li>"I don't really understand computer science": Students agreed with this statement 34% less after the program.</li></ul>In November, we began working with four non-teacher technologists to understand the challenges of scaling through volunteers. In our January programs, we will pilot with 10 community members leading the clubs in partnership with local school districts. The outcome will be a polished kit of tested materials that teachers and volunteers can use to lead an after-school CS First club with students. The kit is designed so that a teacher with little CS experience or a technologist with little teaching experience will be able to implement it with a group of 10-20 students. The ultimate goal of CS First is to provide proven teaching materials, screencasts, and curricula for after-school programs that will ignite the interest and confidence of underrepresented minorities and girls in CS and to scale these programs through a network of teacher sponsors, volunteers, and national organizations.]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by JamieSue Goodman, Program Lead, CS First</span><br /><br />In July, Google launched a new pilot program, <a href="https://sites.google.com/site/carolinacsfirst/">CS First</a>,  located in our <a href="http://www.google.com/about/datacenters/inside/locations/berkeley-county/index.html">South Carolina data center</a>, to ignite interest in computer science (CS) for K-12 students, especially underrepresented minorities and girls.<br /><br />Working collaboratively with South Carolina Lowcountry school systems and teachers, we are creating an after-school program to achieve the following goals for students:<br /><ul><li>Have a positive attitude toward CS, have no fear of computers, and have the confidence and curiosity to jump into a new computing experience</li><li>Understand that coding is used in a diverse set of jobs/hobbies</li><li>Are willing to persevere and have a debugging mindset</li><li>Collaborate - coding is social!</li><li>Learn and use basic coding concepts such as object creation, conditionals, variables, loops, and input/output</li></ul>With these goals in mind, we began pilot programs in Berkeley, Charleston, and Dorchester counties, exposing students, with a focus on underrepresented minorities and girls, to the most promising existing content and tools. We then observed the clubs, collected survey data and iterated to improve the content and tools that worked best.<br /><br />To date, CS First has run 31 after-school programs for 4th-12th grades, reaching more than 450 students. Of those students, 53% were girls, and 66% qualify for free or reduced lunch. Each after-school program lasted four weeks and consisted of eight 60-90 minute lessons using a variety of tools such as <a href="http://appinventor.mit.edu/explore/">App Inventor</a>, <a href="http://scratch.mit.edu/">Scratch</a>, <a href="http://learn.code.org/s/1/level/2">Blockly</a>, <a href="http://www.codecademy.com/tracks/python">Python in Codacademy</a>, Scheme in <a href="http://www.bootstrapworld.org/materials/Spring2014/">Bootstrap</a>, as well as various physical gadgets like <a href="http://www.finchrobot.com/">Finch Robots</a>, <a href="http://littlebits.cc/">Little Bits</a>, <a href="http://www.gosphero.com/2b/">Sphero</a> and <a href="http://www.raspberrypi.org/">Raspberry Pi</a>. <br /><br />The <a href="http://googleresearch.blogspot.com/2013/08/computer-science-teaching-fellows.html">Google Computer Science Teaching Fellows</a> led the CS First after-school clubs and studied student engagement and preferences. The following major findings are helping to shape our programs as we iterate:<br /><ul><li>Students were most engaged when they had a creative outlet or where there was a big "wow-factor". They were less engaged with tools that were strongly scaffolded for learning and did not result in a creative result. In this initial set of pilots, students were most engaged with Scratch and App Inventor.</li><li>Club culture is as important as the tool choice for engagement levels and therefore we continue to intentionally create a social club environment.</li><li>Scaffolding and forward momentum are especially important in creative design-oriented tools. Students sometimes became distracted by the design aspects of Scratch and would often not code unless club structure included accountability.</li></ul>We are collecting qualitative observations about CS First, as well as surveying our students and teacher sponsors. 
Our November programs had the following survey results (sample size of 73 students):<br /><ul><li>"I can create things with computer science": Students agreed with this statement 25% more after the program.</li><li>"If I get stuck on a computer problem, I know how I might fix it": An increase of 22% after the program.</li><li>"Do you like programming?": The number of “yes” responses increased 29%.</li><li>"Do you think computer science is cool?": The number of “yes” responses increased 26%.</li><li>"I don't really understand computer science": Students agreed with this statement 34% less after the program.</li></ul>In November, we began working with four non-teacher technologists to understand the challenges of scaling through volunteers. In our January programs, we will pilot with 10 community members leading the clubs in partnership with local school districts. The outcome will be a polished kit of tested materials that teachers and volunteers can use to lead an after-school CS First club with students. The kit is designed so that a teacher with little CS experience or a technologist with little teaching experience will be able to implement it with a group of 10-20 students. The ultimate goal of CS First is to provide proven teaching materials, screencasts, and curricula for after-school programs that will ignite the interest and confidence of underrepresented minorities and girls in CS and to scale these programs through a network of teacher sponsors, volunteers, and national organizations.]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-research/piloting-after-school-clubs-to-ignite-interest-in-computer-science/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
<enclosure url="" length="" type="" />
		</item>
		<item>
		<title>Groundbreaking simulations by Google Exacycle Visiting Faculty</title>
		<link>https://googledata.org/google-research/groundbreaking-simulations-by-google-exacycle-visiting-faculty/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=groundbreaking-simulations-by-google-exacycle-visiting-faculty</link>
		<comments>https://googledata.org/google-research/groundbreaking-simulations-by-google-exacycle-visiting-faculty/#comments</comments>
		<pubDate>Mon, 16 Dec 2013 18:00:00 +0000</pubDate>
		<dc:creator><![CDATA[Research @ Google]]></dc:creator>
				<category><![CDATA[Google Research]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=b01e1cdaf4712eec1237b3426a0a0f58</guid>
		<description><![CDATA[<span>Posted by David Konerding, Staff Software Engineer</span><br /><br />In April 2011, we <a href="http://googleresearch.blogspot.com/2011/04/1-billion-core-hours-of-computational.html">announced</a> the <a href="http://research.google.com/university/exacycle_program.html">Google Exacycle for Visiting Faculty</a>, a new academic research awards program donating one billion core-hours of computational capacity to researchers. The Exacycle project <a href="http://ieeexplore.ieee.org/xpl/login.jsp?tp=&#38;arnumber=6257372&#38;url=http%3A%2F%2Fieeexplore.ieee.org%2Fstamp%2Fstamp.jsp%3Ftp%3D%26arnumber%3D6257372">enables massive parallelism for doing science in the cloud</a>, and inspired multiple <a href="http://googleresearch.blogspot.com/2012/12/millions-of-core-hours-awarded-to.html">proposals</a> aiming to take advantage of cloud scale. Today, we would like to share some exciting results from a project built on Google&#8217;s infrastructure.<br /><br />Google Research Scientist <a href="http://research.google.com/pubs/KaiKohlhoff.html">Kai Kohlhoff</a>, in collaboration with Stanford University and Google engineers, investigated how an important <a href="http://en.wikipedia.org/wiki/Cell_signaling">signalling protein</a> in the membrane of human cells can switch off and on by changing its three-dimensional structure following a sequence of local <a href="http://en.wikipedia.org/wiki/Conformational_change">conformational changes</a>.  This research can help to better understand the effects of certain chemical compounds on the human body and assist future development of more potent drug molecules with fewer side effects.<br /><br />The protein, known as the <a href="http://en.wikipedia.org/wiki/Beta-2_adrenergic_receptor">beta-2 adrenergic receptor</a>, is  a G protein-coupled receptor (<a href="http://en.wikipedia.org/wiki/G_protein-coupled_receptor">GPCR</a>), a primary drug target that plays a role in several debilitating health conditions.  These include asthma, type-2 diabetes, obesity, and hypertension. The receptor and its close GPCR relatives bind to many familiar molecules, such as epinephrine, beta-blockers, and caffeine.  Understanding their structure, function, and the underlying dynamics during binding and activation increases our chances to decode the causes and mechanisms of diseases.<br /><br />To gain insights into the receptor&#8217;s dynamics, Kai performed detailed molecular simulations using hundreds of millions of core hours on Google&#8217;s infrastructure,  generating hundreds of terabytes of valuable molecular dynamics data. The Exacycle program enabled the realization of simulations with longer sampling and higher accuracy than previous experiments, exposing the complex processes taking place on the nanoscale during activation of this biological switch. <br /><br />The paper summarizing the results of Kai&#8217;s and his collaborators&#8217; work is featured on the January cover of <a href="http://www.nature.com/nchem/index.html">Nature Chemistry</a>, with artwork by Google R&#38;D UX Creative Lead Thor Lewis, to be published on December 17, 2013. The online version of his paper was published on their <a href="http://dx.doi.org/10.1038/nchem.1821">website</a> today.<br /><br />We are extremely pleased with the results of this program. We look forward to seeing this research continue to develop. <br />]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by David Konerding, Staff Software Engineer</span><br /><br />In April 2011, we <a href="http://googleresearch.blogspot.com/2011/04/1-billion-core-hours-of-computational.html">announced</a> the <a href="http://research.google.com/university/exacycle_program.html">Google Exacycle for Visiting Faculty</a>, a new academic research awards program donating one billion core-hours of computational capacity to researchers. The Exacycle project <a href="http://ieeexplore.ieee.org/xpl/login.jsp?tp=&amp;arnumber=6257372&amp;url=http%3A%2F%2Fieeexplore.ieee.org%2Fstamp%2Fstamp.jsp%3Ftp%3D%26arnumber%3D6257372">enables massive parallelism for doing science in the cloud</a>, and inspired multiple <a href="http://googleresearch.blogspot.com/2012/12/millions-of-core-hours-awarded-to.html">proposals</a> aiming to take advantage of cloud scale. Today, we would like to share some exciting results from a project built on Google’s infrastructure.<br /><br />Google Research Scientist <a href="http://research.google.com/pubs/KaiKohlhoff.html">Kai Kohlhoff</a>, in collaboration with Stanford University and Google engineers, investigated how an important <a href="http://en.wikipedia.org/wiki/Cell_signaling">signalling protein</a> in the membrane of human cells can switch off and on by changing its three-dimensional structure following a sequence of local <a href="http://en.wikipedia.org/wiki/Conformational_change">conformational changes</a>.  This research can help to better understand the effects of certain chemical compounds on the human body and assist future development of more potent drug molecules with fewer side effects.<br /><br />The protein, known as the <a href="http://en.wikipedia.org/wiki/Beta-2_adrenergic_receptor">beta-2 adrenergic receptor</a>, is  a G protein-coupled receptor (<a href="http://en.wikipedia.org/wiki/G_protein-coupled_receptor">GPCR</a>), a primary drug target that plays a role in several debilitating health conditions.  These include asthma, type-2 diabetes, obesity, and hypertension. The receptor and its close GPCR relatives bind to many familiar molecules, such as epinephrine, beta-blockers, and caffeine.  Understanding their structure, function, and the underlying dynamics during binding and activation increases our chances to decode the causes and mechanisms of diseases.<br /><br />To gain insights into the receptor’s dynamics, Kai performed detailed molecular simulations using hundreds of millions of core hours on Google’s infrastructure,  generating hundreds of terabytes of valuable molecular dynamics data. The Exacycle program enabled the realization of simulations with longer sampling and higher accuracy than previous experiments, exposing the complex processes taking place on the nanoscale during activation of this biological switch. <br /><br />The paper summarizing the results of Kai’s and his collaborators’ work is featured on the January cover of <a href="http://www.nature.com/nchem/index.html">Nature Chemistry</a>, with artwork by Google R&D UX Creative Lead Thor Lewis, to be published on December 17, 2013. The online version of his paper was published on their <a href="http://dx.doi.org/10.1038/nchem.1821">website</a> today.<br /><br />We are extremely pleased with the results of this program. We look forward to seeing this research continue to develop. <br />]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-research/groundbreaking-simulations-by-google-exacycle-visiting-faculty/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
<enclosure url="" length="" type="" />
		</item>
		<item>
		<title>Googler Moti Yung elected as 2013 ACM Fellow</title>
		<link>https://googledata.org/google-research/googler-moti-yung-elected-as-2013-acm-fellow/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=googler-moti-yung-elected-as-2013-acm-fellow</link>
		<comments>https://googledata.org/google-research/googler-moti-yung-elected-as-2013-acm-fellow/#comments</comments>
		<pubDate>Wed, 11 Dec 2013 18:00:00 +0000</pubDate>
		<dc:creator><![CDATA[Research @ Google]]></dc:creator>
				<category><![CDATA[Google Research]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=82b9d2ead14fe5ae08b6a3b5e2649934</guid>
		<description><![CDATA[<span>Posted by Alfred Spector, VP of Engineering</span><br /><br />Yesterday, the Association for Computing Machinery (ACM) <a href="http://www.acm.org/press-room/news-releases/2013/fellows-2013">released</a> the list of those who have been elected ACM Fellows in 2013. I am excited to announce that Google <a href="http://research.google.com/pubs/author3060.html">Research Scientist Moti Yung</a> is among the distinguished individuals receiving this honor.<br /><br />Moti was chosen for his contributions to computer science and cryptography that have provided fundamental knowledge to the field of computing security. We are proud of the breadth and depth of his contributions, and believe they serve as motivation for computer scientists worldwide.<br /><br />On behalf of Google, I congratulate our colleague, who joins the 17 ACM Fellows and other professional society awardees at Google, in exemplifying our extraordinarily talented people. You can read a more detailed summary of Moti&#8217;s accomplishments below, including the official citation from ACM.<br /><br /><b>Dr. Moti Yung: Research Scientist</b><br /><i>For contributions to cryptography and its use in security and privacy of systems</i><br /><br />Moti has made key contributions to several areas of cryptography including (but not limited to!) secure group communication, digital signatures, <a href="http://en.wikipedia.org/wiki/Traitor_tracing">traitor tracing</a>, <a href="http://en.wikipedia.org/wiki/Threshold_cryptosystem">threshold cryptosystems</a>, and <a href="http://en.wikipedia.org/wiki/Zero-knowledge_proof">zero-knowledge proofs</a>. Moti's work often seeds a new area in theoretical cryptography while also finding broad applications. For example, in 1992, Moti co-developed a protocol by which users can jointly compute a common group key from their own private information, one that remains secure against coalitions of rogue users. This work led to the growth of the broadcast encryption research area and has applications to pay-TV, network communication, and sensor networks.<br /><br />Moti is also a long-time leader of the security and privacy research communities, having mentored many of the leading researchers in the field, and serving on numerous program committees. A prolific author, Moti routinely publishes 10+ papers a year, and has been a key contributor to principled and consistent anonymization practices and data protection at Google.]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by Alfred Spector, VP of Engineering</span><br /><br />Yesterday, the Association for Computing Machinery (ACM) <a href="http://www.acm.org/press-room/news-releases/2013/fellows-2013">released</a> the list of those who have been elected ACM Fellows in 2013. I am excited to announce that Google <a href="http://research.google.com/pubs/author3060.html">Research Scientist Moti Yung</a> is among the distinguished individuals receiving this honor.<br /><br />Moti was chosen for his contributions to computer science and cryptography that have provided fundamental knowledge to the field of computing security. We are proud of the breadth and depth of his contributions, and believe they serve as motivation for computer scientists worldwide.<br /><br />On behalf of Google, I congratulate our colleague, who joins the 17 ACM Fellow and other professional society awardees at Google, in exemplifying our extraordinarily talented people. You can read a more detailed summary of Moti’s accomplishments below, including the official citations from ACM.<br /><br /><b>Dr. Moti Yung: Research Scientist</b><br /><i>For contributions to cryptography and its use in security and privacy of systems</i><br /><br />Moti has made key contributions to several areas of cryptography including (but not limited to!) secure group communication, digital signatures, <a href="http://en.wikipedia.org/wiki/Traitor_tracing">traitor tracing</a>, <a href="http://en.wikipedia.org/wiki/Threshold_cryptosystem">threshold cryptosystems</a> and <a href="http://en.wikipedia.org/wiki/Zero-knowledge_proof">zero knowledge proofs.</a> Moti's work often seeds a new area in theoretical cryptography as well as finding applications broadly. For example, in 1992, Moti co-developed a protocol by which users can commonly compute a  group key using their own private information that is secure against coalitions of rogue users. This work led to the growth of the broadcast encryption research area and has applications to pay-tv, network communication and sensor networks.<br />Moti is also a long-time leader of the security and privacy research communities, having mentored many of the leading researchers in the field, and serving on numerous program committees. A prolific author, Moti routinely publishes 10+ papers a year, and has been a key contributor to principled and consistent anonymization practices and data protection at Google.]]></content:encoded>
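For readers unfamiliar with group key agreement, the toy sketch below conveys the flavor: after two rounds of broadcasts, every participant derives the same shared key from the public values plus their own secret. This is the well-known Burmester-Desmedt construction, shown purely to illustrate the concept; it is not the specific 1992 protocol mentioned above, and the parameters are far too small for real use.

```python
# Toy Burmester-Desmedt group key agreement (conceptual illustration only;
# NOT the 1992 protocol cited above, and the modulus is demo-sized).
import random

p = 2**64 - 59          # a prime; real deployments need a proper safe prime
g = 5                   # demo generator
n = 4                   # number of participants

x = [random.randrange(2, p - 2) for _ in range(n)]   # private keys
z = [pow(g, xi, p) for xi in x]                      # round 1: broadcast g^x_i

# Round 2: each user i broadcasts X_i = (z_{i+1} / z_{i-1})^{x_i} mod p.
X = [pow(z[(i + 1) % n] * pow(z[(i - 1) % n], -1, p) % p, x[i], p)
     for i in range(n)]

def group_key(i):
    # K = z_{i-1}^{n*x_i} * X_i^{n-1} * X_{i+1}^{n-2} * ... * X_{i+n-2} mod p
    k = pow(z[(i - 1) % n], n * x[i], p)
    for j in range(n - 1):
        k = k * pow(X[(i + j) % n], n - 1 - j, p) % p
    return k

keys = [group_key(i) for i in range(n)]
assert len(set(keys)) == 1   # all participants derive the same key
```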
			<wfw:commentRss>https://googledata.org/google-research/googler-moti-yung-elected-as-2013-acm-fellow/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
<enclosure url="" length="" type="" />
		</item>
		<item>
		<title>Free Language Lessons for Computers</title>
		<link>https://googledata.org/google-research/free-language-lessons-for-computers/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=free-language-lessons-for-computers</link>
		<comments>https://googledata.org/google-research/free-language-lessons-for-computers/#comments</comments>
		<pubDate>Tue, 03 Dec 2013 18:10:00 +0000</pubDate>
		<dc:creator><![CDATA[Research @ Google]]></dc:creator>
				<category><![CDATA[Google Research]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=654a86370365036a53fb3d772f85d07b</guid>
		<description><![CDATA[<span>Posted by Dave Orr, Google Research Product Manager</span><br /><br /><div><i>Not everything that can be counted counts.</i></div><div><i>Not everything that counts can be counted.</i></div><div>- <a href="http://quoteinvestigator.com/2010/05/26/everything-counts-einstein/">William Bruce Cameron</a></div><br />50,000 relations from Wikipedia. 100,000 feature vectors from YouTube videos. 1.8 million historical infoboxes. 40 million entities derived from webpages. 11 billion Freebase entities in 800 million web documents. 350 billion words&#8217; worth from books analyzed for syntax.<br /><br />These are all datasets that we&#8217;ve shared with researchers around the world over the last year from Google Research.<br /><br />But data by itself doesn&#8217;t mean much. Data is only valuable in the right context, and only if it leads to increased knowledge. Labeled data is critical to train and evaluate machine-learned systems in many arenas, improving systems that can increase our ability to understand the world. Advances in natural language understanding, information retrieval, information extraction, computer vision, etc. can help us <a href="http://www.google.com/publicdata/directory">tell stories</a>, mine for valuable insights, or <a href="https://books.google.com/ngrams">visualize information</a> in beautiful and compelling ways.<br /><br />That&#8217;s why we are pleased to be able to release sets of labeled data from various domains and with various annotations, some automatic and some manual. Our hope is that the research community will use these datasets in ways both straightforward and surprising, to improve systems for annotation or understanding, and perhaps launch new efforts we haven&#8217;t thought of.<br /><br />Here&#8217;s a listing of the major datasets we&#8217;ve released in the last year, or you can subscribe to our <a href="https://groups.google.com/forum/?fromgroups#!forum/knowledge-data-releases">mailing list</a>. Please tell us what you&#8217;ve managed to accomplish, or send us pointers to papers that use this data. We want to see what the research world can do with what we&#8217;ve created.<br /><br /><b>50,000 Lessons on How to Read: a Relation Extraction Corpus</b><br /><br /><i>What is it</i>: A human-judged dataset of two relations involving public figures on <a href="http://www.wikipedia.org/">Wikipedia</a>: about 10,000 examples of &#8220;place of birth&#8221; and 40,000 examples of &#8220;attended or graduated from an institution.&#8221; <br /><i>Where can I find it</i>: <a href="https://code.google.com/p/relation-extraction-corpus/">https://code.google.com/p/relation-extraction-corpus/</a><br /><i>I want to know more</i>: Here&#8217;s a <a href="http://googleresearch.blogspot.com/2013/04/50000-lessons-on-how-to-read-relation.html">handy blog post</a> with a broader explanation, descriptions and examples of the data, and plenty of links to learn more.<br /><br /><b>11 Billion Clues in 800 Million Documents</b><br /><br /><i>What is it</i>: We took the ClueWeb corpora and automatically labeled concepts and entities with <a href="http://wiki.freebase.com/wiki/Machine_ID">Freebase concept IDs</a>, an example of entity resolution. 
This dataset is huge: nearly 800 million web pages.<br /><i>Where can I find it</i>: We released two corpora: <a href="http://lemurproject.org/clueweb09/FACC1/">ClueWeb09 FACC</a> and <a href="http://lemurproject.org/clueweb12/FACC1/">ClueWeb12 FACC</a>.<br /><i>I want to know more</i>: We described the process and results in a recent blog post.<br /><div><a href="http://4.bp.blogspot.com/-eSPcd2OLX40/Up4U1qQ5odI/AAAAAAAAAM8/NYL3XRoAcRQ/s1600/freebase-example.png"><img border="0" src="http://4.bp.blogspot.com/-eSPcd2OLX40/Up4U1qQ5odI/AAAAAAAAAM8/NYL3XRoAcRQ/s1600/freebase-example.png"></a></div><b><br /></b><b>Features Extracted From YouTube Videos for Multiview Learning</b><br /><br /><i>What is it</i>: Multiple feature families from a set of public YouTube videos of games. The videos are labeled with one of 30 categories, and each has an associated set of visual, auditory, and textual features.<br /><i>Where can I find it</i>: The data and more information can be obtained from the <a href="http://archive.ics.uci.edu/ml/datasets/YouTube+Multiview+Video+Games+Dataset">UCI machine learning repository (multiview video dataset)</a>, or from <a href="https://code.google.com/p/multiview-video-features-data/">Google&#8217;s repository</a>.<br /><i>I want to know more</i>: Read more about the data and uses for it <a href="http://googleresearch.blogspot.com/2013/11/released-data-set-features-extracted.html">here</a>.<br /><br /><b>40 Million Entities in Context</b><br /><br /><i>What is it</i>: A disambiguation set consisting of pointers to 10 million web pages with 40 million entities that have links to Wikipedia. This is another entity resolution corpus, since the links can be used to disambiguate the mentions, but unlike the ClueWeb example above, the links are inserted by the web page authors and can therefore be considered human annotation.<br /><i>Where can I find it</i>: Here&#8217;s the <a href="https://code.google.com/p/wiki-links/">WikiLinks corpus</a>, and tools to help use this data can be found on our partner&#8217;s page: <a href="http://www.iesl.cs.umass.edu/data/wiki-links">UMass Wiki-links</a>.<br /><i>I want to know more</i>: Other disambiguation sets, data formats, ideas for uses of this data, and more can be found at our <a href="http://googleresearch.blogspot.com/2013/03/learning-from-big-data-40-million.html">blog post announcing the release</a>.<br /><div><a href="http://3.bp.blogspot.com/-f4ZvuLBiKMk/Up4WCTWYBVI/AAAAAAAAANI/NmWXXioLKFo/s1600/mention-diagram.png"><img border="0" src="http://3.bp.blogspot.com/-f4ZvuLBiKMk/Up4WCTWYBVI/AAAAAAAAANI/NmWXXioLKFo/s1600/mention-diagram.png"></a></div><b><br /></b><b>Distributing the Edit History of Wikipedia Infoboxes</b><br /><br /><i>What is it</i>: The edit history of 1.8 million infoboxes in Wikipedia pages in one handy resource. Attributes on Wikipedia change over time, and some of them change more than others. Understanding attribute change is important for extracting accurate and useful information from Wikipedia.<br /><i>Where can I find it</i>: <a href="http://commondatastorage.googleapis.com/historicalinfobox/20120323-en-updates.json.gz">Download from Google</a> or from <a href="http://www.wikimedia.de/wiki/Hauptseite">Wikimedia Deutschland</a>.<br /><i>I want to know more</i>: We <a href="http://googleresearch.blogspot.com/2013/05/distributing-edit-history-of-wikipedia.html">posted</a> a detailed look at the data, the process for gathering it, and where to find it. 
You can also read a <a href="http://rd.springer.com/article/10.1007/s10579-013-9232-5">paper</a> we published on the release.<br /><div><a href="http://2.bp.blogspot.com/-jyImLMRK0NM/Up4Wfbo7ETI/AAAAAAAAANQ/1wq2piGzDds/s1600/image00.png"><img border="0" height="320" src="http://2.bp.blogspot.com/-jyImLMRK0NM/Up4Wfbo7ETI/AAAAAAAAANQ/1wq2piGzDds/s320/image00.png" width="216"></a><a href="http://2.bp.blogspot.com/-IpTQZjfEMAw/Up4WiD3-lpI/AAAAAAAAANY/NMyJG4OjMaQ/s1600/image01.png"><img border="0" height="320" src="http://2.bp.blogspot.com/-IpTQZjfEMAw/Up4WiD3-lpI/AAAAAAAAANY/NMyJG4OjMaQ/s320/image01.png" width="159"></a></div><div><span><span>Note the change in the capital of Palau.</span></span></div><br /><b><br /></b><b>Syntactic Ngrams over Time</b><br /><br /><i>What is it</i>: We automatically syntactically analyzed 350 billion words from the 3.5 million English-language books in <a href="http://books.google.com/">Google Books</a>, and collated and released a set of fragments -- billions of unique tree fragments with counts sorted into types. The underlying corpus is the same one that underlies the recently updated <a href="https://books.google.com/ngrams/">Google Ngram Viewer</a>.<br /><i>Where can I find it</i>: <a href="http://commondatastorage.googleapis.com/books/syntactic-ngrams/index.html">http://commondatastorage.googleapis.com/books/syntactic-ngrams/index.html</a><br /><i>I want to know more</i>: We discussed the nature of dependency parses and describe the data and release in a <a href="http://googleresearch.blogspot.com/2013/05/syntactic-ngrams-over-time.html">blog post</a>. We also published a <a href="http://commondatastorage.googleapis.com/books/syntactic-ngrams/syntngrams.final.pdf">paper about the release</a>.<br /><div><a href="http://4.bp.blogspot.com/-xuuIdr-MdDk/Up4XXSkNF-I/AAAAAAAAANk/aTDA19Rmfrk/s1600/image1.png"><img border="0" src="http://4.bp.blogspot.com/-xuuIdr-MdDk/Up4XXSkNF-I/AAAAAAAAANk/aTDA19Rmfrk/s1600/image1.png"></a></div><b><br /></b><b>Dictionaries for linking Text, Entities, and Ideas</b><br /><br /><i>What is it</i>: We created a large database of 175 million strings paired with 7.5 million concepts, annotated with counts, all mined from Wikipedia. The concepts in this case are Wikipedia articles, and the strings are anchor text spans that link to the concepts in question.<br /><i>Where can I find it</i>: <a href="http://nlp.stanford.edu/pubs/crosswikis-data.tar.bz2">http://nlp.stanford.edu/pubs/crosswikis-data.tar.bz2</a><br /><i>I want to know more</i>: A description of the data, several examples, and ideas for uses of it can be found in a <a href="http://googleresearch.blogspot.com/2012/05/from-words-to-concepts-and-back.html">blog post</a> or in the <a href="http://nlp.stanford.edu/pubs/crosswikis.pdf">associated paper</a>.<br /><br /><b>Other datasets</b><br /><br />Not every release had its own blog post describing it. Here are some other releases:<br /><ul><li>Automatic <a href="http://lemurproject.org/clueweb09/TREC%20Freebase%20Queries,%20v1.1.zip">Freebase annotations</a> of TREC&#8217;s Million Query and Web track queries.</li><li>A <a href="https://developers.google.com/freebase/data#freebase-deleted-triples">set of Freebase triples</a> that have been deleted from Freebase over time -- 63 million of them.</li></ul>]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by Dave Orr, Google Research Product Manager</span><br /><br /><div style="text-align: center;"><i>Not everything that can be counted counts.</i></div><div style="text-align: center;"><i>Not everything that counts can be counted.</i></div><div style="text-align: center;">- <a href="http://quoteinvestigator.com/2010/05/26/everything-counts-einstein/">William Bruce Cameron</a></div><br />50,000 relations from Wikipedia. 100,000 feature vectors from YouTube videos. 1.8 million historical infoboxes. 40 million entities derived from webpages. 11 billion Freebase entities in 800 million web documents. 350 billion words’ worth from books analyzed for syntax.<br /><br />These are all datasets that we’ve shared with researchers around the world over the last year from Google Research.<br /><br />But data by itself doesn’t mean much. Data is only valuable in the right context, and only if it leads to increased knowledge. Labeled data is critical to train and evaluate machine-learned systems in many arenas, improving systems that can increase our ability to understand the world. Advances in natural language understanding, information retrieval, information extraction, computer vision, etc. can help us <a href="http://www.google.com/publicdata/directory">tell stories</a>, mine for valuable insights, or <a href="https://books.google.com/ngrams">visualize information</a> in beautiful and compelling ways.<br /><br />That’s why we are pleased to be able to release sets of labeled data from various domains and with various annotations, some automatic and some manual. Our hope is that the research community will use these datasets in ways both straightforward and surprising, to improve systems for annotation or understanding, and perhaps launch new efforts we haven’t thought of.<br /><br />Here’s a listing of the major datasets we’ve released in the last year, or you can subscribe to our <a href="https://groups.google.com/forum/?fromgroups#!forum/knowledge-data-releases">mailing list</a>. Please tell us what you’ve managed to accomplish, or send us pointers to papers that use this data. We want to see what the research world can do with what we’ve created.<br /><br /><b>50,000 Lessons on How to Read: a Relation Extraction Corpus</b><br /><br /><i>What is it</i>: A human-judged dataset of two relations involving public figures on <a href="http://www.wikipedia.org/">Wikipedia</a>: about 10,000 examples of “place of birth” and 40,000 examples of “attended or graduated from an institution.” <br /><i>Where can I find it</i>: <a href="https://code.google.com/p/relation-extraction-corpus/">https://code.google.com/p/relation-extraction-corpus/</a><br /><i>I want to know more</i>: Here’s a <a href="http://googleresearch.blogspot.com/2013/04/50000-lessons-on-how-to-read-relation.html">handy blog post</a> with a broader explanation, descriptions and examples of the data, and plenty of links to learn more.<br /><br /><b>11 Billion Clues in 800 Million Documents</b><br /><br /><i>What is it</i>: We took the ClueWeb corpora and automatically labeled concepts and entities with <a href="http://wiki.freebase.com/wiki/Machine_ID">Freebase concept IDs</a>, an example of entity resolution. 
This dataset is huge: nearly 800 million web pages.<br /><i>Where can I find it</i>: We released two corpora: <a href="http://lemurproject.org/clueweb09/FACC1/">ClueWeb09 FACC</a> and <a href="http://lemurproject.org/clueweb12/FACC1/">ClueWeb12 FACC</a>.<br /><i>I want to know more</i>: We described the process and results in a recent blog post.<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-eSPcd2OLX40/Up4U1qQ5odI/AAAAAAAAAM8/NYL3XRoAcRQ/s1600/freebase-example.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-eSPcd2OLX40/Up4U1qQ5odI/AAAAAAAAAM8/NYL3XRoAcRQ/s1600/freebase-example.png" /></a></div><b><br /></b><b>Features Extracted From YouTube Videos for Multiview Learning</b><br /><br /><i>What is it</i>: Multiple feature families from a set of public YouTube videos of games. The videos are labeled with one of 30 categories, and each has an associated set of visual, auditory, and textual features.<br /><i>Where can I find it</i>: The data and more information can be obtained from the <a href="http://archive.ics.uci.edu/ml/datasets/YouTube+Multiview+Video+Games+Dataset">UCI machine learning repository (multiview video dataset)</a>, or from <a href="https://code.google.com/p/multiview-video-features-data/">Google’s repository</a>.<br /><i>I want to know more</i>: Read more about the data and uses for it <a href="http://googleresearch.blogspot.com/2013/11/released-data-set-features-extracted.html">here</a>.<br /><br /><b>40 Million Entities in Context</b><br /><br /><i>What is it</i>: A disambiguation set consisting of pointers to 10 million web pages with 40 million entities that have links to Wikipedia. This is another entity resolution corpus, since the links can be used to disambiguate the mentions, but unlike the ClueWeb example above, the links are inserted by the web page authors and can therefore be considered human annotation.<br /><i>Where can I find it</i>: Here’s the <a href="https://code.google.com/p/wiki-links/">WikiLinks corpus</a>, and tools to help use this data can be found on our partner’s page: <a href="http://www.iesl.cs.umass.edu/data/wiki-links">UMass Wiki-links</a>.<br /><i>I want to know more</i>: Other disambiguation sets, data formats, ideas for uses of this data, and more can be found at our <a href="http://googleresearch.blogspot.com/2013/03/learning-from-big-data-40-million.html">blog post announcing the release</a>.<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-f4ZvuLBiKMk/Up4WCTWYBVI/AAAAAAAAANI/NmWXXioLKFo/s1600/mention-diagram.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-f4ZvuLBiKMk/Up4WCTWYBVI/AAAAAAAAANI/NmWXXioLKFo/s1600/mention-diagram.png" /></a></div><b><br /></b><b>Distributing the Edit History of Wikipedia Infoboxes</b><br /><br /><i>What is it</i>: The edit history of 1.8 million infoboxes in Wikipedia pages in one handy resource. Attributes on Wikipedia change over time, and some of them change more than others. 
Understanding attribute change is important for extracting accurate and useful information from Wikipedia.<br /><i>Where can I find it</i>: <a href="http://commondatastorage.googleapis.com/historicalinfobox/20120323-en-updates.json.gz">Download from Google</a> or from <a href="http://www.wikimedia.de/wiki/Hauptseite">Wikimedia Deutschland</a>.<br /><i>I want to know more</i>: We <a href="http://googleresearch.blogspot.com/2013/05/distributing-edit-history-of-wikipedia.html">posted</a> a detailed look at the data, the process for gathering it, and where to find it. You can also read a <a href="http://rd.springer.com/article/10.1007/s10579-013-9232-5">paper</a> we published on the release.<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-jyImLMRK0NM/Up4Wfbo7ETI/AAAAAAAAANQ/1wq2piGzDds/s1600/image00.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="320" src="http://2.bp.blogspot.com/-jyImLMRK0NM/Up4Wfbo7ETI/AAAAAAAAANQ/1wq2piGzDds/s320/image00.png" width="216" /></a><a href="http://2.bp.blogspot.com/-IpTQZjfEMAw/Up4WiD3-lpI/AAAAAAAAANY/NMyJG4OjMaQ/s1600/image01.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="320" src="http://2.bp.blogspot.com/-IpTQZjfEMAw/Up4WiD3-lpI/AAAAAAAAANY/NMyJG4OjMaQ/s320/image01.png" width="159" /></a></div><div class="separator" style="clear: both; text-align: center;"><span style="text-align: start;"><span style="font-size: x-small;">Note the change in the capital of Palau.</span></span></div><br /><b><br /></b><b>Syntactic Ngrams over Time</b><br /><br /><i>What is it</i>: We automatically syntactically analyzed 350 billion words from the 3.5 million English-language books in <a href="http://books.google.com/">Google Books</a>, and collated and released a set of fragments -- billions of unique tree fragments with counts sorted into types. The underlying corpus is the same one that underlies the recently updated <a href="https://books.google.com/ngrams/">Google Ngram Viewer</a>.<br /><i>Where can I find it</i>: <a href="http://commondatastorage.googleapis.com/books/syntactic-ngrams/index.html">http://commondatastorage.googleapis.com/books/syntactic-ngrams/index.html</a><br /><i>I want to know more</i>: We discussed the nature of dependency parses and describe the data and release in a <a href="http://googleresearch.blogspot.com/2013/05/syntactic-ngrams-over-time.html">blog post</a>. We also published a <a href="http://commondatastorage.googleapis.com/books/syntactic-ngrams/syntngrams.final.pdf">paper about the release</a>.<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-xuuIdr-MdDk/Up4XXSkNF-I/AAAAAAAAANk/aTDA19Rmfrk/s1600/image1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-xuuIdr-MdDk/Up4XXSkNF-I/AAAAAAAAANk/aTDA19Rmfrk/s1600/image1.png" /></a></div><b><br /></b><b>Dictionaries for linking Text, Entities, and Ideas</b><br /><br /><i>What is it</i>: We created a large database of 175 million strings paired with 7.5 million concepts, annotated with counts, all mined from Wikipedia. 
The concepts in this case are Wikipedia articles, and the strings are anchor text spans that link to the concepts in question.<br /><i>Where can I find it</i>: <a href="http://nlp.stanford.edu/pubs/crosswikis-data.tar.bz2">http://nlp.stanford.edu/pubs/crosswikis-data.tar.bz2</a><br /><i>I want to know more</i>: A description of the data, several examples, and ideas for uses of it can be found in a <a href="http://googleresearch.blogspot.com/2012/05/from-words-to-concepts-and-back.html">blog post</a> or in the <a href="http://nlp.stanford.edu/pubs/crosswikis.pdf">associated paper</a>.<br /><br /><b>Other datasets</b><br /><br />Not every release had its own blog post describing it. Here are some other releases:<br /><ul><li>Automatic <a href="http://lemurproject.org/clueweb09/TREC%20Freebase%20Queries,%20v1.1.zip">Freebase annotations</a> of TREC’s Million Query and Web track queries.</li><li>A <a href="https://developers.google.com/freebase/data#freebase-deleted-triples">set of Freebase triples</a> that have been deleted from Freebase over time -- 63 million of them.</li></ul>]]></content:encoded>
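As a small illustration of how an anchor-text dictionary like the one above can be put to work, the sketch below builds a most-likely-concept lookup from (string, concept, count) triples, the simplest possible mention disambiguator. The triple layout and the counts are hypothetical; consult the release itself for the actual file format.

```python
# Toy most-likely-concept lookup from (string, concept, count) triples.
# The triples and counts are made up; the real release's format differs.
from collections import defaultdict

triples = [
    ('jaguar', 'Jaguar_Cars', 12000),
    ('jaguar', 'Jaguar', 9500),             # the animal
    ('jaguar', 'Jacksonville_Jaguars', 800),
]

best, best_count = {}, defaultdict(int)
for string, concept, count in triples:
    if count > best_count[string]:
        best_count[string] = count
        best[string] = concept              # keep the highest-count concept

print(best['jaguar'])   # -> 'Jaguar_Cars' under these toy counts
```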
			<wfw:commentRss>https://googledata.org/google-research/free-language-lessons-for-computers/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
<enclosure url="" length="" type="" />
		</item>
		<item>
		<title>Released Data Set: Features Extracted From YouTube Videos for Multiview Learning</title>
		<link>https://googledata.org/youtube/released-data-set-features-extracted-from-youtube-videos-for-multiview-learning/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=released-data-set-features-extracted-from-youtube-videos-for-multiview-learning</link>
		<comments>https://googledata.org/youtube/released-data-set-features-extracted-from-youtube-videos-for-multiview-learning/#comments</comments>
		<pubDate>Tue, 26 Nov 2013 17:00:00 +0000</pubDate>
		<dc:creator><![CDATA[Research @ Google]]></dc:creator>
				<category><![CDATA[Google Research]]></category>
		<category><![CDATA[Youtube]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=8b3750301df55087de49db578f2a0a90</guid>
		<description><![CDATA[<span>Posted by Omid Madani, Senior Software Engineer</span><br /><span><br /></span><div><span>&#8220;If it looks like a duck, swims like a duck, and quacks like a duck, then it <i><u>probably</u></i> is a duck.&#8221;</span></div><div><span><a href="http://en.wikipedia.org/wiki/Duck_test">The &#8220;duck test&#8221;</a>.</span></div><br />Performance of machine learning algorithms, supervised or unsupervised, is often significantly enhanced when a variety of feature families, or <i>multiple views</i> of the data, are available. For example, in the case of web pages, one feature family can be based on the words appearing on the page, and another can be based on the URLs and related connectivity properties. Similarly, videos contain both audio and visual signals, and each modality can in turn be analyzed in a variety of ways. For instance, the visual stream can be analyzed based on the color and edge distribution, texture, motion, object types, and so on. YouTube videos are also associated with textual information (title, tags, comments, etc.). Each feature family complements others in providing predictive signals to accomplish a prediction or classification task, for example, in automatically classifying videos into subject areas such as sports, music, comedy, games, and so on.<br /><br />We have released a dataset of over 100k feature vectors extracted from public YouTube videos. These videos are labeled by one of 30 classes, each class corresponding to a video game (with some amount of class noise): each video shows gameplay of a video game, for example for teaching purposes. Each instance (video) is described by three feature families (textual, visual, and auditory), and each family is broken into subfamilies yielding up to 13 feature types per instance. Neither video identities nor class identities are released.<br /><br />We hope that this dataset will be valuable for research on a variety of multiview-related machine learning topics, including multiview clustering, co-training, active learning, classifier fusion and ensembles.<br /><br />The data and more information can be obtained from the <a href="http://archive.ics.uci.edu/ml/datasets/YouTube+Multiview+Video+Games+Dataset">UCI machine learning repository (multiview video dataset)</a>, or from <a href="https://code.google.com/p/multiview-video-features-data/">here</a>.]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by Omid Madani, Senior Software Engineer</span><br /><span class="byline-author"><br /></span><div style="text-align: center;"><span style="font-size: large;">“If it looks like a duck, swims like a duck, and quacks like a duck, then it <i><u>probably</u></i> is a duck.”</span></div><div style="text-align: center;"><span style="font-size: large;"><a href="http://en.wikipedia.org/wiki/Duck_test">The “duck test”</a>.</span></div><br />Performance of machine learning algorithms, supervised or unsupervised, is often significantly enhanced when a variety of feature families, or <i>multiple views</i> of the data, are available.  For example, in the case of web pages, one feature family can be based on the words appearing on the page, and another can be based on the URLs and related connectivity properties. Similarly, videos contain both audio and visual signals where in turn each modality is analyzed in a variety of ways.  For instance, the visual stream can be analyzed based on the color and edge distribution, texture, motion, object types, and so on.  YouTube videos are also associated with textual information (title, tags, comments, etc.).    Each feature family complements others in providing predictive signals to accomplish a prediction or classification task, for example, in automatically classifying videos into subject areas such as sports, music, comedy, games, and so on.<br /><br />We have released a dataset of over 100k feature vectors extracted from public YouTube videos. These videos are labeled by one of 30 classes, each class corresponding to a video game (with some amount of class noise): each video shows a gameplay of a video game, for  teaching purposes for example.  Each instance (video) is described by three feature families (textual, visual, and auditory), and each family is broken into subfamilies yielding up to 13 feature types per instance. Neither video identities nor class identities are released.<br /><br />We hope that this dataset will be valuable for research on a variety of multiview related machine learning topics, including multiview clustering, co-training, active learning, classifier fusion and ensembles.<br /><br />The data and more information can be obtained from the <a href="http://archive.ics.uci.edu/ml/datasets/YouTube+Multiview+Video+Games+Dataset">UCI machine learning repository (multiview video dataset)</a>, or from <a href="https://code.google.com/p/multiview-video-features-data/">here</a>.]]></content:encoded>
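Classifier fusion, one of the topics listed above, is easy to sketch: train a separate classifier on each feature family and average the predicted class probabilities ("late fusion"). The snippet below uses scikit-learn with random stand-in features rather than the released YouTube data, so treat the names and numbers as placeholders.

```python
# Late-fusion sketch: one classifier per feature view, averaged probabilities.
# Features and labels are random stand-ins, not the released dataset.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.RandomState(0)
n, n_classes = 200, 3
views = {'text': rng.randn(n, 20),      # toy "textual" features
         'visual': rng.randn(n, 30),    # toy "visual" features
         'audio': rng.randn(n, 10)}     # toy "auditory" features
y = rng.randint(n_classes, size=n)

# Train one classifier per feature family (view).
models = {name: LogisticRegression(max_iter=1000).fit(X, y)
          for name, X in views.items()}

# Average per-view class probabilities, then take the argmax.
proba = np.mean([models[name].predict_proba(views[name]) for name in views],
                axis=0)
fused_pred = proba.argmax(axis=1)
print('fused predictions for first 5 videos:', fused_pred[:5])
```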
			<wfw:commentRss>https://googledata.org/youtube/released-data-set-features-extracted-from-youtube-videos-for-multiview-learning/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
<enclosure url="" length="" type="" />
		</item>
		<item>
		<title>The MiniZinc Challenge</title>
		<link>https://googledata.org/google-research/the-minizinc-challenge/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=the-minizinc-challenge</link>
		<comments>https://googledata.org/google-research/the-minizinc-challenge/#comments</comments>
		<pubDate>Mon, 25 Nov 2013 17:00:00 +0000</pubDate>
		<dc:creator><![CDATA[Research @ Google]]></dc:creator>
				<category><![CDATA[Google Research]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=afaa6171dc15d56fd2fb7b81ebfeb698</guid>
		<description><![CDATA[<span>Posted by Jon Orwant, Engineering Manager</span><br /><br /><i>Constraint Programming</i> is a style of problem solving where the properties of a solution are first identified, and a large space of solutions is searched through to find the best. Good constraint programming depends on modeling the problem well, and on searching effectively. Poor representations or slow search techniques can make the difference between finding a good solution and finding no solution at all.<br /><br />One example of constraint programming is <i>scheduling</i>: for instance, determining a schedule for a conference where there are 30 talks (that&#8217;s one constraint), only eight rooms to hold them in (that&#8217;s another constraint), and some talks can&#8217;t overlap (more constraints).<br /><br />Every year, some of the world&#8217;s top constraint programming researchers compete for medals in the MiniZinc challenge.  Problems range from scheduling to vehicle routing to program verification and frequency allocation.<br /><br />Google&#8217;s open source solver, <a href="https://code.google.com/p/or-tools/">or-tools</a>, took two gold medals and two silver medals. The gold medals were in parallel and portfolio search, and the silver medals were in fixed and free search. Google&#8217;s success was due in part to integrating a <a href="http://en.wikipedia.org/wiki/Boolean_satisfiability_problem">SAT</a> solver to handle boolean constraints, and a new presolve phase inherited from <a href="http://en.wikipedia.org/wiki/Integer_programming">integer programming</a>.<br /><br />Laurent Perron, a member of Google&#8217;s Optimization team and a lead contributor to or-tools, noted that every year brings fresh techniques to the competition: &#8220;One of the big surprises this year was the success of lazy-clause generation, which combines techniques from the SAT and constraint programming communities.&#8221;<br /><br />If you&#8217;re interested in learning more about constraint programming, you can start at the <a href="http://en.wikipedia.org/wiki/Constraint_programming">Wikipedia page</a>, or have a look at <a href="https://code.google.com/p/or-tools/">or-tools</a>.<br /><br />The full list of winners is available <a href="http://www.minizinc.org/challenge2013/results2013.html">here</a>.]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by Jon Orwant, Engineering Manager</span><br /><br /><i>Constraint Programming</i> is a style of problem solving where the properties of a solution are first identified, and a large space of solutions is searched through to find the best. Good constraint programming depends on modeling the problem well, and on searching effectively. Poor representations or slow search techniques can make the difference between finding a good solution and finding no solution at all.<br /><br />One example of constraint programming is <i>scheduling</i>: for instance, determining a schedule for a conference where there are 30 talks (that’s one constraint), only eight rooms to hold them in (that’s another constraint), and some talks can’t overlap (more constraints).<br /><br />Every year, some of the world’s top constraint programming researchers compete for medals in the MiniZinc challenge.  Problems range from scheduling to vehicle routing to program verification and frequency allocation.<br /><br />Google’s open source solver, <a href="https://code.google.com/p/or-tools/">or-tools</a>, took two gold medals and two silver medals. The gold medals were in parallel and portfolio search, and the silver medals were in fixed and free search. Google’s success was due in part to integrating a <a href="http://en.wikipedia.org/wiki/Boolean_satisfiability_problem">SAT</a> solver to handle boolean constraints, and a new presolve phase inherited from <a href="http://en.wikipedia.org/wiki/Integer_programming">integer programming</a>.<br /><br />Laurent Perron, a member of Google’s Optimization team and a lead contributor to or-tools, noted that every year brings fresh techniques to the competition: “One of the big surprises this year was the success of lazy-clause generation, which combines techniques from the SAT and constraint programming communities.”<br /><br />If you’re interested in learning more about constraint programming, you can start at the <a href="http://en.wikipedia.org/wiki/Linear_programming">wikipedia page</a>, or have a look at <a href="https://code.google.com/p/or-tools/">or-tools</a>.<br /><br />The full list of winners is available <a href="http://www.minizinc.org/challenge2013/results2013.html">here</a>.]]></content:encoded>
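The conference example above maps directly onto a constraint model: give each talk a room variable and a time-slot variable, forbid any two talks from sharing both, and keep conflicting talks in different slots. Here is a minimal sketch using or-tools' current CP-SAT Python interface; note that this API postdates the 2013 competition entry, and the conflict pairs below are hypothetical.

```python
# Minimal conference-scheduling sketch with or-tools CP-SAT (a newer API
# than the 2013 competition solver). 30 talks, 8 rooms, 4 time slots.
from ortools.sat.python import cp_model

num_talks, num_rooms, num_slots = 30, 8, 4
model = cp_model.CpModel()

room = [model.NewIntVar(0, num_rooms - 1, f'room_{t}') for t in range(num_talks)]
slot = [model.NewIntVar(0, num_slots - 1, f'slot_{t}') for t in range(num_talks)]

# No two talks may share both a room and a slot: encode each (room, slot)
# pair as one integer and require all of them to be distinct.
cells = []
for t in range(num_talks):
    cell = model.NewIntVar(0, num_rooms * num_slots - 1, f'cell_{t}')
    model.Add(cell == room[t] * num_slots + slot[t])
    cells.append(cell)
model.AddAllDifferent(cells)

# Hypothetical pairs of talks that can't overlap (e.g., a shared speaker).
for a, b in [(0, 1), (2, 3)]:
    model.Add(slot[a] != slot[b])

solver = cp_model.CpSolver()
if solver.Solve(model) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    print('talk 0 -> room', solver.Value(room[0]), ', slot', solver.Value(slot[0]))
```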
			<wfw:commentRss>https://googledata.org/google-research/the-minizinc-challenge/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
<enclosure url="" length="" type="" />
		</item>
		<item>
		<title>New Research Challenges in Language Understanding</title>
		<link>https://googledata.org/google-research/new-research-challenges-in-language-understanding/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=new-research-challenges-in-language-understanding</link>
		<comments>https://googledata.org/google-research/new-research-challenges-in-language-understanding/#comments</comments>
		<pubDate>Fri, 22 Nov 2013 17:00:00 +0000</pubDate>
		<dc:creator><![CDATA[Research @ Google]]></dc:creator>
				<category><![CDATA[Google Research]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=c1c7a3981b37408a4a9f667e2cd2956b</guid>
		<description><![CDATA[<span>Posted by Maggie Johnson, Director of Education and University Relations</span><br /><br />We held the first global Language Understanding and Knowledge Discovery Focused Faculty Workshop in Nanjing, China, on November 14-15, 2013. Thirty-four faculty members from 10 countries and regions across APAC, EMEA and the US joined the workshop. Googlers from Research, Engineering and University Relations/University Programs also attended the event. <br /><br />The 2-day workshop included keynote talks, panel discussions and break-out sessions [<a href="https://sites.google.com/site/2013kgworkshop/">agenda</a>]. It was an engaging and productive workshop, and we saw lots of positive interactions among the attendees. The workshop encouraged communication between Google and faculty around the world working in these areas. <br /><br />Research in text mining continues to explore open questions relating to entity annotation, relation extraction, and more. The workshop&#8217;s goal was to brainstorm and discuss relevant topics for further investigation in these areas. Ultimately, this research should help provide users with search results that are much more relevant to them. <br /><br />At the end of the workshop, participants identified four topics representing challenges and opportunities for further exploration in Language Understanding and Knowledge Discovery:<br /><br /><ul><li>Knowledge representation, integration, and maintenance</li><li>Efficient and scalable infrastructure and algorithms for inferencing</li><li>Presentation and explanation of knowledge</li><li>Multilingual computation</li></ul><br />Going forward, Google will be collaborating with academic researchers on a position paper related to these topics. We also welcome faculty interested in contributing to further research in this area to submit a proposal to the <a href="http://research.google.com/university/relations/research_awards.html">Faculty Research Awards program</a>. Faculty Research Awards are one-year grants to researchers working in areas of mutual interest. <br /><br />The faculty attendees responded positively to the focused workshop format, as it allowed time to go in depth into important and timely research questions. Encouraged by their feedback, we are considering similar workshops on other topics in the future.]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by Maggie Johnson, Director of Education and University Relations</span><br /><br />We held the first global Language Understanding and Knowledge Discovery Focused Faculty Workshop in Nanjing, China, on November 14-15, 2013. Thirty-four faculty members joined the workshop arriving from 10 countries and regions across APAC, EMEA and the US. Googlers from Research, Engineering and University Relations/University Programs also attended the event. <br /><br />The 2-day workshop included keynote talks, panel discussions and break-out sessions [<a href="https://sites.google.com/site/2013kgworkshop/">agenda</a>]. It was an engaging and productive workshop, and we saw lots of positive interactions among the attendees. The workshop encouraged communication between Google and faculty around the world working in these areas. <br /><br />Research in text mining continues to explore open questions relating to entity annotation, relation extraction, and more. The workshop’s goal was to brainstorm and discuss relevant topics to further investigate these areas. Ultimately, this research should help provide users search results that are much more relevant to them. <br /><br />At the end of the workshop, participants identified four topics representing challenges and opportunities for further exploration in Language Understanding and Knowledge Discovery:<br /><br /><ul><li>Knowledge representation, integration, and maintenance</li><li>Efficient and scalable infrastructure and algorithms for inferencing</li><li>Presentation and explanation of knowledge</li><li>Multilingual computation</li></ul><br />Going forward, Google will be collaborating with academic researchers on a position paper related to these topics. We also welcome faculty interested in contributing to further research in this area to submit a proposal to the <a href="http://research.google.com/university/relations/research_awards.html">Faculty Research Awards program</a>.  Faculty Research Awards are one-year grants to researchers working in areas of mutual interest. <br /><br />The faculty attendees responded positively to the focused workshop format, as it allowed time to go in depth into important and timely research questions. Encouraged by their feedback, we are considering similar workshops on other topics in the future. ]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-research/new-research-challenges-in-language-understanding/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
<enclosure url="" length="" type="" />
		</item>
		<item>
		<title>Unique Strategies for Scaling Teacher Professional Development</title>
		<link>https://googledata.org/google-research/unique-strategies-for-scaling-teacher-professional-development/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=unique-strategies-for-scaling-teacher-professional-development</link>
		<comments>https://googledata.org/google-research/unique-strategies-for-scaling-teacher-professional-development/#comments</comments>
		<pubDate>Tue, 19 Nov 2013 17:00:00 +0000</pubDate>
		<dc:creator><![CDATA[Research @ Google]]></dc:creator>
				<category><![CDATA[Google Research]]></category>
		<category><![CDATA[education]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=80d47c295172d756d96c4446e6c99cbf</guid>
		<description><![CDATA[<span>Posted by Candice Reimers, Senior Program Manager</span><br /><br /><a href="http://www.ccsso.org/Documents/2009/Effects_of_Teacher_Professional_2009.pdf">Research shows</a> that professional development for educators has a direct, positive impact on students, so it&#8217;s no wonder that institutions are eager to explore creative ways to enhance professional development for K-12 teachers. Open source MOOC platforms, such as <a href="https://code.google.com/p/course-builder/">Course Builder</a>, offer the flexibility to extend the reach of standard curricula; recently, several courses have launched that demonstrate new and creative applications of MOOCs. With their wide reach, participant engagement, and rich content, MOOCs that offer professional development opportunities for teachers bring flexibility and accessibility to an important area.<br /><br />This summer, the ScratchEd team out of Harvard University launched the <a href="https://creative-computing.appspot.com/preview">Creative Computing</a> MOOC, a 6-week, self-paced workshop focused on building computational thinking skills in the classroom. As a MOOC, the course had 2600 participants, who created more than 4700 Scratch projects and engaged in 3500 forum discussions, compared to the &#8220;in-person&#8221; class held last year, which reached only 50 educators. <br /><br />Other creative uses of Course Builder for educator professional development come from <a href="http://www.nationalgeographic.com/">National Geographic</a> and <a href="http://www.learner.org/">Annenberg Learner</a>, who joined forces to develop <a href="http://www.eeipd.org/course">Water: The Essential Resource</a>, a course built around California&#8217;s Education and Environment Initiative. <a href="https://www.fi.ncsu.edu/">The Friday Institute</a>&#8217;s MOOC, <a href="http://www.mooc-ed.org/">Digital Learning Transitions</a>, focused on the benefits of utilizing educational technology and reached educators across 50 states and 68 countries worldwide. The course design included embedded peer support, project-based learning, and case studies; a <a href="http://all4ed.org/wp-content/uploads/2013/09/MOOC-Ed.pdf">post-course survey</a> showed an overwhelming majority of responders &#8220;were able to personalize their own learning experiences&#8221; in an &#8220;engaging, easy to navigate&#8221; curriculum and greatly appreciated the 24/7 access to materials. <br /><br />In addition to participant surveys, course authors using the Course Builder platform are able to conduct deeper analysis via web analytics and <a href="http://googleresearch.blogspot.com/2013/10/opening-up-course-builder-data.html">course data</a> to assess course effectiveness and make improvements for future courses.<br /><br />New opportunities to experience professional development MOOCs are rapidly emerging: the University of Adelaide recently announced their <a href="http://google-au.blogspot.com/2013/11/google-and-university-of-adelaide-team.html">Digital Technology course</a> to provide professional development for primary school teachers on the <a href="http://www.acara.edu.au/technologies.html">new Australian curriculum</a>, the Google in Education team just launched <a href="https://educourses.withgoogle.com/course">a suite of courses</a> for teachers using Google technologies, and the Friday Institute <a href="https://courses.mooc-ed.org/EquipartitioningF13/preview">course</a> aligned with the U.S.-based <a href="http://www.corestandards.org/">Common Core State Standards</a> is now available.<br /><br />We&#8217;re excited about the innovative approaches underway and the positive impact they can have for students and teachers around the world. We also look forward to seeing creative applications of MOOC platforms in new, uncharted territory.]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by Candice Reimers, Senior Program Manager</span><br /><br /><a href="http://www.ccsso.org/Documents/2009/Effects_of_Teacher_Professional_2009.pdf">Research shows</a> that professional development for educators has a direct, positive impact on students, so it’s no wonder that institutions are eager to explore creative ways to enhance professional development for K-12 teachers. Open source MOOC platforms, such as <a href="https://code.google.com/p/course-builder/">Course Builder</a>, offer the flexibility to extend the reach of standard curriculum; recently, several courses have launched that demonstrate new and creative applications of MOOCs. With their wide reach, participant engagement, and rich content, MOOCs that offer professional development opportunities for teachers bring flexibility and accessibility to an important area.<br /><br />This summer, the ScratchEd team out of Harvard University launched the <a href="https://creative-computing.appspot.com/preview">Creative Computing</a> MOOC, a 6 week self paced workshop focused on building computational thinking skills in the classroom. As a MOOC, the course had 2600 participants, who created more than 4700 Scratch projects, and engaged in 3500 forum discussions, compared to the “in-person” class held last year, which reached only 50 educators. <br /><br />Other creative uses of Course Builder for educator professional development come from <a href="http://www.nationalgeographic.com/">National Geographic</a> and <a href="http://www.learner.org/">Annenberg Learner</a> who joined forces to develop <a href="http://www.eeipd.org/course">Water: The Essential Resource</a>, a course developed around California’s Education and Environment Initiative. <a href="https://www.fi.ncsu.edu/">The Friday Institute</a>’s MOOC, <a href="http://www.mooc-ed.org/">Digital Learning Transitions</a>, focused on the benefits of utilizing educational technology and reached educators across 50 states and 68 countries worldwide. The course design included embedded peer support, project-based learning, and case studies; a <a href="http://all4ed.org/wp-content/uploads/2013/09/MOOC-Ed.pdf">post-course survey</a> showed an overwhelming majority of responders “were able to personalize their own learning experiences” in an “engaging, easy to navigate” curriculum and greatly appreciated the 24/7 access to materials. <br /><br />In addition to participant surveys, course authors using the Course Builder platform are able to conduct deeper analysis via web analytics and <a href="http://googleresearch.blogspot.com/2013/10/opening-up-course-builder-data.html">course data</a> to assess course effectiveness and make improvements for future courses.<br /><br />New opportunities to experience professional development MOOCs are rapidly emerging; the University of Adelaide recently announced their <a href="http://google-au.blogspot.com/2013/11/google-and-university-of-adelaide-team.html">Digital Technology course</a> to provide professional development for primary school teachers on the <a href="http://www.acara.edu.au/technologies.html">new Australian curriculum</a>, the Google in Education team just launched <a href="https://educourses.withgoogle.com/course">a suite of courses</a> for teachers using Google technologies, and the Friday Institute <a href="https://courses.mooc-ed.org/EquipartitioningF13/preview">course</a> that aligns with the U.S. 
based  <a href="http://www.corestandards.org/">Common Core State Standards</a> is now available.<br /><br />We’re excited about the innovative approaches underway and the positive impact it can have for students and teachers around the world.   We also look forward to seeing new, creative applications of MOOC platforms in new, unchartered territory. ]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-research/unique-strategies-for-scaling-teacher-professional-development/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
<enclosure url="" length="" type="" />
		</item>
		<item>
		<title>Moore’s Law Part 4: Moore&#8217;s Law in other domains</title>
		<link>https://googledata.org/google-research/moores-law-part-4-moores-law-in-other-domains/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=moores-law-part-4-moores-law-in-other-domains</link>
		<comments>https://googledata.org/google-research/moores-law-part-4-moores-law-in-other-domains/#comments</comments>
		<pubDate>Fri, 15 Nov 2013 20:15:00 +0000</pubDate>
		<dc:creator><![CDATA[Research @ Google]]></dc:creator>
				<category><![CDATA[Google Research]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=87c6a9cd38987fa1c7e02f2d9787a3ef</guid>
		<description><![CDATA[<i>This is the last entry of a series focused on Moore&#8217;s Law and its implications moving forward, edited from a white paper on Moore&#8217;s Law written by Google University Relations Manager Michel Benard. This series quotes major sources about Moore&#8217;s Law and explores how they believe Moore&#8217;s Law will likely continue over the course of the next several years. We also explore whether there are fields other than digital electronics that either have an emerging Moore's Law of their own, or the promise of one that would drive their future performance.</i><br /><br />--<br /><br />The quest <a href="http://en.wikipedia.org/wiki/Moore%27s_law">for Moore&#8217;s Law</a> and its potential impact in other disciplines is a journey the technology industry is just starting: crossing the Rubicon from the semiconductor industry into other, less explored fields, but with the particular mindset created by Moore&#8217;s Law. Our goal is to explore whether Moore&#8217;s Law opportunities are emerging in other disciplines, and what their potential impact might be. To that end, we interviewed several professors and researchers and asked them whether they could see emerging &#8216;Moore&#8217;s Laws&#8217; in their discipline. Listed below are some highlights of those discussions, ranging from CS+ to potential applications in the energy sector: <br /><br /><b>Sensors and Data Acquisition</b><br /><b><a href="http://www.edparsons.com/">Ed Parsons</a>, Google Geospatial Technologist</b><br /><i>The More than Moore discussion can be extended beyond the main chip, to the board the chip sits on or to the device a user is carrying. Greater sensor capabilities (for measuring pressure, electromagnetic fields and other local conditions) allow sensors to be included in smartphones, glasses, or other devices to perform local data acquisition. This trend is strong, and should allow future devices benefiting from Moore&#8217;s Law to receive enough data to perform more complex applications. </i><br /><i><br /></i><i><a href="http://en.wikipedia.org/wiki/Metcalfe%27s_law">Metcalfe&#8217;s Law</a> states that the value of a telecommunication network is proportional to the square of the number of connected nodes in the system. This law can be used in parallel with Moore&#8217;s Law to evaluate the value of the <a href="http://en.wikipedia.org/wiki/Internet_of_Things">Internet of Things</a>. The network itself can be seen as composed of layers: at the user&#8217;s local level (to capture data related to the user&#8217;s body, or to immediately accessible objects), locally around the user (such as data from the same street as the user), and finally globally (data from the global internet). The extrapolation made earlier in this blog (several TB available in flash memory) will lead to the ability to construct, exchange and download/upload entire contexts for a given situation or application, and to use these contexts with very little or even no network activity.</i><br /><br /><b>Future of Moore&#8217;s Law and its impact on Physics</b><br /><b><a href="http://openlab.web.cern.ch/about/people/sverre-jarp">Sverre Jarp</a>, CERN</b><br /><i><a href="http://home.web.cern.ch/">CERN</a> and its experiments with the Large Electron-Positron Collider (<a href="http://home.web.cern.ch/about/accelerators/large-electron-positron-collider">LEP</a>) and the Large Hadron Collider (LHC) generate data on the order of a petabyte per year; this data has to be filtered, processed and analyzed in order to find meaningful physics events leading to new discoveries. In this context Moore&#8217;s Law has been particularly helpful in allowing computing power, storage and networking capabilities at CERN and at other High Energy Physics (<a href="http://en.wikipedia.org/wiki/Particle_physics">HEP</a>) centers to scale up regularly. Several generations of hardware and software have been exhausted during the journey from mainframes to today&#8217;s clusters.</i><br /><i><br /></i><i>CERN has a long tradition of collaboration with chip manufacturers, hardware and software vendors to understand and predict the next trends in the computing evolution curve. Recent analysis indicates that Moore&#8217;s Law will likely continue over the next decade. The statement of &#8216;several TB of flash memory availability by 2025&#8217; may even be a little conservative according to the most recent analysis.</i><br /><br /><b>Big Data Visualizations</b><br /><b><a href="http://ella.slis.indiana.edu/~katy/">Katy B&#246;rner</a>, Indiana University</b><br /><i>Thanks to Moore&#8217;s Law, the amount of data available for any given phenomenon, whether sensed or simulated, has grown by several orders of magnitude over the past decades. Intelligent sampling can be used to filter out the most relevant bits of information, and is practiced in Physics, Astronomy, Medicine and other sciences. Subsequently, data needs to be analyzed and visualized to identify meaningful trends and phenomena, and to communicate them to others.</i><br /><i><br /></i><i>While most people learn in school how to read charts and maps, many never learn how to read a network layout&#8212;data literacy remains a challenge. The <a href="http://ivmooc.cns.iu.edu/">Information Visualization Massive Open Online Course (MOOC)</a> at Indiana University teaches students from more than 100 countries not only how to read, but also how to design, meaningful network, topical, geospatial, and temporal visualizations. Using the tools introduced in this free course, anyone can analyze, visualize, and navigate complex data sets to understand patterns and trends.</i><br /><br /><b>Candidate for Moore&#8217;s Law in Energy </b><br /><b><a href="http://people.epfl.ch/francesco.stellacci">Professor Francesco Stellacci</a>, EPFL</b><br /><i>It is currently hard to see a &#8220;Moore&#8217;s Law&#8221; applying to any candidate in energy technology. Nuclear fusion could hold some positive surprises, if several significant breakthroughs are made in the process of creating usable energy with this technique. For any other technology the growth will be slower. The best solar cells today have an efficiency of about 30%, which could of course scale higher (though by no more than about a factor of 3), and cost could be driven down by an order of magnitude. Best estimates show, however, a combined performance improvement of a factor of 30 over many years.</i><br /><br /><b>Further Discussion of Moore&#8217;s Law in Energy</b><br /><b><a href="http://www.googleventures.com/team/ross-koningstein">Ross Koningstein</a>, Google Director Emeritus</b><br /><i>As of today there is no obvious Moore&#8217;s Law in the energy sector that could decrease some major costs by 50% every 18 months. However, material properties at the nanoscale and chemical processes such as <a href="http://en.wikipedia.org/wiki/Catalysis">catalysis</a> are being investigated and could lead to promising results. The targeted applications are <a href="http://en.wikipedia.org/wiki/Hydrocarbon">hydrocarbon</a> creation at scale and improvement of <a href="http://en.wikipedia.org/wiki/Petroleum_refining_processes">oil refinery processes</a>, where breakthroughs in micro/nano-scale catalysts are being pursued. Hydrocarbons are much more compatible at scale with the existing automotive/aviation and natural gas distribution systems. Here in California, <a href="http://www.googleventures.com/">Google Ventures</a> has invested in <a href="http://www.coolplanet.com/">Cool Planet Energy Systems</a>, a company with neat technology that can convert biomass to gasoline/jet fuel/diesel with impressive efficiency.</i><br /><i><br /></i><i>One of the challenges is the ability to run many experiments at low cost per experiment, instead of only a few expensive experiments per year. Discoveries are likely to happen faster if more experiments are conducted. This requires heavier investment, which is difficult to achieve in slim-margin businesses. The nurturing of disruptive businesses is therefore likely to come from new players, besides those existing players that decide to fund significant new investments.</i><br /><br />Of course, these discussions could be opened for many other sectors. The opportunities for more discourse on the impact and future of Moore&#8217;s Law on CS and other disciplines are abundant, and can be continued with your comments on the <a href="https://plus.google.com/117790530324740296539/posts/LfbZPimWbdi">Research at Google Google+ page</a>. Please join, and share your thoughts.]]></description>
				<content:encoded><![CDATA[<i>This is the last entry of a series focused on Moore’s Law and its implications moving forward, edited from a white paper on Moore’s Law written by Google University Relations Manager Michel Benard. This series quotes major sources about Moore’s Law and explores how they believe Moore’s Law will likely continue over the course of the next several years. We also explore whether there are fields other than digital electronics that either have an emerging Moore's Law of their own, or the promise of one that would drive their future performance.</i><br /><br />--<br /><br />The quest <a href="http://en.wikipedia.org/wiki/Moore%27s_law">for Moore’s Law</a> and its potential impact in other disciplines is a journey the technology industry is just starting: crossing the Rubicon from the semiconductor industry into other, less explored fields, but with the particular mindset created by Moore’s Law. Our goal is to explore whether Moore’s Law opportunities are emerging in other disciplines, and what their potential impact might be. To that end, we interviewed several professors and researchers and asked them whether they could see emerging ‘Moore’s Laws’ in their discipline. Listed below are some highlights of those discussions, ranging from CS+ to potential applications in the energy sector: <br /><br /><b>Sensors and Data Acquisition</b><br /><b><a href="http://www.edparsons.com/">Ed Parsons</a>, Google Geospatial Technologist</b><br /><i>The More than Moore discussion can be extended beyond the main chip, to the board the chip sits on or to the device a user is carrying. Greater sensor capabilities (for measuring pressure, electromagnetic fields and other local conditions) allow sensors to be included in smartphones, glasses, or other devices to perform local data acquisition. This trend is strong, and should allow future devices benefiting from Moore’s Law to receive enough data to perform more complex applications. </i><br /><i><br /></i><i><a href="http://en.wikipedia.org/wiki/Metcalfe%27s_law">Metcalfe’s Law</a> states that the value of a telecommunication network is proportional to the square of the number of connected nodes in the system. This law can be used in parallel with Moore’s Law to evaluate the value of the <a href="http://en.wikipedia.org/wiki/Internet_of_Things">Internet of Things</a>. The network itself can be seen as composed of layers: at the user’s local level (to capture data related to the user’s body, or to immediately accessible objects), locally around the user (such as data from the same street as the user), and finally globally (data from the global internet). The extrapolation made earlier in this blog (several TB available in flash memory) will lead to the ability to construct, exchange and download/upload entire contexts for a given situation or application, and to use these contexts with very little or even no network activity.</i><br /><br /><b>Future of Moore’s Law and its impact on Physics</b><br /><b><a href="http://openlab.web.cern.ch/about/people/sverre-jarp">Sverre Jarp</a>, CERN</b><br /><i><a href="http://home.web.cern.ch/">CERN</a> and its experiments with the Large Electron-Positron Collider (<a href="http://home.web.cern.ch/about/accelerators/large-electron-positron-collider">LEP</a>) and the Large Hadron Collider (LHC) generate data on the order of a petabyte per year; this data has to be filtered, processed and analyzed in order to find meaningful physics events leading to new discoveries. In this context Moore’s Law has been particularly helpful in allowing computing power, storage and networking capabilities at CERN and at other High Energy Physics (<a href="http://en.wikipedia.org/wiki/Particle_physics">HEP</a>) centers to scale up regularly. Several generations of hardware and software have been exhausted during the journey from mainframes to today’s clusters.</i><br /><i><br /></i><i>CERN has a long tradition of collaboration with chip manufacturers, hardware and software vendors to understand and predict the next trends in the computing evolution curve. Recent analysis indicates that Moore’s Law will likely continue over the next decade. The statement of ‘several TB of flash memory availability by 2025’ may even be a little conservative according to the most recent analysis.</i><br /><br /><b>Big Data Visualizations</b><br /><b><a href="http://ella.slis.indiana.edu/~katy/">Katy Börner</a>, Indiana University</b><br /><i>Thanks to Moore’s Law, the amount of data available for any given phenomenon, whether sensed or simulated, has grown by several orders of magnitude over the past decades. Intelligent sampling can be used to filter out the most relevant bits of information, and is practiced in Physics, Astronomy, Medicine and other sciences. Subsequently, data needs to be analyzed and visualized to identify meaningful trends and phenomena, and to communicate them to others.</i><br /><i><br /></i><i>While most people learn in school how to read charts and maps, many never learn how to read a network layout—data literacy remains a challenge. The <a href="http://ivmooc.cns.iu.edu/">Information Visualization Massive Open Online Course (MOOC)</a> at Indiana University teaches students from more than 100 countries not only how to read, but also how to design, meaningful network, topical, geospatial, and temporal visualizations. Using the tools introduced in this free course, anyone can analyze, visualize, and navigate complex data sets to understand patterns and trends.</i><br /><br /><b>Candidate for Moore’s Law in Energy </b><br /><b><a href="http://people.epfl.ch/francesco.stellacci">Professor Francesco Stellacci</a>, EPFL</b><br /><i>It is currently hard to see a “Moore’s Law” applying to any candidate in energy technology. Nuclear fusion could hold some positive surprises, if several significant breakthroughs are made in the process of creating usable energy with this technique. For any other technology the growth will be slower. The best solar cells today have an efficiency of about 30%, which could of course scale higher (though by no more than about a factor of 3), and cost could be driven down by an order of magnitude. Best estimates show, however, a combined performance improvement of a factor of 30 over many years.</i><br /><br /><b>Further Discussion of Moore’s Law in Energy</b><br /><b><a href="http://www.googleventures.com/team/ross-koningstein">Ross Koningstein</a>, Google Director Emeritus</b><br /><i>As of today there is no obvious Moore’s Law in the energy sector that could decrease some major costs by 50% every 18 months. However, material properties at the nanoscale and chemical processes such as <a href="http://en.wikipedia.org/wiki/Catalysis">catalysis</a> are being investigated and could lead to promising results. The targeted applications are <a href="http://en.wikipedia.org/wiki/Hydrocarbon">hydrocarbon</a> creation at scale and improvement of <a href="http://en.wikipedia.org/wiki/Petroleum_refining_processes">oil refinery processes</a>, where breakthroughs in micro/nano-scale catalysts are being pursued. Hydrocarbons are much more compatible at scale with the existing automotive/aviation and natural gas distribution systems. Here in California, <a href="http://www.googleventures.com/">Google Ventures</a> has invested in <a href="http://www.coolplanet.com/">Cool Planet Energy Systems</a>, a company with neat technology that can convert biomass to gasoline/jet fuel/diesel with impressive efficiency.</i><br /><i><br /></i><i>One of the challenges is the ability to run many experiments at low cost per experiment, instead of only a few expensive experiments per year. Discoveries are likely to happen faster if more experiments are conducted. This requires heavier investment, which is difficult to achieve in slim-margin businesses. The nurturing of disruptive businesses is therefore likely to come from new players, besides those existing players that decide to fund significant new investments.</i><br /><br />Of course, these discussions could be opened for many other sectors. The opportunities for more discourse on the impact and future of Moore’s Law on CS and other disciplines are abundant, and can be continued with your comments on the <a href="https://plus.google.com/117790530324740296539/posts/LfbZPimWbdi">Research at Google Google+ page</a>. Please join, and share your thoughts.]]></content:encoded>
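The pairing of Metcalfe's Law with Moore's Law above is easy to make concrete with a back-of-the-envelope calculation. The sketch below is purely illustrative: the 2013 baselines and the 20% annual device growth rate are assumptions, not figures from the interviews.
<pre>
# Back-of-the-envelope: Moore's Law doubles transistors per chip every
# ~2 years, while Metcalfe's Law values a network at ~n^2 for n nodes.
# All baseline numbers are illustrative assumptions.
TRANSISTORS_2013 = 5e9   # assumed transistors per high-end chip, 2013
DEVICES_2013 = 10e9      # assumed connected devices worldwide, 2013
DEVICE_GROWTH = 1.20     # assumed 20% annual growth in devices

for year in range(2013, 2026, 3):
    t = year - 2013
    transistors = TRANSISTORS_2013 * 2 ** (t / 2.0)
    devices = DEVICES_2013 * DEVICE_GROWTH ** t
    value = devices ** 2  # Metcalfe: value grows as the square of nodes
    print('%d: ~%.1e transistors/chip, network value ~%.1e (arbitrary units)'
          % (year, transistors, value))
</pre>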
			<wfw:commentRss>https://googledata.org/google-research/moores-law-part-4-moores-law-in-other-domains/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
<enclosure url="" length="" type="" />
		</item>
		<item>
		<title>The first detailed maps of global forest change</title>
		<link>https://googledata.org/google-research/the-first-detailed-maps-of-global-forest-change/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=the-first-detailed-maps-of-global-forest-change</link>
		<comments>https://googledata.org/google-research/the-first-detailed-maps-of-global-forest-change/#comments</comments>
		<pubDate>Thu, 14 Nov 2013 19:00:00 +0000</pubDate>
		<dc:creator><![CDATA[Research @ Google]]></dc:creator>
				<category><![CDATA[Google Research]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=966a48998b9bcf4dcc6ce4eea7a34031</guid>
		<description><![CDATA[<span>Posted by Matt Hansen and Peter Potapov, University of Maryland; Rebecca Moore and Matt Hancher, Google </span><br /><br />Most people are familiar with exploring images of the Earth&#8217;s surface in Google Maps and Earth, but of course there&#8217;s more to satellite data than just pretty pictures.  By applying algorithms to time-series data it is possible to quantify global land dynamics, such as forest extent and change.  Mapping global forests over time not only enables many science applications, such as climate change and biodiversity modeling efforts, but also informs policy initiatives by providing objective data on forests that are ready for use by governments, civil society and private industry in improving forest management.<br /><br />In a collaboration led by researchers at the University of Maryland, we built a new map product that quantifies global forest extent and change from 2000 to 2012.  This product is the first of its kind, a global 30 meter resolution thematic map of the Earth&#8217;s land surface that offers a consistent characterization of forest change at a resolution that is high enough to be locally relevant as well.  It captures myriad forest dynamics, including fires, tornadoes, disease and logging.<br /><br /><table align="center" cellpadding="0" cellspacing="0"><tbody><tr><td><img height="360" src="https://lh4.googleusercontent.com/hUXExIQe3_jOu6riQ1BwM3rUTM0A9ZOvjxaQbXJ0v2KBau9CYbtb3w6VD2NnqtPZ_EPQ2boSXK6aHUUkma67wzsZ-sWIfLiXtgyNROyTbKTwa-mA-rQwfDHp" width="640"></td></tr><tr><td><span><span><i>Global 30 meter resolution thematic maps of the Earth&#8217;s land surface: Landsat composite reference image (2000), summary map of forest loss, extent and gain (2000-2012), individual maps of forest extent, gain, loss, and loss color-coded by year. </i><a href="https://plus.google.com/photos/+GoogleEarth/albums/5945835727174665985/5945838776208728226?pid=5945838776208728226&#38;oid=106191537604091348855">Click to enlarge</a></span></span></td></tr></tbody></table>The satellite data came from the Enhanced Thematic Mapper Plus (ETM+) sensor onboard the NASA/USGS <a href="http://en.wikipedia.org/wiki/Landsat_7">Landsat 7</a> satellite.  The expertise of NASA and USGS, from satellite design to operations to data management and delivery, is critical to any earth system study using Landsat data.  For this analysis, we processed over 650,000 ETM+ images in order to characterize global forest change.  <br /><br />Key to the study&#8217;s success was the collaboration between remote sensing scientists at the University of Maryland, who developed and tested models for processing and characterizing the Landsat data, and computer scientists at Google, who oversaw the implementation of the final models using Google&#8217;s Earth Engine computation platform.  <a href="https://earthengine.google.org/">Google Earth Engine</a> is a massively parallel technology for high-performance processing of geospatial data, and houses a copy of the entire Landsat image catalog.  For this study, a total of 20 terapixels of Landsat data were processed using one million CPU-core hours on 10,000 computers in parallel,  in order to characterize year 2000 percent tree cover and subsequent tree cover loss and gain through 2012.  What would have taken a single computer 15 years to perform was completed in a matter of days using Google Earth Engine computing. 
<br /><br />Global forest loss totaled 2.3 million square kilometers and gain 0.8 million square kilometers from 2000 to 2012.  Among the many results is the finding that tropical forest loss is increasing with an average of 2,101 additional square kilometers of forest loss per year over the study period.  Despite the reduction in Brazilian deforestation over the study period, increasing rates of forest loss in countries such as Indonesia, Malaysia, Tanzania, Angola, Peru and Paraguay resulted in a statistically significant trend in increasing tropical forest loss.  The maps and statistics from this study fill an information void for many parts of the world.  The results can be used as an initial reference for countries lacking such information, as a spur to capacity building in such countries, and as a basis of comparison in evolving national forest monitoring methods.  Additionally, we hope it will enable further science investigations ranging from the evaluation of the integrity of protected areas to the economic drivers of deforestation to carbon cycle modeling.<br /><br /><table align="center" cellpadding="0" cellspacing="0"><tbody><tr><td><img height="360" src="https://lh4.googleusercontent.com/xg2AoW0MvllJV9fHPGM-T86Eo2ioWN2n5xYJLEutgAeCffke7DSaQRJBRyQm7p7i98sait7cLp9DwpaQ3kLpOk6UemD4vZILiIWkdaCO5EX7sRvje9r6tOPc" width="640"></td></tr><tr><td><span><span><span><i>The Chaco woodlands of Bolivia, Paraguay and Argentina are under intensive pressure from agroindustrial development. Paraguay&#8217;s Chaco woodlands within the western half of the country are experiencing rapid deforestation in the development of cattle ranches. The result is the highest rate of deforestation in the world.</i>&#160;<a href="https://plus.google.com/photos/+GoogleEarth/albums/5945835727174665985/5945868637392432770?pid=5945868637392432770&#38;oid=106191537604091348855">Click to enlarge</a></span></span></span></td></tr></tbody></table>Global map of forest change: <a href="http://earthenginepartners.appspot.com/science-2013-global-forest">http://earthenginepartners.appspot.com/science-2013-global-forest</a><br /><br />If you are curious to learn more, tune in next Monday, November 18 to a live-streamed, online presentation and demonstration by Matt Hansen and colleagues from UMD, Google, USGS, NASA and the Moore Foundation:<br /><br /><div><b><u>Live-stream Presentation: Mapping Global Forest Change</u></b></div><div>Live online presentation and demonstration, followed by Q&#38;A</div><div>Monday, November 18, 2013 at 1pm EST, 10am PST</div><div>Link to live-streamed event:  <a href="http://goo.gl/JbWWTk">http://goo.gl/JbWWTk</a></div><div>Please submit questions here:  <a href="http://goo.gl/rhxK5X">http://goo.gl/rhxK5X</a></div><br />For further results and details of this study, see <a href="http://www.sciencemag.org/lookup/doi/10.1126/science.1244693"><i>High-Resolution Global Maps of 21st-Century Forest Cover Change</i></a> in the November 15th  issue of the journal Science.]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by Matt Hansen and Peter Potapov, University of Maryland; Rebecca Moore and Matt Hancher, Google </span><br /><br />Most people are familiar with exploring images of the Earth’s surface in Google Maps and Earth, but of course there’s more to satellite data than just pretty pictures.  By applying algorithms to time-series data it is possible to quantify global land dynamics, such as forest extent and change.  Mapping global forests over time not only enables many science applications, such as climate change and biodiversity modeling efforts, but also informs policy initiatives by providing objective data on forests that are ready for use by governments, civil society and private industry in improving forest management.<br /><br />In a collaboration led by researchers at the University of Maryland, we built a new map product that quantifies global forest extent and change from 2000 to 2012.  This product is the first of its kind, a global 30 meter resolution thematic map of the Earth’s land surface that offers a consistent characterization of forest change at a resolution that is high enough to be locally relevant as well.  It captures myriad forest dynamics, including fires, tornadoes, disease and logging.<br /><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><img height="360" src="https://lh4.googleusercontent.com/hUXExIQe3_jOu6riQ1BwM3rUTM0A9ZOvjxaQbXJ0v2KBau9CYbtb3w6VD2NnqtPZ_EPQ2boSXK6aHUUkma67wzsZ-sWIfLiXtgyNROyTbKTwa-mA-rQwfDHp" style="margin-left: auto; margin-right: auto;" width="640" /></td></tr><tr><td class="tr-caption" style="text-align: center;"><span id="docs-internal-guid-3b6553b4-5740-a4ed-f963-597b0057b3ad"><span style="background-color: white; color: #222222; font-family: Arial; font-size: 12px; vertical-align: baseline; white-space: pre-wrap;"><i>Global 30 meter resolution thematic maps of the Earth’s land surface: Landsat composite reference image (2000), summary map of forest loss, extent and gain (2000-2012), individual maps of forest extent, gain, loss, and loss color-coded by year. </i><a href="https://plus.google.com/photos/+GoogleEarth/albums/5945835727174665985/5945838776208728226?pid=5945838776208728226&amp;oid=106191537604091348855">Click to enlarge</a></span></span></td></tr></tbody></table>The satellite data came from the Enhanced Thematic Mapper Plus (ETM+) sensor onboard the NASA/USGS <a href="http://en.wikipedia.org/wiki/Landsat_7">Landsat 7</a> satellite.  The expertise of NASA and USGS, from satellite design to operations to data management and delivery, is critical to any earth system study using Landsat data.  For this analysis, we processed over 650,000 ETM+ images in order to characterize global forest change.  <br /><br />Key to the study’s success was the collaboration between remote sensing scientists at the University of Maryland, who developed and tested models for processing and characterizing the Landsat data, and computer scientists at Google, who oversaw the implementation of the final models using Google’s Earth Engine computation platform.  <a href="https://earthengine.google.org/">Google Earth Engine</a> is a massively parallel technology for high-performance processing of geospatial data, and houses a copy of the entire Landsat image catalog.  
For this study, a total of 20 terapixels of Landsat data were processed using one million CPU-core hours on 10,000 computers in parallel,  in order to characterize year 2000 percent tree cover and subsequent tree cover loss and gain through 2012.  What would have taken a single computer 15 years to perform was completed in a matter of days using Google Earth Engine computing. <br /><br />Global forest loss totaled 2.3 million square kilometers and gain 0.8 million square kilometers from 2000 to 2012.  Among the many results is the finding that tropical forest loss is increasing with an average of 2,101 additional square kilometers of forest loss per year over the study period.  Despite the reduction in Brazilian deforestation over the study period, increasing rates of forest loss in countries such as Indonesia, Malaysia, Tanzania, Angola, Peru and Paraguay resulted in a statistically significant trend in increasing tropical forest loss.  The maps and statistics from this study fill an information void for many parts of the world.  The results can be used as an initial reference for countries lacking such information, as a spur to capacity building in such countries, and as a basis of comparison in evolving national forest monitoring methods.  Additionally, we hope it will enable further science investigations ranging from the evaluation of the integrity of protected areas to the economic drivers of deforestation to carbon cycle modeling.<br /><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><img height="360" src="https://lh4.googleusercontent.com/xg2AoW0MvllJV9fHPGM-T86Eo2ioWN2n5xYJLEutgAeCffke7DSaQRJBRyQm7p7i98sait7cLp9DwpaQ3kLpOk6UemD4vZILiIWkdaCO5EX7sRvje9r6tOPc" style="margin-left: auto; margin-right: auto;" width="640" /></td></tr><tr><td class="tr-caption" style="text-align: center;"><span style="font-family: Arial, Helvetica, sans-serif;"><span style="text-align: start;"><span style="font-size: x-small;"><i>The Chaco woodlands of Bolivia, Paraguay and Argentina are under intensive pressure from agroindustrial development. Paraguay’s Chaco woodlands within the western half of the country are experiencing rapid deforestation in the development of cattle ranches. 
The result is the highest rate of deforestation in the world.</i>&nbsp;<a href="https://plus.google.com/photos/+GoogleEarth/albums/5945835727174665985/5945868637392432770?pid=5945868637392432770&amp;oid=106191537604091348855">Click to enlarge</a></span></span></span></td></tr></tbody></table>Global map of forest change: <a href="http://earthenginepartners.appspot.com/science-2013-global-forest">http://earthenginepartners.appspot.com/science-2013-global-forest</a><br /><br />If you are curious to learn more, tune in next Monday, November 18 to a live-streamed, online presentation and demonstration by Matt Hansen and colleagues from UMD, Google, USGS, NASA and the Moore Foundation:<br /><br /><div style="text-align: center;"><b><u>Live-stream Presentation: Mapping Global Forest Change</u></b></div><div style="text-align: center;">Live online presentation and demonstration, followed by Q&amp;A</div><div style="text-align: center;">Monday, November 18, 2013 at 1pm EST, 10am PST</div><div style="text-align: center;">Link to live-streamed event:  <a href="http://goo.gl/JbWWTk">http://goo.gl/JbWWTk</a></div><div style="text-align: center;">Please submit questions here:  <a href="http://goo.gl/rhxK5X">http://goo.gl/rhxK5X</a></div><br />For further results and details of this study, see <a href="http://www.sciencemag.org/lookup/doi/10.1126/science.1244693"><i>High-Resolution Global Maps of 21st-Century Forest Cover Change</i></a> in the November 15th  issue of the journal Science.]]></content:encoded>
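For readers who want to experiment with the data, the published map is distributed as a multi-band image (treecover2000, loss, gain, lossyear) through Earth Engine, and a regional loss total like the Chaco figures above can be sketched in a few lines of the Earth Engine Python API. The asset ID and bounding box below are assumptions for illustration; check the Earth Engine data catalog for the exact ID of the current product version.
<pre>
# Sketch: sum 2000-2012 forest-loss area over a region at 30 m scale.
# The asset ID and region are illustrative assumptions.
import ee

ee.Initialize()

gfc = ee.Image('UMD/hansen/global_forest_change_2013')
loss = gfc.select('loss')  # 1 where forest was lost during 2000-2012

# Rough bounding box over Paraguay's Chaco (illustrative only).
region = ee.Geometry.Rectangle([-62.0, -25.0, -58.0, -19.0])

# Multiply the 0/1 loss mask by per-pixel area and sum it.
loss_area = loss.multiply(ee.Image.pixelArea()).reduceRegion(
    reducer=ee.Reducer.sum(),
    geometry=region,
    scale=30,
    maxPixels=1e13,
)
print('Forest loss (km^2): %.0f' % (loss_area.getInfo()['loss'] / 1e6))
</pre>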
			<wfw:commentRss>https://googledata.org/google-research/the-first-detailed-maps-of-global-forest-change/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
<enclosure url="" length="" type="" />
		</item>
		<item>
		<title>Moore’s Law, Part 3: Possible extrapolations over the next 15 years and impact</title>
		<link>https://googledata.org/google-research/moores-law-part-3-possible-extrapolations-over-the-next-15-years-and-impact/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=moores-law-part-3-possible-extrapolations-over-the-next-15-years-and-impact</link>
		<comments>https://googledata.org/google-research/moores-law-part-3-possible-extrapolations-over-the-next-15-years-and-impact/#comments</comments>
		<pubDate>Wed, 13 Nov 2013 17:30:00 +0000</pubDate>
		<dc:creator><![CDATA[Research @ Google]]></dc:creator>
				<category><![CDATA[Google Research]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=d450758700e137b74896f085bbdaaba3</guid>
		<description><![CDATA[<i>This is the third entry of a series focused on Moore&#8217;s Law and its implications moving forward, edited from a white paper on Moore&#8217;s Law written by Google University Relations Manager Michel Benard. This series quotes major sources about Moore&#8217;s Law and explores how they believe Moore&#8217;s Law will likely continue over the course of the next several years. We also explore whether there are fields other than digital electronics that either have an emerging Moore's Law of their own, or the promise of one that would drive their future performance.</i><br /><br />--<br /><br /><b>More Moore</b><br />We examine data from the ITRS 2012 <a href="http://www.itrs.net/links/2012Summer/ORTC.ppt">Overall Roadmap Technology Characteristics</a> (ORTC 2012) and select notable interpolations. The chart below shows chip size trends up to the year 2026 along with the &#8220;Average Moore&#8217;s Law&#8221; line. Additionally, in the <a href="http://www.itrs.net/Links/2011ITRS/2011Tables/ORTC_2011Tables.xlsm">ORTC 2011 tables</a> we find data on 3D chip layer increases (up to 128 layers), including costs. Finally, the ORTC 2011 index sheet estimates that the <a href="http://en.wikipedia.org/wiki/Dynamic_random-access_memory">DRAM</a> cost per bit at production will be ~0.002 microcents per bit by ~2025. From these sources we draw three More Moore (MM) extrapolations for the year 2025:<br /><br /><ul><li>4Tb Flash <a href="http://en.wikipedia.org/wiki/Multi-level_cell">multi-level cell</a> (MLC) memory will be in production</li><li>There will be ~100 billion transistors per microprocessing unit (MPU)</li><li>1TB of RAM will cost less than $100</li></ul><br /><div><a href="https://lh6.googleusercontent.com/gbs5wbeb8XcK5fmR3Rn1679cAzHNrf3Dvd9weeA6_HtgQr3_pQWImFjH0IWYFBaTlzwMlZEfxlAjdiCdbwwZQYUiPtsYXyI7fHeqQKbYzoMb-7g-W4xe3rzZ"><img border="0" height="480" src="https://lh6.googleusercontent.com/gbs5wbeb8XcK5fmR3Rn1679cAzHNrf3Dvd9weeA6_HtgQr3_pQWImFjH0IWYFBaTlzwMlZEfxlAjdiCdbwwZQYUiPtsYXyI7fHeqQKbYzoMb-7g-W4xe3rzZ" width="640"></a></div><br /><b>More than Moore</b><br />It should be emphasized that &#8220;More than Moore&#8221; (MtM) technologies do not constitute an alternative or even a competitor to the digital trend as described by Moore&#8217;s Law. In fact, it is the heterogeneous integration of digital and non-digital functionalities into compact systems that will be the key driver for a wide variety of application fields. Whereas MM may be viewed as the brain of an intelligent compact system, MtM refers to its capabilities to interact with the outside world and the users. <br /><br />As such, functional diversification may be regarded as a complement to digital signal and data processing in a product. This includes interaction with the outside world through sensors and actuators, and the subsystem for powering the product, implying analog and mixed-signal processing, the incorporation of passive and/or high-voltage components, micro-mechanical devices enabling biological functionalities, and more. While MtM looks very promising for a variety of diversification topics, the ITRS study does not give figures from which &#8220;solid&#8221; extrapolations can be made. However, we can still place safe (and not-so-safe) bets looking towards 2025, and examine what these extrapolations mean for the user.<br /><br />Today we have 1TB hard disk drives (HDDs) for $100, but the access speed to data on the disk does not allow one to take full advantage of this data in a fully interactive, or even practical, way. More importantly, the size and construction of HDDs do not allow for their incorporation into mobile devices. Solid-state drives (SSDs), in comparison, have similar data transfer rates (~1Gb/s), latencies typically 100 times lower than HDDs, and a significantly smaller form factor with no moving parts. The promise of offering several TB of flash memory cost-effectively by 2025, in a device carried along during the day (e.g. a smartphone, watch, or clothing), represents a paradigm shift with regard to today&#8217;s situation; it will empower users by moving them from an environment where local data needs to be refreshed frequently (as with augmented reality applications) to a new environment where full contextual data will be available locally and refreshed only when critically needed.<br /><br />If data on the order of terabytes is pre-loaded, a complete contextual data set can be loaded before an action or a movement, and the device can dispatch its local intelligence to the user as the action progresses, regardless of network availability or performance. This opens up the possibility of combining local 3D models and remote inputs, allowing applications like 3D conferencing to become available. The development and use of 3D avatars could even facilitate many social interaction models. To benefit from such applications, the use of personal devices such as Google Glass may become pervasive, allowing users to navigate 3D scenes and environments naturally, as well as facilitating 3D conferencing and &#8220;social&#8221; interactions. <br /><br />The opportunities for more discourse on the impact and future of Moore&#8217;s Law on CS and other disciplines are abundant, and can be continued with your comments on the <a href="https://plus.google.com/117790530324740296539/posts/aeMm4Wob9S8">Research at Google Google+ page</a>. Please join, and share your thoughts.]]></description>
				<content:encoded><![CDATA[<div class="separator" style="clear: both; text-align: center;"></div><div style="margin-left: 1em; margin-right: 1em;"></div><br /><br /><i>This is the third entry of a series focused on Moore’s Law and its implications moving forward, edited from a White paper on Moore’s Law, written by Google University Relations Manager Michel Benard. This series quotes major sources about Moore’s Law and explores how they believe Moore’s Law will likely continue over the course of the next several years. We will also explore if there are fields other than digital electronics that either have an emerging Moore's Law situation, or promises for such a Law that would drive their future performance.</i><br /><br />--<br /><br /><b>More Moore</b><br />We examine data from the ITRS 2012 <a href="http://www.itrs.net/links/2012Summer/ORTC.ppt">Overall Roadmap Technology Characteristics</a> (ORTC 2012), and select notable interpolations; The chart below shows chip size trends up to the year 2026 along with the “Average Moore’s Law” line. Additionally, in the <a href="http://www.itrs.net/Links/2011ITRS/2011Tables/ORTC_2011Tables.xlsm">ORTC 2011 tables</a> we find data on 3D chip layer increases (up to 128 layers), including costs. Finally, the ORTC 2011 index sheet estimates that the <a href="http://en.wikipedia.org/wiki/Dynamic_random-access_memory">DRAM</a> cost per bit at production will be ~0.002 microcents per bit by ~2025. From these sources we draw three More Moore (MM) extrapolations, that by the year 2025:<br /><br /><ul><li>4Tb Flash <a href="http://en.wikipedia.org/wiki/Multi-level_cell">multi-level cell</a> (MLC) memory will be in production</li><li>There will be ~100 billion transistors per microprocessing unit (MPU)</li><li>1TB RAM Memory will cost less than $100</li></ul><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://lh6.googleusercontent.com/gbs5wbeb8XcK5fmR3Rn1679cAzHNrf3Dvd9weeA6_HtgQr3_pQWImFjH0IWYFBaTlzwMlZEfxlAjdiCdbwwZQYUiPtsYXyI7fHeqQKbYzoMb-7g-W4xe3rzZ" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="480" src="https://lh6.googleusercontent.com/gbs5wbeb8XcK5fmR3Rn1679cAzHNrf3Dvd9weeA6_HtgQr3_pQWImFjH0IWYFBaTlzwMlZEfxlAjdiCdbwwZQYUiPtsYXyI7fHeqQKbYzoMb-7g-W4xe3rzZ" width="640" /></a></div><br /><b>More than Moore</b><br />It should be emphasized that “More than Moore” (MtM) technologies do not constitute an alternative or even a competitor to the digital trend as described by Moore’s Law. In fact, it is the heterogeneous integration of digital and non-digital functionalities into compact systems that will be the key driver for a wide variety of application fields. Whereas MM may be viewed as the brain of an intelligent compact system, MtM refers to its capabilities to interact with the outside world and the users. <br /><br />As such, functional diversification may be regarded as a complement of digital signal and data processing in a product. This includes the interaction with the outside world through sensors and actuators and the subsystem for powering the product, implying analog and mixed signal processing, the incorporation of passive and/or high-voltage components, micro-mechanical devices enabling biological functionalities, and more. While MtM looks very promising for a variety of diversification topics, the ITRS study does not give figures from which “solid” extrapolations can be made. 
However, we can still make bets going towards 2025, some safer than others, and examine what these extrapolations mean for the user.<br /><br />Today we can buy a 1TB hard disk drive (HDD) for $100, but the speed of access to data on the disk does not allow us to take full advantage of this data in a fully interactive, or even practical, way. More importantly, the size and construction of HDDs do not allow for their incorporation into mobile devices. Solid state drives (SSD), in comparison, have similar data transfer rates (~1Gb/s), latencies typically 100 times lower than those of HDDs, and a significantly smaller form factor with no moving parts. The promise of cost-effectively offering several TB of flash memory by 2025, in a device carried throughout the day (e.g. a smartphone, watch, or clothing), represents a paradigm shift with regard to today’s situation; it will empower users by moving them from an environment where local data needs to be refreshed frequently (as with augmented reality applications) to one where full contextual data is available locally and refreshed only when critically needed.<br /><br />If data on the order of terabytes is pre-loaded, a complete contextual data set can be in place before an action or movement begins, and the device can apply its local intelligence throughout the action, regardless of network availability or performance. This opens up the possibility of combining local 3D models and remote inputs, allowing applications like 3D conferencing to become available. The development and use of 3D avatars could even facilitate many social interaction models. To benefit from such applications, the use of personal devices such as Google Glass may become pervasive, allowing users to navigate 3D scenes and environments naturally, as well as facilitating 3D conferencing and the “social” interactions it enables. <br /><br />The opportunities for more discourse on the impact and future of Moore’s Law on CS and other disciplines are abundant, and can be continued with your comments on the <a href="https://plus.google.com/117790530324740296539/posts/aeMm4Wob9S8">Research at Google Google+ page</a>. Please join, and share your thoughts.]]></content:encoded>
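Reading the throughput figures above back as a quick estimate shows why pre-loading terabytes locally beats streaming them over a network. A minimal Python sketch follows; the ~1Gb/s rate comes from the post itself, while the mobile-network rate is our illustrative assumption:
<pre>
# Time to (re)load a 2 TB contextual data set at different link speeds.
# ~1 Gb/s matches the post's HDD/SSD figure; the network rate is assumed.
TB = 10**12  # bytes (decimal terabyte)

def hours_to_transfer(size_bytes, bits_per_second):
    return size_bytes * 8 / bits_per_second / 3600

for name, rate in [("Local drive (~1 Gb/s)", 1e9),
                   ("Mobile network (~100 Mb/s, assumed)", 1e8)]:
    print(f"{name}: {hours_to_transfer(2 * TB, rate):.1f} hours for 2 TB")
# Local drive: ~4.4 hours; mobile network: ~44 hours
</pre>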
			<wfw:commentRss>https://googledata.org/google-research/moores-law-part-3-possible-extrapolations-over-the-next-15-years-and-impact/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
<enclosure url="" length="" type="" />
		</item>
		<item>
		<title>Moore’s Law, Part 2: More Moore and More than Moore</title>
		<link>https://googledata.org/google-research/moores-law-part-2-more-moore-and-more-than-moore/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=moores-law-part-2-more-moore-and-more-than-moore</link>
		<comments>https://googledata.org/google-research/moores-law-part-2-more-moore-and-more-than-moore/#comments</comments>
		<pubDate>Tue, 12 Nov 2013 17:30:00 +0000</pubDate>
		<dc:creator><![CDATA[Research @ Google]]></dc:creator>
				<category><![CDATA[Google Research]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=3b06d13bcd6954e9905529bf53fa9cbf</guid>
		<description><![CDATA[<i>This is the second entry of a series focused on Moore&#8217;s Law and its implications moving forward, edited from a White paper on Moore&#8217;s Law, written by Google University Relations Manager Michel Benard. This series quotes major sources about Moore&#8217;s Law and explores how they believe Moore&#8217;s Law will likely continue over the course of the next several years. We will also explore if there are fields other than digital electronics that either have an emerging Moore's Law situation, or promises for such a Law that would drive their future performance. </i><br /><br />--<br /><br /><i>One of the fundamental lessons derived from the past successes of the semiconductor industry comes from the observation that most of the innovations of the past ten years&#8212;those that have indeed revolutionized the way CMOS transistors are manufactured nowadays&#8212;were initiated 10&#8211;15 years before they were incorporated into the CMOS process. Strained silicon research began in the early 90s, high-&#954;/metal-gate research initiated in the mid-90s, and multiple-gate transistors were pioneered in the late 90s. This fundamental observation generates a simple but fundamental question: &#8220;What should the ITRS do to identify now what the extended semiconductor industry will need 10&#8211;15 years from now?&#8221; </i><br />- <a href="http://www.itrs.net/Links/2012ITRS/2012Chapters/2012Overview.pdf">International Technology Roadmap for Semiconductors 2012</a><br /><br /><b>More Moore</b><br />As we look at the years 2020&#8211;2025, we can see that the physical dimensions of <a href="http://en.wikipedia.org/wiki/CMOS">CMOS</a> manufacture are expected to cross below the 10 nanometer threshold. It is expected that as dimensions approach the 5&#8211;7 nanometer range it will be difficult to operate any transistor structure that utilizes metal-oxide-semiconductor (MOS) physics as its basic principle of operation. Of course, we expect that new devices, like the <a href="http://spectrum.ieee.org/semiconductors/devices/the-tunneling-transistor">very promising tunnel transistors</a>, will allow a smooth transition from traditional CMOS to this new class of devices to reach these new levels of miniaturization. However, it is becoming clear that fundamental geometrical limits will be reached in the above timeframe. By fully utilizing the vertical dimension, it will be possible to <a href="http://www.pcworld.com/article/227260/intels_3d_transistor_why_it_matters.html">stack layers of transistors</a> on top of each other, and this 3D approach will continue to increase the number of components per square millimeter even when horizontal physical dimensions are no longer amenable to any further reduction. It seems important, then, that we ask ourselves a fundamental question: &#8220;How will we be able to increase the computation and memory capacity when the device physical limits are reached?&#8221; It becomes necessary to re-examine how we can get more information into a finite amount of space. <br /><br />The semiconductor industry has thrived on <a href="http://en.wikipedia.org/wiki/Boolean_logic">Boolean logic</a>; after all, for most applications CMOS devices have been used as nothing more than an &#8220;on-off&#8221; switch. 
Consequently, it becomes of paramount importance to develop new techniques that allow the use of multiple (i.e., more than 2) logic states in any given and finite location, which evokes the magic of &#8220;<a href="http://en.wikipedia.org/wiki/Quantum_computing">quantum computing</a>&#8221; looming in the distance. However, short of reaching this goal, a field of active research involves <a href="http://embedded.eecs.berkeley.edu/mvsis/mvlogic.html">increasing the number of states</a> available (e.g. to 4&#8211;10 states) and doubling the number of &#8220;virtual transistors&#8221; every 2 years.<br /><div><span><img height="422" src="https://lh4.googleusercontent.com/w08XNuMbB8ffMxtnJYPxK2cCmzEDkOQGvqHnqvbvlklUJ1vfFEWbsGqUOEYgCyJLVkvhTCLpgSyM1dOWZGjYSez_X_OzRk4lCWpUYpM_S9vHCHNg126kjDkA" width="640"></span></div><br /><br /><b>More than Moore</b><br />During the blazing progress propelled by Moore&#8217;s Law of semiconductor logic and memory products, many &#8220;complementary&#8221; technologies have progressed as well, although not necessarily scaling according to Moore&#8217;s Law. Heterogeneous integration of multiple technologies has generated &#8220;added value&#8221; for devices with multiple applications, beyond the traditional semiconductor logic and memory products that had led the semiconductor industry from the mid 60s to the 90s. A variety of wireless devices contain typical examples of this confluence of technologies, e.g. logic and memory devices, display technology, microelectromechanical systems (<a href="http://en.wikipedia.org/wiki/MEMS">MEMS</a>), RF and Analog/Mixed-signal technologies (<a href="http://eecatalog.com/intel/2013/10/02/radio-frequency-rf-and-analogmixed-signal-ams-ic-developer-survey/">RF/AMS</a>), etc. <br /><br />The ITRS has incorporated More than Moore and RF/AMS chapters in the main body of the ITRS, but it is uncertain whether this is sufficient to encompass the plethora of associated technologies now entangled in modern products, or the multi-faceted public consumer who has become an influential driver of the semiconductor industry, demanding custom functionality in commercial electronic products. In the next blog of this series, we will examine select data from the <a href="http://www.itrs.net/links/2012Summer/ORTC.ppt">ITRS Overall Roadmap Technology Characteristics (ORTC) 2012</a> and attempt to extrapolate the progress of the next 15 years, and its potential impact.<br /><br />The opportunities for more discourse on the impact and future of Moore&#8217;s Law on CS and other disciplines are abundant, and can be continued with your comments on the <a href="https://plus.google.com/117790530324740296539/posts/fpjmKvEkTEf">Research at Google Google+ page</a>. Please join, and share your thoughts.]]></description>
				<content:encoded><![CDATA[<i>This is the second entry of a series focused on Moore’s Law and its implications moving forward, edited from a White paper on Moore’s Law, written by Google University Relations Manager Michel Benard. This series quotes major sources about Moore’s Law and explores how they believe Moore’s Law will likely continue over the course of the next several years. We will also explore if there are fields other than digital electronics that either have an emerging Moore's Law situation, or promises for such a Law that would drive their future performance. </i><br /><br />--<br /><br /><i>One of the fundamental lessons derived from the past successes of the semiconductor industry comes from the observation that most of the innovations of the past ten years—those that have indeed revolutionized the way CMOS transistors are manufactured nowadays—were initiated 10–15 years before they were incorporated into the CMOS process. Strained silicon research began in the early 90s, high-κ/metal-gate research initiated in the mid-90s, and multiple-gate transistors were pioneered in the late 90s. This fundamental observation generates a simple but fundamental question: “What should the ITRS do to identify now what the extended semiconductor industry will need 10–15 years from now?” </i><br />- <a href="http://www.itrs.net/Links/2012ITRS/2012Chapters/2012Overview.pdf">International Technology Roadmap for Semiconductors 2012</a><br /><br /><b>More Moore</b><br />As we look at the years 2020–2025, we can see that the physical dimensions of <a href="http://en.wikipedia.org/wiki/CMOS">CMOS</a> manufacture are expected to cross below the 10 nanometer threshold. It is expected that as dimensions approach the 5–7 nanometer range it will be difficult to operate any transistor structure that utilizes metal-oxide-semiconductor (MOS) physics as its basic principle of operation. Of course, we expect that new devices, like the <a href="http://spectrum.ieee.org/semiconductors/devices/the-tunneling-transistor">very promising tunnel transistors</a>, will allow a smooth transition from traditional CMOS to this new class of devices to reach these new levels of miniaturization. However, it is becoming clear that fundamental geometrical limits will be reached in the above timeframe. By fully utilizing the vertical dimension, it will be possible to <a href="http://www.pcworld.com/article/227260/intels_3d_transistor_why_it_matters.html">stack layers of transistors</a> on top of each other, and this 3D approach will continue to increase the number of components per square millimeter even when horizontal physical dimensions are no longer amenable to any further reduction. It seems important, then, that we ask ourselves a fundamental question: “How will we be able to increase the computation and memory capacity when the device physical limits are reached?” It becomes necessary to re-examine how we can get more information into a finite amount of space. <br /><br />The semiconductor industry has thrived on <a href="http://en.wikipedia.org/wiki/Boolean_logic">Boolean logic</a>; after all, for most applications CMOS devices have been used as nothing more than an “on-off” switch. Consequently, it becomes of paramount importance to develop new techniques that allow the use of multiple (i.e., more than 2) logic states in any given and finite location, which evokes the magic of “<a href="http://en.wikipedia.org/wiki/Quantum_computing">quantum computing</a>” looming in the distance. 
However, short of reaching this goal, a field of active research involves <a href="http://embedded.eecs.berkeley.edu/mvsis/mvlogic.html">increasing the number of states</a> available (e.g. to 4–10 states) and doubling the number of “virtual transistors” every 2 years.<br /><div class="separator" style="clear: both; text-align: center;"><span id="docs-internal-guid-32e455d7-4a19-f928-2a5f-ad7363f3dce2" style="margin-left: 1em; margin-right: 1em;"><img height="422" src="https://lh4.googleusercontent.com/w08XNuMbB8ffMxtnJYPxK2cCmzEDkOQGvqHnqvbvlklUJ1vfFEWbsGqUOEYgCyJLVkvhTCLpgSyM1dOWZGjYSez_X_OzRk4lCWpUYpM_S9vHCHNg126kjDkA" width="640" /></span></div><br /><br /><b>More than Moore</b><br />During the blazing progress propelled by Moore’s Law of semiconductor logic and memory products, many “complementary” technologies have progressed as well, although not necessarily scaling according to Moore’s Law. Heterogeneous integration of multiple technologies has generated “added value” for devices with multiple applications, beyond the traditional semiconductor logic and memory products that had led the semiconductor industry from the mid 60s to the 90s. A variety of wireless devices contain typical examples of this confluence of technologies, e.g. logic and memory devices, display technology, microelectromechanical systems (<a href="http://en.wikipedia.org/wiki/MEMS">MEMS</a>), RF and Analog/Mixed-signal technologies (<a href="http://eecatalog.com/intel/2013/10/02/radio-frequency-rf-and-analogmixed-signal-ams-ic-developer-survey/">RF/AMS</a>), etc. <br /><br />The ITRS has incorporated More than Moore and RF/AMS chapters in the main body of the ITRS, but it is uncertain whether this is sufficient to encompass the plethora of associated technologies now entangled in modern products, or the multi-faceted public consumer who has become an influential driver of the semiconductor industry, demanding custom functionality in commercial electronic products. In the next blog of this series, we will examine select data from the <a href="http://www.itrs.net/links/2012Summer/ORTC.ppt">ITRS Overall Roadmap Technology Characteristics (ORTC) 2012</a> and attempt to extrapolate the progress of the next 15 years, and its potential impact.<br /><br />The opportunities for more discourse on the impact and future of Moore’s Law on CS and other disciplines are abundant, and can be continued with your comments on the <a href="https://plus.google.com/117790530324740296539/posts/fpjmKvEkTEf">Research at Google Google+ page</a>. Please join, and share your thoughts.]]></content:encoded>
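The multi-state arithmetic above is easy to make concrete: a cell that distinguishes S states stores log2(S) bits, so 4–10 states pack the equivalent of two to a little over three binary states into one location. A minimal Python sketch follows; the 2013 start year for the doubling cadence is our illustrative assumption, not a figure from the post:
<pre>
# Bits of information per cell as a function of distinguishable states,
# plus the "virtual transistor" doubling cadence described in the post.
import math

for states in (2, 4, 8, 10):
    print(f"{states:>2} states -> {math.log2(states):.2f} bits per cell")

multiplier = 1
for year in range(2013, 2026, 2):  # assumed start year, for illustration
    print(year, f"x{multiplier}")
    multiplier *= 2
</pre>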
			<wfw:commentRss>https://googledata.org/google-research/moores-law-part-2-more-moore-and-more-than-moore/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
<enclosure url="" length="" type="" />
		</item>
		<item>
		<title>Moore’s Law, Part 1: Brief history of Moore&#8217;s Law and current state</title>
		<link>https://googledata.org/google-research/moores-law-part-1-brief-history-of-moores-law-and-current-state/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=moores-law-part-1-brief-history-of-moores-law-and-current-state</link>
		<comments>https://googledata.org/google-research/moores-law-part-1-brief-history-of-moores-law-and-current-state/#comments</comments>
		<pubDate>Mon, 11 Nov 2013 17:30:00 +0000</pubDate>
		<dc:creator><![CDATA[Research @ Google]]></dc:creator>
				<category><![CDATA[Google Research]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=e27a7dc691c136486a127ba2068ac180</guid>
		<description><![CDATA[<i>This is the first entry of a series focused on Moore&#8217;s Law and its implications moving forward, edited from a White paper on Moore&#8217;s Law, written by Google University Relations Manager Michel Benard. This series quotes major sources about Moore&#8217;s Law and explores how they believe Moore&#8217;s Law will likely continue over the course of the next several years. We will also explore if there are fields other than digital electronics that either have an emerging Moore's Law situation, or promises for such a Law that would drive their future performance.    </i><br /><i><br /></i><br /><div><i>---</i></div><br /><i>Moore's Law is the observation that over the <a href="http://en.wikipedia.org/wiki/History_of_computing_hardware">history of computing hardware</a>, the number of transistors on integrated circuits doubles approximately every two years. The period often quoted as "18 months" is due to Intel executive David House, who predicted that period for a doubling in chip performance (being a combination of the effect of more transistors and their being faster). </i>-<a href="http://en.wikipedia.org/wiki/Moores_law">Wikipedia</a><br /><br />Moore&#8217;s Law is named after Intel co-founder <a href="http://en.wikipedia.org/wiki/Gordon_Moore">Gordon E. Moore</a>, who described the trend in his <a href="http://www.cs.utexas.edu/~fussell/courses/cs352h/papers/moore.pdf">1965 paper</a>. In it, Moore noted that the number of components in integrated circuits had doubled every year from the invention of the integrated circuit in 1958 until 1965 and predicted that the trend would continue "for at least ten years". Moore&#8217;s prediction has proven to be uncannily accurate, in part because the law is now used in the semiconductor industry to guide long-term planning and to set targets for research and development.<br /><br />The capabilities of many digital electronic devices are strongly linked to Moore's law: processing speed, memory capacity, sensors and even the number and size of <a href="http://www.nytimes.com/2006/06/07/technology/circuits/07essay.html">pixels in digital cameras</a>. All of these are improving at (roughly) exponential rates as well (see <a href="http://en.wikipedia.org/wiki/Moore%27s_law#Other_formulations_and_similar_laws">Other formulations and similar laws</a>). This exponential improvement has dramatically enhanced the impact of digital electronics in nearly every segment of the <a href="http://www.theatlantic.com/past/docs/issues/2001/01/rauch.htm">world economy</a>, and is a driving force of technological and social change in the late 20th and early 21st centuries.<br /><br />Most improvement trends have resulted principally from the industry&#8217;s ability to exponentially decrease the minimum feature sizes used to fabricate integrated circuits. Of course, the most frequently cited trend is in integration level, which is usually expressed as Moore&#8217;s Law (that is, the number of components per chip doubles roughly every 24 months). 
The most significant trend is the decreasing cost-per-function, which has led to significant improvements in economic productivity and overall quality of life through proliferation of computers, communication, and other industrial and consumer electronics.<br /><br /><table align="center" cellpadding="0" cellspacing="0"><tbody><tr><td><img height="575" src="https://lh4.googleusercontent.com/CJvRWkndvvVL4GvRV35fl8vm9Eu5JI706JPfrcdiPr1OB8yBNaFQTmbKCui41oJFfgGHhFKXybL8_IJhp6CL6eWdy8Gf0KwDfOYJDQoMs7-RFUrvMXenO12B" width="640"></td></tr><tr><td><span><span>Transistor counts for integrated circuits plotted against their dates of introduction. The curve shows Moore's law: the doubling of transistor counts every two years. The y-axis is logarithmic, so the line corresponds to exponential growth</span></span></td></tr></tbody></table><br />All of these improvement trends, sometimes called &#8220;scaling&#8221; trends, have been enabled by large R&#38;D investments. In the last three decades, the growing size of the required investments has motivated industry collaboration and spawned many R&#38;D partnerships, consortia, and other cooperative ventures. To help guide these R&#38;D programs, the Semiconductor Industry Association (SIA) initiated the National Technology Roadmap for Semiconductors (<a href="http://en.wikipedia.org/wiki/International_Technology_Roadmap_for_Semiconductors#History">NTRS</a>) in 1992. Since its inception, a basic premise of the NTRS has been that continued scaling of electronics would further reduce the cost per function and promote market growth for integrated circuits. Thus, the Roadmap has been put together in the spirit of a challenge&#8212;essentially, &#8220;What technical capabilities need to be developed for the industry to stay on Moore&#8217;s Law and the other trends?&#8221;<br /><br />In 1998, the SIA was joined by corresponding industry associations in Europe, Japan, Korea, and Taiwan to participate in a 1998 update of the Roadmap and to begin work toward the first International Technology Roadmap for Semiconductors (<a href="http://www.itrs.net/">ITRS</a>), published in 1999. The overall objective of the ITRS is to present industry-wide consensus on the &#8220;best current estimate&#8221; of the industry&#8217;s research and development needs out to a 15-year horizon. As such, it provides a guide to the efforts of companies, universities, governments, and other research providers or funders. The ITRS has improved the quality of R&#38;D investment decisions made at all levels and has helped channel research efforts to areas that most need research breakthroughs.<br /><br />For more than half a century these scaling trends continued, and <a href="http://news.cnet.com/New-life-for-Moores-Law/2009-1006_3-5672485.html">sources in 2005</a> expected them to continue until at least 2015 or 2020. However, the <a href="http://www.itrs.net/Links/2010ITRS/2010Update/ToPost/2010Tables_ORTC_ITRS.xls">2010 update to the ITRS</a> has growth slowing at the end of 2013, after which time transistor counts and densities are expected to double only every three years. Accordingly, since 2007 the ITRS has addressed the concept of functional diversification under the title &#8220;<a href="http://www.itrs.net/Links/2010ITRS/IRC-ITRS-MtM-v2%203.pdf">More than Moore</a>&#8221; (MtM). 
This concept addresses an emerging category of devices that incorporate functionalities that do not necessarily scale according to &#8220;Moore's Law,&#8221; but provide additional value to the end customer in different ways. <br /><br />The MtM approach typically allows for the non-digital functionalities (e.g., RF communication, power control, passive components, sensors, actuators) to migrate from the system board-level into a particular package-level (<a href="http://goo.gl/N5ixTN">SiP</a>) or chip-level (<a href="http://goo.gl/S6H0Ha">SoC</a>) system solution. It is also hoped that by the end of this decade, it will be possible to augment the technology of constructing integrated circuits (<a href="http://en.wikipedia.org/wiki/CMOS">CMOS</a>) by introducing new devices that will realize some &#8220;beyond CMOS&#8221; capabilities. However, since these new devices may not totally replace CMOS functionality, it is anticipated that either chip-level or package-level integration with CMOS may be implemented.<br /><br />The ITRS provides a very comprehensive analysis of the prospects for Moore&#8217;s Law looking towards 2020 and beyond. The analysis can be roughly segmented into two trends: More Moore (MM) and More than Moore (MtM). In the next blog in this series, we will look into the recent conclusions of the ITRS 2012 report on both trends. <br /><br />The opportunities for more discourse on the impact and future of Moore&#8217;s Law on CS and other disciplines are abundant, and can be continued with your comments on the <a href="https://plus.google.com/117790530324740296539/posts/S5SnnPmqHdy">Research at Google Google+ page</a>. Please join, and share your thoughts.]]></description>
				<content:encoded><![CDATA[<i>This is the first entry of a series focused on Moore’s Law and its implications moving forward, edited from a White paper on Moore’s Law, written by Google University Relations Manager Michel Benard. This series quotes major sources about Moore’s Law and explores how they believe Moore’s Law will likely continue over the course of the next several years. We will also explore if there are fields other than digital electronics that either have an emerging Moore's Law situation, or promises for such a Law that would drive their future performance.    </i><br /><i><br /></i><br /><div style="text-align: center;"><i>---</i></div><br /><i>Moore's Law is the observation that over the <a href="http://en.wikipedia.org/wiki/History_of_computing_hardware">history of computing hardware</a>, the number of transistors on integrated circuits doubles approximately every two years. The period often quoted as "18 months" is due to Intel executive David House, who predicted that period for a doubling in chip performance (being a combination of the effect of more transistors and their being faster). </i>-<a href="http://en.wikipedia.org/wiki/Moores_law">Wikipedia</a><br /><br />Moore’s Law is named after Intel co-founder <a href="http://en.wikipedia.org/wiki/Gordon_Moore">Gordon E. Moore</a>, who described the trend in his <a href="http://www.cs.utexas.edu/~fussell/courses/cs352h/papers/moore.pdf">1965 paper</a>. In it, Moore noted that the number of components in integrated circuits had doubled every year from the invention of the integrated circuit in 1958 until 1965 and predicted that the trend would continue "for at least ten years". Moore’s prediction has proven to be uncannily accurate, in part because the law is now used in the semiconductor industry to guide long-term planning and to set targets for research and development.<br /><br />The capabilities of many digital electronic devices are strongly linked to Moore's law: processing speed, memory capacity, sensors and even the number and size of <a href="http://www.nytimes.com/2006/06/07/technology/circuits/07essay.html">pixels in digital cameras</a>. All of these are improving at (roughly) exponential rates as well (see <a href="http://en.wikipedia.org/wiki/Moore%27s_law#Other_formulations_and_similar_laws">Other formulations and similar laws</a>). This exponential improvement has dramatically enhanced the impact of digital electronics in nearly every segment of the <a href="http://www.theatlantic.com/past/docs/issues/2001/01/rauch.htm">world economy</a>, and is a driving force of technological and social change in the late 20th and early 21st centuries.<br /><br />Most improvement trends have resulted principally from the industry’s ability to exponentially decrease the minimum feature sizes used to fabricate integrated circuits. Of course, the most frequently cited trend is in integration level, which is usually expressed as Moore’s Law (that is, the number of components per chip doubles roughly every 24 months). 
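<br /><br />The doubling law is compact enough to state directly: N(t) = N<sub>0</sub> · 2<sup>t/T</sup>, where T is the doubling period. The short Python sketch below compares the classic 24-month cadence with the slower three-year cadence discussed later in this post; the growth factors are exact consequences of the formula, not forecasts:<br /><pre>
# Growth factor after `years` under a doubling period of T years.
def growth_factor(years, doubling_period_years):
    return 2 ** (years / doubling_period_years)

for horizon in (5, 10, 15):
    fast = growth_factor(horizon, 2)   # classic ~24-month doubling
    slow = growth_factor(horizon, 3)   # 3-year cadence (2010 ITRS update)
    print(f"{horizon:>2} years: x{fast:5.1f} (2-yr) vs x{slow:4.1f} (3-yr)")
# 15 years: x181.0 under 2-year doubling vs x32.0 under 3-year doubling
</pre>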
The most significant trend is the decreasing cost-per-function, which has led to significant improvements in economic productivity and overall quality of life through proliferation of computers, communication, and other industrial and consumer electronics.<br /><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><img height="575" src="https://lh4.googleusercontent.com/CJvRWkndvvVL4GvRV35fl8vm9Eu5JI706JPfrcdiPr1OB8yBNaFQTmbKCui41oJFfgGHhFKXybL8_IJhp6CL6eWdy8Gf0KwDfOYJDQoMs7-RFUrvMXenO12B" style="margin-left: auto; margin-right: auto;" width="640" /></td></tr><tr><td class="tr-caption" style="text-align: center;"><span style="text-align: start;"><span style="font-size: x-small;">Transistor counts for integrated circuits plotted against their dates of introduction. The curve shows Moore's law: the doubling of transistor counts every two years. The y-axis is logarithmic, so the line corresponds to exponential growth</span></span></td></tr></tbody></table><br />All of these improvement trends, sometimes called “scaling” trends, have been enabled by large R&amp;D investments. In the last three decades, the growing size of the required investments has motivated industry collaboration and spawned many R&amp;D partnerships, consortia, and other cooperative ventures. To help guide these R&amp;D programs, the Semiconductor Industry Association (SIA) initiated the National Technology Roadmap for Semiconductors (<a href="http://en.wikipedia.org/wiki/International_Technology_Roadmap_for_Semiconductors#History">NTRS</a>) in 1992. Since its inception, a basic premise of the NTRS has been that continued scaling of electronics would further reduce the cost per function and promote market growth for integrated circuits. Thus, the Roadmap has been put together in the spirit of a challenge—essentially, “What technical capabilities need to be developed for the industry to stay on Moore’s Law and the other trends?”<br /><br />In 1998, the SIA was joined by corresponding industry associations in Europe, Japan, Korea, and Taiwan to participate in a 1998 update of the Roadmap and to begin work toward the first International Technology Roadmap for Semiconductors (<a href="http://www.itrs.net/">ITRS</a>), published in 1999. The overall objective of the ITRS is to present industry-wide consensus on the “best current estimate” of the industry’s research and development needs out to a 15-year horizon. As such, it provides a guide to the efforts of companies, universities, governments, and other research providers or funders. The ITRS has improved the quality of R&amp;D investment decisions made at all levels and has helped channel research efforts to areas that most need research breakthroughs.<br /><br />For more than half a century these scaling trends continued, and <a href="http://news.cnet.com/New-life-for-Moores-Law/2009-1006_3-5672485.html">sources in 2005</a> expected them to continue until at least 2015 or 2020. However, the <a href="http://www.itrs.net/Links/2010ITRS/2010Update/ToPost/2010Tables_ORTC_ITRS.xls">2010 update to the ITRS</a> has growth slowing at the end of 2013, after which time transistor counts and densities are expected to double only every three years. Accordingly, since 2007 the ITRS has addressed the concept of functional diversification under the title “<a href="http://www.itrs.net/Links/2010ITRS/IRC-ITRS-MtM-v2%203.pdf">More than Moore</a>” (MtM). 
This concept addresses an emerging category of devices that incorporate functionalities that do not necessarily scale according to “Moore's Law,” but provide additional value to the end customer in different ways. <br /><br />The MtM approach typically allows for the non-digital functionalities (e.g., RF communication, power control, passive components, sensors, actuators) to migrate from the system board-level into a particular package-level (<a href="http://goo.gl/N5ixTN">SiP</a>) or chip-level (<a href="http://goo.gl/S6H0Ha">SoC</a>) system solution. It is also hoped that by the end of this decade, it will be possible to augment the technology of constructing integrated circuits (<a href="http://en.wikipedia.org/wiki/CMOS">CMOS</a>) by introducing new devices that will realize some “beyond CMOS” capabilities. However, since these new devices may not totally replace CMOS functionality, it is anticipated that either chip-level or package-level integration with CMOS may be implemented.<br /><br />The ITRS provides a very comprehensive analysis of the prospects for Moore’s Law looking towards 2020 and beyond. The analysis can be roughly segmented into two trends: More Moore (MM) and More than Moore (MtM). In the next blog in this series, we will look into the recent conclusions of the ITRS 2012 report on both trends. <br /><br />The opportunities for more discourse on the impact and future of Moore’s Law on CS and other disciplines are abundant, and can be continued with your comments on the <a href="https://plus.google.com/117790530324740296539/posts/S5SnnPmqHdy">Research at Google Google+ page</a>. Please join, and share your thoughts.]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-research/moores-law-part-1-brief-history-of-moores-law-and-current-state/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
<enclosure url="" length="" type="" />
		</item>
		<item>
		<title>Enhancing Linguistic Search with the Google Books Ngram Viewer</title>
		<link>https://googledata.org/google-books/enhancing-linguistic-search-with-the-google-books-ngram-viewer/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=enhancing-linguistic-search-with-the-google-books-ngram-viewer</link>
		<comments>https://googledata.org/google-books/enhancing-linguistic-search-with-the-google-books-ngram-viewer/#comments</comments>
		<pubDate>Thu, 17 Oct 2013 13:01:00 +0000</pubDate>
		<dc:creator><![CDATA[Research @ Google]]></dc:creator>
				<category><![CDATA[Google Books]]></category>
		<category><![CDATA[Google Research]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=6e0744fa8641c6444bccd6bd7d2cda90</guid>
		<description><![CDATA[<span>Posted by Slav Petrov and Dipanjan Das, Research Scientists</span><br /><br />Our book scanning effort, now in its eighth year, has put tens of millions of books online. Beyond the obvious benefits of being able to discover books and search through them, the project lets us take a step back and learn what the entire collection tells us about culture and language.<br /><br />Launched in 2010 by Jon Orwant and Will Brockman, the Google Books Ngram Viewer lets you search for words and phrases over the centuries, in English, Chinese, Russian, French, German, Italian, Hebrew, and Spanish. It&#8217;s become popular for both casual explorations into language usage and serious linguistic research, and this summer we decided to provide some new ways to search with it.<br /><br />With our interns Jason Mann, Lu Yang, and David Zhang, we&#8217;ve added three new features. The first is wildcards: by putting an asterisk as a placeholder in your query, you can retrieve the ten most popular replacements. For instance, <a href="http://books.google.com/ngrams/graph?content=Queen+*_NOUN&#38;year_start=1800&#38;year_end=2000&#38;corpus=16&#38;smoothing=3&#38;share=&#38;direct_url=t2%3B%2CQueen%20%2A_NOUN%3B%2Cc0%3B%2Cs0%3B%3BQueen%20Elizabeth_NOUN%3B%2Cc0%3B%3BQueen%20Anne_NOUN%3B%2Cc0%3B%3BQueen%20Mary_NOUN%3B%2Cc0%3B%3BQueen%20Victoria_NOUN%3B%2Cc0%3B%3BQueen%20Caroline_NOUN%3B%2Cc0%3B%3BQueen%20Street_NOUN%3B%2Cc0%3B%3BQueen%20Charlotte_NOUN%3B%2Cc0%3B%3BQueen%20Margaret_NOUN%3B%2Cc0%3B%3BQueen%20Catherine_NOUN%3B%2Cc0%3B%3BQueen%20Guenever_NOUN%3B%2Cc0">what noun most often follows &#8220;Queen&#8221; in English fiction</a>? The answer is &#8220;Elizabeth&#8221;:<br /><br /><br />This graph also reveals that the frequency of mentions of the most popular queens has been decreasing steadily over time. (Language expert Ben Zimmer shows some other interesting examples in <a href="http://www.theatlantic.com/technology/archive/2013/10/Googles-Ngram-Viewer-Goes-Wild/280601/">his Atlantic article</a>.) Right-clicking collapses all of the series into a sum, allowing you to see the overall change. <br /><br />Another feature we&#8217;ve added is the ability to search for inflections: different grammatical forms of the same word. (Inflections of the verb &#8220;eat&#8221; include &#8220;ate&#8221;, &#8220;eating&#8221;, &#8220;eats&#8221;, and &#8220;eaten&#8221;.) Here, we can see that <a href="http://books.google.com/ngrams/graph?content=change_INF+roles&#38;year_start=1900&#38;year_end=2000&#38;corpus=16&#38;smoothing=3&#38;share=&#38;direct_url=t3%3B%2Cchange_INF%20roles%3B%2Cc0%3B%2Cs0%3B%3Bchanged%20roles%3B%2Cc0%3B%3Bchange%20roles%3B%2Cc0%3B%3Bchanging%20roles%3B%2Cc0%3B%3Bchanges%20roles%3B%2Cc0">the phrase &#8220;changing roles&#8221; has recently surged in popularity in English fiction</a>, besting &#8220;change roles&#8221;, which earlier dethroned &#8220;changed roles&#8221;:<br /><br /><br />Curiously, this switching doesn&#8217;t happen <a href="http://books.google.com/ngrams/graph?content=change_INF+roles&#38;year_start=1900&#38;year_end=2000&#38;corpus=15&#38;smoothing=3&#38;share=&#38;direct_url=t3%3B%2Cchange_INF%20roles%3B%2Cc0%3B%2Cs0%3B%3Bchanging%20roles%3B%2Cc0%3B%3Bchange%20roles%3B%2Cc0%3B%3Bchanged%20roles%3B%2Cc0%3B%3Bchanges%20roles%3B%2Cc0">when we add non-fiction into the mix</a>: &#8220;changing roles&#8221; is persistently on top, with an odd dip in the late 1980s. 
As with wildcards, right-clicking collapses and expands the data:<br /><br /><br />Finally, we&#8217;ve implemented the most common feature request from our users: the ability to search for multiple capitalization styles simultaneously. Until now, searching for <a href="http://books.google.com/ngrams/graph?content=mother+earth&#38;case_insensitive=on&#38;year_start=1900&#38;year_end=2000&#38;corpus=15&#38;smoothing=3&#38;share=&#38;direct_url=t4%3B%2Cmother%20earth%3B%2Cc0%3B%2Cs0%3B%3BMother%20Earth%3B%2Cc0%3B%3Bmother%20earth%3B%2Cc0%3B%3Bmother%20Earth%3B%2Cc0%3B%3BMOTHER%20EARTH%3B%2Cc0%3B%3BMother%20earth%3B%2Cc0#t4%3B%2Cmother%20earth%3B%2Cc0%3B%2Cs0%3B%3BMother%20Earth%3B%2Cc0%3B%3Bmother%20earth%3B%2Cc0%3B%3Bmother%20Earth%3B%2Cc0%3B%3BMOTHER%20EARTH%3B%2Cc0%3B%3BMother%20earth%3B%2Cc0">common capitalizations of &#8220;Mother Earth&#8221;</a> required using a plus sign to combine ngrams (e.g., &#8220;Mother Earth + mother Earth + mother earth&#8221;), but now the case-insensitive checkbox makes it easier:<br /><br /><br />As with our other two features, right-clicking toggles whether the variants are shown.<br /><br />We hope these features help you discover and share interesting trends in language use!]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by Slav Petrov and Dipanjan Das, Research Scientists</span><br /><br />Our book scanning effort, now in its eighth year, has put tens of millions of books online. Beyond the obvious benefits of being able to discover books and search through them, the project lets us take a step back and learn what the entire collection tells us about culture and language.<br /><br />Launched in 2010 by Jon Orwant and Will Brockman, the Google Books Ngram Viewer lets you search for words and phrases over the centuries, in English, Chinese, Russian, French, German, Italian, Hebrew, and Spanish. It’s become popular for both casual explorations into language usage and serious linguistic research, and this summer we decided to provide some new ways to search with it.<br /><br />With our interns Jason Mann, Lu Yang, and David Zhang, we’ve added three new features. The first is wildcards: by putting an asterisk as a placeholder in your query, you can retrieve the ten most popular replacements. For instance, <a href="http://books.google.com/ngrams/graph?content=Queen+*_NOUN&year_start=1800&year_end=2000&corpus=16&smoothing=3&share=&direct_url=t2%3B%2CQueen%20%2A_NOUN%3B%2Cc0%3B%2Cs0%3B%3BQueen%20Elizabeth_NOUN%3B%2Cc0%3B%3BQueen%20Anne_NOUN%3B%2Cc0%3B%3BQueen%20Mary_NOUN%3B%2Cc0%3B%3BQueen%20Victoria_NOUN%3B%2Cc0%3B%3BQueen%20Caroline_NOUN%3B%2Cc0%3B%3BQueen%20Street_NOUN%3B%2Cc0%3B%3BQueen%20Charlotte_NOUN%3B%2Cc0%3B%3BQueen%20Margaret_NOUN%3B%2Cc0%3B%3BQueen%20Catherine_NOUN%3B%2Cc0%3B%3BQueen%20Guenever_NOUN%3B%2Cc0">what noun most often follows “Queen” in English fiction</a>? The answer is “Elizabeth”:<br /><br /><iframe name="queen_chart" src="http://books.google.com/ngrams/interactive_chart?content=Queen+*_NOUN&year_start=1800&year_end=2000&corpus=16&smoothing=3&share=&direct_url=t2%3B%2CQueen%20*_NOUN%3B%2Cc0%3B%2Cs0%3B%3BQueen%20Elizabeth_NOUN%3B%2Cc0%3B%3BQueen%20Anne_NOUN%3B%2Cc0%3B%3BQueen%20Mary_NOUN%3B%2Cc0%3B%3BQueen%20Victoria_NOUN%3B%2Cc0%3B%3BQueen%20Caroline_NOUN%3B%2Cc0%3B%3BQueen%20Street_NOUN%3B%2Cc0%3B%3BQueen%20Charlotte_NOUN%3B%2Cc0%3B%3BQueen%20Margaret_NOUN%3B%2Cc0%3B%3BQueen%20Catherine_NOUN%3B%2Cc0%3B%3BQueen%20Guenever_NOUN%3B%2Cc0" width=800 height=400 marginwidth=0 marginheight=0 style="margin-bottom:-100px" hspace=0 vspace=0 frameborder=0 scrolling=no ></iframe><br />This graph also reveals that the frequency of mentions of the most popular queens has been decreasing steadily over time. (Language expert Ben Zimmer shows some other interesting examples in <a href="http://www.theatlantic.com/technology/archive/2013/10/Googles-Ngram-Viewer-Goes-Wild/280601/">his Atlantic article</a>.) Right-clicking collapses all of the series into a sum, allowing you to see the overall change. <br /><br />Another feature we’ve added is the ability to search for inflections: different grammatical forms of the same word. (Inflections of the verb “eat” include “ate”, “eating”, “eats”, and “eaten”.) 
Here, we can see that <a href="http://books.google.com/ngrams/graph?content=change_INF+roles&year_start=1900&year_end=2000&corpus=16&smoothing=3&share=&direct_url=t3%3B%2Cchange_INF%20roles%3B%2Cc0%3B%2Cs0%3B%3Bchanged%20roles%3B%2Cc0%3B%3Bchange%20roles%3B%2Cc0%3B%3Bchanging%20roles%3B%2Cc0%3B%3Bchanges%20roles%3B%2Cc0">the phrase “changing roles” has recently surged in popularity in English fiction</a>, besting “change roles”, which earlier dethroned “changed roles”:<br /><br /><iframe name="change_fiction_chart" src="http://books.google.com/ngrams/interactive_chart?content=change_INF+roles&year_start=1900&year_end=2000&corpus=16&smoothing=3&share=&direct_url=t3%3B%2Cchange_INF%20roles%3B%2Cc0%3B%2Cs0%3B%3Bchanged%20roles%3B%2Cc0%3B%3Bchange%20roles%3B%2Cc0%3B%3Bchanging%20roles%3B%2Cc0%3B%3Bchanges%20roles%3B%2Cc0" width=800 height=400 marginwidth=0 marginheight=0 style="margin-bottom:-100px" hspace=0 vspace=0 frameborder=0 scrolling=no></iframe><br />Curiously, this switching doesn’t happen <a href="http://books.google.com/ngrams/graph?content=change_INF+roles&year_start=1900&year_end=2000&corpus=15&smoothing=3&share=&direct_url=t3%3B%2Cchange_INF%20roles%3B%2Cc0%3B%2Cs0%3B%3Bchanging%20roles%3B%2Cc0%3B%3Bchange%20roles%3B%2Cc0%3B%3Bchanged%20roles%3B%2Cc0%3B%3Bchanges%20roles%3B%2Cc0">when we add non-fiction into the mix</a>: “changing roles” is persistently on top, with an odd dip in the late 1980s. As with wildcards, right-clicking collapses and expands the data:<br /><br /><iframe name="change_all_chart" src="http://books.google.com/ngrams/interactive_chart?content=change_INF+roles&year_start=1900&year_end=2000&corpus=15&smoothing=3&share=&direct_url=t3%3B%2Cchange_INF%20roles%3B%2Cc0%3B%2Cs0%3B%3Bchanging%20roles%3B%2Cc0%3B%3Bchange%20roles%3B%2Cc0%3B%3Bchanged%20roles%3B%2Cc0%3B%3Bchanges%20roles%3B%2Cc0" width=800 height=400 marginwidth=0 marginheight=0 style="margin-bottom:-100px" hspace=0 vspace=0 frameborder=0 scrolling=no></iframe><br />Finally, we’ve implemented the most common feature request from our users: the ability to search for multiple capitalization styles simultaneously. 
Until now, searching for <a href="http://books.google.com/ngrams/graph?content=mother+earth&case_insensitive=on&year_start=1900&year_end=2000&corpus=15&smoothing=3&share=&direct_url=t4%3B%2Cmother%20earth%3B%2Cc0%3B%2Cs0%3B%3BMother%20Earth%3B%2Cc0%3B%3Bmother%20earth%3B%2Cc0%3B%3Bmother%20Earth%3B%2Cc0%3B%3BMOTHER%20EARTH%3B%2Cc0%3B%3BMother%20earth%3B%2Cc0#t4%3B%2Cmother%20earth%3B%2Cc0%3B%2Cs0%3B%3BMother%20Earth%3B%2Cc0%3B%3Bmother%20earth%3B%2Cc0%3B%3Bmother%20Earth%3B%2Cc0%3B%3BMOTHER%20EARTH%3B%2Cc0%3B%3BMother%20earth%3B%2Cc0">common capitalizations of “Mother Earth”</a> required using a plus sign to combine ngrams (e.g., “Mother Earth + mother Earth + mother earth”), but now the case-insensitive checkbox makes it easier:<br /><br /><iframe name="earth_chart" src="http://books.google.com/ngrams/interactive_chart?content=mother+earth&case_insensitive=on&year_start=1900&year_end=2000&corpus=15&smoothing=3&share=&direct_url=t4%3B%2Cmother%20earth%3B%2Cc0%3B%2Cs0%3B%3BMother%20Earth%3B%2Cc0%3B%3Bmother%20earth%3B%2Cc0%3B%3Bmother%20Earth%3B%2Cc0%3B%3BMOTHER%20EARTH%3B%2Cc0%3B%3BMother%20earth%3B%2Cc0" width=800 height=400 marginwidth=0 marginheight=0 style="margin-bottom:-100px" hspace=0 vspace=0 frameborder=0 scrolling=no></iframe><br />As with our other two features, right-clicking toggles whether the variants are shown.<br /><br />We hope these features help you discover and share interesting trends in language use!]]></content:encoded>
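The query parameters visible in the links above (content, year_start, year_end, corpus, smoothing, case_insensitive) are enough to compose Ngram Viewer charts programmatically. A minimal Python sketch of that composition follows; the corpus numbering is read off the post's own links, and the helper function is ours, not part of any official API:
<pre>
# Sketch: building Ngram Viewer URLs like those linked in the post above.
from urllib.parse import urlencode

def ngram_url(phrases, start=1900, end=2000, corpus=15,
              smoothing=3, case_insensitive=False):
    """corpus 15 = English, 16 = English fiction (per the post's links)."""
    params = {
        "content": ",".join(phrases),  # supports wildcards (*) and _INF
        "year_start": start,
        "year_end": end,
        "corpus": corpus,
        "smoothing": smoothing,
    }
    if case_insensitive:
        params["case_insensitive"] = "on"
    return "http://books.google.com/ngrams/graph?" + urlencode(params)

print(ngram_url(["Queen *_NOUN"], start=1800, corpus=16))  # wildcard query
print(ngram_url(["mother earth"], case_insensitive=True))  # all capitalizations
</pre>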
			<wfw:commentRss>https://googledata.org/google-books/enhancing-linguistic-search-with-the-google-books-ngram-viewer/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
<enclosure url="" length="" type="" />
		</item>
		<item>
		<title>Opening up Course Builder data</title>
		<link>https://googledata.org/google-research/opening-up-course-builder-data/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=opening-up-course-builder-data</link>
		<comments>https://googledata.org/google-research/opening-up-course-builder-data/#comments</comments>
		<pubDate>Wed, 09 Oct 2013 18:00:00 +0000</pubDate>
		<dc:creator><![CDATA[Research @ Google]]></dc:creator>
				<category><![CDATA[Google Research]]></category>
		<category><![CDATA[education]]></category>
		<category><![CDATA[open source]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=9d1575dfcb8cbaa2d942b77ce9666cc8</guid>
		<description><![CDATA[<span>Posted by John Cox and Pavel Simakov, Course Builder Team, Google Research</span>  <br /><br /><a href="https://code.google.com/p/course-builder/">Course Builder</a> is an experimental, open source platform for delivering massive online open courses. When you run Course Builder, you own everything from the production instance to the student data that builds up while your course is running.<br /><br />Part of being open is making it easy for you to access and work with your data. Earlier this year we shipped a tool called ETL (short for extract-transform-load) that you can use to pull your data out of Course Builder, run arbitrary computations on it, and load it back. We <a href="http://johnmcox.blogspot.com/2013/05/working-with-course-builder-data.html">wrote a post</a> that goes into detail on how you can use ETL to get copies of your data in an open, easy-to-read format, as well as write custom jobs for processing that data offline.<br /><br />Now we&#8217;ve taken the next step and added richer data processing tools to ETL. With them, you can <a href="http://johnmcox.blogspot.com/2013/09/building-data-analysis-pipelines-with.html">build data processing pipelines</a> that analyze large datasets with MapReduce. Inside Google we&#8217;ve used these tools to <a href="http://googleresearch.blogspot.com/2013/09/a-comparison-of-five-google-online.html">learn from the courses we&#8217;ve run</a>. We provide example pipelines ranging from the simple to the complex, along with formatters to convert your data into open formats (CSV, JSON, plain text, and XML) that play nice with third-party data analysis tools.<br /><br />We hope that adding robust data processing features to Course Builder will not only provide direct utility to organizations that need to process data to meet their internal business goals, but also make it easier for educators and researchers to gauge the efficacy of the massive online open courses run on the Course Builder platform.]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by John Cox and Pavel Simakov, Course Builder Team, Google Research</span>  <br /><br /><a href="https://code.google.com/p/course-builder/">Course Builder</a> is an experimental, open source platform for delivering massive online open courses. When you run Course Builder, you own everything from the production instance to the student data that builds up while your course is running.<br /><br />Part of being open is making it easy for you to access and work with your data. Earlier this year we shipped a tool called ETL (short for extract-transform-load) that you can use to pull your data out of Course Builder, run arbitrary computations on it, and load it back. We <a href="http://johnmcox.blogspot.com/2013/05/working-with-course-builder-data.html">wrote a post</a> that goes into detail on how you can use ETL to get copies of your data in an open, easy-to-read format, as well as write custom jobs for processing that data offline.<br /><br />Now we’ve taken the next step and added richer data processing tools to ETL. With them, you can <a href="http://johnmcox.blogspot.com/2013/09/building-data-analysis-pipelines-with.html">build data processing pipelines</a> that analyze large datasets with MapReduce. Inside Google we’ve used these tools to <a href="http://googleresearch.blogspot.com/2013/09/a-comparison-of-five-google-online.html">learn from the courses we’ve run</a>. We provide example pipelines ranging from the simple to the complex, along with formatters to convert your data into open formats (CSV, JSON, plain text, and XML) that play nice with third-party data analysis tools.<br /><br />We hope that adding robust data processing features to Course Builder will not only provide direct utility to organizations that need to process data to meet their internal business goals, but also make it easier for educators and researchers to gauge the efficacy of the massive online open courses run on the Course Builder platform.]]></content:encoded>
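The formatters mentioned above emit CSV, JSON, plain text, and XML. As a purely illustrative sketch of the kind of offline post-processing this enables (the field names and JSON-lines file layout below are hypothetical, not Course Builder's actual export schema):
<pre>
# Hypothetical example: convert exported JSON-lines records to CSV for
# third-party analysis tools. Field names are illustrative only.
import csv
import json

def json_lines_to_csv(in_path, out_path, fields):
    with open(in_path) as src, open(out_path, "w", newline="") as dst:
        writer = csv.DictWriter(dst, fieldnames=fields, extrasaction="ignore")
        writer.writeheader()
        for line in src:
            writer.writerow(json.loads(line))

# e.g. json_lines_to_csv("students.json", "students.csv",
#                        ["student_id", "enrolled_on", "score"])
</pre>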
			<wfw:commentRss>https://googledata.org/google-research/opening-up-course-builder-data/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
<enclosure url="" length="" type="" />
		</item>
		<item>
		<title>Projecting without a projector: sharing your smartphone content onto an arbitrary display</title>
		<link>https://googledata.org/google-research/projecting-without-a-projector-sharing-your-smartphone-content-onto-an-arbitrary-display/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=projecting-without-a-projector-sharing-your-smartphone-content-onto-an-arbitrary-display</link>
		<comments>https://googledata.org/google-research/projecting-without-a-projector-sharing-your-smartphone-content-onto-an-arbitrary-display/#comments</comments>
		<pubDate>Thu, 26 Sep 2013 16:45:00 +0000</pubDate>
		<dc:creator><![CDATA[Research @ Google]]></dc:creator>
				<category><![CDATA[Google Research]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=acadecf94932dbf3be8e9f0cc47f570f</guid>
		<description><![CDATA[<span>Posted by Yang Li, Research Scientist, Google Research</span><br /><br />Previously, we presented <a href="http://googleresearch.blogspot.com/2011/07/what-you-capture-is-what-you-get-new.html">Deep Shot</a>, a system that allows a user to &#8220;capture&#8221; an application (such as Google Maps) running on a remote computer monitor via a smartphone camera and bring the application on the go. Today, we&#8217;d like to discuss how we support the opposite process, i.e., transferring mobile content to a remote display, again using the smartphone camera.<br /><br />Although the computing power of today&#8217;s mobile devices grows at an accelerated rate, the form factor of these devices remains small, which constrains both the input and output bandwidth for mobile interaction. To address this issue, we investigated how to enable users to leverage nearby IO resources to operate their mobile devices. As part of the effort, we developed <a href="http://yangl.org/pdf/openproject.pdf">Open Project</a>, an end-to-end framework that allows a user to &#8220;project&#8221; a native mobile application onto an arbitrary display using a smartphone camera, leveraging the interaction space and input modalities of the display. The display can range from a PC or laptop monitor to a home Internet TV or a public wall-sized display. Via an intuitive, projection-based metaphor, a user can easily share a mobile application by projecting it onto a target display. <br /><br />Open Project is an open, scalable, web-based framework for enabling mobile sharing and collaboration. It can make any computer display projectable instantly, with no deployment required. Developers can add support for Open Project in native mobile apps by simply linking a library, requiring no additional hardware or sensors. Participants in our user studies responded very positively to Open Project-enabled applications for mobile sharing and collaboration.]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by Yang Li, Research Scientist, Google Research</span><br /><br />Previously, we presented <a href="http://googleresearch.blogspot.com/2011/07/what-you-capture-is-what-you-get-new.html">Deep Shot</a>, a system that allows a user to “capture” an application (such as Google Maps) running on a remote computer monitor via a smartphone camera and bring the application on the go. Today, we’d like to discuss how we support the opposite process, i.e., transferring mobile content to a remote display, again using the smartphone camera.<br /><br />Although the computing power of today’s mobile devices grows at an accelerated rate, the form factor of these devices remains small, which constrains both the input and output bandwidth for mobile interaction. To address this issue, we investigated how to enable users to leverage nearby IO resources to operate their mobile devices. As part of the effort, we developed <a href="http://yangl.org/pdf/openproject.pdf">Open Project</a>, an end-to-end framework that allows a user to “project” a native mobile application onto an arbitrary display using a smartphone camera, leveraging the interaction space and input modalities of the display. The display can range from a PC or laptop monitor to a home Internet TV or a public wall-sized display. Via an intuitive, projection-based metaphor, a user can easily share a mobile application by projecting it onto a target display. <br /><br />Open Project is an open, scalable, web-based framework for enabling mobile sharing and collaboration. It can make any computer display projectable instantly, with no deployment required. Developers can add support for Open Project in native mobile apps by simply linking a library, requiring no additional hardware or sensors. Participants in our user studies responded very positively to Open Project-enabled applications for mobile sharing and collaboration.<br /><br /><div style="text-align: center;"><br /></div><div style="text-align: center;"><iframe allowfullscreen="" frameborder="0" height="315" src="http://www.youtube.com/embed/w286qmNVBFc" width="420"></iframe></div>]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-research/projecting-without-a-projector-sharing-your-smartphone-content-onto-an-arbitrary-display/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
<enclosure url="" length="" type="" />
		</item>
		<item>
		<title>Broadening Google Patents</title>
		<link>https://googledata.org/google-research/broadening-google-patents-2/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=broadening-google-patents-2</link>
		<comments>https://googledata.org/google-research/broadening-google-patents-2/#comments</comments>
		<pubDate>Tue, 17 Sep 2013 14:38:00 +0000</pubDate>
		<dc:creator><![CDATA[Research @ Google]]></dc:creator>
				<category><![CDATA[Google Research]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=e356853929792d798eccf2da92236ae9</guid>
		<description><![CDATA[<span>Posted by Jon Orwant, Engineering Manager</span><br /><br /><i>Cross-posted with the <a href="http://googlepublicpolicy.blogspot.com/">US Public Policy Blog</a>, the <a href="http://googlepolicyeurope.blogspot.com/">European Public Policy Blog</a>, and <a href="http://insidesearch.blogspot.com/">Inside Search Blog</a>.</i><br /><br />Last year, we launched two improvements to <a href="https://www.google.com/?tbm=pts">Google Patents</a>: the <a href="http://www.google.com/patents/related">Prior Art Finder</a> and European Patent Office (EPO) patents. Today we&#8217;re happy to announce the addition of documents from four new patent agencies: China, Germany, Canada, and the World Intellectual Property Organization (WIPO). Many of these documents may provide prior art for future patent applications, and we hope their increased discoverability will improve the quality of patents in the U.S. and worldwide.<br /><br />So whether you want to learn about a <a href="https://www.google.com/patents/CN2578211Y">Chinese dual-drive bicycle</a> or a <a href="https://www.google.com/patents/DE102011002125B3">German valve for inflating bicycle tires</a>, attach a <a href="https://www.google.com/patents/CA2053546A1">Canadian trailer to your bike</a>, or read the <a href="https://www.google.com/patents/WO1999048749A1">WIPO application for pedalling with one leg</a>, those and millions of other inventions are now available on Google Patents. <br /><br />Thanks to <a href="http://translate.google.com/">Google Translate</a>, all patents are available in both their original languages and in English, and you can search across the world&#8217;s patents using terms in any of those languages. When there are multiple submission languages, you can move between them with a single click on the tabs at the top of the page, as shown in the screenshot below:<br /><br /><div><a href="http://2.bp.blogspot.com/-IpkC-UK2PNQ/Ujhg_e_ycEI/AAAAAAAAAo0/zqPYCPymNss/s1600/Screen+Shot+2013-09-17+at+7.02.12+AM.png"><img border="0" src="http://2.bp.blogspot.com/-IpkC-UK2PNQ/Ujhg_e_ycEI/AAAAAAAAAo0/zqPYCPymNss/s400/Screen+Shot+2013-09-17+at+7.02.12+AM.png"></a></div><div></div><br />Happy patent searching!]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by Jon Orwant, Engineering Manager</span>  <br /><br /><i>Cross-posted with the <a href="http://googlepublicpolicy.blogspot.com/">US Public Policy Blog</a>, the <a href="http://googlepolicyeurope.blogspot.com/">European Public Policy Blog</a>, and <a href="http://insidesearch.blogspot.com/">Inside Search Blog</a>.</i><br /><br />Last year, we launched two improvements to <a href="https://www.google.com/?tbm=pts">Google Patents</a>: the <a href="http://www.google.com/patents/related">Prior Art Finder</a> and European Patent Office (EPO) patents.  Today we’re happy to announce the addition of documents from four new patent agencies: China, Germany, Canada, and the World Intellectual Property Organization (WIPO).  Many of these documents may provide prior art for future patent applications, and we hope their increased discoverability will improve the quality of patents in the U.S. and worldwide.<br /><br />So if you want to learn about a <a href="https://www.google.com/patents/CN2578211Y">Chinese dual-drive bicycle</a>, a <a href="https://www.google.com/patents/DE102011002125B3">German valve for inflating bicycle tires</a>, attach a <a href="https://www.google.com/patents/CA2053546A1">Canadian trailer to your bike</a>, or read the <a href="https://www.google.com/patents/WO1999048749A1">WIPO application for pedalling with one leg</a>, those and millions of other inventions are now available on Google Patents. <br /><br />Thanks to <a href="http://translate.google.com/">Google Translate</a>, all patents are available in both their original languages and in English, and you can search across the world’s patents using terms in any of those languages.  When there are multiple submission languages, you can move between them with a single click on the tabs at the top of the page, as shown in the screenshot below:<br /><br /><div style="text-align: center;"><a href="http://2.bp.blogspot.com/-IpkC-UK2PNQ/Ujhg_e_ycEI/AAAAAAAAAo0/zqPYCPymNss/s1600/Screen+Shot+2013-09-17+at+7.02.12+AM.png" imageanchor="1"><img border="0" src="http://2.bp.blogspot.com/-IpkC-UK2PNQ/Ujhg_e_ycEI/AAAAAAAAAo0/zqPYCPymNss/s400/Screen+Shot+2013-09-17+at+7.02.12+AM.png" /></a></div><div style="text-align: center;"></div><br />Happy patent searching!]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-research/broadening-google-patents-2/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
<enclosure url="" length="" type="" />
		</item>
		<item>
		<title>We are joining the Open edX platform</title>
		<link>https://googledata.org/google-research/we-are-joining-the-open-edx-platform/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=we-are-joining-the-open-edx-platform</link>
		<comments>https://googledata.org/google-research/we-are-joining-the-open-edx-platform/#comments</comments>
		<pubDate>Tue, 10 Sep 2013 13:00:00 +0000</pubDate>
		<dc:creator><![CDATA[Research @ Google]]></dc:creator>
				<category><![CDATA[Google Research]]></category>
		<category><![CDATA[education]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=0e802478e621d1c008b92aec9d32c26b</guid>
		<description><![CDATA[<span>Posted by Dan Clancy, Director of Research</span><br /><br />A year ago, we released <a href="https://code.google.com/p/course-builder/">Course Builder</a>, an experimental platform for online education at scale. Since then, individuals have created courses on everything from game theory to philanthropy, offered to curious people around the world. Universities and non-profit organizations have used the platform to experiment with MOOCs while maintaining direct relationships with their participants. Google has published a number of courses, including <a href="https://webaccessibility.withgoogle.com/">Introduction to Web Accessibility</a>, which opens for registration today. This platform is helping to deliver on our goal of making education more accessible through technology, and enabling educators to easily teach at scale on top of cloud platform services.<br /><br />Today, Google will begin working with <a href="https://www.edx.org/">edX</a> as a contributor to the open source platform Open edX. We are taking what we learned from Course Builder and applying it to Open edX to further innovate on an open source MOOC platform. We look forward to contributing to edX&#8217;s new site, MOOC.org, a service for online learning that will allow any academic institution, business, or individual to create and host online courses.<br /><br />Google and edX have a shared mission to broaden access to education, and by working together, we can advance toward our goals much faster. In addition, Google, with its breadth of applicable infrastructure and research capabilities, will continue to make contributions to the online education space, <a href="http://googleresearch.blogspot.com/2013/09/a-comparison-of-five-google-online.html">the findings of which</a> will be shared directly with the online education community and the Open edX platform.<br /><br />We support the development of a diverse education ecosystem as learning expands in the online world. Part of that means that educational institutions should easily be able to bring their content online and manage their relationships with their students. Our industry is in the early stages of MOOCs, and lots of experimentation is still needed to find the best ways to meet the educational needs of the world. An open ecosystem with multiple players encourages rapid experimentation and innovation, and we applaud the work going on in this space today.<br /><br />We appreciate the community that has grown around the Course Builder open source project. We will continue to maintain Course Builder, but we are focusing our development efforts on Open edX, and we look forward to seeing edX&#8217;s MOOC.org platform develop. In the future, we will provide an upgrade path from Course Builder to Open edX and MOOC.org. We hope that our continued contributions to open source education projects will enable anyone who builds online education products to benefit from our technology, services, and scale. For learners, we believe that a more open online education ecosystem will make it easier for anyone to pick up new skills and concepts at any time, anywhere.]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by Dan Clancy, Director of Research</span><br /><br />A year ago, we released <a href="https://code.google.com/p/course-builder/">Course Builder</a>, an experimental platform for online education at scale. Since then, individuals have created courses on everything from game theory to philanthropy, offered to curious people around the world. Universities and non-profit organizations have used the platform to experiment with MOOCs while maintaining direct relationships with their participants. Google has published a number of courses, including <a href="https://webaccessibility.withgoogle.com/">Introduction to Web Accessibility</a>, which opens for registration today. This platform is helping to deliver on our goal of making education more accessible through technology, and enabling educators to easily teach at scale on top of cloud platform services.<br /><br />Today, Google will begin working with <a href="https://www.edx.org/">edX</a> as a contributor to the open source platform Open edX. We are taking what we learned from Course Builder and applying it to Open edX to further innovate on an open source MOOC platform. We look forward to contributing to edX’s new site, MOOC.org, a service for online learning that will allow any academic institution, business, or individual to create and host online courses.<br /><br />Google and edX have a shared mission to broaden access to education, and by working together, we can advance toward our goals much faster. In addition, Google, with its breadth of applicable infrastructure and research capabilities, will continue to make contributions to the online education space, <a href="http://googleresearch.blogspot.com/2013/09/a-comparison-of-five-google-online.html">the findings of which</a> will be shared directly with the online education community and the Open edX platform.<br /><br />We support the development of a diverse education ecosystem as learning expands in the online world. Part of that means that educational institutions should easily be able to bring their content online and manage their relationships with their students. Our industry is in the early stages of MOOCs, and lots of experimentation is still needed to find the best ways to meet the educational needs of the world. An open ecosystem with multiple players encourages rapid experimentation and innovation, and we applaud the work going on in this space today.<br /><br />We appreciate the community that has grown around the Course Builder open source project. We will continue to maintain Course Builder, but we are focusing our development efforts on Open edX, and we look forward to seeing edX’s MOOC.org platform develop. In the future, we will provide an upgrade path from Course Builder to Open edX and MOOC.org. We hope that our continued contributions to open source education projects will enable anyone who builds online education products to benefit from our technology, services, and scale. For learners, we believe that a more open online education ecosystem will make it easier for anyone to pick up new skills and concepts at any time, anywhere.]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-research/we-are-joining-the-open-edx-platform/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
<enclosure url="" length="" type="" />
		</item>
		<item>
		<title>Make Your Websites More Accessible to More Users with Introduction to Web Accessibility</title>
		<link>https://googledata.org/google-research/make-your-websites-more-accessible-to-more-users-with-introduction-to-web-accessibility/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=make-your-websites-more-accessible-to-more-users-with-introduction-to-web-accessibility</link>
		<comments>https://googledata.org/google-research/make-your-websites-more-accessible-to-more-users-with-introduction-to-web-accessibility/#comments</comments>
		<pubDate>Tue, 10 Sep 2013 12:45:00 +0000</pubDate>
		<dc:creator><![CDATA[Research @ Google]]></dc:creator>
				<category><![CDATA[Google Research]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=30c0cc6789a8c1a586e2ed1ef2eedbcc</guid>
		<description><![CDATA[<span>Posted by Eve Andersson, Manager, Accessibility Engineering</span><br /><br /><i><span>Cross-posted with the&#160;<a href="http://googledevelopers.blogspot.com/2013/09/make-your-website-more-accessible-to.html">Google Developers Blog</a></span></i><br /><br />You work hard to build clean, intuitive websites. Traffic is high and still climbing, and your website provides a great user experience for all your users, right? Now close your eyes. Is your website still easily navigable? According to the World Health Organization, 285 million people are visually impaired. That&#8217;s more than the populations of <a href="http://www.ons.gov.uk/ons/rel/mro/news-release/census-21---england-and-wales/census-gives-insights-into-characteristics-of-the-population-in-england-and-wales.html">England</a>, <a href="https://www.destatis.de/EN/PressServices/Press/pr/2013/05/PE13_188_121.html;jsessionid=BA2370EBB692E775960C5725FB854DD7.cae2">Germany</a>, and <a href="http://www.stat.go.jp/english/data/kokusei/pdf/20111026.pdf">Japan</a> combined!<br /><br />As the web has continued to evolve, websites have become more interactive and complex, and this has led to reduced accessibility for some users. Fortunately, there are simple techniques you can employ to make your websites more accessible to blind and low-vision users and increase your potential audience. <a href="https://webaccessibility.withgoogle.com/">Introduction to Web Accessibility</a> is Google&#8217;s online course that helps you do just that.<br /><br />You&#8217;ll learn to make easy accessibility updates, starting with your HTML structure, without breaking code or sacrificing a beautiful user experience. You&#8217;ll also learn tips and tricks for inspecting the accessibility of your websites using Google Chrome extensions. Introduction to Web Accessibility runs with support from Google content experts from September 17th to 30th, and is recommended for developers with basic familiarity with HTML, JavaScript, and CSS.<br /><br />There&#8217;s a lot to learn in the realm of web accessibility, and a lot of work to be done to ensure users aren&#8217;t excluded from being able to easily navigate the web. By introducing fundamental tips to improve web usage for users with visual impairments, Introduction to Web Accessibility is a starting point for learning how to build accessibility features into your code. <a href="https://webaccessibility.withgoogle.com/">Registration</a> is now open, so sign up today and help push the web toward becoming truly universally accessible.]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by Eve Andersson, Manager, Accessibility Engineering</span><br /><br /><i><span style="font-size: x-small;">Cross-posted with the&nbsp;<a href="http://googledevelopers.blogspot.com/2013/09/make-your-website-more-accessible-to.html">Google Developers Blog</a></span></i><br /><br />You work hard to build clean, intuitive websites. Traffic is high and still climbing, and your website provides a great user experience for all your users, right? Now close your eyes. Is your website still easily navigable? According to the World Health Organization, 285 million people are visually impaired. That’s more than the populations of <a href="http://www.ons.gov.uk/ons/rel/mro/news-release/census-21---england-and-wales/census-gives-insights-into-characteristics-of-the-population-in-england-and-wales.html">England</a>, <a href="https://www.destatis.de/EN/PressServices/Press/pr/2013/05/PE13_188_121.html;jsessionid=BA2370EBB692E775960C5725FB854DD7.cae2">Germany</a>, and <a href="http://www.stat.go.jp/english/data/kokusei/pdf/20111026.pdf">Japan</a> combined!<br /><br />As the web has continued to evolve, websites have become more interactive and complex, and this has led to reduced accessibility for some users. Fortunately, there are simple techniques you can employ to make your websites more accessible to blind and low-vision users and increase your potential audience. <a href="https://webaccessibility.withgoogle.com/">Introduction to Web Accessibility</a> is Google’s online course that helps you do just that.<br /><br /><iframe allowfullscreen="" frameborder="0" height="480" src="http://www.youtube.com/embed/qSzHhDE4pH0" width="640"></iframe><br /><br />You’ll learn to make easy accessibility updates, starting with your HTML structure, without breaking code or sacrificing a beautiful user experience. You’ll also learn tips and tricks for inspecting the accessibility of your websites using Google Chrome extensions. Introduction to Web Accessibility runs with support from Google content experts from September 17th to 30th, and is recommended for developers with basic familiarity with HTML, JavaScript, and CSS.<br /><br />There’s a lot to learn in the realm of web accessibility, and a lot of work to be done to ensure users aren’t excluded from being able to easily navigate the web. By introducing fundamental tips to improve web usage for users with visual impairments, Introduction to Web Accessibility is a starting point for learning how to build accessibility features into your code. <a href="https://webaccessibility.withgoogle.com/">Registration</a> is now open, so sign up today and help push the web toward becoming truly universally accessible.]]></content:encoded>
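			<!-- Illustrative sketch only: the snippet below is an assumed example of the kind of simple HTML fixes such a course covers (alt text, label association, accessible names for icon-only controls); it is not material from the course itself, and the file name and field names are invented for illustration.
			Meaningful alt text lets screen readers describe the image:
			<img src="signups-chart.png" alt="Bar chart of monthly course signups, January to June 2013" />
			An explicit label/input association is announced when the field gains focus:
			<label for="email">Email address</label>
			<input type="email" id="email" name="email" />
			aria-label gives an accessible name to a control whose only visible content is a symbol:
			<button type="button" aria-label="Close dialog">X</button>
			-->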
			<wfw:commentRss>https://googledata.org/google-research/make-your-websites-more-accessible-to-more-users-with-introduction-to-web-accessibility/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
<enclosure url="" length="" type="" />
		</item>
		<item>
		<title>A Comparison of Five Google Online Courses</title>
		<link>https://googledata.org/google-research/a-comparison-of-five-google-online-courses/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=a-comparison-of-five-google-online-courses</link>
		<comments>https://googledata.org/google-research/a-comparison-of-five-google-online-courses/#comments</comments>
		<pubDate>Thu, 05 Sep 2013 17:15:00 +0000</pubDate>
		<dc:creator><![CDATA[Research @ Google]]></dc:creator>
				<category><![CDATA[Google Research]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=c704646cb2196a1d607328896256f688</guid>
		<description><![CDATA[<span>Posted by Julia Wilkowski, Senior Instructional Designer</span><br /><br />Google has taught five open online courses in the past year, reaching nearly 400,000 interested students. In this post I will share observations from a year&#8217;s worth of these courses. We were particularly surprised by how the size of our courses evolved during the year; how students responded to a non-linear, problem-based MOOC; and the value that many students got out of the courses, even after the courses ended.<br /><br /><b>Observation #1: Course size</b><br />We have seen varying numbers of registered students in the courses. Our first two courses (Power Searching versions one and two) garnered significant interest, with over 100,000 students registering for each course. Our more recent courses have attracted closer to 40,000 students each. It&#8217;s likely that this reflects both a decline in initial interest in MOOCs and a growing awareness that online courses require a significant commitment of time and effort. We&#8217;d like other MOOC providers to share their results so that we can identify overall MOOC patterns.<br /><br /><table align="center" cellpadding="0" cellspacing="0"><tbody><tr><td><a href="http://2.bp.blogspot.com/-OWgicCHZmqc/Uii4SqNJFeI/AAAAAAAAAMk/MmybLA0VCAA/s1600/blogtable.jpg"><img border="0" src="http://2.bp.blogspot.com/-OWgicCHZmqc/Uii4SqNJFeI/AAAAAAAAAMk/MmybLA0VCAA/s1600/blogtable.jpg"></a></td></tr><tr><td><span>*Based on surveys sent only to course completers. Other satisfaction scores represent aggregate survey results sent to all registrants.</span></td></tr></tbody></table><br /><b>Observation #2: Completion rates</b><br />Comparing these five two-week courses, we see that most of them show a completion rate (the number of students who met the course criteria for completion, divided by the total number of registrants) of between 11% and 16%. Advanced Power Searching was an outlier at only 4%. Why? A possible answer can be found by comparing the culminating projects for each course: Power Searching ended with a multiple-choice test, while Advanced Power Searching students completed case studies applying their skills to research problems. After grading their own work, students also had to solve a final search challenge.<br /><br />Advanced Power Searching also differed from all of the other courses in the way it presented content and activities. Power Searching offered videos and activities in a highly structured, linear path; Advanced Power Searching presented students with a selection of challenges followed by supporting lessons. We observed a decreasing number of views on each successive challenge page, similar to the pattern in the linear course (see Figure 1).<br /><table align="center" cellpadding="0" cellspacing="0"><tbody><tr><td><a href="http://4.bp.blogspot.com/-HwrsGxeTrXM/Uii3vSM-5kI/AAAAAAAAAMc/GBDAruX85kQ/s1600/pageviewsPSAPS.jpg"><img border="0" src="http://4.bp.blogspot.com/-HwrsGxeTrXM/Uii3vSM-5kI/AAAAAAAAAMc/GBDAruX85kQ/s1600/pageviewsPSAPS.jpg"></a></td></tr><tr><td><span>Figure 1. Unique page views for Power Searching and Advanced Power Searching</span></td></tr></tbody></table><br />Students who did complete Advanced Power Searching expressed satisfaction with the course: 95% of course-completing students would recommend it to others, compared with 94% of survey respondents from Power Searching. We surmise that the lower completion rate for Advanced Power Searching could be a result of the relative difficulty of the course (it assumed significantly more foundational knowledge than Power Searching), its unstructured nature, or a combination of these and other factors.<br /><br />Even though completion rates seem low when compared with traditional courses, we are excited about the sheer number of students we&#8217;ve reached (over 51,000 earning certificates of completion). If we offered the same content to classrooms of 30 students, it would take over four and a half years of daily classes to teach the same information (51,000 students in classes of 30 means roughly 1,700 class sessions, one per day)!<br /><br /><b>Observation #3: Students have varied goals</b><br />We would also like to move the discussion beyond completion rates. We&#8217;ve noticed that students register for online courses for many different reasons. In Mapping with Google, we asked students to select a goal during registration. We discovered that<br /><ul><li>52% of registrants intended to complete the course</li><li>48% merely wanted to learn a few new things about Google&#8217;s mapping tools</li></ul>Post-course surveys revealed that<br /><ul><li>78% of students achieved the goal they defined at registration</li><li>89% of students learned new features of Google Maps</li><li>76% reported learning new features of Google Earth</li></ul>Though a much smaller percentage of students completed the course requirements, these statistics show that many students attained their learning goals.<br /><br /><b>Observation #4: Continued interest in post-course access</b><br />After each course ended, we kept many of the course materials (videos, activities) available. Though we removed access to the forums, final projects/assessments, and teaching assistants, we have seen significant interest in the content as measured by Google and YouTube Analytics. The Power Searching course pages have generated nearly three million page views since the courses finished; viewers have watched over 160,000 hours (18 years!) of course videos. In the two months since Mapping with Google finished, we have seen over 70,000 unique visitors to the course pages.<br /><br />In all of our courses, we saw strong interest in continued online learning: 96% of Power Searching participants agreed or strongly agreed that they would take another course in a similar format. We have succeeded in teaching tens of thousands of students to be more savvy users of Google tools. Future posts will take an in-depth look at our experiments with self-graded assessments, community elements that enhance learning, and design elements that influence student success.]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by Julia Wilkowski, Senior Instructional Designer</span><br /><br />Google has taught five open online courses in the past year, reaching nearly 400,000 interested students. In this post I will share observations from a year’s worth of these courses. We were particularly surprised by how the size of our courses evolved during the year; how students responded to a non-linear, problem-based MOOC; and the value that many students got out of the courses, even after the courses ended.<br /><br /><b>Observation #1: Course size</b><br />We have seen varying numbers of registered students in the courses. Our first two courses (Power Searching versions one and two) garnered significant interest, with over 100,000 students registering for each course. Our more recent courses have attracted closer to 40,000 students each. It’s likely that this reflects both a decline in initial interest in MOOCs and a growing awareness that online courses require a significant commitment of time and effort. We’d like other MOOC providers to share their results so that we can identify overall MOOC patterns.<br /><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://2.bp.blogspot.com/-OWgicCHZmqc/Uii4SqNJFeI/AAAAAAAAAMk/MmybLA0VCAA/s1600/blogtable.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="http://2.bp.blogspot.com/-OWgicCHZmqc/Uii4SqNJFeI/AAAAAAAAAMk/MmybLA0VCAA/s1600/blogtable.jpg" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;"><span style="font-size: xx-small; font-style: italic;">*Based on surveys sent only to course completers. Other satisfaction scores represent aggregate survey results sent to all registrants.</span></td></tr></tbody></table><br /><b>Observation #2: Completion rates</b><br />Comparing these five two-week courses, we see that most of them show a completion rate (the number of students who met the course criteria for completion, divided by the total number of registrants) of between 11% and 16%. Advanced Power Searching was an outlier at only 4%. Why? A possible answer can be found by comparing the culminating projects for each course: Power Searching ended with a multiple-choice test, while Advanced Power Searching students completed case studies applying their skills to research problems. After grading their own work, students also had to solve a final search challenge.<br /><br />Advanced Power Searching also differed from all of the other courses in the way it presented content and activities. Power Searching offered videos and activities in a highly structured, linear path; Advanced Power Searching presented students with a selection of challenges followed by supporting lessons. We observed a decreasing number of views on each successive challenge page, similar to the pattern in the linear course (see Figure 1).<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://4.bp.blogspot.com/-HwrsGxeTrXM/Uii3vSM-5kI/AAAAAAAAAMc/GBDAruX85kQ/s1600/pageviewsPSAPS.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="http://4.bp.blogspot.com/-HwrsGxeTrXM/Uii3vSM-5kI/AAAAAAAAAMc/GBDAruX85kQ/s1600/pageviewsPSAPS.jpg" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;"><span style="font-size: x-small;">Figure 1. Unique page views for Power Searching and Advanced Power Searching</span></td></tr></tbody></table><br />Students who did complete Advanced Power Searching expressed satisfaction with the course: 95% of course-completing students would recommend it to others, compared with 94% of survey respondents from Power Searching. We surmise that the lower completion rate for Advanced Power Searching could be a result of the relative difficulty of the course (it assumed significantly more foundational knowledge than Power Searching), its unstructured nature, or a combination of these and other factors.<br /><br />Even though completion rates seem low when compared with traditional courses, we are excited about the sheer number of students we’ve reached (over 51,000 earning certificates of completion). If we offered the same content to classrooms of 30 students, it would take over four and a half years of daily classes to teach the same information (51,000 students in classes of 30 means roughly 1,700 class sessions, one per day)!<br /><br /><b>Observation #3: Students have varied goals</b><br />We would also like to move the discussion beyond completion rates. We’ve noticed that students register for online courses for many different reasons. In Mapping with Google, we asked students to select a goal during registration. We discovered that<br /><ul><li>52% of registrants intended to complete the course</li><li>48% merely wanted to learn a few new things about Google’s mapping tools</li></ul>Post-course surveys revealed that<br /><ul><li>78% of students achieved the goal they defined at registration</li><li>89% of students learned new features of Google Maps</li><li>76% reported learning new features of Google Earth</li></ul>Though a much smaller percentage of students completed the course requirements, these statistics show that many students attained their learning goals.<br /><br /><b>Observation #4: Continued interest in post-course access</b><br />After each course ended, we kept many of the course materials (videos, activities) available. Though we removed access to the forums, final projects/assessments, and teaching assistants, we have seen significant interest in the content as measured by Google and YouTube Analytics. The Power Searching course pages have generated nearly three million page views since the courses finished; viewers have watched over 160,000 hours (18 years!) of course videos. In the two months since Mapping with Google finished, we have seen over 70,000 unique visitors to the course pages.<br /><br />In all of our courses, we saw strong interest in continued online learning: 96% of Power Searching participants agreed or strongly agreed that they would take another course in a similar format. We have succeeded in teaching tens of thousands of students to be more savvy users of Google tools. Future posts will take an in-depth look at our experiments with self-graded assessments, community elements that enhance learning, and design elements that influence student success.]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-research/a-comparison-of-five-google-online-courses/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
<enclosure url="" length="" type="" />
		</item>
		<item>
		<title>Google Research Awards: Summer 2013</title>
		<link>https://googledata.org/google-research/google-research-awards-summer-2013/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=google-research-awards-summer-2013</link>
		<comments>https://googledata.org/google-research/google-research-awards-summer-2013/#comments</comments>
		<pubDate>Mon, 12 Aug 2013 19:45:00 +0000</pubDate>
		<dc:creator><![CDATA[Research @ Google]]></dc:creator>
				<category><![CDATA[Google Research]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=61555dfb5d8662747c2faea109d60f63</guid>
		<description><![CDATA[<span>Posted by Maggie Johnson, Director of Education &#38; University Relations</span><br /><br />Another round of the <a href="http://research.google.com/university/relations/research_awards.html">Google Research Awards</a> is complete. This is our biannual open call for proposals on computer science-related topics, including machine learning and structured data, policy, human-computer interaction, and geo/maps. Our grants cover tuition for a graduate student and provide both faculty and students the opportunity to work directly with Google scientists and engineers.<br /><br />This round, we received 550 proposals from 50 countries. After expert reviews and committee discussions, we decided to fund 105 projects. The subject areas that received the highest level of support were human-computer interaction, systems, and machine learning. In addition, 19% of the funding was awarded to universities outside the U.S.<br /><br />We noticed some new areas emerging in this round of proposals: in particular, increased interest in neural networks and accessibility-related projects, along with some innovative ideas in robotics. One project features <a href="http://www.android.com/">Android-based</a> multi-robot systems, which are significantly more complex than single-robot systems. Faculty researchers are also looking to explore novel uses of <a href="http://www.google.com/glass/start/">Google Glass</a>, such as an indoor navigation system for blind users and ways Glass can facilitate social interactions.<br /><br />Congratulations to the well-deserving <a href="https://docs.google.com/a/google.com/viewer?a=v&#38;pid=sites&#38;srcid=ZGVmYXVsdGRvbWFpbnxleHRlcm5hbGltYWdlaG9zdGluZ3JwfGd4OjEwZDVkZDI5OWI2ODc1Njg">recipients of this round&#8217;s awards</a>. If you are interested in applying for the next round (the deadline is October 15), please visit <a href="http://research.google.com/university/relations/research_awards.html">our website</a> for more information.]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by Maggie Johnson, Director of Education &amp; University Relations</span><br /><br />Another round of the <a href="http://research.google.com/university/relations/research_awards.html">Google Research Awards</a> is complete. This is our biannual open call for proposals on computer science-related topics, including machine learning and structured data, policy, human-computer interaction, and geo/maps. Our grants cover tuition for a graduate student and provide both faculty and students the opportunity to work directly with Google scientists and engineers.<br /><br />This round, we received 550 proposals from 50 countries. After expert reviews and committee discussions, we decided to fund 105 projects. The subject areas that received the highest level of support were human-computer interaction, systems, and machine learning. In addition, 19% of the funding was awarded to universities outside the U.S.<br /><br />We noticed some new areas emerging in this round of proposals: in particular, increased interest in neural networks and accessibility-related projects, along with some innovative ideas in robotics. One project features <a href="http://www.android.com/">Android-based</a> multi-robot systems, which are significantly more complex than single-robot systems. Faculty researchers are also looking to explore novel uses of <a href="http://www.google.com/glass/start/">Google Glass</a>, such as an indoor navigation system for blind users and ways Glass can facilitate social interactions.<br /><br />Congratulations to the well-deserving <a href="https://docs.google.com/a/google.com/viewer?a=v&amp;pid=sites&amp;srcid=ZGVmYXVsdGRvbWFpbnxleHRlcm5hbGltYWdlaG9zdGluZ3JwfGd4OjEwZDVkZDI5OWI2ODc1Njg">recipients of this round’s awards</a>. If you are interested in applying for the next round (the deadline is October 15), please visit <a href="http://research.google.com/university/relations/research_awards.html">our website</a> for more information.]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-research/google-research-awards-summer-2013/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
<enclosure url="" length="" type="" />
		</item>
		<item>
		<title>Computer Science Teaching Fellows Starting Up in Charleston, SC</title>
		<link>https://googledata.org/google-research/computer-science-teaching-fellows-starting-up-in-charleston-sc/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=computer-science-teaching-fellows-starting-up-in-charleston-sc</link>
		<comments>https://googledata.org/google-research/computer-science-teaching-fellows-starting-up-in-charleston-sc/#comments</comments>
		<pubDate>Wed, 07 Aug 2013 16:00:00 +0000</pubDate>
		<dc:creator><![CDATA[Research @ Google]]></dc:creator>
				<category><![CDATA[Google Research]]></category>
		<category><![CDATA[education]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=d32eaf606b546a4a77fdd5f39fa45cab</guid>
		<description><![CDATA[<span>Posted by Cameron Fadjo, Program Lead, Computer Science Teaching Fellows</span><br /><br />Google recently launched an exciting new program to ignite interest in computer science (CS) among K12 kids. Based at our&#160;<a href="http://www.google.com/about/datacenters/inside/locations/berkeley-county/index.html">South Carolina data center</a>, the Computer Science Teaching Fellows program is a two-year postgraduate fellowship for new STEM teachers and CS graduates. The goal is to bring computer science and computational thinking to <b>all</b> children, especially underrepresented minorities and girls, and to close the gap between the ever-increasing demand for CS skills and the inadequate supply. We hope to learn what really works and scale those best practices regionally and then nationally.<br /><br />The supply of CS majors in the pipeline has been a concern for many years. In 2007, the computer science education community was alarmed by the decline in CS majors and enrollments in US colleges and universities.<br /><br /><table align="center" cellpadding="0" cellspacing="0"><tbody><tr><td><a href="http://4.bp.blogspot.com/-7x06mb61_x8/UgFHUlGo05I/AAAAAAAAAMM/uaunAptEF8s/s1600/CSmajors_updated.png"><img border="0" height="465" src="http://4.bp.blogspot.com/-7x06mb61_x8/UgFHUlGo05I/AAAAAAAAAMM/uaunAptEF8s/s640/CSmajors_updated.png" width="640"></a></td></tr><tr><td>Source: 2009-2010 CRA Taulbee Survey (<a href="http://www.cra.org/resources/">http://www.cra.org/resources/</a>)</td></tr></tbody></table><br />This prompted the development of several programs and activities to raise awareness of the demand and opportunities for computer scientists, and to spark the interest of K12 students in CS. For example, the <a href="http://nsf.gov/">NSF</a> funded curriculum and professional development around the new <a href="http://www.csprinciples.org/">CS Principles</a> Advanced Placement course. The <a href="http://csta.acm.org/">CSTA</a> published <a href="http://csta.acm.org/Curriculum/sub/K12Standards.html">standards</a> for K12 CS and a <a href="http://csta.acm.org/runningonempty/">report</a> on the limited extent to which schools, districts, and states provide CS instruction to their students. The CS advocacy groups <a href="http://www.computinginthecore.org/">Computing in the Core</a> and <a href="http://code.org/">Code.org</a> have played an instrumental role in adding provisions to the reauthorization of the <a href="http://www2.ed.gov/policy/elsec/leg/esea02/beginning.html">Elementary and Secondary School Act</a> to <a href="http://www.computinginthecore.org/newsroom">support CS education</a>. More generally, we have seen innovations in online learning with <a href="http://en.wikipedia.org/wiki/Massive_open_online_course">MOOCs</a>, <a href="http://en.wikipedia.org/wiki/Machine_learning">machine learning</a> to provide personalized learning experiences, and platforms like <a href="https://www.khanacademy.org/">Khan Academy</a> that allow flipped classrooms.<br /><br />All of these activities represent a convergence in the CS education space, where existing programs are ready for scale and technological advancements can support that scale in innovative ways. Our Teaching Fellows will be testing after-school programs, classroom curricula, and online CS programs to determine what works and why. They&#8217;ll start in the local Charleston area and then spread the best programs and curricula to South Carolina, Georgia, and North Carolina (where we also have large data centers). They are currently preparing programs for the fall semester.<br /><br />We are very excited about the convergence we are seeing in CS education and the potential to bring many more kids into a field that offers not only great career opportunities but also a shot at really making a difference in the world. We&#8217;ll keep you posted on the progress of our Teaching Fellows.]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by Cameron Fadjo, Program Lead, Computer Science Teaching Fellows</span><br /><br />Google recently launched an exciting new program to ignite interest in computer science (CS) among K12 kids. Based at our&nbsp;<a href="http://www.google.com/about/datacenters/inside/locations/berkeley-county/index.html">South Carolina data center</a>, the Computer Science Teaching Fellows program is a two-year postgraduate fellowship for new STEM teachers and CS graduates. The goal is to bring computer science and computational thinking to <b>all</b> children, especially underrepresented minorities and girls, and to close the gap between the ever-increasing demand for CS skills and the inadequate supply. We hope to learn what really works and scale those best practices regionally and then nationally.<br /><br />The supply of CS majors in the pipeline has been a concern for many years. In 2007, the computer science education community was alarmed by the decline in CS majors and enrollments in US colleges and universities.<br /><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://4.bp.blogspot.com/-7x06mb61_x8/UgFHUlGo05I/AAAAAAAAAMM/uaunAptEF8s/s1600/CSmajors_updated.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="465" src="http://4.bp.blogspot.com/-7x06mb61_x8/UgFHUlGo05I/AAAAAAAAAMM/uaunAptEF8s/s640/CSmajors_updated.png" width="640" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Source: 2009-2010 CRA Taulbee Survey (<a href="http://www.cra.org/resources/">http://www.cra.org/resources/</a>)</td></tr></tbody></table><br />This prompted the development of several programs and activities to raise awareness of the demand and opportunities for computer scientists, and to spark the interest of K12 students in CS. For example, the <a href="http://nsf.gov/">NSF</a> funded curriculum and professional development around the new <a href="http://www.csprinciples.org/">CS Principles</a> Advanced Placement course. The <a href="http://csta.acm.org/">CSTA</a> published <a href="http://csta.acm.org/Curriculum/sub/K12Standards.html">standards</a> for K12 CS and a <a href="http://csta.acm.org/runningonempty/">report</a> on the limited extent to which schools, districts, and states provide CS instruction to their students. The CS advocacy groups <a href="http://www.computinginthecore.org/">Computing in the Core</a> and <a href="http://code.org/">Code.org</a> have played an instrumental role in adding provisions to the reauthorization of the <a href="http://www2.ed.gov/policy/elsec/leg/esea02/beginning.html">Elementary and Secondary School Act</a> to <a href="http://www.computinginthecore.org/newsroom">support CS education</a>. More generally, we have seen innovations in online learning with <a href="http://en.wikipedia.org/wiki/Massive_open_online_course">MOOCs</a>, <a href="http://en.wikipedia.org/wiki/Machine_learning">machine learning</a> to provide personalized learning experiences, and platforms like <a href="https://www.khanacademy.org/">Khan Academy</a> that allow flipped classrooms.<br /><br />All of these activities represent a convergence in the CS education space, where existing programs are ready for scale and technological advancements can support that scale in innovative ways. Our Teaching Fellows will be testing after-school programs, classroom curricula, and online CS programs to determine what works and why. They’ll start in the local Charleston area and then spread the best programs and curricula to South Carolina, Georgia, and North Carolina (where we also have large data centers). They are currently preparing programs for the fall semester.<br /><br />We are very excited about the convergence we are seeing in CS education and the potential to bring many more kids into a field that offers not only great career opportunities but also a shot at really making a difference in the world. We’ll keep you posted on the progress of our Teaching Fellows.]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-research/computer-science-teaching-fellows-starting-up-in-charleston-sc/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
<enclosure url="" length="" type="" />
		</item>
	</channel>
</rss>
