Plus codes: A new way to help pinpoint places on the map
August 12th, 2015 | by Lat Long | published in Google Earth
Posted by Rasťo Šrámek, Software Engineer, Google Maps
August 12th, 2015 | by Reto Meier | published in Google Android
Posted by Roman Nurik, Design Advocate, and Richard The, Google Creative Lab
Android was created as an open and flexible platform, giving people more ways to come together to imagine and create. This spirit of invention has allowed developers to push the boundaries of mobile development and has helped make Android the go-to platform for creative projects in more places—from phones, to tablets, to watches, and beyond. We set out to find a way to celebrate the creative, experimental Android work of developers everywhere and inspire more developers to get creative with technology and code.
Today, we’re excited to launch Android Experiments: a showcase of inspiring projects on Android and an open invitation for all developers to submit their own experiments to the gallery.
The 20 initial experiments show a broad range of creative work, from camera experiments to innovative Android Wear apps to hardware hacks to cutting-edge OpenGL demos. All are built using platforms such as the Android SDK and NDK, Android Wear, the IOIO board, Cinder, Processing, OpenFrameworks and Unity. Each project creatively examines, in ways small and big, how we think about the devices we interact with every day.
Today is just the beginning as we’re opening up experiment submissions to creators everywhere. Whether you’re a student just starting out or you’ve been at it for a while, and no matter the framework your experiment uses or the device it runs on, Android Experiments is open to everybody.
Check out Android Experiments to view the completed projects, or to submit one of your own. While we can’t post every submission, we’d love to see what you’ve created.
Follow along to see what others build at AndroidExperiments.com.
August 11th, 2015 | by Rob Newton | published in Google Adwords
We hope you’re as excited as we are about last week’s launch of the Report Editor. This powerful new AdWords tool lets you explore your account data in brand new ways from within your browser.
On the heels of that update, you should also be sure to check out our brand new guide to AdWords reporting: Special Reports. Inside the guide you’ll find advice about how to surface the right insights in your account and how to turn those insights into action. Specifically, learn how to:
There are plenty of example reports throughout that are intended to inspire you as you put together your own analyses. Whether you run those analyses via the Report Editor or any of the other ways to report on AdWords performance, find actionable insights through smart reporting. Happy analyzing.
Want to stay on top of even more Best Practices? Sign up to receive our monthly newsletter.
Posted by Matt Lawson, Director, Performance Ads Marketing
August 11th, 2015 | by Research Blog | published in Google Research
Posted by Françoise Beaufays, Research Scientist
Over the past several years, deep learning has shown remarkable success on some of the world’s most difficult computer science challenges, from image classification and captioning to translation to model visualization techniques. Recently we announced improvements to Google Voice transcription using Long Short-term Memory Recurrent Neural Networks (LSTM RNNs)—yet another place neural networks are improving useful services. We thought we’d give a little more detail on how we did this.
Since it launched in 2009, Google Voice transcription had used Gaussian Mixture Model (GMM) acoustic models, the state of the art in speech recognition for 30+ years. Sophisticated techniques like adapting the models to the speaker’s voice augmented this relatively simple modeling method.
Then around 2012, Deep Neural Networks (DNNs) revolutionized the field of speech recognition. These multi-layer networks distinguish sounds better than GMMs by using “discriminative training,” differentiating phonetic units instead of modeling each one independently.
But things really improved rapidly with Recurrent Neural Networks (RNNs), and especially LSTM RNNs, first launched in Android’s speech recognizer in May 2012. Compared to DNNs, LSTM RNNs have additional recurrent connections and memory cells that allow them to “remember” the data they’ve seen so far—much as you interpret the words you hear based on previous words in a sentence.
By then, Google’s old voicemail system, still using GMMs, was far behind the new state of the art. So we decided to rebuild it from scratch, taking advantage of the successes demonstrated by LSTM RNNs. But there were some challenges.
[Image: An LSTM memory cell, showing the gating mechanisms that allow it to store and communicate information. Image credit: Alex Graves]
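To make the gating concrete, here is a minimal sketch of a single LSTM cell step. It is an illustration only, not Google's production model: the scalar weights are made up, biases and peephole connections are omitted, and real cells use vector-valued gates with learned parameters.

```cpp
// One LSTM cell step with scalar, hand-picked weights (hypothetical values);
// real models use learned weight matrices and biases.
#include <cmath>
#include <cstdio>

struct LSTMCell {
  double wi_x, wi_h;  // input gate weights (input, previous output)
  double wf_x, wf_h;  // forget gate weights
  double wo_x, wo_h;  // output gate weights
  double wc_x, wc_h;  // candidate state weights
  double c = 0.0;     // cell state: the "memory"
  double h = 0.0;     // output, fed back on the next step

  static double sigmoid(double z) { return 1.0 / (1.0 + std::exp(-z)); }

  double step(double x) {
    double i = sigmoid(wi_x * x + wi_h * h);         // how much new info to admit
    double f = sigmoid(wf_x * x + wf_h * h);         // how much old memory to keep
    double o = sigmoid(wo_x * x + wo_h * h);         // how much state to expose
    double c_cand = std::tanh(wc_x * x + wc_h * h);  // candidate new content
    c = f * c + i * c_cand;                          // update the memory
    h = o * std::tanh(c);                            // gated output
    return h;
  }
};

int main() {
  LSTMCell cell{0.5, 0.4, 0.6, 0.3, 0.7, 0.2, 0.8, 0.1};
  const double inputs[] = {1.0, 0.5, -0.5};
  for (double x : inputs) std::printf("output: %f\n", cell.step(x));
  return 0;
}
```

The cell-state update `c = f * c + i * c_cand` is what lets the network carry information across many time steps: the forget gate decides how much of the old state survives, and the input gate decides how much new content gets written.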
There’s more to speech recognition than recognizing individual sounds in the audio: sequences of sounds need to match existing words, and sequences of words should make sense in the language. This is called “language modeling.” Language models are typically trained over very large corpora of text, often orders of magnitude larger than the acoustic data. It’s easy to find lots of text, but not so easy to find sources that match naturally spoken sentences. Shakespeare’s plays in 17th-century English won’t help on voicemails.
We decided to retrain both the acoustic and language models, and to do so using existing voicemails. We already had a small set of voicemails users had donated for research purposes and that we could transcribe for training and testing, but we needed much more data to retrain the language models. So we asked our users to donate their voicemails in bulk, with the assurance that the messages wouldn’t be looked at or listened to by anyone—only to be used by computers running machine learning algorithms. But how does one train models from data that’s never been human-validated or hand-transcribed?
We couldn’t just use our old transcriptions, because they were already tainted with recognition errors—garbage in, garbage out. Instead, we developed a delicate iterative pipeline to retrain the models. Using improved acoustic models, we could recognize existing voicemails offline to get newer, better transcriptions that the language models could be retrained on, and with better language models we could recognize the same data again, and repeat the process. Step by step, the recognition error rate dropped, finally settling at roughly half what it was with the original system! That was an excellent surprise.
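The shape of that loop is simple, as the sketch below shows. The model types and the Recognize, RetrainLanguageModel and MeasureErrorRate helpers are hypothetical stubs standing in for real training infrastructure, not Google's actual pipeline.

```cpp
// A sketch of the iterative self-retraining loop: recognize, retrain the
// language model on the improved transcripts, repeat until the error rate
// stops improving. All types and helpers here are hypothetical stand-ins.
#include <string>
#include <vector>

struct AcousticModel {};
struct LanguageModel {};
using Transcripts = std::vector<std::string>;

// Stub: run the recognizer offline over the donated voicemail corpus.
Transcripts Recognize(const AcousticModel&, const LanguageModel&) { return {}; }
// Stub: retrain the language model on the newer, better transcriptions.
LanguageModel RetrainLanguageModel(const Transcripts&) { return {}; }
// Stub: score against the small set of hand-transcribed test voicemails.
double MeasureErrorRate(const AcousticModel&, const LanguageModel&) { return 0.0; }

int main() {
  AcousticModel acoustic;   // the improved LSTM acoustic model
  LanguageModel language;   // starts from the old language model
  double previous = 1e9;
  for (;;) {
    Transcripts transcripts = Recognize(acoustic, language);
    language = RetrainLanguageModel(transcripts);
    double error = MeasureErrorRate(acoustic, language);
    if (error >= previous) break;  // stop once the error rate plateaus
    previous = error;
  }
  return 0;
}
```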
There were other (not so positive) surprises too. For example, sometimes the recognizer would skip entire audio segments; it felt as if it was falling asleep and waking up a few seconds later. It turned out that the acoustic model would occasionally get into a “bad state” where it would think the user was not speaking anymore and what it heard was just noise, so it stopped outputting words. When we retrained on that same data, we’d think all those spoken sounds should indeed be ignored, reinforcing that the model should do it even more. It took careful tuning to get the recognizer out of that state of mind.
It was also tough to get punctuation right. The old system relied on hand-crafted rules or “grammars,” which, by design, can’t easily take textual context into account. For example, in an early test our algorithms transcribed the audio “I got the message you left me” as “I got the message. You left me.” To try and tackle this, we again tapped into neural networks, teaching an LSTM to insert punctuation at the right spots. It’s still not perfect, but we’re continually working on ways to improve our accuracy.
In speech recognition as in many other complex services, neural networks are rapidly replacing previous technologies. There’s always room for improvement of course, and we’re already working on new types of networks that show even more promise!
August 11th, 2015 | by Kim T | published in Google DoubleClick
As a brand trying to reach consumers in today’s increasingly fragmented media landscape, it is critical that you understand the impact of your ads on brand metrics such as awareness and consideration.
Viewability is the starting point, an initial understanding of whether the ad had a chance to be seen. We have talked before about why measuring the viewability of advertising matters.
In December 2014, we shared insights on the state of display ad viewability across the web. As a continuation of that effort, in May we released new insights from our video ad platforms, including YouTube, to start the discussion about the state of video ad viewability.
We wanted to take this research a step further, by analyzing the relationship between viewability and brand metrics.
To do so, we took our Brand Lift solution, which gives you insights into what impact your ads have on the consumer journey – from awareness, to ad recall, to brand interest – and tied the data to viewability metrics from our Active View technology for a set of YouTube TrueView ads. By connecting these two solutions, we were able to draw out some insights about the relationship between viewability and brand metrics.
When it comes to brand metrics, ad recall is a foundation for measuring the impact of your ad. As a brand advertiser, knowing if your ad breaks through with users is a key first step to understanding the overall impact of an ad on a suite of brand metrics. In this analysis, we were able to analyze how being able to hear and see your ad affected a user’s ability to recall your ad.
Our data shows that users exposed to even one aspect of your video ad (audio or video only) exhibit a significant lift in ad recall. However, the full immersive experience of sight, sound and motion delivers more ad recall than either audio or video alone. In fact, the impact on ad recall was 23% higher when users were exposed to ads with audio and video together versus ads with audio alone.
Time in view also plays a large role when it comes to moving the needle on brand awareness and consideration. We recently introduced the ability for Active View users to measure average viewable time – the average time, in seconds, a given ad appeared on screen – in DoubleClick Bid Manager. By connecting these measurements, we can see the relationship between viewable time and brand metrics.
We found that there is a consistent relationship between how long an ad is viewable and increases in brand awareness and consideration. The longer a user views your ad, the higher the lift in these two important brand metrics:
These results prompt you to think about your brand advertising in a few important ways:
This is just the beginning of understanding what impacts brand metrics for video ads. As brands look to measure the effectiveness of their digital video advertising, a continued understanding of what factors drive brand metrics will be crucial to more effective brand spend.
Read further research on the impact of online video.
To read all of our research on viewability, check out thinkwithgoogle.com/viewability.
To see how viewability is measured, visit our interactive Active View demo.
Posted by Sanaz Ahari, Group Product Manager, Brand Measurement, Google
August 10th, 2015 | by Google Blogs | published in Google Blog
As Sergey and I wrote in the original founders letter 11 years ago, “Google is not a conventional company. We do not intend to become one.” As part of that, we also said that you could expect us to make “smaller bets in areas that might seem very speculative or even strange when compared to our current businesses.” From the start, we’ve always strived to do more, and to do important and meaningful things with the resources we have.
We did a lot of things that seemed crazy at the time. Many of those crazy things now have over a billion users, like Google Maps, YouTube, Chrome, and Android. And we haven’t stopped there. We are still trying to do things other people think are crazy but we are super excited about.
We’ve long believed that over time companies tend to get comfortable doing the same thing, just making incremental changes. But in the technology industry, where revolutionary ideas drive the next big growth areas, you need to be a bit uncomfortable to stay relevant.
Our company is operating well today, but we think we can make it cleaner and more accountable. So we are creating a new company, called Alphabet (http://abc.xyz). I am really excited to be running Alphabet as CEO with help from my capable partner, Sergey, as President.
What is Alphabet? Alphabet is mostly a collection of companies, the largest of which, of course, is Google. This newer Google is a bit slimmed down, with the companies that are pretty far afield of our main Internet products contained in Alphabet instead. What do we mean by far afield? Good examples are our health efforts: Life Sciences (which works on the glucose-sensing contact lens) and Calico (focused on longevity). Fundamentally, we believe this allows us more management scale, as we can run things independently that aren’t very related. Alphabet is about businesses prospering through strong leaders and independence. In general, our model is to have a strong CEO who runs each business, with Sergey and me in service to them as needed. We will rigorously handle capital allocation and work to make sure each business is executing well. We’ll also make sure we have a great CEO for each business, and we’ll determine their compensation. In addition, with this new structure we plan to implement segment reporting for our Q4 results, where Google financials will be provided separately from those for the rest of Alphabet businesses as a whole.
This new structure will allow us to keep tremendous focus on the extraordinary opportunities we have inside of Google. A key part of this is Sundar Pichai. Sundar has been saying the things I would have said (and sometimes better!) for quite some time now, and I’ve been tremendously enjoying our work together. He has really stepped up since October of last year, when he took on product and engineering responsibility for our Internet businesses. Sergey and I have been super excited about his progress and dedication to the company. And it is clear to us and our board that it is time for Sundar to be CEO of Google. I feel very fortunate to have someone as talented as he is to run the slightly slimmed down Google and this frees up time for me to continue to scale our aspirations. I have been spending quite a bit of time with Sundar, helping him and the company in any way I can, and I will of course continue to do that. Google itself is also making all sorts of new products, and I know Sundar will always be focused on innovation — continuing to stretch boundaries. I know he deeply cares that we can continue to make big strides on our core mission to organize the world’s information. Recent launches like Google Photos and Google Now using machine learning are amazing progress. Google also has some services that are run with their own identity, like YouTube. Susan is doing a great job as CEO, running a strong brand and driving incredible growth.
Sergey and I are seriously in the business of starting new things. Alphabet will also include our X lab, which incubates new efforts like Wing, our drone delivery effort. We are also stoked about growing our investment arms, Ventures and Capital, as part of this new structure.
Alphabet Inc. will replace Google Inc. as the publicly traded entity, and all shares of Google will automatically convert into the same number of shares of Alphabet, with all of the same rights. Google will become a wholly owned subsidiary of Alphabet. Our two classes of shares will continue to trade on Nasdaq as GOOGL and GOOG.
For Sergey and me this is a very exciting new chapter in the life of Google — the birth of Alphabet. We liked the name Alphabet because it means a collection of letters that represent language, one of humanity’s most important innovations, and is the core of how we index with Google search! We also like that it means alpha-bet (Alpha is investment return above benchmark), which we strive for! I should add that we are not intending for this to be a big consumer brand with related products–the whole point is that Alphabet companies should have independence and develop their own brands.
We are excited about…
What could be better? No wonder we are excited to get to work with everyone in the Alphabet family. Don’t worry, we’re still getting used to the name too!
Posted by Larry Page, CEO
August 10th, 2015 | by Open Source Programs Office | published in Google Open Source
Over the weekend, we released Shaderc: a library and command-line tool for translating graphics shaders from GLSL into SPIR-V. It is a wrapper around Glslang, the open source reference compiler for GLSL published by the Khronos Group. Shaderc is d…
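As a rough illustration of the library side (as opposed to the command-line tool), a compile with Shaderc's C API might look like the sketch below; it assumes the shaderc_* entry points as published in the project's repository, and details may differ from the initial release.

```cpp
// Minimal sketch: compile a GLSL vertex shader to SPIR-V via Shaderc's C API.
// Assumes the shaderc_* functions from the project's public header; build by
// linking against libshaderc.
#include <shaderc/shaderc.h>
#include <cstdio>
#include <cstring>

int main() {
  const char* source =
      "#version 310 es\n"
      "void main() { gl_Position = vec4(0.0); }\n";

  shaderc_compiler_t compiler = shaderc_compiler_initialize();
  shaderc_compilation_result_t result = shaderc_compile_into_spv(
      compiler, source, std::strlen(source),
      shaderc_glsl_vertex_shader,  // shader stage
      "shader.vert",               // input name, used in error messages
      "main",                      // entry point
      nullptr);                    // default compile options

  if (shaderc_result_get_compilation_status(result) !=
      shaderc_compilation_status_success) {
    std::fprintf(stderr, "%s\n", shaderc_result_get_error_message(result));
  } else {
    // shaderc_result_get_bytes() would return the SPIR-V binary itself.
    std::printf("compiled %zu bytes of SPIR-V\n",
                shaderc_result_get_length(result));
  }

  shaderc_result_release(result);
  shaderc_compiler_release(compiler);
  return 0;
}
```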
August 10th, 2015 | by Jane Smith | published in Google Apps
A new version of the Hangouts Android app is coming to Google Play this week. New features include: Custom status messages: As launched previously for web and iOS, people using the Hangouts Android app can now add custom status messages and see the stat…
August 10th, 2015 | by Google Blogs | published in Google Blog
The best conversations just flow: you barely have to think about what to say. With Hangouts, we want to help you stay in the moment, no matter what device you’re using or how you’re getting your voice across, from texting to talking to video. So we’ve been hard at work on big improvements to make Hangouts faster, simpler, and easier on the eyes.
Today, we’re releasing Google Hangouts 4.0 for Android, with a fresh new look, improved speed, and features that make it even easier to reach people the way you want to reach them.
[Image: New ways to compose, attach and use your watch make Hangouts simpler than ever]
Here’s what’s new:
And more…
Make affordable calls from the Hangouts Dialer, now with Outbound Caller ID (so your friends won’t get any more calls from the great “Unknown”). Receive and reply to group MMS messages in Hangouts using Google Voice. Use Hangouts on Android Wear when you really have an eye on the time (just say, “Ok Google, send a Hangouts message,” into your watch). And if you’re not around but want friends to know what’s on your mind? Custom status messages are back.
The new Hangouts starts rolling out to Android users today and will be available in the Play Store (iOS users recently got many of the same updates). We hope you enjoy the new look and feel. And we look forward to keeping the conversation flowing!
Posted by Amit Fulay, Product Manager, Communications
August 10th, 2015 | by Reto Meier | published in Google Android
Posted by Shannon Woods, Technical Program Manager
Developers of games and 3D graphics applications have one key challenge to meet: How complex a scene can they draw in a small fraction of a second? Much of the work in graphics development goes into organizing data so it can be efficiently consumed by the GPU for rendering. But even the most careful developers can hit unforeseen bottlenecks, in part because the drivers for some graphics processors may reorganize all of that data before it can actually be processed. The APIs used to control these drivers are also not designed for multi-threaded use, requiring synchronization with locks around calls that could be more efficiently done in parallel. All of this results in CPU overhead, which consumes time and power that you’d probably prefer to spend drawing your scene.
In order to address some of the sources of CPU overhead and provide developers with more explicit control over rendering, we’ve been working to bring a new 3D rendering API, Vulkan™, to Android. Like OpenGL™ ES, Vulkan is an open standard for 3D graphics and rendering maintained by Khronos. Vulkan is being designed from the ground up to minimize CPU overhead in the driver, and allow your application to control GPU operation more directly. Vulkan also enables better parallelization by allowing multiple threads to perform work such as command buffer construction at once.
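As a sketch of what that parallelism looks like in practice, the snippet below records command buffers on several threads at once. It is written against the Vulkan API as it eventually shipped (the spec was unreleased when this was posted), and it omits instance, device, pipeline and render pass setup.

```cpp
// Parallel command buffer recording: each thread owns its command pool, so
// no locks are needed. Device/queue setup and draw state are omitted.
#include <vulkan/vulkan.h>
#include <thread>
#include <vector>

void RecordDrawCalls(VkDevice device, uint32_t queueFamilyIndex,
                     VkCommandBuffer* outBuffer) {
  // Command pools are externally synchronized, so each thread gets its own.
  VkCommandPoolCreateInfo poolInfo = {};
  poolInfo.sType = VK_STRUCTURE_TYPE_COMMAND_POOL_CREATE_INFO;
  poolInfo.queueFamilyIndex = queueFamilyIndex;
  VkCommandPool pool;
  vkCreateCommandPool(device, &poolInfo, nullptr, &pool);

  VkCommandBufferAllocateInfo allocInfo = {};
  allocInfo.sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_ALLOCATE_INFO;
  allocInfo.commandPool = pool;
  allocInfo.level = VK_COMMAND_BUFFER_LEVEL_PRIMARY;
  allocInfo.commandBufferCount = 1;
  vkAllocateCommandBuffers(device, &allocInfo, outBuffer);

  VkCommandBufferBeginInfo beginInfo = {};
  beginInfo.sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO;
  vkBeginCommandBuffer(*outBuffer, &beginInfo);
  // ... vkCmdBindPipeline / vkCmdDraw calls for this thread's slice ...
  vkEndCommandBuffer(*outBuffer);
}

void BuildFrame(VkDevice device, uint32_t queueFamilyIndex) {
  std::vector<VkCommandBuffer> buffers(4);
  std::vector<std::thread> workers;
  for (size_t i = 0; i < buffers.size(); ++i)
    workers.emplace_back(RecordDrawCalls, device, queueFamilyIndex,
                         &buffers[i]);
  for (auto& w : workers) w.join();
  // The finished buffers would then be submitted together via vkQueueSubmit.
}
```

Giving every thread its own command pool is the key design point: pools are externally synchronized objects in Vulkan, so per-thread pools are what make lock-free recording possible.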
To make it easier to write an application once that works across a variety of devices, Android 5.0 Lollipop significantly expanded the Android Compatibility Test Suite (CTS) with over fifty thousand new tests for OpenGL ES, and many more have been added since. This provides an extensive open source test suite for identifying problems in drivers so that they can be fixed, creating a more robust and reliable experience for both developers and end users. For Vulkan, we’ll not only develop similar tests for use in the Android CTS, but we’ll also contribute them to Khronos for use in Vulkan’s own open source Conformance Test Suite. This will enable Khronos to test Vulkan drivers across platforms and hardware, and improve the 3D graphics ecosystem as a whole.
We’ll be working hard to help create, test, and ship Vulkan, but at the same time, we’re also going to contribute to and support OpenGL ES. As a developer, you’ll be able to choose which API is right for you: the simplicity of OpenGL ES, or the explicit control of Vulkan. We’re committed to providing an excellent developer experience, no matter which API you choose.
Vulkan is still under development, but you’ll be able to find specifications, tests, and tools once they are released at http://www.khronos.org/vulkan.
August 7th, 2015 | by Google Blogs | published in Google Blog
Has it been a week or what? It most definitely has, which means it’s time to kick back, relax and read on as we dig into Google’s trending searches list to find out what the world wanted to learn more about this week.
The Grand Old Party
With 10 candidates on stage for the prime-time event, Thursday night’s Republican presidential debate did seem quite a bit like a party. An estimated 24 million people tuned in to watch the broadcast on the Fox News Channel (a record for the network, and cable TV, it turns out), and searches for “Republican debate” topped 5 million.
Not surprisingly, the candidates themselves generated some serious search action. Donald Trump claimed the “most searched” position in the days before and hours after the event, and held on to the spot for nearly all of the debate itself, save a few appearances from Ben Carson, Carly Fiorina and Lindsey Graham.
See a full-size version of this incredible data viz here. You can also explore the most searched issues during the debate as well — education, taxes and abortion all took turns in the top spot.
Jon-Voyage
All good things come to an end, and this week the world said goodbye to Jon Stewart — at least as host of The Daily Show. Airing just moments after a debate that was ripe for Stewart’s particular brand of editorializing, Thursday night’s show was his last. Searches for the (now former) host topped 500,000 this week.
Check out another cool data viz — this one on search interest in Jon Stewart across the United States — right here.
The Doctor is in
Artist turned producer turned mogul Dr. Dre gave us all a gift this week: his first new album in 16 years. Compton, which Dre is calling his grand finale, is self-reflective and grand in all the best ways, and dripping with appearances and collaborations from generations of other artists. It also brought in an impressive 200,000 searches.
Makeups and breakups
Continuing what seems like a very common theme this summer, the week’s trending searches had no shortage of celebrity couple gossip. Jennifer Aniston, who pulled off a surprise backyard wedding to Justin Theroux, made the list with 200,000+ searches. The world panicked (and searched 100,000+ times) following news that Will Smith and Jada Pinkett Smith were divorcing — news that turned out not to be true at all, according to the couple. Reba McEntire, however, is definitely separating from her husband (100,000+ searches for that news), as are Gwen Stefani and Gavin Rossdale (200,000+ searches) — and there’s no doubt about that one, either.
Posted by Megan Slack, who searched this week for [japanese candy delivery]
August 7th, 2015 | by Google Testing Bloggers | published in Google Testing
Posted by Anthony Vallone on behalf of the GTAC Committee
The deadline to apply for GTAC 2015 is this Monday, August 10th, 2015. There is a great deal of interest in both attending and speaking, and we’ve received many outstanding proposals. However, it’s not too late to submit your proposal for consideration. If you would like to speak or attend, be sure to complete the form by Monday.
We will be making regular updates to the GTAC site (developers.google.com/gtac/2015/) over the next several weeks, and you can find conference details there.
If you have already signed up to attend or speak, we will contact you directly by mid-September.