Google Americas Faculty Summit Day 1: Mobile Search
July 15th, 2011 | Published in Google Research
On July 14 and 15, we held our seventh annual Faculty Summit for the Americas with our New York City offices hosting for the first time. Over the next few days, we will be bringing you a series of blog posts dedicated to sharing the Summit's events, topics and speakers. - Ed.
Google’s mobile speech team has a lofty goal: recognize any search query spoken in English and return the relevant results. Regardless of whether your accent skews toward a Southern drawl, a Boston twang, or anything in between, spoken searches like “navigate to the Metropolitan Museum,” “call California Pizza Kitchen” or “weather, Scarsdale, New York” should provide immediate responses with a map, the voice of the hostess at your favorite pizza place or an online weather report. The responses must be fast and accurate or people will stop using the tool; given that the number of speech queries has more than doubled over the past year, the team is clearly succeeding.
As a software engineer on the mobile speech team, I took the opportunity of this week’s Faculty Summit to present some of the interesting challenges in developing and implementing mobile voice search. One of the immediate puzzles we have to solve is how to train a computer system to recognize speech queries. There are two aspects to consider: the acoustic model, which captures the sounds of letters and words in a language, and the language model, which in English is essentially grammar, or what allows us to predict the words that follow one another. We can build the language model from a huge amount of data gathered from our query logs. The acoustic model, however, is more challenging.
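To make the language-model idea concrete, here is a minimal sketch (not Google’s production system) of how word-following statistics can be estimated from a log of text queries. The toy corpus, the bigram counting, and the unsmoothed probability estimate are all illustrative assumptions.

```python
from collections import defaultdict

def train_bigram_model(queries):
    """Count how often each word follows another across a corpus of queries."""
    counts = defaultdict(lambda: defaultdict(int))
    for query in queries:
        words = ["<s>"] + query.lower().split() + ["</s>"]
        for prev, curr in zip(words, words[1:]):
            counts[prev][curr] += 1
    return counts

def next_word_probability(counts, prev, curr):
    """P(curr | prev) estimated directly from counts, with no smoothing."""
    total = sum(counts[prev].values())
    return counts[prev][curr] / total if total else 0.0

# Toy "query log" standing in for the real data.
log = [
    "weather scarsdale new york",
    "navigate to the metropolitan museum",
    "weather new york",
]
model = train_bigram_model(log)
print(next_word_probability(model, "new", "york"))  # 1.0 in this toy corpus
```

A production language model uses far higher-order statistics, smoothing, and vastly more data, but the underlying question is the same: given the words so far, which word is likely to come next?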
To build our acoustic model, we could use “supervised learning”: collect 100+ hours of audio from search queries, transcribe and label that data by hand, and use it to train the system that translates a spoken query into a written one. This approach works fairly well, but the model only improves when more audio is manually transcribed, which does not scale. Thus, we use “unsupervised learning,” continuously adding more audio data to our training set as users make speech queries.
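The post does not spell out how new, untranscribed audio is folded in; a common approach in the speech literature is confidence-filtered self-training, where the current recognizer transcribes incoming audio and only high-confidence transcripts are added to the training set. The sketch below assumes that approach; the recognizer stub, the audio placeholder, and the threshold are hypothetical.

```python
CONFIDENCE_THRESHOLD = 0.9  # assumed cutoff for trusting a machine transcript

def recognize(audio):
    """Hypothetical recognizer: returns (transcript, confidence)."""
    return ("call california pizza kitchen", 0.95)  # stubbed result

def self_train(new_audio_batch, training_set):
    """Grow the acoustic-model training set from live traffic: keep an
    utterance only when the recognizer is confident in its own transcript."""
    for audio in new_audio_batch:
        transcript, confidence = recognize(audio)
        if confidence >= CONFIDENCE_THRESHOLD:
            training_set.append((audio, transcript))
    return training_set

training_set = []
training_set = self_train(["<audio bytes>"], training_set)
print(len(training_set))  # 1: the confident transcript was kept
```

The appeal of this style of training is exactly what the paragraph describes: the model keeps improving as query volume grows, without requiring anyone to transcribe the new audio by hand.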
Given the scale of this system, another interesting challenge is testing accuracy. The traditional approach is to have human testers run assessments. Over the past year, however, we have determined that our automated system matches or exceeds the accuracy of our human testers, so we’ve decided to create a new method for automated testing at scale, a project we are working on now.
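The post does not say which metric the automated tests use; the standard one in speech recognition is word error rate (WER), the word-level edit distance between a reference transcript and the recognizer’s hypothesis, divided by the length of the reference. A minimal computation, offered only as an illustration, looks like this:

```python
def word_error_rate(reference, hypothesis):
    """WER: word-level edit distance divided by the number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance over word sequences.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)

print(word_error_rate("weather scarsdale new york",
                      "weather scarsdale near york"))  # 0.25
```

With a metric like this in hand, testing at scale becomes a matter of running the recognizer over large batches of held-out queries and tracking the score automatically rather than relying on human judges.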
The current voice search system is trained on over 230 billion words and has a one-million-word vocabulary, meaning it understands all the different contexts in which those one million words can be used. Training and data processing require multiple CPU-decades, plus a significant amount of storage, so this is an area where Google’s large infrastructure is essential. It’s exciting to be a part of such cutting-edge research, and the Faculty Summit was an excellent opportunity to share our latest innovations with people who are equally inspired by this area of computer science.