November 4th, 2010 | Published in Google Research
The TV is a growing source of online audio-video content that you select by searching, and entering a search string character by character with a traditional remote control and on-screen keyboard is extremely tiresome. People have been working on better ways to search on the TV, ranging from small keyboards to voice input to gestures that let the TV know what you want. But the traditional left-right-up-down clicker still dominates as the family room input device. To enter the letters of a show's title, you click over and over until the highlight reaches the desired letter on the on-screen keyboard, then hit enter to select it. You repeat this mind-numbingly slow process until you have typed your whole query, or at least enough letters for the system to put up a list of suggested completions. Can we instead use a Google AutoComplete-style recommendation model and a novel interface to make character entry less painful?
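To see why clicker entry is so slow, you can count button presses directly: each letter costs the moves needed to reach it on the keyboard grid plus one press to select. The sketch below is only an illustration with a hypothetical 6-column alphabetical layout, not the layout of any real TV keyboard:

```python
import string

COLS = 6
# Hypothetical on-screen keyboard: a-z plus space, laid out row by row.
KEYS = string.ascii_lowercase + " "

def pos(ch):
    """Row and column of a character on the hypothetical grid."""
    i = KEYS.index(ch)
    return divmod(i, COLS)

def presses(text, start="a"):
    """Total button presses: Manhattan-distance moves plus one select per letter."""
    total, cur = 0, start
    for ch in text:
        r1, c1 = pos(cur)
        r2, c2 = pos(ch)
        total += abs(r1 - r2) + abs(c1 - c2) + 1  # moves + enter
        cur = ch
    return total

print(presses("glee"))  # 12 presses for a four-letter title on this layout
```

Even a short title costs a dozen presses on this toy layout, which is the cost any prediction scheme is trying to cut.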
We have developed an interaction model that reduces the distance to the predicted next letter without scrambling or moving letters on the underlying keyboard (which is annoying and increases the time it takes to find the next letter). We reuse the highlight ring around the currently selected letter and fill it with 4 possible characters that might be next, but we do not change the underlying keyboard layout. With 4 slots to suggest the next letter and a good prediction model trained on the target corpus, the next letter is often right where you are looking and just a click away.
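The paper describes the actual prediction model; as a rough illustration of the idea, a simple character bigram model trained on a corpus of titles can rank the four most likely next characters to fill the highlight ring. The corpus and function names here are hypothetical, a minimal sketch rather than the QuickSuggest implementation:

```python
from collections import Counter, defaultdict

def train_bigram_model(corpus):
    """For each character, count which characters tend to follow it."""
    counts = defaultdict(Counter)
    for text in corpus:
        for a, b in zip(text, text[1:]):
            counts[a][b] += 1
    return counts

def suggest_next(model, prefix, k=4):
    """Return up to k most likely next characters given the typed prefix."""
    if not prefix:
        return []
    following = model.get(prefix[-1])
    if not following:
        return []
    return [ch for ch, _ in following.most_common(k)]

# Hypothetical mini-corpus of show titles (illustration only).
corpus = ["the office", "the crown", "top chef", "theater"]
model = train_bigram_model(corpus)
print(suggest_next(model, "th"))  # ['e'] -- 'h' is always followed by 'e' here
```

A real deployment would train on the actual target corpus of titles and queries, so that the four suggested characters capture the likely next letter most of the time.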
To learn more about this combination of User Experience and Machine Learning to address a growing problem with searching on TVs, check out our WWW 2010 publication, QuickSuggest.