Explore Images with Google Image Swirl
November 23rd, 2009 | Published in Google Research
Earlier this week, we announced the Labs launch of Google Image Swirl, an experimental search tool that organizes image-search results. We'd like to take this opportunity to explain some of the research underlying this feature and why it is an important area of focus for computer vision research at Google.
As the Web becomes more "visual," it is important for Google to go beyond traditional text and hyperlink analysis to unlock the information stored in the image pixels. If our search algorithms can understand the content of images and organize search results accordingly, we can provide users with a more engaging and useful image-search experience.
Google Image Swirl represents a concrete step toward that goal. It looks at the pixel values of the top search results and organizes and presents them in visually distinctive groups. For an ambiguous query such as "jaguar," for example, Image Swirl separates the top search results into categories such as jaguar the animal and Jaguar the car brand. The top-level groups are further divided into subgroups, allowing users to explore a broad set of visual concepts associated with the query, such as the front view of a Jaguar car or the Eiffel Tower at night or from a distance. This is a distinct departure from the way images are ranked by Google Similar Images, which excels at finding images that are visually very similar to the query image.
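To make the idea of hierarchical grouping concrete, here is a minimal sketch in Python. It assumes each image has already been reduced to a feature vector and uses generic agglomerative clustering from SciPy as a stand-in; the features, cluster counts, and clustering method below are assumptions for illustration, not the algorithm behind Image Swirl.

```python
# Sketch: hierarchically grouping image results by visual similarity.
# Each image is assumed to be represented by a feature vector; generic
# agglomerative clustering stands in for the production grouping step.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def group_images(features: np.ndarray, n_top: int = 5, n_sub: int = 3):
    """Return {top_group_id: {sub_group_id: [image indices]}}."""
    # Top-level groups, e.g. "jaguar the animal" vs. "Jaguar the car".
    top_labels = fcluster(linkage(features, method="average"),
                          t=n_top, criterion="maxclust")
    groups = {}
    for g in np.unique(top_labels):
        idx = np.where(top_labels == g)[0]
        if len(idx) <= n_sub:          # too few images to subdivide
            groups[g] = {1: idx.tolist()}
            continue
        # Subgroups within a top-level group, e.g. front views of a car.
        sub_labels = fcluster(linkage(features[idx], method="average"),
                              t=n_sub, criterion="maxclust")
        groups[g] = {s: idx[sub_labels == s].tolist()
                     for s in np.unique(sub_labels)}
    return groups

if __name__ == "__main__":
    # Toy stand-in for visual features of 40 search results.
    rng = np.random.default_rng(0)
    feats = np.vstack([rng.normal(c, 0.3, size=(10, 16)) for c in range(4)])
    for top, subs in group_images(feats).items():
        print(f"group {top}: {sum(len(v) for v in subs.values())} images")
```

On the toy features this prints the size of each discovered top-level group; in the real system the analogous groups would correspond to visually distinct senses of the query.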
No matter how much work goes into engineering image and text features to represent the content of images, there will always be errors and inconsistencies. Sometimes two images share many visual or textual features but have little real-world connection. In other cases, objects that look similar to the human eye may appear drastically different to computer vision algorithms. Most difficult of all, the system has to work at Web scale: it must cover a large fraction of query traffic and handle ambiguities and inconsistencies in the quality of information extracted from Web images.
In Google Image Swirl, we address this set of challenges by organizing all available information about an image set into a pairwise similarity graph and applying novel graph-analysis algorithms to discover higher-order similarity and category information from this graph. Given the high dimensionality of image features and the noise in the data, it can be difficult to train a monolithic categorization engine that generalizes across all queries. In contrast, image similarities need only be defined for sufficiently similar objects and can be trained with limited amounts of data. Also, invariance to certain transformations, or to typical intra-class variation, can be built into the perceptual similarity function. Different features or similarity functions may be selected, or learned, for different types of queries or image content. Given a robust set of similarity functions, one can generate a graph (nodes are images, edges are similarity values) and apply graph-analysis algorithms to infer similarities and categorical relationships that are not immediately obvious. In this work, we combined multiple sources of similarity, such as those used in Google Similar Images, landmark recognition, Picasa's face recognition, anchor-text similarity, and category-instance relationships between keywords similar to those in WordNet. It is a continuation of our prior effort [paper] to rank images based on visual similarity.
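As an illustration of the graph-based formulation, the following sketch combines a few pairwise similarity matrices into a single graph and partitions it with off-the-shelf spectral clustering. The choice of signals, weights, and spectral clustering are assumptions made for the example; they stand in for the similarity functions and graph-analysis algorithms described above, which are not spelled out in this post.

```python
# Sketch: combining several pairwise similarity signals into one graph
# (nodes are images, edge weights are similarities) and clustering its
# nodes. The signals, weights, and use of spectral clustering here are
# illustrative assumptions, not the published algorithm.
import numpy as np
from sklearn.cluster import SpectralClustering

def combine_similarities(signals, weights):
    """Weighted sum of pairwise similarity matrices (n_images x n_images)."""
    combined = sum(w * s for w, s in zip(weights, signals))
    return np.clip(combined, 0.0, 1.0)   # keep edge weights in [0, 1]

def cluster_graph(similarity: np.ndarray, n_groups: int):
    """Partition the similarity graph into n_groups clusters of images."""
    model = SpectralClustering(n_clusters=n_groups, affinity="precomputed",
                               random_state=0)
    return model.fit_predict(similarity)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n = 30

    def toy_similarity():
        # Toy symmetric similarity matrix standing in for a real signal
        # such as visual similarity or anchor-text similarity.
        m = rng.random((n, n))
        m = (m + m.T) / 2
        np.fill_diagonal(m, 1.0)
        return m

    visual, text = toy_similarity(), toy_similarity()
    graph = combine_similarities([visual, text], weights=[0.7, 0.3])
    labels = cluster_graph(graph, n_groups=3)
    print("group sizes:", np.bincount(labels))
```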
As with any practical application of computer vision techniques, there are a number of ad hoc details which are critical to the success of the system but are scientifically less interesting. One important direction of our future work will be to generalize some of the heuristics present in the system to make them more robust, while at the same time making the algorithm easier to analyze and evaluate against existing state-of-the-art methods. We hope that this work will lead to further research in the area of content-based image organization and look forward to your feedback.