It’s unclear just what features the first editions of Google’s computerized glasses, Google Glass, will include when they are mailed out to a select crop of software developers in early 2013.
But a recent update to Google’s “Search by Image” service, as well as open statements by Google researchers working on the hi-tech specs, points to a future in which people wearing Google Glass will be able to look at just about anything and receive a wealth of information about the objects and subjects in their view.
On Monday, Google published a blog post explaining that it had made several improvements to its Search by Image feature, which allows users to upload a photo or other image from their computers or mobile devices directly into Google’s search bar. Google’s search then returns relevant information and other similar photos based purely on the visual data itself, no text necessary (though text can optionally be entered to further narrow an image search).
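For readers curious what an upload-free query looks like under the hood, here is a minimal sketch of building a Search by Image query URL for an image already hosted on the web. The `searchbyimage` endpoint and `image_url` parameter are assumptions based on the publicly observable URL pattern of the service, not a formally documented API:

```python
from urllib.parse import urlencode

def search_by_image_url(image_url: str) -> str:
    """Build a Google Search by Image query URL for a web-hosted image.

    Note: the `searchbyimage` path and `image_url` parameter reflect the
    publicly observable URL pattern, not a documented or supported API.
    """
    return "https://www.google.com/searchbyimage?" + urlencode({"image_url": image_url})

# Example: query with a hypothetical image URL
print(search_by_image_url("https://example.com/bird.jpg"))
```

Opening the resulting URL in a browser would run the same search as dragging the image into the search bar by hand.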
The changes include more precise guesses from Google about what an image uploaded to the search bar actually is — “bird of paradise” instead of the more generic “flower,” for example. Another new feature: timely information and links to news articles about upcoming events, such as concert times when a concert poster is uploaded.
By far the most radical update, though, is that Google now displays a short box of text describing the image itself to the right of the search results, along with similar searches and images.
“This could be a biography of a famous person, information about a plant or animal, or much more,” wrote software engineer Sean O’Malley in Google’s official blog post on the changes to Search by Image.
In many cases, the added information eliminates the need for a user to actually click through to the search results themselves, since Google is already plucking the most relevant information and putting it immediately in front of users.
The same functionality has been a part of the main Google search engine since May. It’s based on what Google refers to as its “Knowledge Graph,” which pulls descriptive information about each search term from public sources such as Wikipedia and the CIA World Factbook. When it was first introduced, Google said its Knowledge Graph contained more than “500 million objects, as well as more than 3.5 billion facts about and relationships between these different objects.”
As Google’s O’Malley put it about Knowledge Graph for images:
“Google is starting to understand the world the way people do. Instead of treating webpages as strings of letters like ‘dog’ or ‘kitten,’ we can understand the concepts behind these words.”
While such functionality is certainly useful to photographers, artists, designers and anyone else who works with images on a daily basis, Search by Image currently remains a somewhat cumbersome process: a user must either upload an image directly to the Google Images search bar or drag and drop it there from the desktop.
But if users had a digital camera or mobile device that could instantly upload images to Google’s Search by Image service and retrieve all of the relevant information Google is striving to pluck directly from the Web, it could offer them a much more seamless, and arguably more useful, experience.
Google Glass, the company’s still-in-development high-tech glasses, is equipped with a tiny still and video camera, as well as a small, eyeball-facing screen for displaying information to the wearer. It would seem to present the perfect opportunity for Google to integrate its new hardware and software technologies into a compelling new service for users.
Put another way: imagine wearing Google Glass and staring at a famous but unfamiliar landmark or natural feature. If so enabled, Google’s Search by Image could instantly retrieve the name of the landmark, the dates of its construction or discovery, and other facts about it, displaying them on the tiny screen in front of the wearer’s eyes.
For now, the company is not saying whether such functionality will ever be enabled on Google Glass, let alone on the developer-only model, known as the Google Glass “Explorer Edition,” which Google said it would ship in “early 2013.”
Asked about the possibility of Google Glass with Search by Image enabled, a Google spokesperson provided the following statement to TPM: “We don’t have future plans to share at the moment.”
But the main Google researchers working on the company’s ambitious wearable computing technology have made it abundantly clear that they believe communication through and around images is one of the main strengths of Google Glass so far.
As Google Glass lead researcher Babak Parviz told Wired’s Steven Levy in a recent interview: “There are two broad areas that we’re looking at. One is to enable people to communicate with images in new ways, and in a better way. The second is very rapid access to information…[Glass] can help you do something, it would help you connect to other people with images or video, or it would help you get a snippet of information very quickly.”
Parviz also said the company was working to integrate audio search into the high-tech spectacles, but did not specifically name Search by Image as a component.
Still, the recent updates to the Search by Image service do appear to be in line with what Google has envisioned for Google Glass.