Prototype combines augmented reality with technologies already used on YouTube and in Google Translate; see the demo
“It’s like a subtitle to the world,” says a Google employee shortly after handing a prototype of the glasses to a volunteer. “You’ll probably see what I’m talking about, just spelled out in real time,” he explains, as the volunteer reacts in amazement.
The microphone inside the glasses records speech, which is processed and converted to text, then displayed on the lens by an augmented reality mini-projector. The gadget can also translate what people are saying: in the second demonstration, one person speaks English while another, wearing the glasses, sees the sentences in Spanish.
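The pipeline described above can be sketched as a simple transcribe-then-translate loop. The recognizer and translator below are stand-ins (a canned transcript and a toy phrase table), not Google's actual models or APIs; the real glasses would feed audio into speech-to-text and machine-translation models.

```python
# Hedged sketch of the captioning pipeline the demo implies:
# audio -> speech-to-text -> translation -> caption on the lens.
# transcribe() and translate() are illustrative stand-ins, NOT real APIs.

def transcribe(audio_chunk: str) -> str:
    # Stand-in for a speech-to-text model; real input would be audio frames.
    return audio_chunk.strip().lower()

# Toy English -> Spanish phrase table, purely for illustration.
PHRASE_TABLE = {
    "hello": "hola",
    "how are you": "cómo estás",
}

def translate(text: str, target: str = "es") -> str:
    # Stand-in for a machine-translation model; falls back to the input.
    return PHRASE_TABLE.get(text, text)

def caption_stream(audio_chunks, target: str = "es"):
    """Yield one translated caption per chunk, as the glasses would display."""
    for chunk in audio_chunks:
        yield translate(transcribe(chunk), target)

if __name__ == "__main__":
    for caption in caption_stream(["Hello", "How are you"]):
        print(caption)
```

For the accessibility use case described later in the article, the same loop would simply skip the translation step and display the transcription directly.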
If it works well, the translation system could finally make augmented reality glasses an attractive product for the masses, almost ten years after the launch of the company's first product in the category, Google Glass (which flopped and was discontinued in 2015).
But the most impressive moment comes in the third presentation, in which a deaf volunteer talks about his experience testing the prototype. By recording and transcribing speech in real time, the glasses could transform the lives of deaf people, letting them follow everything anyone says. This is no substitute for communication through Libras (Brazilian Sign Language), but it could dramatically expand the social integration of people with severe hearing impairments.
The glasses do not yet have a release date. It is also not clear from Google's demo where the data will be processed: on the glasses themselves or on a smartphone, and whether transcription and translation will require an Internet connection (which would raise connectivity and latency concerns).
In any case, the prototype looks very promising, as Google already has the technology to make it work: the excellent speech capture and transcription algorithms used on YouTube, and the high-quality machine translation of Google Translate.