Researchers at the Massachusetts Institute of Technology (MIT), including two of Indian origin, have developed a computer interface called ‘AlterEgo’ that can transcribe words the user verbalises internally but does not actually speak aloud. The system consists of a wearable device and an associated computing system. The wearable picks up the neuromuscular signals in the jaw and face that are triggered by internal verbalisations, when a person says something in his or her head, but that are undetectable to the human eye. These signals are fed to a machine-learning (ML) system that has been trained to correlate particular signals with particular words.
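To make that pipeline concrete, here is a minimal, hypothetical sketch in Python of how windows of multi-channel neuromuscular signals might be reduced to features and classified against a small word vocabulary. The window size, feature choices, toy vocabulary and off-the-shelf classifier are all illustrative assumptions, not the MIT team’s published architecture.

```python
# Hypothetical sketch: map windows of multi-channel neuromuscular
# (EMG-like) signals to a small word vocabulary. Everything here
# (features, window size, classifier) is an illustrative assumption.
import numpy as np
from sklearn.linear_model import LogisticRegression

N_CHANNELS = 7        # one channel per assumed electrode location
WINDOW = 250          # samples per signal window (assumed)
VOCAB = ["yes", "no", "up", "down"]  # toy vocabulary for illustration

def featurize(window: np.ndarray) -> np.ndarray:
    """Collapse a (WINDOW, N_CHANNELS) signal window into per-channel
    summary statistics (mean absolute amplitude and variance)."""
    return np.concatenate([np.abs(window).mean(axis=0),
                           window.var(axis=0)])

# Simulated recordings standing in for real training data.
rng = np.random.default_rng(0)
X = np.stack([featurize(rng.normal(size=(WINDOW, N_CHANNELS)))
              for _ in range(200)])
y = rng.integers(0, len(VOCAB), size=200)

clf = LogisticRegression(max_iter=1000).fit(X, y)

# At run time, each incoming window is featurized and classified.
new_window = rng.normal(size=(WINDOW, N_CHANNELS))
print(VOCAB[clf.predict(featurize(new_window)[None, :])[0]])
```

In the actual system, as the article describes, such a mapping would be learned from recordings of users subvocalising known words.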
AlterEgo also includes a pair of bone-conduction headphones, which transmit vibrations through the bones of the face to the inner ear. Because the device does not obstruct the ear canal, the headphones let the system convey information to the user without interrupting a conversation or otherwise interfering with the user’s auditory experience. The researchers’ prototype of the wearable silent-speech interface wraps around the back of the neck like a telephone headset and has tentacle-like curved appendages that touch the face at seven locations on either side of the mouth and along the jaws.
“The motivation for this was to build an IA device, an intelligence-augmentation device. Our idea was: could we have a computing platform that’s more internal, that melds human and machine in some ways and that feels like an internal extension of our own cognition?” said Arnav Kapur, a graduate student at the MIT Media Lab who led the development of the new system. Pattie Maes, professor of media arts and sciences, is the paper’s senior author; Kapur is joined on it by Shreyas Kapur, an undergraduate majoring in electrical engineering and computer science.
The idea that internal verbalisations have physical correlates has been around since the 19th century, and it was seriously investigated in the 1950s. One of the goals of the speed-reading movement of the 1960s was to eliminate internal verbalisation, also known as subvocalisation. Subvocalisation as a computer interface, however, remains largely unexplored.
The researchers’ first step was to determine which locations on the face are the sources of the most reliable neuromuscular signals. “We’re in the middle of collecting data, and the results look nice. I think we’ll achieve full conversation someday,” Kapur said. The researchers describe the device in a paper presented at the Association for Computing Machinery’s Intelligent User Interfaces (IUI) conference.
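That electrode-placement step can be pictured as a simple feature-selection problem. The sketch below, under purely assumed data and scoring choices, ranks candidate facial sites by how well each channel alone predicts the intended word and keeps the most informative seven; it is not the researchers’ actual procedure.

```python
# Hypothetical sketch: score each candidate facial location by how
# well its single channel predicts the intended word, then keep the
# most informative sites. Data, scoring method and the cut-off of
# seven sites are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials, n_sites = 300, 16            # assumed candidate locations
X = rng.normal(size=(n_trials, n_sites))
y = rng.integers(0, 4, size=n_trials)  # toy four-word labels

scores = {
    site: cross_val_score(LogisticRegression(max_iter=1000),
                          X[:, [site]], y, cv=5).mean()
    for site in range(n_sites)
}

# Rank candidate locations by single-channel accuracy; keep the top 7.
best = sorted(scores, key=scores.get, reverse=True)[:7]
print("most reliable candidate sites:", best)
```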