Two US Students Design Gloves to Translate Sign Language into Spoken English

Two entrepreneurial technology students from the University of Washington, Thomas Pryor and Navid Azodi, are giving deaf and hard-of-hearing people a chance to have their voices heard through a new piece of technology they designed called “SignAloud”.

SignAloud, which is based on machine learning algorithms, uses embedded sensors to observe both the position and the movement of the user’s hands.

The gloves send the collected data to a computer over Bluetooth, where the readings are checked against a catalogue of signing examples the system has learned. If a hand gesture matches an entry in the catalogue, the gloves sound the corresponding word through a speaker in a computerized voice.
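The article does not describe the students' implementation in detail, but the pipeline it outlines (sensor readings arriving over Bluetooth, matched against a catalogue of known signs, then spoken aloud) can be illustrated with a minimal sketch. Everything below is an assumption for illustration: the feature layout, the catalogue entries, the distance threshold, and the `match_gesture` and `speak` helpers are hypothetical, and the "speaker" is stood in for by a print statement rather than a real text-to-speech engine.

```python
# Minimal sketch of a catalogue-matching pipeline like the one the article
# describes; NOT the SignAloud team's actual code. All names and values here
# are illustrative assumptions.
from typing import Optional

import numpy as np

# Hypothetical catalogue: each sign is represented by a feature vector of
# averaged hand-position and hand-movement readings from the glove sensors.
CATALOGUE = {
    "hello":     np.array([0.9, 0.1, 0.3, 0.7]),
    "thank you": np.array([0.2, 0.8, 0.6, 0.1]),
    "yes":       np.array([0.5, 0.5, 0.9, 0.2]),
}

MATCH_THRESHOLD = 0.5  # assumed maximum distance for a confident match


def match_gesture(reading: np.ndarray) -> Optional[str]:
    """Return the catalogue word closest to the sensor reading, if close enough."""
    best_word, best_dist = None, float("inf")
    for word, template in CATALOGUE.items():
        dist = np.linalg.norm(reading - template)
        if dist < best_dist:
            best_word, best_dist = word, dist
    return best_word if best_dist <= MATCH_THRESHOLD else None


def speak(word: str) -> None:
    """Stand-in for the computerized voice; a real device would call a TTS engine."""
    print(f"[speaker] {word}")


if __name__ == "__main__":
    # Simulated Bluetooth packet: one feature vector for a gesture window.
    incoming_reading = np.array([0.88, 0.12, 0.32, 0.68])
    word = match_gesture(incoming_reading)
    if word is not None:
        speak(word)
```

A production system would more likely use a trained classifier over time-series sensor data than a simple nearest-template lookup, but the flow from reading to match to spoken output is the same.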

The students developed the technology to break down communication barriers between the deaf and hard-of-hearing communities and those outside them, for whom sign language loses its meaning because they do not sign.

“Many of the sign language translation devices out there are not practical for everyday use. Some use video input, while others have sensors that cover the user’s entire arm or body,” explained Pryor.

Their design, by contrast, is lightweight, compact and worn only on the user’s hands. The students compare the gloves to hearing aids or contact lenses: ergonomic enough to be used every day.

The innovative design won the $10,000 Lemelson-MIT Student Prize, and currently translates only American Sign Language into English.

To read more, please see: UW undergraduate team wins $10,000 Lemelson-MIT Student Prize for gloves that translate sign language.
