Researchers at Michigan State University have created a device that can assist people who are deaf or hard of hearing. The new technology, known as DeepASL, helps hearing-impaired individuals communicate with hearing people who do not understand sign language.
According to the World Health Organization, disabling hearing loss affects around 466 million people, or about 5 percent of the world's population. That figure includes the many deaf and hard-of-hearing Americans who use American Sign Language (ASL) to communicate with those around them.
However, many Americans, myself included, do not understand ASL and must rely on an interpreter to understand what signers are saying.
“Hundreds of thousands of hard-of-hearing people rely upon American Sign Language, or ASL, to communicate,” said Mi Zhang, assistant professor of electrical and computer engineering at Michigan State. “Without an interpreter present, they don’t have the same employment opportunities and are oftentimes left at a disadvantage in delicate or sensitive situations.”
Many of the translating products already on the market are bulky and conspicuous, and some capture the user’s hand gestures too slowly for natural, flowing conversation.
DeepASL, on the other hand, is about the size of a tube of lip balm and uses cameras to capture the signer’s gestures. It also offers the user personalized feedback in real time.
“Hard-of-hearing individuals who need to communicate with someone who doesn’t understand sign language can have a personalized, virtual interpreter at any time, anywhere,” Zhang said.
How does this technology work?
DeepASL uses a light-sensing technology called Leap Motion, which captures the positions of the joints in the user’s hands; the system then translates that joint data into sign language words.
Biyi Fang, one of Zhang’s colleagues who helped develop DeepASL, explains how Leap Motion works:
“Leap Motion converts the motions of one’s hands and fingers into skeleton-like joints,” Fang said. “Our deep learning algorithm picks up data from the skeleton-like joints and matches it to signs of ASL.”
The product also relies on a deep learning algorithm, which uses artificial neural networks, layered structures loosely inspired by the human brain, to learn patterns from data.
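To make the idea concrete, here is a toy sketch of the matching step described above: skeleton-like joint positions become a feature vector, which is compared against stored examples of known signs. Everything here is hypothetical and illustrative; a simple nearest-neighbor match stands in for the researchers’ actual deep learning model, and the joint values are invented.

```python
import numpy as np

# Hypothetical "skeleton" templates: flattened (x, y, z) coordinates for a
# few hand joints, one template vector per known sign. These numbers are
# invented for illustration only.
SIGN_TEMPLATES = {
    "hello":     np.array([0.1, 0.9, 0.2, 0.4, 0.8, 0.1]),
    "thank_you": np.array([0.7, 0.2, 0.5, 0.6, 0.3, 0.4]),
}

def classify_sign(joint_features: np.ndarray) -> str:
    """Return the known sign whose template is closest (Euclidean distance)
    to the observed joint features -- a stand-in for the learned model."""
    return min(
        SIGN_TEMPLATES,
        key=lambda sign: np.linalg.norm(SIGN_TEMPLATES[sign] - joint_features),
    )

# A noisy observation close to the "hello" template is matched to "hello".
observed = np.array([0.12, 0.88, 0.21, 0.41, 0.79, 0.12])
print(classify_sign(observed))  # hello
```

A real system would classify whole motion sequences with a trained neural network rather than matching single frames, but the core idea, mapping joint data to a vocabulary of signs, is the same.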
When a user signs different words with hand gestures, DeepASL automatically converts the signs into English and broadcasts the translated speech through a speaker worn on the user’s neck.
As the receiver responds to the sender, DeepASL recognizes the speech and converts it to text. The transcribed words are then displayed to the signer on a pair of augmented reality glasses.
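The two-way loop described above can be sketched in a few lines. Every function below is a hypothetical stub, not part of DeepASL itself; the sketch only shows how the stages connect: signs are translated to English and spoken through the neck-worn speaker, while the hearing person’s reply is shown as text on the glasses.

```python
# Toy sketch of DeepASL's two-way conversation loop (all stubs hypothetical).

def translate_signs(signs):
    """Stand-in for the sign-to-English translator."""
    lexicon = {"HELLO": "hello", "HOW": "how", "YOU": "are you"}
    return " ".join(lexicon.get(s, "?") for s in signs)

def speak(text):
    """Stand-in for the speaker worn on the signer's neck."""
    return f"[speaker] {text}"

def display_on_glasses(text):
    """Stand-in for the augmented reality glasses display."""
    return f"[glasses] {text}"

# Direction 1: signer -> hearing listener
print(speak(translate_signs(["HELLO", "HOW", "YOU"])))
# [speaker] hello how are you

# Direction 2: hearing speaker -> signer (speech recognition output assumed)
print(display_on_glasses("Doing well, thanks!"))
# [glasses] Doing well, thanks!
```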
How well does it work?
According to a paper published by Zhang, Fang and undergraduate student Jillian Co, DeepASL is 94.5 percent accurate at translating individual words. For full sentences signed by people the system had not seen before, the average word error rate is 16.1 percent.
The program can also translate signs from users in various postures and under different lighting conditions, maintaining a word-level success rate above 91.8 percent in those settings.
“Given its promising performance, we believe DeepASL represents a significant step towards breaking the communication barrier between deaf people and hearing majority, and thus has the significant potential to fundamentally change deaf people’s lives,” the paper reads.
Zhang and his fellow researchers are now preparing to bring DeepASL to market, where the product is expected to retail for around $78. They are also interested in teaching the program other sign languages and making it compatible with the iPhone.