
DeepASL: Enabling Ubiquitous and Non-Intrusive Word and Sentence-Level Sign Language Translation.

Biyi Fang, Jillian Co, Mi Zhang

There is an undeniable communication barrier between deaf people and people with normal hearing ability. Although innovations in sign language translation technology aim to tear down this communication barrier, the majority of existing sign language translation systems are either intrusive or constrained by resolution or ambient lighting conditions. Moreover, these existing systems can only perform single-sign ASL translation rather than sentence-level translation, making them much less useful in daily-life communication scenarios. In this work, we fill this critical gap by presenting DeepASL, a transformative deep learning-based sign language translation technology that enables ubiquitous and non-intrusive American Sign Language (ASL) translation at both word and sentence levels. DeepASL uses infrared light as its sensing mechanism to non-intrusively capture ASL signs. It incorporates a novel hierarchical bidirectional deep recurrent neural network (HB-RNN) and a probabilistic framework based on Connectionist Temporal Classification (CTC) for word-level and sentence-level ASL translation, respectively. To evaluate its performance, we collected 7,306 samples from 11 participants, covering 56 commonly used ASL words and 100 ASL sentences. DeepASL achieves an average 94.5% word-level translation accuracy and an average 8.2% word error rate on translating unseen ASL sentences. Given its promising performance, we believe DeepASL represents a significant step towards breaking the communication barrier between deaf people and the hearing majority, and thus has the potential to fundamentally change deaf people's lives.
