Student in India develops AI model that turns sign language into English
DHAKA — A third-year engineering student from India’s Vellore Institute of Technology (VIT), Priyanjali Gupta, has developed a remarkable artificial intelligence model capable of translating American Sign Language into English in real time.
According to Priyanjali, the model was inspired by data scientist Nicholas Renotte’s video on Real-Time Sign Language Detection. She built it using the TensorFlow Object Detection API, applying transfer learning from a pre-trained model named ssd_mobilenet to recognize hand gestures.
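For readers curious about the mechanics, the following is a minimal sketch of what a webcam inference loop with such an exported detector might look like. It is not Priyanjali’s code: the model path and label map are hypothetical placeholders, and it assumes a fine-tuned SSD MobileNet checkpoint has already been exported as a TensorFlow SavedModel.

import cv2
import numpy as np
import tensorflow as tf

MODEL_DIR = "exported_model/saved_model"  # hypothetical path to the exported detector
LABELS = {1: "hello", 2: "thanks", 3: "yes"}  # hypothetical label map

detect_fn = tf.saved_model.load(MODEL_DIR)  # exported TF2 Object Detection model

cap = cv2.VideoCapture(0)  # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # The Object Detection API expects a batched uint8 RGB image tensor.
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    detections = detect_fn(tf.convert_to_tensor(rgb[np.newaxis, ...], dtype=tf.uint8))
    score = float(detections["detection_scores"][0][0].numpy())  # top detection
    cls = int(detections["detection_classes"][0][0].numpy())
    if score > 0.6:  # confidence threshold, tunable
        cv2.putText(frame, LABELS.get(cls, "?"), (20, 40),
                    cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)
    cv2.imshow("sign detection", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break
cap.release()
cv2.destroyAllWindows()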
“The dataset is manually made with a computer webcam and given annotations. The model, for now, is trained on single frames. To detect videos, the model has to be trained on multiple frames, for which I’m likely to use Long short-term memory (LSTM) networks,” said Priyanjali in an interview with Analytics Drift.
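To illustrate the multi-frame idea she describes, here is a minimal Keras sketch of an LSTM classifier over a sequence of per-frame feature vectors. The sequence length, feature size, and number of sign classes are assumptions chosen for illustration, not details of her model.

import tensorflow as tf

SEQ_LEN = 30      # frames per clip (assumed)
FEATURES = 1280   # per-frame embedding size, e.g. from a MobileNet backbone (assumed)
NUM_SIGNS = 10    # number of sign classes (assumed)

model = tf.keras.Sequential([
    # The LSTM consumes the whole frame sequence and keeps only its final state.
    tf.keras.layers.LSTM(64, input_shape=(SEQ_LEN, FEATURES)),
    tf.keras.layers.Dense(NUM_SIGNS, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()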
She also noted that building a deep learning model dedicated to sign language detection remains quite challenging, but said she believes the open-source community will solve the problem, making models built solely for sign languages possible in the future.
In 2016, two University of Washington students, Thomas Pryor and Navid Azodi, invented a pair of gloves called ‘SignAloud’ that could translate sign language into speech or text.
They won the Lemelson-MIT competition for their SignAloud entry.