Real-Time Vision-Based Sign Language Bilateral Communication Device for Signers and Non-Signers using Convolutional Neural Network
Department of Electronics Engineering, College of Engineering, Pamantasan ng Lungsod ng Maynila, Manila, Philippines.
Research Article
World Journal of Advanced Research and Reviews, 2023, 18(03), 934–943
Article DOI: 10.30574/wjarr.2023.18.3.1169
Publication history:
Received on 07 May 2023; revised on 15 June 2023; accepted on 17 June 2023
Abstract:
The use of sign language is an important means of communication for individuals with hearing and speech impairments, but communication barriers can still arise due to differences in grammatical rules across different sign languages. To address these barriers, this study developed a real-time two-way communication device that uses image processing and recognition to translate two-handed Filipino Sign Language (FSL) gestures and facial expressions into speech; the system recognizes gestures that correspond to specific words and phrases. Specifically, the researchers utilized Convolutional Neural Networks (CNNs) to improve the processing speed and accuracy of the device. The system also includes a speech-to-text (STT) feature that allows non-signers to communicate with deaf individuals without relying on an interpreter. The results showed that the device achieved a 93% accuracy rate in recognizing facial expressions and FSL gestures using the CNN, indicating high recognition accuracy. The system also performed in real time, with overall average conversion times of 1.84 seconds for sign language to speech and 2.74 seconds for speech to text. Finally, the device was well received by both signers and non-signers, earning a total approval rating of 85.50% from participants at Manila High School, which suggests that it effectively facilitates two-way communication and has the potential to break down communication barriers.
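To illustrate the kind of CNN-based gesture classifier described in the abstract, the following is a minimal sketch in Python using TensorFlow/Keras. The layer configuration, input resolution (64x64 RGB frames), and class count are assumptions made for illustration only; the paper's actual network architecture and training setup are not specified here.

```python
# Minimal illustrative sketch of a CNN gesture classifier (not the paper's exact model).
# Input resolution, layer sizes, and NUM_CLASSES are assumed values.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 20  # assumed number of FSL words/phrases to recognize


def build_gesture_cnn(input_shape=(64, 64, 3), num_classes=NUM_CLASSES):
    """Stack of Conv2D + MaxPooling blocks followed by dense layers for classification."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(128, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model


if __name__ == "__main__":
    # Build the model and print its layer summary; training data would be
    # batches of preprocessed gesture frames with integer class labels.
    model = build_gesture_cnn()
    model.summary()
```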
Keywords:
Filipino Sign Language; Two-Way Communication; Facial Expression Recognition; Convolutional Neural Networks
Copyright information:
Copyright © 2023 The Author(s). The authors retain the copyright of this article. This article is published under the terms of the Creative Commons Attribution License 4.0.