Abstract
Although the hearing- and speech-impaired community relies on Sign
Language (SL) as its primary means of communication, most people worldwide are
unable to interpret it, which creates a significant language barrier between the two
groups. The resulting communication gap between the deaf-mute community and the
general populace has driven the need for Sign Language Recognition (SLR) systems.
This paper proposes a Hybrid Convolutional Recurrent Neural
Network-based (H-CRNN) framework for Isolated Indian Sign Language recognition.
The proposed framework is divided into two modules: the Feature Extraction module
and the Sign Model Recognition module. The Feature Extraction module employs a
Convolutional Neural Network, and the Sign Model Recognition module employs an
LSTM/GRU-based network to recognize Indian Sign Language representations of English Alphabets
and numbers. The proposed models are evaluated using a newly created Isolated Sign
dataset called ISLAN, the first multi-signer Indian Sign Language representation for
English Alphabets and Numbers. A performance comparison with other state-of-the-art
neural network models shows that the proposed H-CRNN model achieves better
accuracy.
Keywords: Hybrid Neural Networks, Isolated Indian Sign Language, Image segmentation, Sign Language Recognition.
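To make the two-module structure concrete, the following is a minimal, hypothetical sketch of a hybrid CNN-RNN classifier for isolated sign clips, written in Keras. The clip length, image size, layer widths, and the 36-class output (26 alphabets plus 10 numerals) are illustrative assumptions, not the paper's exact configuration.

# Minimal H-CRNN sketch (assumed sizes): a per-frame CNN feature extractor
# followed by an LSTM sequence classifier. A GRU could be swapped in for the LSTM.
import tensorflow as tf
from tensorflow.keras import layers, models

FRAMES, HEIGHT, WIDTH, CHANNELS = 30, 64, 64, 3  # assumed clip shape
NUM_CLASSES = 36  # 26 English alphabets + 10 numbers

# Feature Extraction module: a small CNN applied to every frame.
cnn = models.Sequential([
    layers.Conv2D(32, 3, activation="relu", input_shape=(HEIGHT, WIDTH, CHANNELS)),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
])

# Sign Model Recognition module: an LSTM over the per-frame features,
# followed by a softmax classifier over the sign classes.
model = models.Sequential([
    layers.TimeDistributed(cnn, input_shape=(FRAMES, HEIGHT, WIDTH, CHANNELS)),
    layers.LSTM(128),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()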