SNCHAR: Sign language Character Recognition
Amit Chaurasia1, Harshul Shire2

1Amit Chaurasia, Department of Computer Engineering & Applications, GLA University, Mathura, India.
2Harshul Kshire, Department of Computer Engineering & Applications, GLA University, Mathura, India.

Manuscript received on 1 August 2019. | Revised Manuscript received on 8 August 2019. | Manuscript published on 30 September 2019. | PP: 465-468 | Volume-8 Issue-3 September 2019 | Retrieval Number: C4226098319/19©BEIESP | DOI: 10.35940/ijrte.C4226.098319
© The Authors. Blue Eyes Intelligence Engineering and Sciences Publication (BEIESP). This is an open access article under the CC-BY-NC-ND license.

Abstract: Deaf and mute people communicate among themselves using sign language, but hearing people often find it difficult to understand. Signs that use both hands frequently suffer from poorly defined features because the hands overlap. Our project takes a first step toward bridging the communication gap between hearing people and hearing- and speech-impaired people who use sign language. Extending this work to words and common sentences could not only let deaf and mute people communicate with the outside world faster and more easily, but also boost the development of autonomous systems for understanding and assisting them. Sign language is the preferred method of communication among deaf and hearing-impaired people all over the world, and its recognition achieves varying degrees of success depending on whether computer vision or other techniques are used. Sign language is a structured set of gestures, each with a specific meaning. We propose SNCHAR as a solution that allows easy interaction between deaf and hearing-impaired people and those who are not. Here SN stands for Sign language, CHA for Character, and R for Recognition system. The "SNCHAR: Sign language Character Recognition" system is a Python-based application. It takes live video as input and predicts the letters the user is gesturing in the live feed. It captures frames and locates the hand gesture by searching for a region of skin-colour intensity. It then separates the gesture area from the rest of the frame and feeds that part to our pre-trained model, which predicts a value representing a letter of the alphabet. This letter is displayed on the screen.
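The frame-processing pipeline described above (find the skin-coloured region, crop it, pass the crop to the classifier) can be sketched as follows. This is an illustrative sketch, not the authors' implementation: the paper does not give its skin-colour thresholds, so a common RGB skin heuristic is used here as an assumed stand-in, and the call to the pre-trained Keras model is represented by a placeholder.

```python
import numpy as np

def skin_mask(frame_rgb):
    """Boolean mask of likely skin pixels.

    Uses a simple RGB heuristic (assumed; the paper's exact
    thresholds are not published): skin tends to have a dominant
    red channel and a sufficient red-green gap.
    """
    r = frame_rgb[..., 0].astype(int)
    g = frame_rgb[..., 1].astype(int)
    b = frame_rgb[..., 2].astype(int)
    return ((r > 95) & (g > 40) & (b > 20) &
            (r > g) & (r > b) & (np.abs(r - g) > 15))

def gesture_roi(frame_rgb):
    """Bounding box (top, left, bottom, right) of the skin-coloured
    region, or None if no skin-like pixels are found."""
    ys, xs = np.nonzero(skin_mask(frame_rgb))
    if ys.size == 0:
        return None
    return ys.min(), xs.min(), ys.max() + 1, xs.max() + 1

def predict_letter(frame_rgb, model):
    """Crop the gesture area and classify it.

    `model` stands in for the pre-trained network; any object with a
    Keras-style predict() over a batch of crops would fit here.
    """
    roi = gesture_roi(frame_rgb)
    if roi is None:
        return None
    top, left, bottom, right = roi
    crop = frame_rgb[top:bottom, left:right]
    scores = model.predict(crop[np.newaxis, ...])
    return chr(ord('A') + int(np.argmax(scores)))  # index 0 -> 'A'
```

In a live system the frame would come from the camera loop (e.g. OpenCV's `VideoCapture.read()`) and the crop would be resized to the network's input shape before prediction.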
The user can hear the predicted text read aloud by pressing "P" on the keyboard, and can erase it if required by pressing "Z".
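The two keyboard controls can be sketched as a small dispatch function. The `speak` callback is a hypothetical placeholder for a text-to-speech call (the paper does not name the library used), and "Z" is assumed here to erase one character at a time:

```python
def handle_key(key, predicted_text, speak=print):
    """Dispatch the two keyboard controls of the application.

    key            -- the key pressed ('p' to speak, 'z' to erase)
    predicted_text -- letters recognised so far
    speak          -- placeholder for a text-to-speech call (assumed)
    Returns the (possibly shortened) predicted text.
    """
    if key in ('p', 'P'):
        speak(predicted_text)                 # read the text aloud
    elif key in ('z', 'Z'):
        predicted_text = predicted_text[:-1]  # erase the last character
    return predicted_text
```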
Keywords: TensorFlow, Keras, Sign language

Scope of the Article:
Pattern Recognition