Interpretation and Translation of American Sign Language for Hearing Impaired Individuals using Image Processing
Shreyas Rajan1, Rahul Nagarajan2, Akash Kumar Sahoo3, M. Gowtham Sethupathi4
1Shreyas Rajan, pre-final-year student, Dept. of Computer Science & Engineering, SRM Institute of Science & Technology, Ramapuram.
2Rahul Nagarajan, pre-final-year student, Dept. of Computer Science & Engineering, SRM Institute of Science & Technology, Ramapuram.
3Akash Kumar Sahoo, pre-final-year student, Dept. of Computer Science & Engineering, SRM Institute of Science & Technology, Ramapuram.
4M. Gowtham Sethupathi, Assistant Professor, Dept. of Computer Science & Engineering, SRM Institute of Science & Technology, Ramapuram.

Manuscript received on November 15, 2019. | Revised Manuscript received on November 23, 2019. | Manuscript published on November 30, 2019. | PP: 415-420 | Volume-8 Issue-4, November 2019. | Retrieval Number: D6966118419/2019©BEIESP | DOI: 10.35940/ijrte.D6966.118419

© The Authors. Blue Eyes Intelligence Engineering and Sciences Publication (BEIESP). This is an open access article under the CC-BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)

Abstract: The growth of technology has driven development across many fields, including tools that aid people with hearing and speech impairments. The barrier between such individuals and others can be reduced by using current technology to build an environment in which both groups communicate with ease. The ASL Interpreter aims to facilitate this communication. This project focuses on developing software that converts American Sign Language to communicative English and vice versa, accomplished via image processing: a set of operations performed on an image to obtain an improved image or to extract useful information from it. Image processing in this project is done with MATLAB, software by MathWorks, which is programmed to capture a live image of the hand gesture. The captured gesture is highlighted by being distinctly colored against a black background. The contrasted hand gesture is stored in the database as a binary equivalent of each pixel's location, and the interpreter links that binary value to its equivalent translation stored in the database, which is integrated into the main image-processing interface. The software is developed using the Image Processing Toolbox, an inbuilt toolkit provided by MATLAB: histogram equivalents of the reference images are stored in the database, and each extracted image is converted to a histogram using the 'imhist()' function and compared against them. The concluding phase of the project, translation of speech to sign language, is designed by matching each letter to its equivalent hand gesture in the database and displaying the result as images. The software uses a webcam to capture the hand gesture made by the user. This venture aims to ease the process of learning sign language and to support hearing-impaired people in conversing without difficulty.
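The recognition pipeline described in the abstract (binarize the gesture against a black background, build a histogram as imhist() would, and pick the closest stored gesture) can be sketched as follows. This is an illustrative sketch only, not the paper's MATLAB implementation: the frame format (nested lists of 8-bit grayscale values), the threshold value, and the absolute-difference distance between histograms are all assumptions introduced here for clarity.

```python
# Sketch of the abstract's pipeline, under assumed inputs:
# frames are nested lists of 8-bit grayscale pixel values (0-255).

def binarize(frame, threshold=128):
    """Contrast the hand against the black background: pixels brighter
    than the (assumed) threshold become 1 (hand), the rest 0."""
    return [[1 if px > threshold else 0 for px in row] for row in frame]

def histogram(frame, bins=256):
    """Per-intensity pixel counts, mirroring what MATLAB's imhist() returns."""
    counts = [0] * bins
    for row in frame:
        for px in row:
            counts[px] += 1
    return counts

def match_gesture(frame, database):
    """Return the letter whose stored histogram is closest to the frame's,
    using the sum of absolute per-bin differences as an assumed distance."""
    h = histogram(frame)
    return min(database,
               key=lambda letter: sum(abs(a - b)
                                      for a, b in zip(h, database[letter])))
```

In use, `database` would map each letter to the precomputed histogram of its reference gesture image, e.g. `database['A'] = histogram(reference_frame_for_A)`; `match_gesture` then yields the translated letter for a live webcam frame.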
Keywords: Sign Language, Image Processing, Database, Hand Gesture.
Scope of the Article: Signal and Image Processing.