A Real Time System for Two Ways Communication of Hearing and Speech Impaired People
L. Latha1, M. Kaviya2

1L. Latha, Professor, Department of Computer Science and Engineering, Kumaraguru College of Technology Coimbatore (Tamil Nadu), India.
2M. Kaviya, PG Student, Department of Computer Science and Engineering, Kumaraguru College of Technology, Coimbatore (Tamil Nadu), India.
Manuscript received on 16 December 2018 | Revised Manuscript received on 28 December 2018 | Manuscript Published on 24 January 2019 | PP: 382-385 | Volume-7 Issue-4S2 December 2018 | Retrieval Number: ES2086017518/19©BEIESP
© The Authors. Blue Eyes Intelligence Engineering and Sciences Publication (BEIESP). This is an open access article under the CC-BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)

Abstract: Sign language, which uses hand gestures for communication, is the means by which hearing and speech impaired people interact with others. However, it is very difficult for normal people to understand, so this paper proposes a real-time system for better communication between normal people and impaired people. In one direction, the gestures shown by the impaired person are captured and the corresponding voice output is produced; in the other, the voice input of the normal person is taken and the corresponding gesture is displayed to them. The system uses a Raspberry Pi kit as the hardware, with a Pi camera, LCD display, speaker and microphone attached to it. First, image acquisition captures the input image; image pre-processing then extracts the foreground image from the background, and feature extraction obtains the necessary details. The extracted image is matched with the dataset and the corresponding voice output is generated for that gesture. Likewise, a microphone captures the speech input of the normal person; the speech signal is pre-processed to remove extra noise, feature extraction identifies the necessary details, and finally the extracted voice is matched with the dataset so that the corresponding hand-gesture image is displayed on the LCD display. By using this method, the communication gap between impaired and normal people is reduced.
Keywords: Feature Extraction, Pre-processing, Matching.
Scope of the Article: Multimedia and Real-Time Communication
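The image-side pipeline described in the abstract (acquisition, foreground extraction, feature extraction, matching against a dataset) can be sketched in simplified form. This is a minimal illustration only, not the authors' implementation: the thresholded background subtraction, the toy area/centroid feature vector, and the nearest-neighbour `match_gesture` function are all assumptions made for the sketch, standing in for whatever pre-processing, features, and matcher the paper actually uses.

```python
import numpy as np

def extract_foreground(frame, background, thresh=30):
    """Pre-processing step: background subtraction.

    Keeps pixels whose intensity differs from the reference
    background by more than `thresh`, yielding a binary mask.
    """
    diff = np.abs(frame.astype(int) - background.astype(int))
    return (diff > thresh).astype(np.uint8)

def extract_features(mask):
    """Feature extraction step (toy features for illustration):
    foreground area ratio and the normalised centroid of the mask.
    """
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return np.zeros(3)
    area = mask.mean()
    return np.array([area, ys.mean() / mask.shape[0], xs.mean() / mask.shape[1]])

def match_gesture(features, dataset):
    """Matching step: nearest-neighbour search over stored
    gesture templates; returns the best-matching gesture label.
    """
    labels = list(dataset)
    dists = [np.linalg.norm(features - dataset[label]) for label in labels]
    return labels[int(np.argmin(dists))]

# Example with synthetic 8x8 "frames": two gestures appear as
# bright blobs in different corners of an otherwise empty scene.
background = np.zeros((8, 8), dtype=np.uint8)
frame_a = background.copy(); frame_a[:4, :4] = 255   # blob in top-left
frame_b = background.copy(); frame_b[4:, 4:] = 255   # blob in bottom-right

dataset = {
    "hello":  extract_features(extract_foreground(frame_a, background)),
    "thanks": extract_features(extract_foreground(frame_b, background)),
}

result = match_gesture(extract_features(extract_foreground(frame_a, background)), dataset)
print(result)
```

On the actual device, `frame` would come from the Pi camera and the matched label would be passed to a text-to-speech engine for the speaker; the speech-to-gesture direction follows the same denoise/extract/match pattern with audio features instead of image features.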