Learning Effective Video Features for Facial Expression Recognition via Hybrid Deep Learning
A. Rajesh kumar1, G. Divya2
1A. Rajeshkumar*, Department of Computer Science and Engineering, Saveetha School of Engineering, Saveetha Institute of Medical and Technical Sciences, Chennai, India.
2G. Divya, Assistant Professor, Department of Computer Science and Engineering, Saveetha School of Engineering, Saveetha Institute of Medical and Technical Sciences, Chennai, India.
Manuscript received on January 02, 2020. | Revised Manuscript received on January 15, 2020. | Manuscript published on January 30, 2020. | PP: 5602-5604 | Volume-8 Issue-5, January 2020. | Retrieval Number: E6767018520/2020©BEIESP | DOI: 10.35940/ijrte.E6767.018520

© The Authors. Blue Eyes Intelligence Engineering and Sciences Publication (BEIESP). This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)

Abstract: Facial expression recognition is one of the recent trends for detecting human expressions in streaming video sequences, identifying emotions such as sadness, happiness, or anger. In this paper, the proposed method employs two individual deep convolutional neural networks (CNNs), a spatial CNN processing static facial images and a temporal CNN processing optical flow images, to separately learn high-level spatial and temporal features from the divided video segments. These two CNNs are fine-tuned from a pre-trained CNN model on the target video facial expression datasets. The segment-level spatial and temporal features are then integrated in a deep fusion network built on a deep belief network (DBN) model, which jointly learns discriminative spatiotemporal features.
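To make the described pipeline concrete, the sketch below outlines a two-stream network of the kind summarized in the abstract: a spatial CNN over static frames and a temporal CNN over optical-flow images, whose segment-level features are concatenated and passed to a fusion network. The use of PyTorch, the ResNet-18 backbone, the layer sizes, and the small MLP standing in for the DBN fusion model are illustrative assumptions, not the paper's exact configuration.

    # Minimal sketch of a two-stream (spatial + temporal) expression recognizer.
    # ResNet-18 backbones and the MLP fusion head are illustrative assumptions.
    import torch
    import torch.nn as nn
    from torchvision import models

    class TwoStreamFusion(nn.Module):
        def __init__(self, num_classes=7, feat_dim=512):
            super().__init__()
            # Spatial stream: pre-trained CNN fine-tuned on static facial frames.
            self.spatial = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
            self.spatial.fc = nn.Identity()
            # Temporal stream: same backbone fine-tuned on optical-flow images.
            self.temporal = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
            self.temporal.fc = nn.Identity()
            # Fusion network learning joint spatiotemporal features
            # (a plain MLP here as a stand-in for the deep belief network).
            self.fusion = nn.Sequential(
                nn.Linear(2 * feat_dim, 256),
                nn.ReLU(),
                nn.Linear(256, num_classes),
            )

        def forward(self, frames, flow):
            # frames, flow: (batch, 3, 224, 224) segment-level inputs
            s = self.spatial(frames)   # high-level spatial features
            t = self.temporal(flow)    # high-level temporal features
            return self.fusion(torch.cat([s, t], dim=1))

    model = TwoStreamFusion()
    logits = model(torch.randn(2, 3, 224, 224), torch.randn(2, 3, 224, 224))
    print(logits.shape)  # torch.Size([2, 7])

In practice, the optical-flow input would typically stack several flow fields per segment; a single 3-channel tensor is used above only so the pre-trained backbone can be reused without modification.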
Keywords: Machine learning algorithms, Neural networks
Scope of the Article: Deep Learning