Generating Video from Images Using GAN and CVAE
Anoosh G P1, Chetan G2, Mohan Kumar M3, Priyanka B N4, Nagashree Nagaraj5

1Anoosh G P, Pursuing, Bachelor of Engineering in Computer Science, Vidyavardhaka College of Engineering, Mysuru.
2Chetan G, Pursuing, Bachelor of Engineering in Computer Science, Vidyavardhaka College of Engineering, Mysuru.
3Mohan Kumar M, Pursuing, Bachelor of Engineering in Computer Science, Vidyavardhaka College of Engineering, Mysuru.
4Priyanka B N, Pursuing, Bachelor of Engineering in Computer Science, Vidyavardhaka College of Engineering, Mysuru.
5Nagashree Nagaraj, Assistant Professor, Department of Computer Science and Engineering, Vidyavardhaka College of Engineering, Mysuru.
Manuscript received on January 02, 2020. | Revised Manuscript received on January 15, 2020. | Manuscript published on January 30, 2020. | PP: 1401-1404 | Volume-8 Issue-5, January 2020. | Retrieval Number: E6425018520/2020©BEIESP | DOI: 10.35940/ijrte.E6425.018520

© The Authors. Blue Eyes Intelligence Engineering and Sciences Publication (BEIESP). This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)

Abstract: In a given scene, people can often easily predict many events that may occur in the near future. Generalized pixel-level prediction in machine learning systems, however, is difficult because it must contend with the ambiguity inherent in forecasting the future. The objective of this paper is to predict the dense trajectory of pixels in a scene: what will move, where it will travel, and how it will deform over the span of one second. We propose a conditional variational autoencoder as a solution to this problem. We also propose a framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than from G. We focus on two applications of GANs: semi-supervised learning, and the generation of images that humans find visually realistic. We present the Moments in Time Dataset, a large-scale human-annotated collection of one million short videos corresponding to dynamic events unfolding within three seconds.
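To make the adversarial procedure described above concrete, the sketch below trains a generator G against a discriminator D so that D learns to estimate the probability that a sample came from the training data rather than from G. This is a minimal illustrative sketch, not the authors' implementation: the use of PyTorch, the network sizes, the learning rates, and the helper name train_step are all assumptions.

# Minimal sketch of adversarial training of G and D (illustrative
# assumptions throughout; not the paper's architecture).
import torch
import torch.nn as nn

latent_dim, data_dim = 64, 784  # hypothetical sizes

# Generator: maps a latent noise vector to a data-shaped sample.
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                  nn.Linear(256, data_dim), nn.Tanh())
# Discriminator: outputs the probability a sample is real training data.
D = nn.Sequential(nn.Linear(data_dim, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real):
    batch = real.size(0)
    # Discriminator step: push D(real) toward 1 and D(G(z)) toward 0.
    z = torch.randn(batch, latent_dim)
    fake = G(z).detach()  # detach so this step updates only D
    loss_d = bce(D(real), torch.ones(batch, 1)) + \
             bce(D(fake), torch.zeros(batch, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator step: push D(G(z)) toward 1 so samples look like data.
    z = torch.randn(batch, latent_dim)
    loss_g = bce(D(G(z)), torch.ones(batch, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()

# Usage with a random stand-in batch:
losses = train_step(torch.randn(32, data_dim))

At convergence the discriminator's estimate approaches 1/2 everywhere, meaning it can no longer distinguish generated samples from training data, which is the equilibrium the two simultaneously trained models are driven toward.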
Keywords: Generative Adversarial Network, Conditional Variational Autoencoders, Video Generation, Generative model, Discriminative model.
Scope of the Article: Next Generation Internet & Web Architectures.