Distinctly Trained Multi-Source CNN for Multi-Camera Based Vehicle Tracking System
Sanda Sri Harsha1, Harika Simhadri2, Katragadda Raghu3, K.V. Prasad4
1Dr. Sanda Sri Harsha, Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation, Vaddeswaram, Guntur (Dist)- 522502, India.
2Harika Simhadri, IT Consultant, Param Technologies Inc. Minneapolis, Dallas.
3Dr. Katragadda Raghu, Department of Business Management, V R Siddhartha Engineering College, Vijayawada, India.
4Dr. K.V. Prasad, Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation, Vaddeswaram, Guntur (Dist)- 522502, India.
Manuscript received on 04 March 2019 | Revised Manuscript received on 09 March 2019 | Manuscript published on 30 July 2019 | PP: 624-734 | Volume-8 Issue-2, July 2019 | Retrieval Number: B1639078219/19©BEIESP | DOI: 10.35940/ijrte.B1639.078219
© The Authors. Blue Eyes Intelligence Engineering and Sciences Publication (BEIESP). This is an open access article under the CC-BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)
Abstract: In the last few years, the exponential rise in the demand for robust surveillance systems has revitalized academia and industry to develop more efficient vision-based computing systems. Vision-based computing methods have shown potential for diverse surveillance purposes such as Intelligent Transport Systems (ITS), civil surveillance, defense, and other public and private establishment security. However, the computational complexity grows for ITS under occlusion, where multiple cameras must be synchronized to track a certain target vehicle. Classical texture- and color-based approaches are limited and often lead to false-positive outcomes, thus impairing decision-making efficiency. Motivated by this, in this paper a highly robust and novel Distinctly Trained Multi-Source Convolutional Neural Network (DCNN) has been developed that performs pre-training on real-time traffic videos from multiple cameras to track a certain targeted vehicle. Our proposed DCNN vehicle tracking model encompasses multiple shared layers with multiple branches of source-specific layers. In other words, the DCNN is applied to each camera or source, where it performs feature learning and produces a set of features shared across cameras, which is then learnt to identify the Region of Interest (ROI) signifying the "targeted vehicle". Our proposed DCNN model trains each source input iteratively to achieve ROI representations in the shared layers. To perform tracking in a new sequence, the DCNN forms a new network by combining the shared layers of the pre-trained DCNN with a new binary classification layer, which is updated online. This process enables online tracking by scoring ROI windows sampled around the previous ROI state. It helps achieve real-time vehicle tracking even under occlusion and dynamic background conditions.
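The architecture described above (shared layers trained iteratively across camera sources, plus a fresh binary classification head created and updated online for a new tracking sequence) can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual implementation: the single ReLU projection standing in for the shared convolutional stack, the logistic head, and all dimensions are assumptions made for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
IN_DIM, FEAT_DIM = 32, 16  # toy sizes; the paper's layer sizes are not given


def shared_forward(W, x):
    """Shared layers: one ReLU projection standing in for the shared stack."""
    return np.maximum(x @ W, 0.0)


class BinaryHead:
    """Source-specific (or online) binary layer: target vs. background."""

    def __init__(self, feat_dim):
        self.w = np.zeros(feat_dim)
        self.b = 0.0

    def score(self, feats):
        # Probability that each ROI window contains the targeted vehicle.
        return 1.0 / (1.0 + np.exp(-(feats @ self.w + self.b)))

    def sgd_step(self, feats, labels, lr=0.5):
        # One online logistic-regression update, as used when the new
        # binary layer is adapted during tracking.
        err = self.score(feats) - labels
        self.w -= lr * feats.T @ err / len(labels)
        self.b -= lr * float(err.mean())


# --- offline pre-training: one head per camera, shared weights in common ---
W_shared = rng.standard_normal((IN_DIM, FEAT_DIM)) * 0.1
heads = {cam: BinaryHead(FEAT_DIM) for cam in ("cam0", "cam1")}

for _ in range(100):                        # iterate over the sources in turn
    for cam, head in heads.items():
        x = rng.standard_normal((8, IN_DIM))        # stand-in ROI windows
        y = (x[:, 0] > 0).astype(float)             # synthetic target labels
        head.sgd_step(shared_forward(W_shared, x), y)

# --- online tracking: new binary head on top of the frozen shared layers ---
tracker = BinaryHead(FEAT_DIM)
candidates = rng.standard_normal((20, IN_DIM))      # windows near previous ROI
scores = tracker.score(shared_forward(W_shared, candidates))
best = int(np.argmax(scores))                       # next estimated ROI state
```

In use, `candidates` would be image patches sampled around the previous ROI state, and `tracker.sgd_step` would be called periodically with the tracker's own confident detections to keep the online layer adapted to appearance changes.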
Keywords: Multiple Camera Based Vehicle Tracking, Vision Technology, Convolutional Neural Network, Distinctly Trained Multi-Source CNN.
Scope of the Article: Bio-Science and Bio-Technology