Cloud-Assisted Multi-View Video Summarization using CNN and Bi-Directional LSTM

Tanveer Hussain, Khan Muhammad, Amin Ullah, Zehong Cao, Sung Wook Baik*, Victor Hugo C. De Albuquerque

*Corresponding author for this work

Research output: Contribution to journal › Article (journal) › peer-review

109 Citations (Scopus)

Abstract

The massive amount of video data produced by industrial surveillance networks instigates various challenges in exploring these videos for many applications, such as video summarization (VS), analysis, indexing, and retrieval. The task of multi-view video summarization (MVS) is particularly challenging due to the enormous volume of data, redundancy, overlapping views, lighting variations, and inter-view correlations. Existing approaches rely on low-level features and clustering-based soft computing techniques that cannot fully exploit MVS. In this article, we achieve MVS by integrating deep-neural-network-based soft computing techniques into a two-tier framework. The first, online tier performs target-appearance-based shot segmentation and stores the shots in a lookup table that is transmitted to the cloud for further processing. The second tier extracts deep features from each frame of a sequence in the lookup table and passes them to a deep bidirectional long short-term memory (DB-LSTM) network to obtain informativeness probabilities and generate a summary. Experimental evaluation on a benchmark dataset and industrial surveillance data from YouTube confirms the better performance of our system compared to state-of-the-art MVS methods.
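The second tier described above (per-frame CNN features scored by a bidirectional LSTM) can be illustrated with a minimal PyTorch sketch. This is not the authors' implementation: the backbone (MobileNetV2), the two-layer bidirectional LSTM, the hidden size, and the 0.5 selection threshold are all illustrative assumptions.

# Minimal sketch (assumed, not the paper's code): CNN features per frame are fed
# to a bidirectional LSTM that scores each frame's informativeness; frames above
# a threshold form the summary. Backbone and hyperparameters are placeholders.
import torch
import torch.nn as nn
from torchvision import models

class FrameScorer(nn.Module):
    def __init__(self, feat_dim=1280, hidden=256):
        super().__init__()
        backbone = models.mobilenet_v2(weights=None)   # placeholder CNN feature extractor
        self.cnn = nn.Sequential(backbone.features, nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.bilstm = nn.LSTM(feat_dim, hidden, num_layers=2,
                              batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)           # per-frame informativeness score

    def forward(self, frames):                         # frames: (batch, time, 3, H, W)
        b, t = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1)).view(b, t, -1)   # per-frame deep features
        out, _ = self.bilstm(feats)                              # temporal context, both directions
        return torch.sigmoid(self.head(out)).squeeze(-1)        # (batch, time) probabilities

# Usage: score a short sequence and keep frames whose probability exceeds a threshold.
scores = FrameScorer()(torch.randn(1, 16, 3, 224, 224))
summary_idx = (scores[0] > 0.5).nonzero().flatten()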
Original language: English
Pages (from-to): 77-86
Journal: IEEE Transactions on Industrial Informatics
Volume: 16
Issue number: 1
DOIs
Publication status: Published - 17 Jul 2019
