Abstract
The widespread use of surveillance cameras in smart cities has produced an enormous volume of video data whose indexing, retrieval, and management are challenging. Video summarization aims to detect the important visual content in a surveillance stream and can enable efficient indexing and retrieval of the required data from huge surveillance datasets. In this research article, we propose an efficient convolutional neural network (CNN)-based method for summarizing surveillance videos on resource-constrained devices. Shot segmentation is the backbone of video summarization methods and affects the overall quality of the generated summary; we therefore propose an effective shot segmentation method based on deep features. Furthermore, our framework maintains the interestingness of the generated summary using image memorability and entropy: within each shot, the frame with the highest memorability and entropy score is selected as the keyframe. The proposed method is evaluated on two benchmark video datasets, and the results are encouraging compared with state-of-the-art video summarization methods.
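The shot segmentation and keyframe-selection steps lend themselves to a short illustration. Below is a minimal sketch, not the authors' implementation: it assumes a per-frame deep feature vector has already been extracted by some CNN, places a shot boundary wherever consecutive feature vectors diverge (cosine distance above an assumed threshold), and keeps the highest-entropy frame of each shot. The threshold value and function names are illustrative assumptions, and the learned memorability score used in the paper is omitted here.

```python
# Sketch of deep-feature shot segmentation + entropy-based keyframe selection.
# Assumptions: `features[i]` is a CNN feature vector for frame i (extraction not
# shown), the boundary threshold is illustrative, and the paper's memorability
# score is omitted for brevity.
import cv2
import numpy as np

def frame_entropy(frame):
    """Shannon entropy of the grayscale intensity histogram of a BGR frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    hist = cv2.calcHist([gray], [0], None, [256], [0, 256]).ravel()
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def cosine_distance(a, b):
    """1 - cosine similarity between two feature vectors."""
    return 1.0 - float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def segment_shots(features, threshold=0.3):
    """Split frame indices into shots at large jumps in deep-feature space."""
    shots, start = [], 0
    for i in range(1, len(features)):
        if cosine_distance(features[i - 1], features[i]) > threshold:
            shots.append(list(range(start, i)))
            start = i
    shots.append(list(range(start, len(features))))
    return shots

def select_keyframes(frames, features, threshold=0.3):
    """Return one keyframe index per shot: the frame with the highest entropy."""
    keyframes = []
    for shot in segment_shots(features, threshold):
        scores = [frame_entropy(frames[i]) for i in shot]
        keyframes.append(shot[int(np.argmax(scores))])
    return keyframes
```

In this simplified form, the keyframe score depends only on entropy; combining it with a memorability predictor, as the paper does, would amount to replacing `frame_entropy` with a weighted sum of the two scores.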
Original language | English |
---|---|
Pages (from-to) | 370-375 |
Number of pages | 6 |
Journal | Pattern Recognition Letters |
Volume | 130 |
DOIs | |
Publication status | Published - 29 Feb 2020 |
Keywords
- Energy-efficiency
- Resource-constrained devices
- Surveillance
- Video analysis
- Video summarization