Conflux LSTMs Network: A Novel Approach for Multi-View Action Recognition

A. Ullah, K. Muhammad, T. Hussain, S.W. Baik

Research output: Contribution to journal › Article › peer-review

41 Citations (Scopus)


Multi-view action recognition (MVAR) is an effective technique that acquires complementary cues from data captured at different viewpoints; however, it is not yet well explored. The MVAR domain faces several challenges, such as divergence in viewpoints, invisible regions, and different scales of appearance in each view, which require better solutions for real-world applications. In this paper, we present a conflux long short-term memory (LSTMs) network to recognize actions from multi-view cameras. The proposed framework has four major steps: 1) frame-level feature extraction; 2) propagation through the conflux LSTMs network to learn view self-reliant patterns; 3) view inter-reliant pattern learning and correlation computation; and 4) action classification. First, we extract deep features from a sequence of frames using a pre-trained VGG19 CNN model for each view. Second, we forward the extracted features to the conflux LSTMs network to learn the view self-reliant patterns. In the next step, we compute inter-view correlations using the pairwise dot product of the LSTMs network outputs corresponding to different views to learn the view inter-reliant patterns. In the final step, we use flatten layers followed by a SoftMax classifier for action recognition. Experimental results on benchmark datasets report an increase of 3% and 2% over the state-of-the-art on the Northwestern-UCLA and MCAD datasets, respectively.
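The correlation and classification steps of the abstract (steps 3 and 4) can be sketched in NumPy. This is a minimal illustrative sketch, not the authors' implementation: the function names, tensor shapes, and the toy linear classifier are assumptions; in the paper the inputs would be the per-view conflux LSTMs outputs rather than random arrays.

```python
import numpy as np

def pairwise_view_correlation(view_outputs):
    """Step 3 (sketch): pairwise dot-product correlations between
    per-view LSTM output sequences. Each element of `view_outputs`
    is a (T, d) array: T time steps, d hidden units. Returns one
    (T, T) correlation map per unordered view pair. Shapes and the
    function name are illustrative assumptions."""
    maps = {}
    n = len(view_outputs)
    for i in range(n):
        for j in range(i + 1, n):
            # dot product over the feature dimension links view i and view j
            maps[(i, j)] = view_outputs[i] @ view_outputs[j].T
    return maps

def softmax(z):
    # numerically stable SoftMax over a 1-D score vector
    e = np.exp(z - z.max())
    return e / e.sum()

# Toy stand-in for LSTM outputs: 2 views, 4 time steps, 8 hidden units
rng = np.random.default_rng(0)
views = [rng.standard_normal((4, 8)) for _ in range(2)]
corr = pairwise_view_correlation(views)

# Step 4 (sketch): flatten the correlation maps and classify with SoftMax
flat = np.concatenate([m.ravel() for m in corr.values()])
W = 0.1 * rng.standard_normal((10, flat.size))  # hypothetical 10-class head
probs = softmax(W @ flat)
```

With two views the sketch produces a single (4, 4) correlation map; with more views the number of maps grows quadratically, which is why they are flattened and concatenated before the classifier.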
Original language: English
Pages (from-to): 1-9
Publication status: Published - 23 Feb 2021


Keywords

  • Artificial intelligence
  • Deep learning
  • Action recognition
  • Multi-view video analytics
  • Sequence learning
  • LSTM
  • CNN
  • Multi-view action recognition

