Context-driven Multi-stream LSTM (M-LSTM) for Recognizing Fine-Grained Activity of Drivers

Ardhendu Behera, Alex Keidel, Bappaditya Debnath

Research output: Contribution to journal › Article (journal) › peer-review

16 Citations (Scopus)
402 Downloads (Pure)


Automatic recognition of in-vehicle activities has a significant impact on next-generation intelligent vehicles. In this paper, we present a novel Multi-stream Long Short-Term Memory (M-LSTM) network for recognizing driver activities. We bring together ideas from recent work on LSTMs and transfer learning for object detection and body pose, exploring the use of deep convolutional neural networks (CNNs). Recent work has also shown that representations such as hand-object interactions are important cues for characterizing human activities. The proposed M-LSTM integrates these ideas under one framework, in which two streams focus on appearance information at two different levels of abstraction, while the other two streams analyze contextual information involving the configuration of body parts and body-object interactions. The proposed contextual descriptor is built to be semantically rich and meaningful, and even when coupled with appearance features it turns out to be highly discriminative. We validate this on two challenging datasets consisting of driver activities.
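The four-stream design described above can be sketched as follows. This is a minimal, hypothetical illustration in NumPy, not the authors' implementation: each stream (global appearance, local appearance, body pose, body-object interaction) feeds its per-frame features through its own LSTM, and the final hidden states are concatenated and classified. All dimensions, stream names, and the random features are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def lstm_step(x, h, c, W, U, b):
    """One LSTM step; gate pre-activations stacked as [i, f, o, g]."""
    z = W @ x + U @ h + b
    n = h.size
    i, f, o = (1.0 / (1.0 + np.exp(-z[k * n:(k + 1) * n])) for k in range(3))
    g = np.tanh(z[3 * n:])           # candidate cell state
    c = f * c + i * g                # update cell memory
    h = o * np.tanh(c)               # emit new hidden state
    return h, c

def run_stream(seq, hidden=8):
    """Run one stream's frame features through a randomly initialized LSTM."""
    d = seq.shape[1]
    W = rng.normal(0, 0.1, (4 * hidden, d))
    U = rng.normal(0, 0.1, (4 * hidden, hidden))
    b = np.zeros(4 * hidden)
    h, c = np.zeros(hidden), np.zeros(hidden)
    for x in seq:
        h, c = lstm_step(x, h, c, W, U, b)
    return h  # final hidden state summarizes the stream

T = 10  # number of video frames (assumed)
streams = {
    "appearance_global": rng.normal(size=(T, 32)),  # e.g. full-frame CNN features
    "appearance_local":  rng.normal(size=(T, 32)),  # e.g. region-level CNN features
    "body_pose":         rng.normal(size=(T, 16)),  # body-part configuration descriptor
    "body_object":       rng.normal(size=(T, 16)),  # body-object interaction descriptor
}

# Late fusion: concatenate the per-stream summaries, then classify.
fused = np.concatenate([run_stream(s) for s in streams.values()])
n_classes = 5  # number of driver activities (assumed)
W_out = rng.normal(0, 0.1, (n_classes, fused.size))
logits = W_out @ fused
probs = np.exp(logits - logits.max())
probs /= probs.sum()
print(probs.shape)
```

The concatenate-then-classify step is one simple way to fuse the streams; a learned weighting of the streams before classification would be a natural alternative.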
Original language: English
Pages (from-to): 298-314
Journal: Lecture Notes in Computer Science (LNCS) - Pattern Recognition
Early online date: 14 Feb 2019
Publication status: E-pub ahead of print - 14 Feb 2019


Keywords

  • Multi-stream Long Short-Term Memory (M-LSTM)
  • Deep Learning
  • Transfer Learning
  • Autonomous Vehicles
  • In-vehicle Activity Monitoring
  • Body pose
  • Modelling body-object interactions


