DepNet: An Automated Intelligent System using Deep Learning for Video-based Depression Analysis

HARI MOHAN PANDEY*, Lang He, Chenguang Guo, Prayag Tiwari, Rui Su, Wei Dang

*Corresponding author for this work

Research output: Contribution to journal › Article (peer-reviewed)

Abstract

As a common mental disorder, depression has attracted many researchers in the affective computing field to the task of estimating depression severity. However, existing approaches based on Deep Learning (DL) mainly focus on single facial images, without considering sequence information, when predicting the depression scale. In this paper, an integrated framework, termed DepNet, is proposed for the automatic diagnosis of depression from facial image sequences in videos. Specifically, several pre-trained models are adopted to represent Low-Level Features (LLF), and a Feature Aggregation Module (FAM) is proposed to capture high-level characteristic information for depression analysis. More importantly, the discriminative facial characteristics of depression can be mined to assist clinicians in diagnosing the severity of depressed subjects. Multi-scale experiments carried out on the AVEC2013 and AVEC2014 databases show the excellent performance of the intelligent approach: the root mean square error (RMSE) between the predicted values and the BDI-II scores is 9.17 and 9.01 on the two databases, respectively, which is lower than that of state-of-the-art video-based depression recognition methods.
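The pipeline described in the abstract can be sketched in a few lines: per-frame low-level features (assumed to come from a pre-trained CNN, which is omitted here), aggregated over the sequence into a clip-level representation, and scored by a regression head evaluated with RMSE against BDI-II labels. This is a minimal illustration under stated assumptions, not the paper's actual FAM: mean pooling stands in for the aggregation module, the regression weights are random rather than learned, and all data are synthetic.

```python
import numpy as np

def aggregate_features(frame_features):
    """Stand-in for the paper's FAM: collapse per-frame low-level
    features (shape T x D) into one clip-level vector.
    Mean pooling is an illustrative choice, not the published module."""
    return frame_features.mean(axis=0)

def rmse(pred, target):
    """Root mean square error between predicted and ground-truth
    BDI-II scores, the metric reported in the abstract."""
    pred, target = np.asarray(pred, float), np.asarray(target, float)
    return float(np.sqrt(np.mean((pred - target) ** 2)))

# Hypothetical data: 3 video clips, 8 frames each, 4-dim frame features
# (real LLFs from a pre-trained CNN would be far higher-dimensional).
rng = np.random.default_rng(0)
clips = [rng.normal(size=(8, 4)) for _ in range(3)]
clip_vectors = np.stack([aggregate_features(c) for c in clips])

# A linear regression head with illustrative (not trained) weights.
w, b = rng.normal(size=4), 12.0
predicted_scores = clip_vectors @ w + b

# Toy RMSE computation on made-up score pairs.
print(rmse([10.0, 14.0, 20.0], [12.0, 14.0, 17.0]))  # -> ~2.08
```

In the paper, the aggregation and regression stages are trained end-to-end on AVEC2013/AVEC2014; the sketch only shows how the pieces fit together and how the reported RMSE values would be computed.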
Original language: English
Article number: INT2.20210482R3
Journal: International Journal of Intelligent Systems
Early online date: 6 Oct 2021
Publication status: E-pub ahead of print - 6 Oct 2021

Keywords

  • Depression
  • Industrial intelligent system (IIS)
  • Deep Learning (DL)
  • Pattern recognition
  • Feature aggregation module (FAM)
