Rotation Axis Focused Attention Network (RAFA-Net) for Estimating Head Pose

Ardhendu Behera, Zachary Wharton, Pradeep Ruwan Padmasiri Galbokka Hewage, Swagat Kumar

Research output: Contribution to journal › Conference proceeding article (ISSN) › peer-review

Abstract

Head pose is a vital indicator of human attention and behavior. Therefore, automatic estimation of head pose from images is key to many real-world applications. In this paper, we propose a novel approach for head pose estimation from a single RGB image. Many existing approaches predict head pose by localizing facial landmarks and then solving a 2D-to-3D correspondence problem with a mean head model. Such approaches rely entirely on the landmark detection accuracy, an ad-hoc alignment step, and the extraneous head model. To address this drawback, we present an end-to-end deep network that explores an innovative rotation-axis (yaw, pitch, and roll) focused attention mechanism to capture subtle changes in images. The mechanism applies attentional spatial pooling to the output of a self-attention layer, learns the importance of fine-grained to coarse spatial structures, and combines them to capture rich semantic information for a given rotation axis. Experimental evaluation on three benchmark datasets shows that our approach is highly competitive with state-of-the-art methods, both with and without facial landmarks.
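To make the mechanism described in the abstract concrete, below is a minimal PyTorch sketch of one rotation-axis attention head operating on a CNN backbone feature map. The class name `AxisAttentionHead`, the pooling scales, and all layer sizes are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch in the spirit of RAFA-Net: per-axis self-attention,
# attentional spatial pooling at several granularities, and a learned
# coarse-to-fine combination feeding a single-angle regressor.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AxisAttentionHead(nn.Module):
    """Regresses one rotation angle (yaw, pitch, or roll) from CNN features."""

    def __init__(self, channels: int = 512, scales=(1, 2, 4)):
        super().__init__()
        self.scales = scales  # coarse-to-fine spatial grid sizes (assumed)
        # Self-attention over spatial positions (single head for brevity).
        self.attn = nn.MultiheadAttention(channels, num_heads=1, batch_first=True)
        # Learned importance weights over the pooled scales.
        self.scale_logits = nn.Parameter(torch.zeros(len(scales)))
        self.regressor = nn.Linear(channels, 1)  # one angle per head

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        # feat: (B, C, H, W) backbone feature map.
        b, c, h, w = feat.shape
        tokens = feat.flatten(2).transpose(1, 2)           # (B, H*W, C)
        attended, _ = self.attn(tokens, tokens, tokens)    # self-attention
        attended = attended.transpose(1, 2).reshape(b, c, h, w)

        # Attentional spatial pooling at several granularities, then a
        # softmax-weighted combination (an assumed stand-in for the paper's
        # coarse-to-fine pooling).
        pooled = [F.adaptive_avg_pool2d(attended, s).flatten(2).mean(-1)
                  for s in self.scales]                    # each (B, C)
        weights = torch.softmax(self.scale_logits, dim=0)  # (num_scales,)
        combined = sum(w * p for w, p in zip(weights, pooled))
        return self.regressor(combined).squeeze(-1)        # (B,) angle

# Three independent heads, one per rotation axis, on shared backbone features.
heads = nn.ModuleDict({axis: AxisAttentionHead() for axis in ("yaw", "pitch", "roll")})
features = torch.randn(2, 512, 7, 7)                       # dummy backbone output
angles = {axis: head(features) for axis, head in heads.items()}
```

Using a separate head per rotation axis mirrors the paper's idea that yaw, pitch, and roll depend on different spatial cues; the weighting over pooling scales here is simply one plausible way to combine fine-grained and coarse structures.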
Original language: English
Journal: Asian Conference on Computer Vision - ACCV 2020
Publication status: Accepted/In press - 18 Sept 2020

Keywords

  • Deep Regression
  • Attention Network
  • Attentional pooling
  • CNN
  • Head pose estimation
  • Vanilla deep regression
  • Self-attention
  • Coarse-to-fine pooling

Research Centres

  • Centre for Intelligent Visual Computing Research
  • Data Science STEM Research Centre
  • Data and Complex Systems Research Centre
