A CNN Model for Head Pose Recognition using Wholes and Regions

Ardhendu Behera, Andrew Gidney, Zachary Wharton, Daniel Robinson, Keiron Quinn

Research output: Contribution to journal › Conference proceeding article (ISSN) › Research › peer-review

Abstract

Head pose recognition and monitoring are key to many real-world applications, since head pose is a vital indicator of human attention and behavior. Currently, head pose is often computed by localizing landmarks on a target face and solving the 2D-to-3D correspondence problem with a mean head model. Recent research has shown that this is a brittle approach, since it relies entirely on the accuracy of landmark detection, an extraneous head model, and an ad-hoc alignment step. Recent work has also shown that the best-performing methods often combine multiple low-level image features with high-level contextual cues. In this paper, we present a novel end-to-end deep network that is inspired by these ideas and explores regions within an image to capture topological changes due to changes in viewpoint. We adapt existing state-of-the-art deep CNNs to use more than one region for accurate head pose recognition. Our regions consist of one or more consecutive cells and are adapted from the strategies used in computing the HOG descriptor. Extensive experimental results on head pose recognition using four different large-scale datasets demonstrate that the proposed approach outperforms many state-of-the-art deep CNN models. We also compare our pose recognition performance with the latest OpenFace 2.0 facial behavior analysis toolkit. In addition, we contribute head pose annotations to a large-scale dataset (VGGFace2).
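The region strategy described above borrows from the HOG descriptor, where an image is divided into a grid of small cells and consecutive cells are grouped into overlapping blocks. As an illustrative sketch only (not the authors' implementation; the cell size, region size, and stride below are assumed values), region extraction in that style might look like:

```python
import numpy as np

def extract_regions(image, cell=8, cells_per_region=2, stride=1):
    """Split an image into a grid of cell x cell cells, then group
    consecutive cells into overlapping square regions (HOG-block style).
    Returns the whole image plus a list of region crops, which could be
    fed as separate inputs to a multi-branch CNN."""
    h, w = image.shape[:2]
    n_rows, n_cols = h // cell, w // cell   # cell-grid dimensions
    side = cells_per_region * cell          # region side length in pixels
    regions = []
    for r in range(0, n_rows - cells_per_region + 1, stride):
        for c in range(0, n_cols - cells_per_region + 1, stride):
            y, x = r * cell, c * cell
            regions.append(image[y:y + side, x:x + side])
    return image, regions

# A 32x32 image with 8x8 cells gives a 4x4 cell grid; 2x2-cell regions
# with stride 1 yield 3x3 = 9 overlapping 16x16 regions.
whole, regs = extract_regions(np.zeros((32, 32)), cell=8, cells_per_region=2)
```

With overlapping regions, each part of the face contributes to several inputs, so viewpoint-induced topological changes can be captured both locally (per region) and globally (whole image).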
Original language: English
Journal: IEEE International Conference on Automatic Face and Gesture Recognition
Early online date: 11 Jul 2019
DOI: 10.1109/FG.2019.8756536
Publication status: E-pub ahead of print, 11 Jul 2019
Event: 14th IEEE International Conference on Automatic Face & Gesture Recognition, Lille, France
Duration: 14 May 2019 – 18 May 2019

Keywords

  • Face orientation
  • Face gestures
  • CNN
  • Deep learning
  • head pose

Cite this

@article{e74457cf97c64ae0ad01c0e2234a8ef6,
title = "A CNN Model for Head Pose Recognition using Wholes and Regions",
abstract = "Head pose recognition and monitoring are key to many real-world applications, since head pose is a vital indicator of human attention and behavior. Currently, head pose is often computed by localizing landmarks on a target face and solving the 2D-to-3D correspondence problem with a mean head model. Recent research has shown that this is a brittle approach, since it relies entirely on the accuracy of landmark detection, an extraneous head model, and an ad-hoc alignment step. Recent work has also shown that the best-performing methods often combine multiple low-level image features with high-level contextual cues. In this paper, we present a novel end-to-end deep network that is inspired by these ideas and explores regions within an image to capture topological changes due to changes in viewpoint. We adapt existing state-of-the-art deep CNNs to use more than one region for accurate head pose recognition. Our regions consist of one or more consecutive cells and are adapted from the strategies used in computing the HOG descriptor. Extensive experimental results on head pose recognition using four different large-scale datasets demonstrate that the proposed approach outperforms many state-of-the-art deep CNN models. We also compare our pose recognition performance with the latest OpenFace 2.0 facial behavior analysis toolkit. In addition, we contribute head pose annotations to a large-scale dataset (VGGFace2).",
keywords = "Face orientation, Face gestures, CNN, Deep learning, head pose",
author = "Ardhendu Behera and Andrew Gidney and Zachary Wharton and Daniel Robinson and Keiron Quinn",
year = "2019",
month = "7",
day = "11",
doi = "10.1109/FG.2019.8756536",
language = "English",
journal = "IEEE International Conference on Automatic Face and Gesture Recognition",
publisher = "IEEE Xplore",

}
