TY - GEN
T1 - A CNN model for head pose recognition using wholes and regions
AU - Behera, Ardhendu
AU - Gidney, Andrew G.
AU - Wharton, Zachary
AU - Robinson, Daniel
AU - Quinn, Keiron
PY - 2019/5/1
Y1 - 2019/5/1
N2 - Head pose recognition and monitoring are key to many real-world applications, since head pose is a vital indicator of human attention and behavior. Currently, head pose is often computed by localizing landmarks on a target face and solving the 2D-to-3D correspondence problem with a mean head model. Recent research has shown that this is a brittle approach, since it relies entirely on the accuracy of landmark detection, the extraneous head model, and an ad hoc alignment step. Recent work has also shown that the best-performing methods often combine multiple low-level image features with high-level contextual cues. In this paper, we present a novel end-to-end deep network, inspired by these ideas, that explores regions within an image to capture topological changes due to changes in viewpoint. We adapt existing state-of-the-art deep CNNs to use more than one region for accurate head pose recognition. Our regions consist of one or more consecutive cells and are adapted from the strategies used in computing the HOG descriptor. Extensive experimental results on head pose recognition using four different large-scale datasets demonstrate that the proposed approach outperforms many state-of-the-art deep CNN models. We also compare our pose recognition performance with the latest OpenFace 2.0 facial behavior analysis toolkit. In addition, we contribute head pose annotations to a large-scale dataset (VGGFace2).
AB - Head pose recognition and monitoring are key to many real-world applications, since head pose is a vital indicator of human attention and behavior. Currently, head pose is often computed by localizing landmarks on a target face and solving the 2D-to-3D correspondence problem with a mean head model. Recent research has shown that this is a brittle approach, since it relies entirely on the accuracy of landmark detection, the extraneous head model, and an ad hoc alignment step. Recent work has also shown that the best-performing methods often combine multiple low-level image features with high-level contextual cues. In this paper, we present a novel end-to-end deep network, inspired by these ideas, that explores regions within an image to capture topological changes due to changes in viewpoint. We adapt existing state-of-the-art deep CNNs to use more than one region for accurate head pose recognition. Our regions consist of one or more consecutive cells and are adapted from the strategies used in computing the HOG descriptor. Extensive experimental results on head pose recognition using four different large-scale datasets demonstrate that the proposed approach outperforms many state-of-the-art deep CNN models. We also compare our pose recognition performance with the latest OpenFace 2.0 facial behavior analysis toolkit. In addition, we contribute head pose annotations to a large-scale dataset (VGGFace2).
UR - http://www.scopus.com/inward/record.url?scp=85070451101&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85070451101&partnerID=8YFLogxK
U2 - 10.1109/FG.2019.8756536
DO - 10.1109/FG.2019.8756536
M3 - Conference proceeding (ISBN)
T3 - Proceedings - 14th IEEE International Conference on Automatic Face and Gesture Recognition, FG 2019
BT - Proceedings - 14th IEEE International Conference on Automatic Face and Gesture Recognition, FG 2019
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 14th IEEE International Conference on Automatic Face and Gesture Recognition, FG 2019
Y2 - 14 May 2019 through 18 May 2019
ER -