TY - JOUR
T1 - SR-GNN: Spatial Relation-Aware Graph Neural Network for Fine-Grained Image Categorization
AU - Bera, Asish
AU - Wharton, Zachary
AU - Liu, Yonghuai
AU - Bessis, Nik
AU - Behera, Ardhendu
N1 - Publisher Copyright:
© 2022 IEEE.
PY - 2022/9/14
Y1 - 2022/9/14
AB - Over the past few years, significant progress has been made in image recognition based on deep convolutional neural networks (CNNs), mainly due to the strong ability of such networks to mine discriminative object pose and part information from texture and shape. This is often insufficient for fine-grained visual classification (FGVC), which exhibits high intra-class and low inter-class variance due to occlusions, deformation, illumination, etc. Thus, an expressive feature representation describing global structural information is key to characterizing an object/scene. To this end, we propose a method that effectively captures subtle changes by aggregating context-aware features from the most relevant image regions, together with their importance in discriminating fine-grained categories, while avoiding bounding-box and/or distinguishable-part annotations. Inspired by recent advances in self-attention and graph neural network (GNN) approaches, our method includes a simple yet effective relation-aware feature transformation and its refinement using a context-aware attention mechanism to boost the discriminability of the transformed features in an end-to-end learning process. Our model is evaluated on eight benchmark datasets consisting of fine-grained objects and human-object interactions, and it outperforms state-of-the-art approaches by a significant margin in recognition accuracy.
KW - Attention mechanism
KW - convolutional neural networks
KW - fine-grained visual recognition
KW - graph neural networks
KW - human action
KW - relation-aware feature transformation
UR - https://research.edgehill.ac.uk/en/publications/a64439b1-80b3-4d69-bc5d-e04d17d8d6b3
U2 - 10.1109/TIP.2022.3205215
DO - 10.1109/TIP.2022.3205215
M3 - Article (journal)
AN - SCOPUS:85139335547
SN - 1057-7149
VL - 31
SP - 6017
EP - 6031
JO - IEEE Transactions on Image Processing
JF - IEEE Transactions on Image Processing
ER -