Automatic semantic video annotation in wide domain videos based on similarity and commonsense knowledgebases

A. Altadmri, A. Ahmed

Research output: Chapter in Book/Report/Conference proceeding › Conference proceeding (ISBN) › peer-review

8 Citations (Scopus)

Abstract

In this paper, we introduce a novel framework for automatic Semantic Video Annotation. As this framework detects possible events occurring in video clips, it forms the annotating base of a video search engine. To achieve this purpose, the system has to be able to operate on uncontrolled wide-domain videos, so all layers have to be based on generic features. The aim is to help bridge the “semantic gap”, the difference between low-level visual features and human perception, by finding videos with similar visual events and then analysing their free-text annotations, using commonsense knowledgebases, to find the best description for the new video. Experiments were performed on wide-domain video clips from the TRECVID 2005 BBC rush standard database. Results from these experiments show promising integration of the two layers in producing expressive annotations for the input video. These results were evaluated based on retrieval performance.
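For illustration only, the sketch below mirrors the two-layer idea described in the abstract: rank previously annotated clips by low-level visual-feature similarity, then fuse their free-text annotations using commonsense relatedness. All data, scores, and function names here are hypothetical stand-ins (including the toy relatedness table used in place of a full commonsense knowledgebase), not the paper's actual implementation.

```python
import numpy as np

# Hypothetical annotated corpus: each entry pairs a generic low-level
# feature vector (e.g. a colour/motion histogram) with its free-text annotation.
CORPUS = [
    (np.array([0.9, 0.1, 0.3]), "a person walks across a street"),
    (np.array([0.2, 0.8, 0.5]), "a car drives along a road"),
    (np.array([0.85, 0.2, 0.4]), "people crossing a busy road"),
]

# Toy stand-in for a commonsense knowledgebase: pairwise word relatedness
# scores. The framework in the paper would query a real knowledgebase instead.
COMMONSENSE_RELATEDNESS = {
    ("person", "people"): 0.9,
    ("walks", "crossing"): 0.7,
    ("street", "road"): 0.8,
}

def cosine(a, b):
    """Similarity between two low-level feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve_similar(query_features, corpus, k=2):
    """Layer 1: rank annotated clips by visual similarity and keep the top k."""
    ranked = sorted(corpus, key=lambda item: cosine(query_features, item[0]), reverse=True)
    return [annotation for _, annotation in ranked[:k]]

def relatedness(w1, w2):
    """Look up commonsense relatedness between two words (symmetric)."""
    if w1 == w2:
        return 1.0
    return COMMONSENSE_RELATEDNESS.get((w1, w2), COMMONSENSE_RELATEDNESS.get((w2, w1), 0.0))

def fuse_annotations(annotations):
    """Layer 2: score candidate words by how well they agree, in commonsense
    terms, with the words of all retrieved annotations, and keep the best."""
    words = [w for a in annotations for w in a.split()]
    scores = {w: sum(relatedness(w, other) for other in words) for w in set(words)}
    return sorted(scores, key=scores.get, reverse=True)[:5]

if __name__ == "__main__":
    query = np.array([0.88, 0.15, 0.35])   # features of the unannotated input clip
    candidates = retrieve_similar(query, CORPUS)
    print("retrieved annotations:", candidates)
    print("fused description terms:", fuse_annotations(candidates))
```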
Original language: English
Title of host publication: ICSIPA09 - 2009 IEEE International Conference on Signal and Image Processing Applications, Conference Proceedings
Pages: 74-79
DOIs
Publication status: Published - 19 Nov 2009
Event: 2009 IEEE International Conference on Signal and Image Processing Applications - Kuala Lumpur, Malaysia
Duration: 18 Nov 2009 - 19 Nov 2009

Conference

Conference: 2009 IEEE International Conference on Signal and Image Processing Applications
Country/Territory: Malaysia
City: Kuala Lumpur
Period: 18/11/09 - 19/11/09

Research Centres

  • Centre for Intelligent Visual Computing Research
