Project Details
Description
Chinese opera is one of the most important intangible cultural heritages of China. As younger generations pay less and less attention to this traditional art, it faces a crisis of rapid decline and disappearance. The key to the survival of opera in the new era is to integrate virtual presentation methods, such as light-and-shadow holography, that enable virtual and immersive interaction. However, to implement fully automatic virtual presentation and scene tracking and switching, it is indispensable to automatically perceive, understand and generate the facial expressions, facial masks, costumes and singing audio of opera performers. The specific costumes and facial expressions in traditional Chinese opera represent particular roles and emotional tones. Because current emotion research is still largely limited to analysing facial expressions and audio, it struggles to meet the requirements of opera's many forms of expressive art, so a thorough investigation and breakthrough are urgently needed.

With the goal of understanding and generating opera scenes, this project investigates multi-modal emotion recognition and semi-supervised classification techniques; integrates facial expression recognition, facial expression generation for performers, audio recognition, and costume and facial-makeup analysis; and tackles key challenges in multi-modal emotion recognition and generation, such as insufficient samples and the perception, understanding and representation of context. The project aims to break through the key technical obstacles of this new form of presentation and lay a sound technical foundation for the development and popularization of opera, giving it new life in the new era.
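To illustrate how signals from several modalities might be combined, the following is a minimal sketch of weighted late fusion over per-modality emotion distributions. The modality names, emotion labels, and weights are purely illustrative assumptions, not the project's actual method or data.

```python
# Illustrative emotion labels; the project's real label set is not specified here.
EMOTIONS = ["joy", "anger", "sorrow", "fear"]

def late_fusion(scores_by_modality, weights):
    """Weighted average of per-modality emotion score distributions.

    scores_by_modality: dict mapping modality name -> list of scores
                        (one score per emotion in EMOTIONS).
    weights:            dict mapping modality name -> fusion weight.
    """
    total_w = sum(weights[m] for m in scores_by_modality)
    fused = [0.0] * len(EMOTIONS)
    for modality, scores in scores_by_modality.items():
        for i, s in enumerate(scores):
            fused[i] += weights[modality] * s
    return [v / total_w for v in fused]

# Hypothetical classifier outputs for one moment in a performance.
scores = {
    "face":    [0.70, 0.10, 0.15, 0.05],  # facial-expression classifier
    "audio":   [0.40, 0.20, 0.30, 0.10],  # singing-audio classifier
    "costume": [0.60, 0.05, 0.30, 0.05],  # costume/facial-makeup role prior
}
weights = {"face": 0.5, "audio": 0.3, "costume": 0.2}

fused = late_fusion(scores, weights)
prediction = EMOTIONS[fused.index(max(fused))]
```

In practice a system like the one described above would learn fusion weights (or a joint representation) rather than fix them by hand; this sketch only shows the shape of the late-fusion step.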
| Status | Finished |
| --- | --- |
| Effective start/end date | 1/01/21 → 31/12/24 |
Collaborative partners
- Edge Hill University
- Northwest University China (lead)
Keywords
- Opera intangible cultural heritage
- Multi-modal scene perception
- Facial emotion recognition
- Facial masks emotion recognition
- Deep neural network
Research Groups
- Visual Computing Lab