Multimodal medical image fusion algorithm in the era of big data

Wei Tan, Prayag Tiwari, Hari Mohan Pandey*, Catarina Moreira, Amit Kumar Jaiswal

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

167 Citations (Scopus)
45 Downloads (Pure)

Abstract

In image-based medical decision-making, different modalities of medical images of a given organ of a patient are captured. Each image represents a modality that renders the examined organ differently, leading to different observations of a given phenomenon (such as stroke). Accurate analysis of each modality supports more appropriate medical decisions. Multimodal medical imaging is a research field concerned with developing robust algorithms that fuse image information acquired by different sets of modalities. In this paper, a novel multimodal medical image fusion algorithm is proposed for a wide range of medical diagnostic problems. It applies a boundary-measured pulse-coupled neural network fusion strategy and an energy attribute fusion strategy in the non-subsampled shearlet transform domain. The algorithm was validated on a dataset of more than 100 image pairs covering several diseases, namely glioma, Alzheimer's disease, and metastatic bronchogenic carcinoma. Qualitative and quantitative evaluation verifies that the proposed algorithm outperforms most current algorithms, providing useful insights for medical diagnosis.
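As a rough illustration of the fusion pipeline described in the abstract, the sketch below decomposes two registered single-channel images into low- and high-frequency subbands, fuses the low-frequency subbands with a local-energy (energy attribute) rule and the high-frequency subbands with a pulse-coupled neural network firing map, then reconstructs the fused image. PyWavelets' discrete wavelet transform is used only as a stand-in for the non-subsampled shearlet transform (which has no standard Python implementation), and the plain PCNN here is not the paper's boundary-measured variant; all function names, parameters, and window sizes are illustrative assumptions rather than the authors' implementation.

import numpy as np
import pywt
from scipy.ndimage import uniform_filter

def pcnn_firing_map(band, iterations=110, alpha_theta=0.2, v_theta=20.0, beta=0.1):
    # Simplified pulse-coupled neural network: accumulate how often each
    # "neuron" (pixel) fires. Higher firing counts indicate more salient
    # high-frequency coefficients. Not the boundary-measured PCNN of the paper.
    F = np.abs(band)                      # feeding input: coefficient magnitude
    Y = np.zeros_like(F)                  # pulse output
    theta = np.ones_like(F)               # dynamic threshold
    fire_count = np.zeros_like(F)
    for _ in range(iterations):
        L = uniform_filter(Y, size=3)     # linking input from 3x3 neighbourhood
        U = F * (1.0 + beta * L)          # internal activity
        Y = (U > theta).astype(float)
        theta = np.exp(-alpha_theta) * theta + v_theta * Y
        fire_count += Y
    return fire_count

def energy_attribute(band, size=3):
    # Local energy of a low-frequency band: windowed sum of squares.
    return uniform_filter(band ** 2, size=size)

def fuse_pair(img_a, img_b, wavelet="db1", level=3):
    # Decompose both images (DWT stands in for the NSST), fuse subband by
    # subband, and reconstruct the fused image.
    coeffs_a = pywt.wavedec2(img_a, wavelet, level=level)
    coeffs_b = pywt.wavedec2(img_b, wavelet, level=level)

    # Low-frequency subband: keep the coefficients with larger local energy.
    low_a, low_b = coeffs_a[0], coeffs_b[0]
    low_mask = energy_attribute(low_a) >= energy_attribute(low_b)
    fused = [np.where(low_mask, low_a, low_b)]

    # High-frequency subbands: keep the coefficients whose PCNN fires more often.
    for details_a, details_b in zip(coeffs_a[1:], coeffs_b[1:]):
        fused_details = []
        for band_a, band_b in zip(details_a, details_b):
            mask = pcnn_firing_map(band_a) >= pcnn_firing_map(band_b)
            fused_details.append(np.where(mask, band_a, band_b))
        fused.append(tuple(fused_details))

    return pywt.waverec2(fused, wavelet)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    mri = rng.random((128, 128))          # placeholder for a registered MRI slice
    pet = rng.random((128, 128))          # placeholder for a registered PET slice
    print(fuse_pair(mri, pet).shape)

In the actual method, the multi-scale, multi-directional subbands of the non-subsampled shearlet transform would replace the wavelet subbands, and the boundary-measured PCNN would replace the plain firing-count rule; the overall structure (separate fusion rules for low- and high-frequency content, followed by inverse transform) is the point of the sketch.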

Original language: English
Article number: NCAA-D-20-00698
Journal: Neural Computing and Applications
Early online date: 8 Jul 2020
Publication status: E-pub ahead of print - 8 Jul 2020

Keywords

  • Medical image fusion
  • Multimodal medical imaging
  • Non-subsampled shearlet transform
  • Pulse-coupled neural network

