Multimodal Image Fusion in Clinical Research
Gurpreet Kaur1, Sukhwinder Singh2, Renu Vig3

1Gurpreet Kaur, Department of Computer Science and Engineering, UIET, Panjab University, Chandigarh, India.
2Sukhwinder Singh, Department of Computer Science and Engineering, UIET, Panjab University, Chandigarh, India.
3Renu Vig, Department of Electronics and Communication Engineering, UIET, Panjab University, Chandigarh, India.

Manuscript received on 12 August 2019. | Revised Manuscript received on 17 August 2019. | Manuscript published on 30 September 2019. | PP: 5202-5211 | Volume-8 Issue-3 September 2019 | Retrieval Number: C5820098319/2019©BEIESP | DOI: 10.35940/ijrte.C5820.098319
© The Authors. Blue Eyes Intelligence Engineering and Sciences Publication (BEIESP). This is an open access article under the CC-BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)

Abstract: With advances in medical imaging, far-reaching changes are perceptible in clinical analysis. Most diagnostic evaluation either results from imaging or works in close synchronization with imaging techniques, which creates the need to closely read, evaluate, and combine images. Medical image fusion is a technique for combining clinical images acquired from one or more modalities. Multimodal image analysis during fusion capitalizes on the strength of each medical modality, so incorporating features from multimodal input images holds added potential to support better-quality diagnosis. Medical image fusion is an intricate task, especially when the aim is a high-quality fused image that possesses all relevant information at a reasonable operating speed, and many efforts in this field have produced diverse research approaches. Image fusion can be performed using medical images obtained from a single modality or from multiple modalities. This paper surveys work based on multimodal medical images in the multiscale image fusion domain. When the same region, organ, or tissue is captured from different perspectives, complementary information is maximized and diagnostic value is reinforced. A fusion framework using the Mexican Hat wavelet with adaptive median filtering is proposed, detailing each executable fusion block. Fusion techniques, pre- and post-processing aspects, and evaluation mechanisms are illustrated from the literature. The shift of researchers from single-processing to hybrid multi-processing techniques is discussed, and medical modality aspects are detailed. These may serve as a valuable reference for understanding image fusion trade-offs comprehensively, along with future possibilities.
Keywords: Image Fusion, Spatial Domain, Transform Domain, Modality, Fractional Wavelets, Fusion Rules.
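The abstract's proposed pipeline (adaptive median pre-filtering followed by a multiscale Mexican Hat wavelet decomposition and fusion) can be sketched roughly as follows. This is an illustrative outline only, not the authors' implementation: it assumes two registered grayscale inputs of equal shape, substitutes a plain (non-adaptive) median filter for the adaptive variant, builds a 2-D Mexican Hat (Laplacian-of-Gaussian) kernel by hand, and applies a maximum-absolute-coefficient fusion rule on the detail layers with averaging of the residual base layers. All function names and parameter values are hypothetical.

```python
import numpy as np
from scipy.ndimage import median_filter, convolve

def mexican_hat_kernel(size, sigma):
    """2-D Mexican Hat kernel, made zero-mean so it acts as a band-pass filter."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = (xx**2 + yy**2) / (2.0 * sigma**2)
    k = (1.0 - r2) * np.exp(-r2)
    return k - k.mean()

def fuse(img_a, img_b, sigmas=(1.0, 2.0, 4.0), size=9):
    """Fuse two registered grayscale images of equal shape (illustrative sketch)."""
    # Pre-processing: median filtering as a simple stand-in for the
    # adaptive median filtering described in the paper.
    a = median_filter(img_a.astype(float), size=3)
    b = median_filter(img_b.astype(float), size=3)

    fused_detail = np.zeros_like(a)
    for s in sigmas:
        k = mexican_hat_kernel(size, s)
        da, db = convolve(a, k), convolve(b, k)
        # Fusion rule: keep the detail coefficient with larger absolute magnitude.
        fused_detail += np.where(np.abs(da) >= np.abs(db), da, db)
        # Remove the captured detail before moving to the next (coarser) scale.
        a, b = a - da, b - db
    # Residual base layers carry low-frequency content; average them.
    base = 0.5 * (a + b)
    return base + fused_detail
```

By construction, fusing an image with itself simply returns its median-filtered version, since the subtracted detail layers are added back unchanged; this makes the sketch easy to sanity-check.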

Scope of the Article:
Image Security