
Current Medical Imaging

ISSN (Print): 1573-4056
ISSN (Online): 1875-6603

Research Article

Classification of Artifacts in Multimodal Fused Images using Transfer Learning with Convolutional Neural Networks

Author(s): Shehanaz Shaik and Sitaramanjaneya Reddy Guntur*

Volume 20, 2024

Published on: 08 November, 2024

Article ID: e15734056256872 Pages: 20

DOI: 10.2174/0115734056256872240909112137

Open Access
Abstract

Introduction: Multimodal medical image fusion techniques play an important role in clinical diagnosis and treatment planning. The process of combining multimodal images involves several challenges depending on the type of modality, transformation techniques, and mapping of structural and metabolic information.

Methods: Artifacts can arise during data acquisition (e.g., minor patient movement) or during pre-processing, registration, and normalization. Unlike in single-modality images, detecting an artifact in complementary fused multimodal images is more challenging. Many medical image fusion techniques have been developed by various researchers, but few have been tested for robustness. The main objective of this study is to identify and classify the noise and artifacts present in fused MRI-SPECT brain images using transfer learning with fine-tuned CNNs. Deep neural network-based techniques are capable of detecting even small amounts of noise in images. In this study, three pre-trained convolutional neural network models (ResNet50, DenseNet169, and InceptionV3) were used with transfer learning to detect artifacts and various types of noise, including Gaussian, speckle, random, and mixed noise, in fused MRI-SPECT brain image datasets.
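The four noise categories named above can be illustrated with simple NumPy corruption functions. This is a minimal sketch, not the paper's actual degradation pipeline (which is not specified in the abstract); the function names, noise variances, and the choice of impulse noise for the "random" category are assumptions for illustration.

```python
import numpy as np


def add_gaussian(img, sigma=0.05, rng=None):
    """Additive zero-mean Gaussian noise on an image in [0, 1]."""
    rng = np.random.default_rng() if rng is None else rng
    return np.clip(img + rng.normal(0.0, sigma, img.shape), 0.0, 1.0)


def add_speckle(img, sigma=0.05, rng=None):
    """Multiplicative (speckle) noise: img * (1 + n)."""
    rng = np.random.default_rng() if rng is None else rng
    return np.clip(img * (1.0 + rng.normal(0.0, sigma, img.shape)), 0.0, 1.0)


def add_random(img, amount=0.02, rng=None):
    """Random impulse (salt-and-pepper style) noise on a fraction of pixels."""
    rng = np.random.default_rng() if rng is None else rng
    out = img.copy()
    mask = rng.random(img.shape)
    out[mask < amount / 2] = 0.0          # "pepper" pixels
    out[mask > 1.0 - amount / 2] = 1.0    # "salt" pixels
    return out


def add_mixed(img, rng=None):
    """Mixed noise: Gaussian, then speckle, then impulse noise in sequence."""
    rng = np.random.default_rng() if rng is None else rng
    return add_random(add_speckle(add_gaussian(img, rng=rng), rng=rng), rng=rng)
```

Corrupting clean fused images with each function yields the labeled classes (Gaussian, speckle, random, mixed) that a classifier can then be trained to distinguish.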

Results: Five-fold stratified cross-validation (SCV) was used to evaluate the performance of the networks. Across folds, the pre-trained DenseNet169 model generally outperformed the other models, with average five-fold accuracies of 93.8±5.8%, 98±3.9%, 97.8±1.64%, and 93.8±5.8% for Gaussian, speckle, random, and mixed noise, respectively; the corresponding values were 90.6±9.8%, 98.8±1.6%, 91.4±9.74%, and 90.6±9.8% for InceptionV3, and 75.8±21%, 84.8±7.6%, 73.8±22%, and 75.8±21% for ResNet50.
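Stratified cross-validation, as used in the evaluation above, splits the data so each fold keeps the same class proportions as the whole dataset. A minimal NumPy sketch of the fold construction (the helper name and seeding are assumptions; the paper's implementation is not described in the abstract):

```python
import numpy as np


def stratified_kfold_indices(labels, k=5, seed=0):
    """Split sample indices into k folds, preserving class proportions.

    Each class's indices are shuffled and divided evenly across the k
    folds, so every fold mirrors the overall class distribution.
    """
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    folds = [[] for _ in range(k)]
    for cls in np.unique(labels):
        idx = np.flatnonzero(labels == cls)
        rng.shuffle(idx)
        for i, chunk in enumerate(np.array_split(idx, k)):
            folds[i].extend(chunk.tolist())
    return [np.sort(f) for f in folds]
```

Each fold in turn serves as the held-out test set while the remaining four are used for training; the reported accuracies are the mean ± standard deviation over the five folds.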

Conclusion: Based on the performance results obtained, the pre-trained DenseNet169 model provides the highest accuracy among the three models evaluated.

Keywords: Deep learning, Five-fold stratified cross-validation, MRI-SPECT image fusion, Artifact and noise, ResNet50, DenseNet169, InceptionV3.


© 2024 Bentham Science Publishers | Privacy Policy