A multi-modal CNN framework for integrating medical imaging for COVID-19 Diagnosis
University of Illinois Urbana-Champaign, USA.
Review Article
World Journal of Advanced Research and Reviews, 2020, 08(03), 475–493
Publication history:
Received on 15 October 2020; revised on 24 November 2020; accepted on 26 November 2020
Abstract:
The rapid spread of COVID-19 exposed the limitations of traditional diagnostic methods and created an urgent need for faster, more reliable testing. This article discusses the architecture, design, and clinical application of a multi-modal convolutional neural network (CNN) framework that integrates medical imaging into COVID-19 diagnosis. Combining data from X-rays, CT scans, and ultrasound gives clinicians a fuller picture of how the disease manifests in the body. The CNN exploits the distinctive features of each imaging modality, extracting and fusing them through attention-based integration to identify disease more accurately. The article explains how the CNN operates, reviews its training strategies, and surveys relevant evaluation metrics such as accuracy, precision, recall, F1-score, and AUC. Case studies illustrate applications in hospital workflows, faster triage decisions, and remote medical diagnosis. Across case studies and benchmark datasets, the model consistently outperforms earlier single-modality diagnostic techniques. Challenges related to data scarcity, mislabeling, model interpretability, and the ethics of AI in healthcare are also discussed. Future work aims to incorporate MRI and PET imaging and to let multiple centers collaborate securely through federated learning. The multi-modal CNN framework gives medical experts a powerful new tool, one that addresses not only COVID-19 but many other difficult diseases. Merging AI with medical imaging points toward a precision medicine in which machine learning strengthens clinicians' decisions and speeds care to patients.
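The evaluation metrics named in the abstract (accuracy, precision, recall, F1-score) all derive from the same confusion-matrix counts. The following is a minimal illustrative sketch, not code from the reviewed framework; the example labels are hypothetical:

```python
def classification_metrics(y_true, y_pred):
    """Compute accuracy, precision, recall, and F1-score for a binary
    diagnostic task (1 = COVID-positive, 0 = COVID-negative)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}

# Hypothetical ground-truth and predicted labels (illustrative only)
metrics = classification_metrics([1, 1, 0, 0, 1, 0], [1, 0, 0, 1, 1, 0])
```

In a diagnostic setting, recall (sensitivity) is usually weighted most heavily, since a false negative sends an infected patient home untreated; AUC, also cited in the abstract, additionally requires the model's raw probability scores rather than hard labels.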
Keywords:
Multi-Modal CNN; COVID-19 Diagnosis; Medical Imaging; Deep Learning; AI In Healthcare
Copyright information:
Copyright © 2020 Author(s) retain the copyright of this article. This article is published under the terms of the Creative Commons Attribution License 4.0.
