Deep-learning-based mobile application for detecting COVID-19
Patients infected with the COVID-19 virus can develop severe pneumonia, which may be fatal. Radiological data show that the disease produces interstitial involvement, lung opacities, bilateral ground-glass opacities, and patchy opacities. This study aimed to improve COVID-19 diagnosis...
| Main Authors: | Al-Qazzaz, Noor Kamal; Aldoori, Alaa A.; Hussein, Tabarak Emad; Mohammed Mahdi, Massarra; Mohd Ali, Sawal Hamid; Ahmad, Siti Anom |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | University of Baghdad, 2025 |
| Online Access: | http://psasir.upm.edu.my/id/eprint/121116/ http://psasir.upm.edu.my/id/eprint/121116/1/121116.pdf |
| _version_ | 1848868295507181568 |
|---|---|
| author | Al-Qazzaz, Noor Kamal Aldoori, Alaa A. Hussein, Tabarak Emad Mohammed Mahdi, Massarra Mohd Ali, Sawal Hamid Ahmad, Siti Anom |
| author_facet | Al-Qazzaz, Noor Kamal Aldoori, Alaa A. Hussein, Tabarak Emad Mohammed Mahdi, Massarra Mohd Ali, Sawal Hamid Ahmad, Siti Anom |
| author_sort | Al-Qazzaz, Noor Kamal |
| building | UPM Institutional Repository |
| collection | Online Access |
| description | Patients infected with the COVID-19 virus can develop severe pneumonia, which may be fatal. Radiological data show that the disease produces interstitial involvement, lung opacities, bilateral ground-glass opacities, and patchy opacities. This study aimed to improve COVID-19 diagnosis via radiological chest X-ray (CXR) image analysis, contributing substantially to the development of a mobile application that efficiently identifies COVID-19 and saves medical professionals time and resources. Trained on more than 18,000 CXR lung images with the MobileNetV2 convolutional neural network (CNN) architecture, the system also allows for timely preventative interventions. The MobileNetV2 deep-learning model's performance was evaluated using precision, sensitivity, specificity, accuracy, and F-measure in classifying CXR images into COVID-19, non-COVID-19 lung opacity, and normal control. Results showed a precision of 92.91%, sensitivity of 90.6%, specificity of 96.45%, accuracy of 90.6%, and F-measure of 91.74% in COVID-19 detection. Indeed, the suggested MobileNetV2 deep-learning CNN model can improve classification performance while minimising the time required to obtain per-image results in a mobile application. |
| first_indexed | 2025-11-15T14:50:07Z |
| format | Article |
| id | upm-121116 |
| institution | Universiti Putra Malaysia |
| institution_category | Local University |
| language | English |
| last_indexed | 2025-11-15T14:50:07Z |
| publishDate | 2025 |
| publisher | University of Baghdad |
| recordtype | eprints |
| repository_type | Digital Repository |
| spelling | upm-1211162025-10-27T08:08:04Z http://psasir.upm.edu.my/id/eprint/121116/ Deep-learning-based mobile application for detecting COVID-19 Al-Qazzaz, Noor Kamal Aldoori, Alaa A. Hussein, Tabarak Emad Mohammed Mahdi, Massarra Mohd Ali, Sawal Hamid Ahmad, Siti Anom Patients infected with the COVID-19 virus can develop severe pneumonia, which may be fatal. Radiological data show that the disease produces interstitial involvement, lung opacities, bilateral ground-glass opacities, and patchy opacities. This study aimed to improve COVID-19 diagnosis via radiological chest X-ray (CXR) image analysis, contributing substantially to the development of a mobile application that efficiently identifies COVID-19 and saves medical professionals time and resources. Trained on more than 18,000 CXR lung images with the MobileNetV2 convolutional neural network (CNN) architecture, the system also allows for timely preventative interventions. The MobileNetV2 deep-learning model's performance was evaluated using precision, sensitivity, specificity, accuracy, and F-measure in classifying CXR images into COVID-19, non-COVID-19 lung opacity, and normal control. Results showed a precision of 92.91%, sensitivity of 90.6%, specificity of 96.45%, accuracy of 90.6%, and F-measure of 91.74% in COVID-19 detection. Indeed, the suggested MobileNetV2 deep-learning CNN model can improve classification performance while minimising the time required to obtain per-image results in a mobile application. University of Baghdad 2025-03-01 Article PeerReviewed text en cc_by_4 http://psasir.upm.edu.my/id/eprint/121116/1/121116.pdf Al-Qazzaz, Noor Kamal and Aldoori, Alaa A. and Hussein, Tabarak Emad and Mohammed Mahdi, Massarra and Mohd Ali, Sawal Hamid and Ahmad, Siti Anom (2025) Deep-learning-based mobile application for detecting COVID-19. Al-Khwarizmi Engineering Journal, 21 (1). pp. 13-27.
ISSN 1818-1171; eISSN: 2312-0789 https://alkej.uobaghdad.edu.iq/index.php/alkej/article/view/935 10.22153/kej.2025.12.001 |
| spellingShingle | Al-Qazzaz, Noor Kamal Aldoori, Alaa A. Hussein, Tabarak Emad Mohammed Mahdi, Massarra Mohd Ali, Sawal Hamid Ahmad, Siti Anom Deep-learning-based mobile application for detecting COVID-19 |
| title | Deep-learning-based mobile application for detecting COVID-19 |
| title_full | Deep-learning-based mobile application for detecting COVID-19 |
| title_fullStr | Deep-learning-based mobile application for detecting COVID-19 |
| title_full_unstemmed | Deep-learning-based mobile application for detecting COVID-19 |
| title_short | Deep-learning-based mobile application for detecting COVID-19 |
| title_sort | deep-learning-based mobile application for detecting covid-19 |
| url | http://psasir.upm.edu.my/id/eprint/121116/ http://psasir.upm.edu.my/id/eprint/121116/1/121116.pdf |
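The abstract reports per-class precision, sensitivity, specificity, accuracy, and F-measure for the three-way CXR classification (COVID-19, non-COVID-19 lung opacity, normal control). A minimal sketch of how such one-vs-rest metrics are typically derived from a confusion matrix; the counts below are illustrative placeholders, not the paper's data, and this is not the authors' code:

```python
import numpy as np

# Hypothetical 3x3 confusion matrix: rows = true class, cols = predicted class,
# class order: [COVID-19, lung opacity, normal]. Counts are made up for illustration.
cm = np.array([
    [906,  50,  44],
    [ 40, 900,  60],
    [ 29,  50, 921],
])

def one_vs_rest_metrics(cm, k):
    """Metrics treating class k as positive and all other classes as negative."""
    tp = cm[k, k]                      # true positives: class k predicted as k
    fn = cm[k].sum() - tp              # class k predicted as something else
    fp = cm[:, k].sum() - tp           # other classes predicted as k
    tn = cm.sum() - tp - fn - fp       # everything else
    precision   = tp / (tp + fp)
    sensitivity = tp / (tp + fn)       # a.k.a. recall
    specificity = tn / (tn + fp)
    accuracy    = (tp + tn) / cm.sum()
    f_measure   = 2 * precision * sensitivity / (precision + sensitivity)
    return precision, sensitivity, specificity, accuracy, f_measure

p, se, sp, acc, f1 = one_vs_rest_metrics(cm, k=0)  # k=0 -> COVID-19 class
print(f"precision={p:.4f} sensitivity={se:.4f} "
      f"specificity={sp:.4f} accuracy={acc:.4f} F={f1:.4f}")
```

Note that with this one-vs-rest convention, sensitivity and accuracy can coincide numerically (as the abstract's 90.6% figures suggest) only for particular count distributions; the paper itself does not publish its confusion matrix in this record.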