Harnessing reinforcement learning in fog-cloud computing: challenges, insights, and future directions

The fast-changing world of fog-cloud computing poses various challenges and opportunities, especially in terms of optimizing resources, adaptability, and system efficiency. Reinforcement Learning (RL) is a powerful tool for tackling these challenges because of its ability to learn and adjust from interactions. This article explores different RL algorithms, emphasizing their distinct strengths, weaknesses, and practical implications in fog-cloud environments. We present a comprehensive comparative analysis, from the deterministic nature of Q-Learning to the scalability of DQN and the adaptability of PPO, providing insights that can assist both practitioners and researchers. Additionally, we discuss the ethical considerations, real-world applicability, and scalability challenges associated with deploying RL in fog-cloud systems. In conclusion, while integrating RL in fog-cloud computing shows promise, it requires a comprehensive, interdisciplinary approach to ensure that advancements are ethical, efficient, and beneficial for everyone.
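As a concrete point of reference for the comparison outlined in the abstract, the sketch below illustrates the deterministic tabular Q-Learning update that the authors contrast with deep methods such as DQN and PPO, applied to a hypothetical fog-versus-cloud task-offloading toy problem. The state encoding, action set, and reward values here are illustrative assumptions made for this record, not taken from the article.

    # Illustrative only: minimal tabular Q-Learning for a toy fog-vs-cloud
    # task-offloading decision. States, actions, and rewards are hypothetical.
    import random

    # Toy state: fog-node queue length, bucketed into 0..4.
    # Toy actions: 0 = execute locally on the fog node, 1 = offload to cloud.
    N_STATES, N_ACTIONS = 5, 2
    ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # learning rate, discount, exploration

    Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

    def step(state, action):
        """Hypothetical dynamics: offloading relieves the fog queue but pays a
        fixed WAN-latency penalty; local execution grows costly with congestion."""
        if action == 1:  # offload to cloud
            reward = -2.0                   # fixed latency cost
            next_state = max(state - 1, 0)  # queue drains
        else:            # execute on fog node
            reward = -float(state)          # cost grows with queue length
            next_state = min(state + random.choice([0, 1]), N_STATES - 1)
        return next_state, reward

    state = 0
    for _ in range(5000):
        # epsilon-greedy action selection
        if random.random() < EPSILON:
            action = random.randrange(N_ACTIONS)
        else:
            action = max(range(N_ACTIONS), key=lambda a: Q[state][a])
        next_state, reward = step(state, action)
        # Deterministic tabular update:
        # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        best_next = max(Q[next_state])
        Q[state][action] += ALPHA * (reward + GAMMA * best_next - Q[state][action])
        state = next_state

    print("Greedy action per queue level:",
          [max(range(N_ACTIONS), key=lambda a: Q[s][a]) for s in range(N_STATES)])

The tabular form above stores one value per state-action pair, which is exactly the scalability limit that motivates the function-approximation methods (DQN, PPO) surveyed in the article.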

Bibliographic Details
Main Authors: Al-Hashimi, Mustafa; Rahiman, Amir Rizaan; Muhammed, Abdullah; Hamid, Nor Asilah Wati
Format: Article (peer reviewed)
Language: English
Published: Little Lion Scientific R&D, 2024
Published in: Journal of Theoretical and Applied Information Technology, 102 (5), pp. 1908-1919. ISSN 1992-8645; ESSN 1817-3195
Repository: UPM Institutional Repository, Universiti Putra Malaysia
Online Access: http://psasir.upm.edu.my/id/eprint/111000/
http://psasir.upm.edu.my/id/eprint/111000/1/20Vol102No5.pdf
http://www.jatit.org/volumes/Vol102No5/20Vol102No5.pdf