Optimizing cloud resource allocation using advanced AI techniques: A comparative study of reinforcement learning and genetic algorithms in multi-cloud environments
Independent Researcher.
Research Article
World Journal of Advanced Research and Reviews, 2020, 07(02), 359–369
Publication history:
Received on 19 July 2020; revised on 27 August 2020; accepted on 30 August 2020
Abstract:
In the evolving landscape of cloud computing, efficient resource allocation is pivotal for optimizing performance and minimizing costs, particularly within multi-cloud environments. Traditional resource allocation methods often fall short in addressing the complexities and dynamism inherent in these settings. This study presents a comparative analysis of two advanced artificial intelligence techniques—Reinforcement Learning (RL) and Genetic Algorithms (GA)—for cloud resource allocation. RL, known for its adaptive learning capabilities through interaction with dynamic environments, and GA, renowned for its robust global optimization through evolutionary strategies, were implemented and evaluated across various scenarios in a multi-cloud setup. The findings reveal that while RL excels in adaptability and continuous learning, GA demonstrates superior speed in converging to optimal solutions. However, each technique's effectiveness is context-dependent, with RL being more suitable for highly dynamic environments and GA for stable, rapid optimization needs. The study also explores the potential benefits of hybrid approaches, combining the strengths of both RL and GA, to further enhance resource allocation strategies. These insights provide valuable guidance for cloud service providers and users aiming to achieve more efficient, cost-effective, and scalable resource management in multi-cloud environments.
Keywords:
Cloud Computing; Resource Allocation; Multi-Cloud Environments; Reinforcement Learning; Genetic Algorithms; Machine Learning; Artificial Intelligence
Copyright information:
Copyright © 2020 Author(s) retain the copyright of this article. This article is published under the terms of the Creative Commons Attribution License 4.0.