Decentralized machine learning model orchestration in distributed cloud environments: A meta-learning framework integrating Artificial Intelligence for predictive resource allocation
Independent Researcher.
Review Article
World Journal of Advanced Research and Reviews, 2019, 03(01), 043–053
Publication history:
Received on 15 April 2019; revised on 20 May 2019; accepted on 24 May 2019
Abstract:
As cloud computing continues to develop rapidly, orchestrating decentralized machine learning models and managing their resources in distributed environments remains a significant challenge. This paper proposes a meta-learning framework that augments deep-learning-based resource prediction with artificial intelligence. The investigation integrates theoretical planning with practical implementation exercises, employing several computational models to create an orchestration model that manages resources according to workload demands and system capabilities. The studies found that the proposed framework improves resource utilisation by up to 30% compared with baseline methods and reduces latency by approximately 25%. These findings are relevant to distributed cloud service providers and enterprises alike, as the framework not only determines optimal resource allocation but also improves system reliability and scalability. This research therefore offers fresh insight into effective resource management in loosely coupled infrastructures and lays the groundwork for future work in intelligent cloud computing and machine learning.
Keywords:
Decentralized Machine Learning; Model Orchestration; Distributed Cloud Environments; Meta-Learning Framework
Copyright information:
Copyright © 2019 Author(s) retain the copyright of this article. This article is published under the terms of the Creative Commons Attribution License 4.0.