A Microservices-Based Framework for Distributed Machine Learning Model Training and Deployment Using Artificial Intelligence
Keywords:
Microservices, Distributed Machine Learning, AI Orchestration, Model Deployment, Cloud Computing, MLOps

Abstract
Purpose – This paper proposes a microservices-based framework for the efficient training and deployment of machine learning (ML) models in distributed environments, leveraging artificial intelligence (AI) to optimize model performance and scalability.
Design/methodology/approach – The framework utilizes containerized microservices to modularize ML tasks such as preprocessing, model training, validation, and deployment. These services communicate over APIs, enabling distributed execution across cloud-native infrastructures. AI-based orchestration selects optimal resources and configurations dynamically.
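The modularization described above can be sketched in miniature: each pipeline stage (preprocessing, training, validation) acts as an independent service behind a uniform message interface, with each stage consuming the previous stage's output as it would across an API boundary. This is an illustrative sketch, not the paper's implementation; the `Pipeline`, `register`, and `run` names, and the toy stage handlers, are hypothetical.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Pipeline:
    # Registry of stage name -> handler, standing in for service endpoints.
    services: Dict[str, Callable[[dict], dict]] = field(default_factory=dict)
    order: List[str] = field(default_factory=list)

    def register(self, name: str, handler: Callable[[dict], dict]) -> None:
        self.services[name] = handler
        self.order.append(name)

    def run(self, payload: dict) -> dict:
        # Each stage receives the previous stage's output, as services
        # would over an HTTP/gRPC API boundary in the real framework.
        for name in self.order:
            payload = self.services[name](payload)
        return payload

pipeline = Pipeline()
pipeline.register("preprocess", lambda p: {**p, "data": [x / 10 for x in p["data"]]})
pipeline.register("train", lambda p: {**p, "model": sum(p["data"]) / len(p["data"])})
pipeline.register("validate", lambda p: {**p, "valid": abs(p["model"]) < 1.0})

result = pipeline.run({"data": [1, 2, 3]})
```

Because stages only share a message contract, any one of them can be rescaled, redeployed, or swapped out without touching the others, which is the fault-isolation property the microservices decomposition is meant to buy.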
Findings – The microservices architecture enhances flexibility, fault isolation, scalability, and continuous integration/deployment (CI/CD) of ML pipelines. AI integration reduces resource waste by learning optimal configurations for task allocation. Experimental simulations demonstrate reduced model training time and improved inference throughput.
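One way to picture "learning optimal configurations for task allocation" is a simple bandit-style selector that tries candidate resource configurations and converges on the one with the lowest observed training time. This is a minimal sketch under assumed simplifications, not the paper's orchestration algorithm; the configuration names and simulated runtimes are hypothetical.

```python
import random

class ConfigSelector:
    """Epsilon-greedy selection over resource configurations."""

    def __init__(self, configs, epsilon=0.1, seed=0):
        self.configs = list(configs)
        self.epsilon = epsilon
        self.rng = random.Random(seed)
        self.times = {c: [] for c in self.configs}  # observed runtimes per config

    def choose(self):
        # Try every configuration at least once before exploiting.
        unexplored = [c for c in self.configs if not self.times[c]]
        if unexplored:
            return unexplored[0]
        if self.rng.random() < self.epsilon:
            return self.rng.choice(self.configs)  # occasional exploration
        # Exploit: configuration with the lowest mean observed runtime.
        return min(self.configs,
                   key=lambda c: sum(self.times[c]) / len(self.times[c]))

    def record(self, config, runtime):
        self.times[config].append(runtime)

# Hypothetical per-configuration training times (seconds); lower is better.
true_runtime = {"cpu-small": 120.0, "cpu-large": 60.0, "gpu": 25.0}
selector = ConfigSelector(true_runtime)
for _ in range(50):
    cfg = selector.choose()
    # Simulate a noisy runtime observation for the chosen configuration.
    selector.record(cfg, true_runtime[cfg] + selector.rng.uniform(-2, 2))

best = min(selector.configs,
           key=lambda c: sum(selector.times[c]) / len(selector.times[c]))
```

After the trial runs, `best` identifies the fastest configuration, illustrating how an orchestrator could steer future training jobs toward it and thereby reduce wasted resources.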
Practical implications – This framework facilitates collaboration among ML engineers and DevOps teams by abstracting model development workflows into manageable services. It supports hybrid cloud deployment, GPU pooling, and federated learning scenarios.
Originality/value – This paper uniquely combines AI orchestration with microservices for end-to-end ML pipeline automation in distributed systems, demonstrating significant improvements in model lifecycle management.
License
Copyright (c) 2023 Jason Sankai J (Author)

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.