An end-to-end ML model deployment pipeline on GCP: train in Cloud Shell, containerize with Docker, push to Artifact Registry, deploy on GKE, and build a basic frontend to interact through exposed endpoints. This showcases the benefits of containerized deployments, centralized image management, and automated orchestration using GCP tools.

End-to-End ML Model Deployment on Google Cloud Platform (GCP)

This project demonstrates a complete workflow for deploying a machine learning model on Google Cloud Platform. It covers the following steps:

  1. Model Development: Train and build your machine learning model using any framework (e.g., TensorFlow, PyTorch). This project uses Google Cloud Shell, a free, browser-based command-line environment on GCP.
  2. Dockerization: Create a Dockerfile and containerize your model, including its dependencies and environment. This ensures consistent and portable deployment across different environments.
  3. Artifact Registry (AR): Push the built Docker image to Google Artifact Registry, a secure, managed repository for storing container images.
  4. Kubernetes Engine (GKE) Deployment: Deploy the containerized model as a service on Google Kubernetes Engine, a managed container orchestration platform. This allows for scalable and automated deployments.
  5. Frontend Integration: Create a basic frontend application (e.g., using Flask or Streamlit) to interact with the exposed endpoint of your deployed model.
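The Dockerfile from step 2 might look like the following sketch, assuming a Python app with its entry point in `app.py` and its dependencies listed in `requirements.txt` (both filenames are illustrative):

```dockerfile
# Sketch of a Dockerfile for a Python model-serving app.
FROM python:3.11-slim
WORKDIR /app

# Install dependencies first so this layer is cached across rebuilds.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code and expose the serving port.
COPY . .
EXPOSE 8080
CMD ["python", "app.py"]
```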
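The build-and-push flow of steps 2 and 3 can be sketched with the commands below. The project ID `my-project`, region `us-central1`, repository `ml-models`, and image name `ml-model:v1` are all placeholders to substitute with your own values:

```shell
# One-time: let Docker authenticate against Artifact Registry in this region.
gcloud auth configure-docker us-central1-docker.pkg.dev

# Build the image, tagging it with the full Artifact Registry path.
docker build -t us-central1-docker.pkg.dev/my-project/ml-models/ml-model:v1 .

# Push the tagged image to Artifact Registry.
docker push us-central1-docker.pkg.dev/my-project/ml-models/ml-model:v1
```

These commands require an existing GCP project and an Artifact Registry repository created beforehand (e.g., with `gcloud artifacts repositories create`).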
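The GKE deployment in step 4 could use a manifest along these lines, assuming the image pushed above and a container listening on port 8080 (all names are placeholders). The `LoadBalancer` Service is what exposes the endpoint the frontend calls in step 5:

```yaml
# deployment.yaml — sketch; image must match what was pushed to Artifact Registry.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ml-model
spec:
  replicas: 2
  selector:
    matchLabels:
      app: ml-model
  template:
    metadata:
      labels:
        app: ml-model
    spec:
      containers:
        - name: ml-model
          image: us-central1-docker.pkg.dev/my-project/ml-models/ml-model:v1
          ports:
            - containerPort: 8080
---
# Expose the deployment externally via a cloud load balancer.
apiVersion: v1
kind: Service
metadata:
  name: ml-model-service
spec:
  type: LoadBalancer
  selector:
    app: ml-model
  ports:
    - port: 80
      targetPort: 8080
```

Apply it to a running cluster with `kubectl apply -f deployment.yaml`, then find the external IP with `kubectl get service ml-model-service`.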
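A minimal sketch of the model-serving app that gets containerized in step 2, using Flask as mentioned in step 5. The `predict` function is a placeholder standing in for a real trained model, and all route and file names are illustrative:

```python
# app.py — minimal model-serving sketch; "predict" is a stand-in
# for a real trained model's inference call.
from flask import Flask, jsonify, request

app = Flask(__name__)

def predict(features):
    # Placeholder for model inference (e.g., model.predict(features));
    # here it just averages the inputs.
    return sum(features) / len(features)

@app.route("/predict", methods=["POST"])
def predict_route():
    # Expects a JSON body like {"features": [1.0, 2.0, 3.0]}.
    features = request.get_json()["features"]
    return jsonify({"prediction": predict(features)})

if __name__ == "__main__":
    # Bind to 0.0.0.0 so the container's port can be exposed;
    # port 8080 is a common convention for GKE workloads.
    app.run(host="0.0.0.0", port=8080)
```

A frontend (Flask or Streamlit, per step 5) would POST to this `/predict` route at the service's external IP once deployed.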
This project provides a starting point for learning how to deploy machine learning models on GCP using industry-standard tools and practices. It showcases the benefits of:

  • Containerization: Enables consistent and portable deployments.
  • Artifact Registry: Provides a secure and centralized location for storing container images.
  • Kubernetes Engine: Offers an automated, scalable, and flexible platform for container orchestration.

