Frameworks & Pipelines

Explore leading frameworks and pipelines for building, training, and deploying machine learning and AI models. This section provides overviews, best practices, and hands-on guides for integrating tools like Metaflow, MLflow, LangChain, and LlamaIndex into your AI/ML workflows, enabling efficient experiment tracking, workflow automation, and scalable model management.

Deploying a Persistent Chatbot on Google Cloud Platform with LangChain, Streamlit, and IAP

In this tutorial, you will learn how to deploy a persistent chatbot application built with LangChain and Streamlit on Google Cloud Platform (GCP), secured behind Identity-Aware Proxy (IAP).
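The core pattern of such an app can be sketched as below. This is an illustrative sketch, not the tutorial's actual code: the model name, the `ChatVertexAI` wrapper, and the `trim_history` helper are all assumptions for the sake of the example.

```python
# Illustrative sketch of a LangChain + Streamlit chat app (not the tutorial's
# exact code). Save as app.py and run with: streamlit run app.py
# Assumptions: streamlit and langchain-google-vertexai are installed, and the
# app runs on GCP with application-default credentials.

MAX_TURNS = 20  # keep the prompt small by trimming old turns


def trim_history(messages, max_turns=MAX_TURNS):
    """Keep only the most recent (role, text) turns of the chat history."""
    return messages[-max_turns:]


def render_app():
    # Deferred imports so the sketch can be read/imported without the libraries.
    import streamlit as st
    from langchain_google_vertexai import ChatVertexAI

    st.title("Persistent Chatbot")

    # st.session_state persists the conversation across Streamlit reruns.
    if "history" not in st.session_state:
        st.session_state.history = []  # list of (role, text) tuples

    llm = ChatVertexAI(model_name="gemini-1.5-flash")  # hypothetical model choice

    for role, text in st.session_state.history:
        st.chat_message(role).write(text)

    if prompt := st.chat_input("Ask something"):
        st.session_state.history.append(("user", prompt))
        reply = llm.invoke(trim_history(st.session_state.history)).content
        st.session_state.history.append(("assistant", reply))
        st.rerun()


# In a real app, call render_app() at module top level so Streamlit
# re-executes it on every interaction.
```

Streamlit re-runs the whole script on each interaction, so `st.session_state` is what makes the chat history survive between turns.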

LlamaIndex in a GKE Cluster

This tutorial will guide you through creating a robust Retrieval-Augmented Generation (RAG) system using LlamaIndex and deploying it on Google Kubernetes Engine (GKE).
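In outline, the RAG core of such a system can be sketched as below. The directory path and retrieval settings are illustrative assumptions, not the tutorial's actual configuration; `build_prompt` shows conceptually what the query engine does under the hood.

```python
# Illustrative sketch of a LlamaIndex RAG core (not the tutorial's exact code).
# Assumption: llama-index is installed and an LLM/embedding backend is configured.


def build_prompt(question, contexts):
    """Conceptually what RAG does: stuff retrieved context plus the question."""
    context_block = "\n\n".join(contexts)
    return f"Context:\n{context_block}\n\nQuestion: {question}"


def make_query_engine(doc_dir="./docs"):
    """Index a directory of documents and return a query engine over it."""
    # Deferred import so the sketch can be read without llama-index installed.
    from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

    documents = SimpleDirectoryReader(doc_dir).load_data()
    index = VectorStoreIndex.from_documents(documents)
    return index.as_query_engine(similarity_top_k=3)  # illustrative top-k


def answer(question, engine):
    """Retrieve relevant chunks and generate a grounded answer."""
    return str(engine.query(question))
```

On GKE, this core would typically be wrapped in an HTTP service (e.g. FastAPI) and deployed as a Kubernetes Deployment; the tutorial covers those deployment steps.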

Fine-Tuning Gemma 2-9B on GKE using Metaflow and Argo Workflows

This tutorial provides instructions on how to deploy and use the Metaflow framework on GKE (Google Kubernetes Engine) and run AI/ML workloads using Argo Workflows.

Fine-tune gemma-2-9b and track it as an experiment in MLflow

In this tutorial, we will fine-tune gemma-2-9b with LoRA and track the run as an MLflow experiment. We will deploy MLflow on a GKE cluster and configure it to store artifacts in a GCS bucket. Finally, we will deploy the fine-tuned model using KServe.
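The experiment-tracking pattern at the heart of this tutorial can be sketched as below. This is a hedged sketch, not the tutorial's actual code: the tracking URI, experiment name, and LoRA hyperparameter defaults are illustrative assumptions.

```python
# Illustrative sketch of the MLflow tracking pattern (not the tutorial's exact
# code). Assumptions: an MLflow server is reachable at the URI below (e.g. an
# in-cluster Service on GKE) and is configured with a GCS artifact store.


def lora_params(r=8, alpha=16, dropout=0.05):
    """Collect LoRA hyperparameters in one place so they can be logged."""
    return {"lora_r": r, "lora_alpha": alpha, "lora_dropout": dropout}


def log_finetune_run(metrics, adapter_dir,
                     tracking_uri="http://mlflow.mlflow.svc.cluster.local:5000"):
    """Record one fine-tuning run: params, metrics, and the LoRA adapter."""
    import mlflow  # deferred so the sketch can be read without MLflow installed

    mlflow.set_tracking_uri(tracking_uri)  # hypothetical in-cluster service URI
    mlflow.set_experiment("gemma-2-9b-lora")
    with mlflow.start_run():
        mlflow.log_params(lora_params())
        for name, value in metrics.items():
            mlflow.log_metric(name, value)
        # Adapter weights are uploaded to the GCS artifact store that the
        # MLflow server was configured with at deployment time.
        mlflow.log_artifacts(adapter_dir, artifact_path="lora_adapter")
```

Because the artifact store is configured server-side, the training job only talks to the tracking URI; MLflow handles writing the adapter files to the GCS bucket.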

Continue reading: