
Fine Tuning Google Gemma: Enhancing LLMs with Customized Instructions

Lisa Kudrow
Release: 2025-03-07 10:01:10

Google DeepMind's Gemma: A Deep Dive into Open-Source LLMs

The AI landscape is buzzing with activity around open-source Large Language Models (LLMs), with players like Google, Meta, and X (formerly Twitter) increasingly embracing open development. Google DeepMind recently unveiled Gemma, a family of lightweight, open-source LLMs built from the same research and technology used to create Google's Gemini models. This article introduces the Gemma models, shows how to access them via cloud GPUs and TPUs, and provides a step-by-step guide to fine-tuning the Gemma 7b-it model on a role-playing dataset.

Understanding Google's Gemma

Gemma (from the Latin for "precious stone") is a family of decoder-only, text-to-text open models developed primarily by Google DeepMind. Inspired by the Gemini models, Gemma is designed for lightweight operation and broad framework compatibility. Google has released model weights in two sizes, 2B and 7B, each available in pre-trained and instruction-tuned variants (Gemma 2B-it and Gemma 7B-it). Gemma's performance rivals that of other open models; according to Google's technical report, it outperforms Meta's Llama-2 across a range of LLM benchmarks.


Gemma's versatility extends to its support for multiple frameworks (Keras 3.0, PyTorch, JAX, Hugging Face Transformers) and diverse hardware (laptops, desktops, IoT devices, mobile, and cloud). Inference and supervised fine-tuning (SFT) are possible on free Cloud TPUs using popular machine learning frameworks. Furthermore, Google provides a Responsible Generative AI Toolkit alongside Gemma, offering developers guidance and tools for creating safer AI applications. Beginners in AI and LLMs are encouraged to explore the AI Fundamentals skill track for foundational knowledge.

Accessing Google's Gemma Model

Accessing Gemma is straightforward. Free access is available through HuggingChat and Poe. You can also run it locally by downloading the model weights from Hugging Face and using GPT4All or LMStudio. This guide focuses on using Kaggle's free GPUs and TPUs for inference.

Running Gemma Inference on TPUs

To run Gemma inference on TPUs using Keras, follow these steps:

  1. Navigate to Keras/Gemma, select the "gemma_instruct_2b_en" model variant, and click "New Notebook."
  2. In the right panel, select "TPU VM v3-8" as the accelerator.
  3. Install necessary Python libraries:
!pip install -q tensorflow-cpu
!pip install -q -U keras-nlp tensorflow-hub
!pip install -q -U "keras>=3"
!pip install -q -U tensorflow-text
  4. Verify TPU availability with jax.devices().
  5. Set JAX as the Keras backend (this must happen before Keras is imported): os.environ["KERAS_BACKEND"] = "jax"
  6. Load the model with keras_nlp and generate text with the generate function, as in the sketch below.
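
Here is a minimal sketch of those steps, assuming the Kaggle preset name "gemma_instruct_2b_en" selected above; the prompt is just an example:

# Minimal Gemma inference sketch: Keras + JAX on a Kaggle TPU VM v3-8.
import os
os.environ["KERAS_BACKEND"] = "jax"  # must be set before Keras is imported

import jax
import keras_nlp

print(jax.devices())  # a TPU VM v3-8 should list eight TPU devices

# Load the instruction-tuned 2B preset and generate a short completion.
gemma_lm = keras_nlp.models.GemmaCausalLM.from_preset("gemma_instruct_2b_en")
print(gemma_lm.generate("What is Keras?", max_length=64))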


Running Gemma Inference on GPUs

For GPU inference using Transformers, follow these steps:

  1. Navigate to google/gemma, select "transformers," choose the "7b-it" variant, and create a new notebook.
  2. Select GPU T4 x2 as the accelerator.
  3. Install required packages:
%%capture
%pip install -U bitsandbytes
%pip install -U transformers
%pip install -U accelerate
  4. Load the model using 4-bit quantization with BitsAndBytes for VRAM management.
  5. Load the tokenizer.
  6. Create a prompt, tokenize it, pass it to the model, decode the output, and display the result (see the sketch after this list).
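
The snippet below sketches steps 4 through 6. The model id google/gemma-7b-it matches the variant selected above; the prompt is an arbitrary example:

# Sketch: 4-bit Gemma 7b-it inference with Transformers and BitsAndBytes.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # quantize weights to 4 bits
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

model_id = "google/gemma-7b-it"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)

prompt = "Explain LoRA fine-tuning in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))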


Fine-Tuning Google's Gemma: A Step-by-Step Guide

This section details fine-tuning Gemma 7b-it on the hieunguyenminh/roleplay dataset using a Kaggle P100 GPU.

Setting Up

  1. Install necessary packages:
%%capture 
%pip install -U bitsandbytes 
%pip install -U transformers 
%pip install -U peft 
%pip install -U accelerate 
%pip install -U trl
%pip install -U datasets
  2. Import the required libraries.
  3. Define variables for the base model, dataset, and fine-tuned model name.
  4. Log in to the Hugging Face CLI using your API key.
  5. Initialize a Weights & Biases (W&B) run (see the sketch after this list).
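
A sketch of steps 2 through 5, assuming your keys are stored as Kaggle secrets; the secret names and the fine-tuned model name "gemma-7b-it-roleplay" are placeholders:

# Setup sketch: variables, Hugging Face login, and a W&B run on Kaggle.
import wandb
from huggingface_hub import login
from kaggle_secrets import UserSecretsClient  # Kaggle's secret storage

base_model = "google/gemma-7b-it"
dataset_name = "hieunguyenminh/roleplay"
new_model = "gemma-7b-it-roleplay"  # placeholder name for the fine-tuned model

secrets = UserSecretsClient()
login(token=secrets.get_secret("HUGGINGFACE_TOKEN"))  # assumed secret name
wandb.login(key=secrets.get_secret("WANDB_API_KEY"))  # assumed secret name
run = wandb.init(project="fine-tune-gemma-7b-it", job_type="training")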

Loading the Dataset

Load the first 1000 rows of the role-playing dataset.
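
With the datasets library, split-slicing syntax keeps the download small:

from datasets import load_dataset

# Load only the first 1000 rows of the role-playing dataset.
dataset = load_dataset("hieunguyenminh/roleplay", split="train[0:1000]")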

Loading the Model and Tokenizer

Load the Gemma 7b-it model using 4-bit precision with BitsAndBytes. Load the tokenizer and configure the pad token.
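
A sketch of this step, reusing base_model from the setup; the NF4 settings mirror the earlier inference example:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

model = AutoModelForCausalLM.from_pretrained(
    base_model, quantization_config=bnb_config, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token  # reuse EOS as the pad token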

Adding the Adapter Layer

Add a LoRA adapter layer to efficiently fine-tune the model.
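
A minimal LoRA configuration with peft; the rank, alpha, and target modules below are illustrative values rather than the tutorial's exact settings:

from peft import LoraConfig, get_peft_model

peft_config = LoraConfig(
    r=16,                   # adapter rank (illustrative)
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()  # only a small fraction is trainable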

Training the Model

Define training arguments (hyperparameters) and create an SFTTrainer. Train the model using .train().
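
A training sketch with illustrative hyperparameters sized for a 16 GB P100. The SFTTrainer keyword arguments below follow the trl releases current when Gemma shipped; newer versions move dataset_text_field and max_seq_length into SFTConfig:

from transformers import TrainingArguments
from trl import SFTTrainer

training_args = TrainingArguments(
    output_dir="./results",
    num_train_epochs=1,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=4,
    learning_rate=2e-4,
    optim="paged_adamw_32bit",  # paged optimizer pairs well with 4-bit training
    logging_steps=10,
    fp16=True,
    report_to="wandb",
)

trainer = SFTTrainer(
    model=model,
    train_dataset=dataset,
    peft_config=peft_config,
    tokenizer=tokenizer,
    args=training_args,
    dataset_text_field="text",  # assumes samples live in a "text" column
    max_seq_length=512,
)
trainer.train()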

Saving the Model

Save the fine-tuned model locally and push it to the Hugging Face Hub.
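
For example:

# Save the LoRA adapter locally, then push it to the Hugging Face Hub
# under the placeholder name defined during setup.
trainer.model.save_pretrained(new_model)
trainer.model.push_to_hub(new_model)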

Model Inference

Generate responses using the fine-tuned model.
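
A quick smoke test with the just-trained model; the prompt is an arbitrary example:

prompt = "Who are you and what can you do?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))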

Gemma 7B Inference with Role Play Adapter

This section demonstrates how to load the base model and the trained adapter, merge them, and generate responses.
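
A sketch of the merge, assuming the adapter was pushed under the placeholder repository name from setup (replace "your-username" accordingly). The base model is reloaded in half precision because merging into a 4-bit quantized model is not supported:

import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "google/gemma-7b-it", torch_dtype=torch.float16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b-it")

# Attach the trained adapter, then fold its weights into the base model.
model = PeftModel.from_pretrained(base, "your-username/gemma-7b-it-roleplay")
model = model.merge_and_unload()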

Final Thoughts

Google's release of Gemma signifies a shift towards open-source collaboration in AI. This tutorial provided a comprehensive guide to using and fine-tuning Gemma models, highlighting the power of open-source development and cloud computing resources. The next step is to build your own LLM-based application using frameworks like LangChain.
