Google DeepMind's Gemma: A Deep Dive into Open-Source LLMs
The AI landscape is buzzing with activity, particularly around open-source Large Language Models (LLMs), and tech giants like Google and Meta are increasingly embracing open development. Google DeepMind recently unveiled Gemma, a family of lightweight, open-source LLMs built using the same underlying research and technology as Google's Gemini models. This article explores the Gemma models and their accessibility via cloud GPUs and TPUs, and provides a step-by-step guide to fine-tuning the Gemma 7B-it model on a role-playing dataset.
Understanding Google's Gemma
Gemma (meaning "precious stone" in Latin) is a family of decoder-only, text-to-text open models developed primarily by Google DeepMind. Inspired by the Gemini models, Gemma is designed for lightweight operation and broad framework compatibility. Google has released model weights for two Gemma sizes: 2B and 7B, each available in pre-trained and instruction-tuned variants (e.g., Gemma 2B-it and Gemma 7B-it). Gemma's performance rivals other open models, notably outperforming Meta's Llama-2 across various LLM benchmarks.
Gemma's versatility extends to its support for multiple frameworks (Keras 3.0, PyTorch, JAX, Hugging Face Transformers) and diverse hardware (laptops, desktops, IoT devices, mobile, and cloud). Inference and supervised fine-tuning (SFT) are possible on free Cloud TPUs using popular machine learning frameworks. Furthermore, Google provides a Responsible Generative AI Toolkit alongside Gemma, offering developers guidance and tools for creating safer AI applications. Beginners in AI and LLMs are encouraged to explore the AI Fundamentals skill track for foundational knowledge.
Accessing Google's Gemma Model
Accessing Gemma is straightforward. Free access is available via HuggingChat and Poe. For local usage, you can download the model weights from Hugging Face and run them with GPT4All or LM Studio. This guide focuses on using Kaggle's free GPUs and TPUs for inference.
Running Gemma Inference on TPUs
To run Gemma inference on TPUs using Keras, follow these steps:
1. Install the required packages:

!pip install -q tensorflow-cpu
!pip install -q -U keras-nlp tensorflow-hub
!pip install -q -U "keras>=3"
!pip install -q -U tensorflow-text

2. Confirm that the TPU devices are visible by running jax.devices().
3. Set jax as the Keras backend: os.environ["KERAS_BACKEND"] = "jax".
4. Load the model with keras_nlp and generate text using the generate function, as in the sketch below.
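Putting those steps together, here is a minimal sketch using keras-nlp. The preset name gemma_2b_en is an assumption (keras-nlp also ships instruction-tuned presets such as gemma_instruct_7b_en), and you must have accepted Gemma's terms of use to download the weights.

import os
os.environ["KERAS_BACKEND"] = "jax"  # must be set before Keras is imported

import jax
import keras_nlp

print(jax.devices())  # should list the TPU cores

# Load a Gemma preset and generate text from a prompt
gemma_lm = keras_nlp.models.GemmaCausalLM.from_preset("gemma_2b_en")
print(gemma_lm.generate("What is the meaning of life?", max_length=64))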
Running Gemma Inference on GPUs
For GPU inference using Transformers, follow these steps:
%%capture
%pip install -U bitsandbytes
%pip install -U transformers
%pip install -U accelerate
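With the libraries installed, a minimal inference sketch might look like the following. It assumes the gated google/gemma-7b-it checkpoint on Hugging Face (you must accept its license and authenticate first) and a single CUDA GPU; the prompt is illustrative.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "google/gemma-7b-it"

# 4-bit NF4 quantization so the 7B model fits in a 16 GB GPU
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)

inputs = tokenizer("Explain LoRA fine-tuning in one paragraph.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))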
Fine-Tuning Google's Gemma: A Step-by-Step Guide
This section details fine-tuning Gemma 7B-it on the hieunguyenminh/roleplay dataset using a Kaggle P100 GPU.
Setting Up
%%capture
%pip install -U bitsandbytes
%pip install -U transformers
%pip install -U peft
%pip install -U accelerate
%pip install -U trl
%pip install -U datasets
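Because google/gemma-7b-it is a gated model, you also need to authenticate with Hugging Face before downloading it. A minimal sketch using Kaggle's secrets store follows; the secret name HF_TOKEN is a hypothetical placeholder for whatever name you saved your token under.

from kaggle_secrets import UserSecretsClient
from huggingface_hub import login

hf_token = UserSecretsClient().get_secret("HF_TOKEN")  # "HF_TOKEN" is a placeholder name
login(token=hf_token)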
Loading the Dataset
Load the first 1000 rows of the role-playing dataset.
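A minimal sketch with the datasets library; the split-slice syntax keeps only the first 1000 training rows.

from datasets import load_dataset

dataset = load_dataset("hieunguyenminh/roleplay", split="train[:1000]")
print(dataset)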
Loading the Model and Tokenizer
Load the Gemma 7B-it model in 4-bit precision with BitsAndBytes, then load the tokenizer and configure the pad token.
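A sketch of this step, again assuming the google/gemma-7b-it checkpoint. The pad-token lines reflect a common convention for causal-LM fine-tuning rather than a hard requirement; adapt them to your tokenizer.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "google/gemma-7b-it"

# NF4 4-bit quantization keeps the 7B model within the P100's 16 GB of memory
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.padding_side = "right"            # right padding avoids issues in fp16 training
tokenizer.pad_token = tokenizer.eos_token   # common convention if no pad token is set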
Adding the Adapter Layer
Add a LoRA adapter layer to efficiently fine-tune the model.
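A sketch of the LoRA setup with peft; the rank, alpha, dropout, and target modules below are illustrative values, not the article's exact settings.

from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model = prepare_model_for_kbit_training(model)  # stabilizes training on the 4-bit base

lora_config = LoraConfig(
    r=16,                       # adapter rank; illustrative value
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only a small fraction of parameters are trainable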
Training the Model
Define the training arguments (hyperparameters), create an SFTTrainer, and train the model with .train().
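A sketch of this step. The hyperparameter values are illustrative, the "text" column name is an assumption about the roleplay dataset's schema, and the SFTTrainer keyword names match trl releases from the Gemma launch era (newer trl versions move them into an SFTConfig).

from transformers import TrainingArguments
from trl import SFTTrainer

training_args = TrainingArguments(
    output_dir="gemma-7b-it-roleplay",      # hypothetical output directory
    per_device_train_batch_size=1,
    gradient_accumulation_steps=4,
    learning_rate=2e-4,
    num_train_epochs=1,
    fp16=True,                              # the P100 supports fp16 but not bf16
    logging_steps=25,
)

trainer = SFTTrainer(
    model=model,                # already carries the LoRA adapter from get_peft_model
    args=training_args,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=512,
    tokenizer=tokenizer,
)
trainer.train()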
Saving the Model
Save the fine-tuned model locally and push it to the Hugging Face Hub.
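A minimal sketch; the repository name gemma-7b-it-roleplay is a hypothetical placeholder, and push_to_hub requires being logged in with a write-scoped token.

new_model = "gemma-7b-it-roleplay"  # hypothetical adapter name

trainer.model.save_pretrained(new_model)  # saves only the LoRA adapter weights
tokenizer.save_pretrained(new_model)

trainer.model.push_to_hub(new_model)
tokenizer.push_to_hub(new_model)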
Model Inference
Generate responses using the fine-tuned model.
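A quick sketch using the fine-tuned model still in memory; the role-play prompt and sampling settings are illustrative.

prompt = "You are Sherlock Holmes. Introduce yourself to a new client."

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=150, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))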
Gemma 7B Inference with Role Play Adapter
This section demonstrates how to load the base model and the trained adapter, merge them, and generate responses.
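A sketch of that workflow with peft. The adapter path reuses the hypothetical name from the saving step, and the base model is loaded in fp16 rather than 4-bit because LoRA weights can only be merged into unquantized weights.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model_id = "google/gemma-7b-it"
adapter_id = "gemma-7b-it-roleplay"  # hypothetical; a local path or Hub repo

base_model = AutoModelForCausalLM.from_pretrained(
    base_model_id, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base_model, adapter_id)
model = model.merge_and_unload()  # folds the LoRA weights into the base model

tokenizer = AutoTokenizer.from_pretrained(base_model_id)
inputs = tokenizer("Stay in character as a medieval knight and greet the party.",
                   return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=120)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))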
Final Thoughts
Google's release of Gemma signifies a shift towards open-source collaboration in AI. This tutorial provided a comprehensive guide to using and fine-tuning Gemma models, highlighting the power of open-source development and cloud computing resources. The next step is to build your own LLM-based application using frameworks like LangChain.