This guide demonstrates fine-tuning the Microsoft Phi-4 large language model (LLM) for specialized tasks using Low-Rank Adaptation (LoRA) adapters and Hugging Face. By focusing on specific domains, you can optimize Phi-4's performance for applications like customer support or medical advice. The efficiency of LoRA makes this process faster and less resource-intensive.
Key Learning Outcomes:
- Fine-tune Phi-4 efficiently with LoRA adapters and the unsloth library.
- Train on conversational data with Hugging Face's SFTTrainer.

Prerequisites:
Before starting, ensure you have:
- The unsloth library
- The transformers and datasets libraries

Install the necessary libraries using:
```shell
pip install unsloth
pip install --force-reinstall --no-cache-dir --no-deps git+https://github.com/unslothai/unsloth.git
```
Fine-Tuning Phi-4: A Step-by-Step Approach
This section details the fine-tuning process, from setup to deployment on Hugging Face.
Step 1: Model Setup
This involves loading the model and importing essential libraries:
```python
from unsloth import FastLanguageModel
import torch

max_seq_length = 2048
load_in_4bit = True

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Phi-4",
    max_seq_length=max_seq_length,
    load_in_4bit=load_in_4bit,
)

model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=16,
    lora_dropout=0,
    bias="none",
    use_gradient_checkpointing="unsloth",
    random_state=3407,
)
```
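To get a feel for why LoRA is so cheap, note that each adapted weight matrix of shape (d_out, d_in) gains only two low-rank factors, totalling r·(d_in + d_out) trainable parameters. A back-of-the-envelope sketch (the Phi-4 dimensions below are assumptions for illustration, and the attention projections are simplified to square matrices):

```python
def lora_params(d_in: int, d_out: int, r: int) -> int:
    """Trainable parameters LoRA adds to one (d_out x d_in) weight matrix:
    an (r x d_in) A factor plus a (d_out x r) B factor."""
    return r * (d_in + d_out)

r = 16
hidden, inter, layers = 5120, 17920, 40  # assumed Phi-4 dimensions

per_layer = (
    4 * lora_params(hidden, hidden, r)   # q/k/v/o projections (simplified to square)
    + 2 * lora_params(hidden, inter, r)  # gate/up projections
    + lora_params(inter, hidden, r)      # down projection
)
print(f"~{layers * per_layer / 1e6:.1f}M trainable LoRA parameters")
```

Tens of millions of trainable parameters is a small fraction of the 14B-parameter base model, which is what makes 4-bit single-GPU fine-tuning practical.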
Step 2: Dataset Preparation
We'll use the FineTome-100k dataset, which stores conversations in ShareGPT format; unsloth helps convert this to Hugging Face's format:
```python
from datasets import load_dataset
from unsloth.chat_templates import standardize_sharegpt, get_chat_template

dataset = load_dataset("mlabonne/FineTome-100k", split="train")
dataset = standardize_sharegpt(dataset)
tokenizer = get_chat_template(tokenizer, chat_template="phi-4")

def formatting_prompts_func(examples):
    texts = [
        tokenizer.apply_chat_template(
            convo, tokenize=False, add_generation_prompt=False
        )
        for convo in examples["conversations"]
    ]
    return {"text": texts}

dataset = dataset.map(formatting_prompts_func, batched=True)
```
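For intuition, what standardize_sharegpt does is map ShareGPT-style turns ({"from": ..., "value": ...}) onto the role/content schema that apply_chat_template expects. A toy re-implementation of that mapping (illustrative only, not the library's actual code):

```python
# Hypothetical re-implementation of the ShareGPT -> Hugging Face mapping
ROLE_MAP = {"human": "user", "gpt": "assistant", "system": "system"}

def to_hf_messages(sharegpt_convo):
    """Map ShareGPT turns to Hugging Face chat-format messages."""
    return [
        {"role": ROLE_MAP[turn["from"]], "content": turn["value"]}
        for turn in sharegpt_convo
    ]

convo = [
    {"from": "human", "value": "What is LoRA?"},
    {"from": "gpt", "value": "A parameter-efficient fine-tuning method."},
]
print(to_hf_messages(convo))
```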
Step 3: Model Fine-tuning
Fine-tune using Hugging Face's SFTTrainer:
```python
from trl import SFTTrainer
from transformers import TrainingArguments, DataCollatorForSeq2Seq
from unsloth import is_bfloat16_supported
from unsloth.chat_templates import train_on_responses_only

trainer = SFTTrainer(
    # ... (Trainer configuration as in the original response) ...
)

# Compute the loss only on assistant responses; the part markers below
# are the Phi-4 chat-template role delimiters.
trainer = train_on_responses_only(
    trainer,
    instruction_part="<|im_start|>user<|im_sep|>",
    response_part="<|im_start|>assistant<|im_sep|>",
)
```
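The elided trainer configuration commonly looks something like the sketch below. The hyperparameters are illustrative defaults of the kind used in typical Unsloth notebooks, not values specified by this guide, and the exact SFTTrainer keyword arguments vary with your trl version:

```python
from trl import SFTTrainer
from transformers import TrainingArguments, DataCollatorForSeq2Seq
from unsloth import is_bfloat16_supported

# Illustrative configuration -- adjust batch size, steps, and learning
# rate for your hardware and dataset.
trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=max_seq_length,
    data_collator=DataCollatorForSeq2Seq(tokenizer=tokenizer),
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        warmup_steps=5,
        max_steps=60,
        learning_rate=2e-4,
        fp16=not is_bfloat16_supported(),
        bf16=is_bfloat16_supported(),
        logging_steps=1,
        optim="adamw_8bit",
        weight_decay=0.01,
        lr_scheduler_type="linear",
        seed=3407,
        output_dir="outputs",
    ),
)
```

Training then starts with `trainer.train()`.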
Step 4: GPU Usage Monitoring
Monitor GPU memory usage:
```python
import torch
# ... (GPU monitoring code as in the original response) ...
```
Step 5: Inference
Generate responses:
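A minimal inference sketch, assuming the fine-tuned model and tokenizer from the steps above (the prompt is just an example):

```python
from unsloth import FastLanguageModel

# Switch Unsloth's optimized kernels into inference mode
FastLanguageModel.for_inference(model)

messages = [{"role": "user", "content": "Explain LoRA in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,  # append the assistant turn prefix
    return_tensors="pt",
).to(model.device)

outputs = model.generate(input_ids=inputs, max_new_tokens=128, use_cache=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```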
Step 6: Saving and Uploading
Save locally or push to Hugging Face:
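A sketch of the save/upload step, assuming the model and tokenizer from the steps above; the repository name is a placeholder you should change:

```python
# Save the LoRA adapters and tokenizer locally
model.save_pretrained("phi4-lora")
tokenizer.save_pretrained("phi4-lora")

# Or push them to the Hugging Face Hub ("your-username/phi4-lora" is a
# placeholder repo name; supply your own token)
model.push_to_hub("your-username/phi4-lora", token="<your_hf_token>")
tokenizer.push_to_hub("your-username/phi4-lora", token="<your_hf_token>")
```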
Remember to replace `<your_hf_token>` with your actual Hugging Face token.
Conclusion:
With LoRA adapters and the unsloth library, developers can fine-tune Phi-4 for specific needs quickly and with modest hardware, then deploy the result to Hugging Face with a few lines of code.