
How to Fine-Tune Phi-4 Locally?

Release: 2025-03-08 11:49:14

This guide demonstrates fine-tuning the Microsoft Phi-4 large language model (LLM) for specialized tasks using Low-Rank Adaptation (LoRA) adapters and Hugging Face. By focusing on specific domains, you can optimize Phi-4's performance for applications like customer support or medical advice. The efficiency of LoRA makes this process faster and less resource-intensive.
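To see concretely why LoRA is efficient, compare parameter counts for a single linear layer: a rank-r adapter trains only r × (d_in + d_out) parameters instead of the full d_in × d_out weight matrix. A minimal sketch (the layer dimensions here are illustrative, not Phi-4's actual sizes):

```python
# Illustrative LoRA parameter savings for one linear layer.
# Dimensions are examples, not Phi-4's actual layer sizes.
d_in, d_out, r = 4096, 4096, 16

full_params = d_in * d_out        # full fine-tuning updates every weight
lora_params = r * (d_in + d_out)  # LoRA trains two low-rank factors A and B

print(full_params)                       # 16777216
print(lora_params)                       # 131072
print(round(full_params / lora_params))  # 128x fewer trainable parameters
```

With r=16 (the rank used later in this guide), the adapter trains roughly two orders of magnitude fewer parameters per layer, which is what makes fine-tuning on a single consumer GPU feasible.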

Key Learning Outcomes:

  • Fine-tune Microsoft Phi-4 using LoRA adapters for targeted applications.
  • Configure and load Phi-4 efficiently with 4-bit quantization.
  • Prepare and transform datasets for fine-tuning with Hugging Face and the unsloth library.
  • Optimize model performance using Hugging Face's SFTTrainer.
  • Monitor GPU usage and save/upload fine-tuned models to Hugging Face for deployment.

Prerequisites:

Before starting, ensure you have:

  • Python 3.8 or later
  • PyTorch (with CUDA support for GPU acceleration)
  • unsloth library
  • Hugging Face transformers and datasets libraries

Install necessary libraries using:

pip install unsloth
pip install --force-reinstall --no-cache-dir --no-deps git+https://github.com/unslothai/unsloth.git

Fine-Tuning Phi-4: A Step-by-Step Approach

This section details the fine-tuning process, from setup to deployment on Hugging Face.

Step 1: Model Setup

This involves loading the model and importing essential libraries:

from unsloth import FastLanguageModel
import torch

max_seq_length = 2048
load_in_4bit = True

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Phi-4",
    max_seq_length=max_seq_length,
    load_in_4bit=load_in_4bit,
)

model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"],
    lora_alpha=16,
    lora_dropout=0,
    bias="none",
    use_gradient_checkpointing="unsloth",
    random_state=3407,
)
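As a back-of-the-envelope check on why load_in_4bit matters: Phi-4 has roughly 14.7 billion parameters, so weights alone cost about 0.5 bytes each at 4-bit versus 2 bytes in fp16. A rough estimate (the parameter count is approximate):

```python
# Approximate weight-memory footprint of Phi-4 at different precisions.
params = 14.7e9  # approximate Phi-4 parameter count

fp16_gb = params * 2 / 1024**3    # 2 bytes per weight in fp16
int4_gb = params * 0.5 / 1024**3  # 4 bits = 0.5 bytes per weight

print(round(fp16_gb, 1))  # 27.4 GB -- beyond most single consumer GPUs
print(round(int4_gb, 1))  # 6.8 GB  -- fits comfortably on a 12-16 GB card
```

Actual usage will be higher once activations, optimizer state, and the LoRA adapters are included, but the 4x reduction in weight memory is what brings local fine-tuning within reach.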


Step 2: Dataset Preparation

We'll use the FineTome-100k dataset in ShareGPT format. unsloth helps convert this to Hugging Face's format:

from datasets import load_dataset
from unsloth.chat_templates import standardize_sharegpt, get_chat_template

dataset = load_dataset("mlabonne/FineTome-100k", split="train")
dataset = standardize_sharegpt(dataset)
tokenizer = get_chat_template(tokenizer, chat_template="phi-4")

def formatting_prompts_func(examples):
    texts = [
        tokenizer.apply_chat_template(convo, tokenize=False, add_generation_prompt=False)
        for convo in examples["conversations"]
    ]
    return {"text": texts}

dataset = dataset.map(formatting_prompts_func, batched=True)
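Conceptually, standardize_sharegpt maps ShareGPT's "from"/"value" keys onto the "role"/"content" keys that Hugging Face chat templates expect. A pure-Python illustration of that mapping (not unsloth's actual implementation):

```python
# Illustrative only: the kind of transformation standardize_sharegpt performs.
ROLE_MAP = {"human": "user", "gpt": "assistant", "system": "system"}

def to_hf_format(conversation):
    """Convert ShareGPT-style turns to Hugging Face chat-template format."""
    return [
        {"role": ROLE_MAP[turn["from"]], "content": turn["value"]}
        for turn in conversation
    ]

sharegpt_convo = [
    {"from": "human", "value": "What is LoRA?"},
    {"from": "gpt", "value": "A parameter-efficient fine-tuning method."},
]
print(to_hf_format(sharegpt_convo))
```

Once every conversation is in role/content form, apply_chat_template can serialize it into the exact prompt format Phi-4 was trained on.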


Step 3: Model Fine-tuning

Fine-tune using the SFTTrainer from Hugging Face's trl library. The configuration below is a representative recipe; adjust batch size, step count, and learning rate for your hardware and dataset:

from trl import SFTTrainer
from transformers import TrainingArguments, DataCollatorForSeq2Seq
from unsloth import is_bfloat16_supported
from unsloth.chat_templates import train_on_responses_only

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=max_seq_length,
    data_collator=DataCollatorForSeq2Seq(tokenizer=tokenizer),
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        warmup_steps=5,
        max_steps=30,
        learning_rate=2e-4,
        fp16=not is_bfloat16_supported(),
        bf16=is_bfloat16_supported(),
        optim="adamw_8bit",
        seed=3407,
        output_dir="outputs",
    ),
)

# Compute loss only on assistant responses, not on user prompts.
# These marker strings correspond to the phi-4 chat template.
trainer = train_on_responses_only(
    trainer,
    instruction_part="<|im_start|>user<|im_sep|>",
    response_part="<|im_start|>assistant<|im_sep|>",
)

trainer_stats = trainer.train()


Step 4: GPU Usage Monitoring

Monitor GPU memory usage to verify the run fits on your card (a minimal sketch; requires a CUDA-capable GPU):

import torch

gpu_stats = torch.cuda.get_device_properties(0)
max_memory = round(gpu_stats.total_memory / 1024**3, 3)
peak_reserved = round(torch.cuda.max_memory_reserved() / 1024**3, 3)

print(f"GPU = {gpu_stats.name}. Max memory = {max_memory} GB.")
print(f"Peak reserved memory = {peak_reserved} GB.")


Step 5: Inference

Generate responses with the fine-tuned model. A minimal sketch using unsloth's fast-inference mode (the prompt is illustrative):

FastLanguageModel.for_inference(model)  # enable unsloth's optimized inference path

messages = [{"role": "user", "content": "Explain LoRA fine-tuning in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_tensors="pt",
).to("cuda")

outputs = model.generate(input_ids=inputs, max_new_tokens=64, use_cache=True)
print(tokenizer.batch_decode(outputs))


Step 6: Saving and Uploading

Save the LoRA adapters locally or push them to the Hugging Face Hub (the repository name below is illustrative):

# Save the adapters locally
model.save_pretrained("phi4-finetuned-lora")
tokenizer.save_pretrained("phi4-finetuned-lora")

# Upload to the Hugging Face Hub
model.push_to_hub("your-username/phi4-finetuned-lora", token="<your_hf_token>")
tokenizer.push_to_hub("your-username/phi4-finetuned-lora", token="<your_hf_token>")


Remember to replace <your_hf_token> with your actual Hugging Face token.

Conclusion:

This streamlined guide enables developers to fine-tune Phi-4 efficiently for specific needs, leveraging LoRA and Hugging Face for optimized performance and straightforward deployment.

