Table of Contents
Phi-2 Benchmarks
Accessing Phi-2
Phi-2 Applications
Fine-tuning Phi-2
Setup and Installation
Hugging Face Login
Loading the Dataset
Loading Model and Tokenizer
Adding Adapter Layers
Training
Saving and Pushing the Model
Model Evaluation
Conclusion

Getting Started with Phi-2

Mar 08, 2025, 10:50 AM

This blog post delves into Microsoft's Phi-2 language model, comparing its performance to other models and detailing its training process. We'll also cover how to access and fine-tune Phi-2 using the Transformers library and a Hugging Face role-playing dataset.

Phi-2, a 2.7 billion-parameter model from Microsoft's "Phi" series, aims for state-of-the-art performance despite its relatively small size. It employs a Transformer architecture, trained on 1.4 trillion tokens from synthetic and web datasets focusing on NLP and coding. Unlike many larger models, Phi-2 is a base model without instruction fine-tuning or RLHF.

Two key aspects drove Phi-2's development:

  • High-Quality Training Data: Prioritizing "textbook-quality" data, including synthetic datasets and high-value web content, to instill common sense reasoning, general knowledge, and scientific understanding.
  • Scaled Knowledge Transfer: Leveraging knowledge from the 1.3 billion parameter Phi-1.5 model to accelerate training and boost benchmark scores.

For insights into building similar LLMs, consider the Master LLM Concepts course.

Phi-2 Benchmarks

Phi-2 surpasses 7B-13B parameter models like Llama-2 and Mistral across various benchmarks (common sense reasoning, language understanding, math, coding). Remarkably, it outperforms the significantly larger Llama-2-70B on multi-step reasoning tasks.

This focus on smaller, easily fine-tuned models allows for deployment on mobile devices, achieving performance comparable to much larger models. Phi-2 even outperforms Google Gemini Nano 2 on Big Bench Hard, BoolQ, and MBPP benchmarks.

Accessing Phi-2

Explore Phi-2's capabilities via the Hugging Face Spaces demo: Phi 2 Streaming on GPU. This demo offers basic prompt-response functionality.

New to AI? The AI Fundamentals skill track is a great starting point.

Let's use the transformers pipeline for inference (ensure you have the latest transformers and accelerate installed).

!pip install -q -U transformers
!pip install -q -U accelerate

from transformers import pipeline

model_name = "microsoft/phi-2"

pipe = pipeline(
    "text-generation",
    model=model_name,
    device_map="auto",
    trust_remote_code=True,
)

Generate text from a prompt, adjusting parameters such as max_new_tokens and temperature as needed. The model's Markdown-formatted response is then rendered with IPython's Markdown display.

from IPython.display import Markdown

prompt = "Please create a Python application that can change wallpapers automatically."

outputs = pipe(
    prompt,
    max_new_tokens=300,
    do_sample=True,
    temperature=0.7,
    top_k=50,
    top_p=0.95,
)
Markdown(outputs[0]["generated_text"])

Phi-2's output is impressive, generating code with explanations.

Phi-2 Applications

Phi-2's compact size allows for use on laptops and mobile devices for Q&A, code generation, and basic conversations.
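
As a quick illustration, the snippet below reuses the pipe created earlier with the question-and-answer prompt style described on the Phi-2 model card. Treat it as a minimal sketch: the "Instruct:"/"Output:" wording follows that card, and the question itself is an arbitrary example.

# QA-style prompt (format taken from the Phi-2 model card; adjust to your use case).
qa_prompt = "Instruct: Explain the difference between a list and a tuple in Python.\nOutput:"

qa_output = pipe(qa_prompt, max_new_tokens=150, do_sample=False)
print(qa_output[0]["generated_text"])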

Fine-tuning Phi-2

This section demonstrates fine-tuning Phi-2 on the hieunguyenminh/roleplay dataset using PEFT.

Setup and Installation

%%capture
%pip install -U bitsandbytes
%pip install -U transformers
%pip install -U peft
%pip install -U accelerate
%pip install -U datasets
%pip install -U trl

Import necessary libraries:

from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    BitsAndBytesConfig,
    TrainingArguments,
    pipeline,
    logging,
)
from peft import (
    LoraConfig,
    PeftModel,
    prepare_model_for_kbit_training,
    get_peft_model,
)
import os, torch
from datasets import load_dataset
from trl import SFTTrainer

Define variables for the base model, dataset, and fine-tuned model name:

base_model = "microsoft/phi-2"
dataset_name = "hieunguyenminh/roleplay"
new_model = "phi-2-role-play"

Hugging Face Login

Log in with your Hugging Face API token. Replace secret_hf with your own method of retrieving the token securely, such as a notebook secret.

# ... (Method to securely retrieve Hugging Face API token) ...
!huggingface-cli login --token $secret_hf
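
If you prefer to stay inside the notebook, the huggingface_hub library offers an equivalent programmatic login. This is a minimal sketch; HF_TOKEN is an assumed environment variable standing in for however you store your token.

import os
from huggingface_hub import login

# Programmatic alternative to the CLI login above.
login(token=os.environ["HF_TOKEN"])  # assumes your token is exported as HF_TOKEN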

Loading the Dataset

Load a subset of the dataset for faster training:

dataset = load_dataset(dataset_name, split="train[0:1000]")
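
To sanity-check what the trainer will consume, you can inspect one sample. This assumes the dataset exposes a text column, which is the same field passed to the trainer later.

# Peek at the dataset size and the first formatted role-play sample.
print(dataset)
print(dataset[0]["text"][:500])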

Loading Model and Tokenizer

Load the 4-bit quantized model for memory efficiency:

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=False,
)

model = AutoModelForCausalLM.from_pretrained(
    base_model,
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
)

model.config.use_cache = False
model.config.pretraining_tp = 1

tokenizer = AutoTokenizer.from_pretrained(base_model, trust_remote_code=True)
tokenizer.pad_token = tokenizer.eos_token

Adding Adapter Layers

Add LoRA layers for efficient fine-tuning:

model = prepare_model_for_kbit_training(model)
peft_config = LoraConfig(
    r=16,
    lora_alpha=16,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=[
        "q_proj",
        "k_proj",
        "v_proj",
        "dense",
        "fc1",
        "fc2",
    ],
)
model = get_peft_model(model, peft_config)
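
Optionally, you can confirm how small the trainable footprint is with PEFT's built-in helper:

# Print the number of trainable (LoRA) parameters vs. total parameters.
model.print_trainable_parameters()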

Training

Set up training arguments and the SFTTrainer:

training_arguments = TrainingArguments(
    output_dir="./results",  # replace with your desired output directory
    num_train_epochs=1,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=1,
    optim="paged_adamw_32bit",
    save_strategy="epoch",
    logging_steps=100,
    logging_strategy="steps",
    learning_rate=2e-4,
    fp16=False,
    bf16=False,
    group_by_length=True,
    disable_tqdm=False,
    report_to="none",
)

trainer = SFTTrainer(
    model=model,
    train_dataset=dataset,
    peft_config=peft_config,
    max_seq_length=2048,
    dataset_text_field="text",
    tokenizer=tokenizer,
    args=training_arguments,
    packing=False,
)

trainer.train()

Saving and Pushing the Model

Save and upload the fine-tuned model:
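
The snippet below is a minimal sketch of this step, assuming the trainer, tokenizer, and new_model variables defined earlier and that you are already logged in to the Hugging Face Hub.

# Save the LoRA adapter and tokenizer locally under the new model name.
trainer.model.save_pretrained(new_model)
tokenizer.save_pretrained(new_model)

# Push the adapter weights and tokenizer files to the Hugging Face Hub.
trainer.model.push_to_hub(new_model)
tokenizer.push_to_hub(new_model)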


Model Evaluation

Evaluate the fine-tuned model:
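
One simple qualitative check is to run generation through a pipeline built from the fine-tuned model. The sketch below assumes the trainer and tokenizer from the previous steps; the prompt is an arbitrary example, and the role-play dataset's own system-prompt format may yield better results.

from transformers import pipeline

# Build a text-generation pipeline around the LoRA-augmented model.
eval_pipe = pipeline(
    "text-generation",
    model=trainer.model,
    tokenizer=tokenizer,
    max_new_tokens=200,
)

prompt = "I have been feeling overwhelmed at work lately. What should I do?"
result = eval_pipe(prompt)
print(result[0]["generated_text"])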


Conclusion

This tutorial provided a comprehensive overview of Microsoft's Phi-2, its performance, training, and fine-tuning. The ability to fine-tune this smaller model efficiently opens up possibilities for customized applications and deployments. Further exploration into building LLM applications using frameworks like LangChain is recommended.
