Exploring Zephyr-7B: A Powerful Open-Source LLM
The open LLM leaderboards are buzzing with new open-source models aiming to compete with GPT-4, and Zephyr-7B is an outstanding contender. This tutorial explores this cutting-edge language model, demonstrating how to use it with the Transformers pipeline and how to fine-tune it on the Agent-Instruct dataset. New to AI? The AI Fundamentals skill track is a great starting point.
Zephyr-7B is trained to act as a helpful assistant. Its strengths lie in generating coherent text, translating languages, summarizing information, sentiment analysis, and context-aware question answering.
Zephyr-7B-β: A Fine-Tuned Marvel
(Image from Zephyr Chat)
Accessing Zephyr-7B with Hugging Face Transformers

This tutorial uses the Hugging Face Transformers library for easy access. (If you run into loading issues, consult the inference Kaggle notebook.)
Make sure you have the latest versions of the libraries:

```python
!pip install -q -U transformers
!pip install -q -U accelerate
!pip install -q -U bitsandbytes
```
Import the libraries and create a text-generation pipeline. `device_map="auto"` spreads the model across the available hardware, and `torch.bfloat16` halves the memory footprint:

```python
import torch
from transformers import pipeline

model_name = "HuggingFaceH4/zephyr-7b-beta"
pipe = pipeline(
    "text-generation",
    model=model_name,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
```
Generate text:

```python
prompt = "Write a Python function that can clean the HTML tags from the file:"
outputs = pipe(
    prompt,
    max_new_tokens=300,
    do_sample=True,
    temperature=0.7,
    top_k=50,
    top_p=0.95,
)
print(outputs[0]["generated_text"])
```
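Note that `generated_text` includes the original prompt as a prefix. A small helper (hypothetical, not part of the tutorial) can strip it so only the completion remains:

```python
def extract_completion(generated_text: str, prompt: str) -> str:
    """Return only the newly generated portion of a pipeline output.

    The text-generation pipeline echoes the prompt at the start of
    `generated_text`, so drop that prefix when it is present.
    """
    if generated_text.startswith(prompt):
        return generated_text[len(prompt):].lstrip()
    return generated_text

# Example with a mocked pipeline output:
fake_output = [{"generated_text": "Say hi: Hello there!"}]
completion = extract_completion(fake_output[0]["generated_text"], "Say hi:")
print(completion)  # Hello there!
```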
Customize the response with a Zephyr-7B-style system prompt, applied through the tokenizer's chat template:

```python
messages = [
    {
        "role": "system",
        "content": "You are a skilled software engineer who consistently produces high-quality Python code.",
    },
    {
        "role": "user",
        "content": "Write a Python code to display text in a star pattern.",
    },
]
prompt = pipe.tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
outputs = pipe(
    prompt,
    max_new_tokens=300,
    do_sample=True,
    temperature=0.7,
    top_k=50,
    top_p=0.95,
)
print(outputs[0]["generated_text"])
```

For the fine-tuning steps that follow, you will need Hugging Face and Weights & Biases API keys; on Kaggle, store them as Kaggle Secrets, retrieve them in the notebook, and log in to both services.
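For reference, the chat template expands the messages into special-token-delimited text. The following is a hand-rolled sketch assuming Zephyr-7B-beta's `<|system|>`/`<|user|>`/`<|assistant|>` markers; in practice, always prefer `apply_chat_template`, which reads the authoritative template from the tokenizer config:

```python
def zephyr_prompt(system: str, user: str) -> str:
    """Approximation of the Zephyr-7B-beta chat format (illustrative only).

    The real template is stored with the tokenizer; this sketch just shows
    the role markers and end-of-sequence tokens the template produces.
    """
    return (
        f"<|system|>\n{system}</s>\n"
        f"<|user|>\n{user}</s>\n"
        f"<|assistant|>\n"
    )

print(zephyr_prompt("You are a helpful assistant.", "Tell me a joke."))
```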
Loading and Preparing the Model
```python
%%capture
%pip install -U bitsandbytes
%pip install -U transformers
%pip install -U peft
%pip install -U accelerate
%pip install -U trl
```
# ... (Import statements as in original tutorial) ...
```python
!huggingface-cli login --token $secret_hf
# ... (wandb login as in original tutorial) ...
```
```python
base_model = "HuggingFaceH4/zephyr-7b-beta"
dataset_name = "THUDM/AgentInstruct"
new_model = "zephyr-7b-beta-Agent-Instruct"
```
# ... (format_prompt function and dataset loading as in original tutorial) ...
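The elided `format_prompt` step converts each AgentInstruct record into a single training string. Below is a minimal sketch, assuming each record carries a `conversations` list of `{"from", "value"}` turns (verify the actual dataset schema before relying on this):

```python
def format_prompt(example: dict) -> dict:
    """Flatten a multi-turn conversation into Zephyr-style chat text.

    Assumes example["conversations"] is a list of turns with "from"
    ("human" or "gpt") and "value" fields; this schema is an assumption.
    """
    role_map = {"human": "<|user|>", "gpt": "<|assistant|>"}
    parts = []
    for turn in example["conversations"]:
        tag = role_map.get(turn["from"], "<|user|>")
        parts.append(f"{tag}\n{turn['value']}</s>")
    return {"text": "\n".join(parts)}

# Usage, e.g.: dataset = dataset.map(format_prompt)
example = {
    "conversations": [
        {"from": "human", "value": "Hi"},
        {"from": "gpt", "value": "Hello!"},
    ]
}
print(format_prompt(example)["text"])
```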
# ... (bnb_config and model loading as in original tutorial) ...
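The elided quantization setup is typically a 4-bit NF4 configuration like the following. This is a sketch of a common QLoRA-style setup, not necessarily the tutorial's exact values:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 4-bit NF4 quantization so the 7B model fits on a resource-constrained GPU
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

model = AutoModelForCausalLM.from_pretrained(
    base_model,                      # "HuggingFaceH4/zephyr-7b-beta"
    quantization_config=bnb_config,
    device_map="auto",
)
```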
Saving and Deploying the Fine-Tuned Model
# ... (tokenizer loading and configuration as in original tutorial) ...
# ... (peft_config and model preparation as in original tutorial) ...
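The elided PEFT setup is usually a LoRA configuration along these lines (illustrative hyperparameters, not necessarily the tutorial's exact ones):

```python
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

peft_config = LoraConfig(
    r=16,                  # rank of the LoRA update matrices
    lora_alpha=32,         # scaling factor for the LoRA updates
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    # Attention projections are a common target-module choice for Mistral-based models
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

# Cast norms/embeddings appropriately for k-bit training, then attach adapters
model = prepare_model_for_kbit_training(model)
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()
```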
Testing the Fine-Tuned Model
Test the model's performance with a variety of prompts; examples are provided in the original tutorial.
Zephyr-7B-beta demonstrates impressive capabilities. This tutorial has provided a comprehensive guide to using and fine-tuning this powerful LLM, even on resource-constrained GPUs. To deepen your LLM knowledge, consider the Large Language Models (LLMs) Concepts course.