Granite 3.0 is an open-source, lightweight family of generative language models designed for a range of enterprise tasks. It natively supports multilingual use, coding, reasoning, and tool calling, making it a good fit for enterprise environments.
I ran the models myself to see which tasks they can handle.
I set up the Granite 3.0 environment in Google Colab and installed the necessary libraries with the following commands:
!pip install torch torchvision torchaudio
!pip install accelerate
!pip install -U transformers
I tested the performance of both the 2B and 8B versions of Granite 3.0.
I started with the 2B model. Here is a code example for the 2B model:
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "auto"
model_path = "ibm-granite/granite-3.0-2b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path, device_map=device)
model.eval()

chat = [
    { "role": "user", "content": "Please list one IBM Research laboratory located in the United States. You should only output its name and location." },
]

chat = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
input_tokens = tokenizer(chat, return_tensors="pt").to("cuda")
output = model.generate(**input_tokens, max_new_tokens=100)
output = tokenizer.batch_decode(output)
print(output[0])
<|start_of_role|>user<|end_of_role|>Please list one IBM Research laboratory located in the United States. You should only output its name and location.<|end_of_text|>
<|start_of_role|>assistant<|end_of_role|>1. IBM Research - Austin, Texas<|end_of_text|>
To use the 8B model, simply replace 2b with 8b in the model path. Here is a code example for the 8B model whose output omits the role markers and the echoed user input:
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "auto"
model_path = "ibm-granite/granite-3.0-8b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path, device_map=device)
model.eval()

chat = [
    { "content": "Please list one IBM Research laboratory located in the United States. You should only output its name and location." },
]

chat = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
input_tokens = tokenizer(chat, add_special_tokens=False, return_tensors="pt").to("cuda")
output = model.generate(**input_tokens, max_new_tokens=100)
# Decode only the newly generated tokens, skipping the prompt portion.
generated_text = tokenizer.decode(output[0][input_tokens["input_ids"].shape[1]:], skip_special_tokens=True)
print(generated_text)
1. IBM Almaden Research Center - San Jose, California
I also explored the function-calling feature, testing it with a dummy function. Here, get_current_weather is defined to return mock weather data.
import json

def get_current_weather(location: str) -> dict:
    """
    Retrieves current weather information for the specified location (default: San Francisco).

    Args:
        location (str): Name of the city to retrieve weather data for.

    Returns:
        dict: Dictionary containing weather information (temperature, description, humidity).
    """
    print(f"Getting current weather for {location}")
    try:
        weather_description = "sample"
        temperature = "20.0"
        humidity = "80.0"
        return {
            "description": weather_description,
            "temperature": temperature,
            "humidity": humidity
        }
    except Exception as e:
        print(f"Error fetching weather data: {e}")
        return {"weather": "NA"}
I then created a prompt to invoke the function:
functions = [
    {
        "name": "get_current_weather",
        "description": "Get the current weather",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "The city and country code, e.g. San Francisco, US",
                }
            },
            "required": ["location"],
        },
    },
]
query = "What's the weather like in Boston?"
payload = {
    "functions_str": [json.dumps(x) for x in functions]
}
chat = [
    {"role": "system", "content": f"You are a helpful assistant with access to the following function calls. Your task is to produce a sequence of function calls necessary to generate response to the user utterance. Use the following function calls as required.{payload}"},
    {"role": "user", "content": query},
]
Using the following code, I generated a response:
instruction_1 = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
input_tokens = tokenizer(instruction_1, return_tensors="pt").to("cuda")
output = model.generate(**input_tokens, max_new_tokens=1024)
generated_text = tokenizer.decode(output[0][input_tokens["input_ids"].shape[1]:], skip_special_tokens=True)
print(generated_text)
{'name': 'get_current_weather', 'arguments': {'location': 'Boston'}}
This confirms that the model can generate the correct function call for the specified city.
On the other hand, since function calls are output as plain text, a separate mechanism may be needed to distinguish function calls from regular text responses.
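As a rough sketch of what such a mechanism might look like, the helper below parses the generated text and, if it resembles a function call, dispatches it to the local get_current_weather defined above. The ast-based parsing and the dispatch dictionary are my own additions for illustration, not part of Granite or transformers:

import ast

# Hypothetical helper: decide whether the generated text looks like a function
# call and, if so, run the matching local function.
AVAILABLE_FUNCTIONS = {"get_current_weather": get_current_weather}

def dispatch(generated_text: str):
    try:
        # The model printed a Python-style dict (single quotes), so
        # ast.literal_eval is more forgiving here than json.loads.
        call = ast.literal_eval(generated_text.strip())
    except (ValueError, SyntaxError):
        return generated_text  # not a function call; treat it as a normal reply
    if not isinstance(call, dict) or call.get("name") not in AVAILABLE_FUNCTIONS:
        return generated_text
    return AVAILABLE_FUNCTIONS[call["name"]](**call.get("arguments", {}))

print(dispatch(generated_text))

With the dummy function above, this should print the mock weather dictionary for Boston.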
Granite 3.0 also lets you specify a response format, which makes it easier to obtain structured output. This section shows how to have the model respond with [UTTERANCE] for spoken replies and [THINK] for inner thoughts.
Here is an example prompt that guides the AI's output:
prompt = """You are a conversational AI assistant that deepens interactions by alternating between responses and inner thoughts.
<Constraints>
* Record spoken responses after the [UTTERANCE] tag and inner thoughts after the [THINK] tag.
* Use [UTTERANCE] as a start marker to begin outputting an utterance.
* After [THINK], describe your internal reasoning or strategy for the next response. This may include insights on the user's reaction, adjustments to improve interaction, or further goals to deepen the conversation.
* Important: **Use [UTTERANCE] and [THINK] as a start signal without needing a closing tag.**
</Constraints>

Follow these instructions, alternating between [UTTERANCE] and [THINK] formats for responses.

<output example>
example1:
[UTTERANCE]Hello! How can I assist you today?[THINK]I’ll start with a neutral tone to understand their needs. Preparing to offer specific suggestions based on their response.[UTTERANCE]Thank you! In that case, I have a few methods I can suggest![THINK]Since I now know what they’re looking for, I'll move on to specific suggestions, maintaining a friendly and approachable tone.
...
</output example>

Please respond to the following user_input.

<user_input>
Hello! What can you do?
</user_input>
"""
The code to generate a response:
chat = [
    { "role": "user", "content": prompt },
]

chat = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
input_tokens = tokenizer(chat, return_tensors="pt").to("cuda")
output = model.generate(**input_tokens, max_new_tokens=1024)
generated_text = tokenizer.decode(output[0][input_tokens["input_ids"].shape[1]:], skip_special_tokens=True)
print(generated_text)
The output is as follows:
[UTTERANCE]Hello! I'm here to provide information, answer questions, and assist with various tasks. I can help with a wide range of topics, from general knowledge to specific queries. How can I assist you today? [THINK]I've introduced my capabilities and offered assistance, setting the stage for the user to share their needs or ask questions.
The [UTTERANCE] and [THINK] tags were used successfully, giving effective control over the response format.
Depending on the prompt, closing tags (e.g. [/UTTERANCE] or [/THINK]) may occasionally appear in the output, but on the whole the output format can be specified reliably.
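As a small post-processing sketch, the tagged output can be split back into utterances and thoughts. The regex below and the stripping of optional closing tags are my own assumptions about the format, not something the model guarantees:

import re

def split_segments(text: str):
    """Split tagged output into (tag, content) pairs, tolerating optional closing tags."""
    segments = []
    # Capture each tag and everything up to the next tag (or the end of the text).
    pattern = r"\[(UTTERANCE|THINK)\](.*?)(?=\[(?:UTTERANCE|THINK)\]|\Z)"
    for tag, content in re.findall(pattern, text, flags=re.DOTALL):
        # Drop closing tags such as [/UTTERANCE] or [/THINK] if the model emitted them.
        content = re.sub(r"\[/(?:UTTERANCE|THINK)\]", "", content).strip()
        segments.append((tag, content))
    return segments

for tag, content in split_segments(generated_text):
    print(f"{tag}: {content}")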
Next, let's look at streaming the response.
The following code uses asyncio and the threading library to stream responses from Granite 3.0 asynchronously.
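Here is a minimal sketch of this approach; the specific wiring with transformers' TextIteratorStreamer, a background generation thread, and top-level await (which works in Colab/Jupyter) is one plausible implementation rather than the only way to do it:

import asyncio
from threading import Thread
from transformers import TextIteratorStreamer

async def stream_response(question: str):
    chat = [{"role": "user", "content": question}]
    chat = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
    input_tokens = tokenizer(chat, return_tensors="pt").to("cuda")

    # The streamer yields decoded text chunks as generate() produces them.
    streamer = TextIteratorStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
    generation_kwargs = dict(**input_tokens, streamer=streamer, max_new_tokens=200)

    # Run generation in a background thread so tokens can be consumed while it runs.
    thread = Thread(target=model.generate, kwargs=generation_kwargs)
    thread.start()

    for chunk in streamer:
        print(chunk, end="", flush=True)
        await asyncio.sleep(0)  # hand control back to the event loop between chunks
    thread.join()

# Top-level await works in Colab/Jupyter notebooks.
await stream_response("Please list one IBM Research laboratory located in the United States.")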
Running the above code streams the response asynchronously, printing tokens as they are generated.
This example demonstrates successful streaming: each token is generated asynchronously and displayed in sequence, so the user can watch the generation process in real time.
Granite 3.0 provides reasonably strong responses even at the 8B size. The function-calling and format-specification features also worked well, suggesting a broad range of potential applications.