
Let ChatGPT call 100,000+ open source AI models! HuggingFace's new feature is booming: large models can be used as multi-modal AI tools

PHPz
Release: 2023-05-19 09:47:02

Just chat with ChatGPT and it can call 100,000+ Hugging Face models for you!

This is Transformers Agents, the latest feature launched by Hugging Face, and it has attracted a lot of attention since its release:


This feature effectively equips large models such as ChatGPT with "multi-modal" capabilities: they are no longer limited to text, and can handle multi-modal tasks involving images, audio, documents, and more.

For example, you can ask ChatGPT to "describe this image" and give it a picture of a beaver. ChatGPT then calls an image-captioning model and outputs "a beaver is swimming".


Then ChatGPT calls a text-to-speech model, and in no time the sentence can be read aloud:

"A beaver is swimming in the water"

It supports not only OpenAI's large models, such as ChatGPT, but also free large models such as OpenAssistant.

Transformers Agents is responsible for "teaching" these large models to directly call any AI model on Hugging Face and return the processed results.

So what is the principle behind this newly launched feature?

How to let large models "command" various AIs?

Simply put, Transformers Agents is a Hugging Face "AI tool kit" built specifically for large models.

Various large and small AI models on Hugging Face are included in this kit and classified as "image generator", "image interpreter", "text-to-speech tool", and so on.

Each tool also comes with a text description, which helps the large model understand which model it should call.


In this way, with just a few lines of code you can have the large model run an AI model for you and return the results directly. The process takes three steps:

First, set up the large model you want to use. Here you can use an OpenAI model (note that its API is paid):

from transformers import OpenAiAgent

# Use an OpenAI model as the agent's "brain" (the OpenAI API is paid).
agent = OpenAiAgent(model="text-davinci-003", api_key="<your_api_key>")

You can also use free large models such as BigCode or OpenAssistant; to do so, first log in to the Hugging Face Hub:

from huggingface_hub import login

# Log in with your Hugging Face access token to use the free Inference API.
login("<your_token>")

Then, set up Transformers Agents. Here we take the default agents as an example:

from transformers import HfAgent

# StarCoder
agent = HfAgent("https://api-inference.huggingface.co/models/bigcode/starcoder")

# StarCoderBase
# agent = HfAgent("https://api-inference.huggingface.co/models/bigcode/starcoderbase")

# OpenAssistant
# agent = HfAgent(url_endpoint="https://api-inference.huggingface.co/models/OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5")

Then, you can use the command run() or chat() to run Transformers Agents.

run() is suited to calling multiple AI models at once to carry out more complex, specialized tasks; it can also call a single AI tool.

For example, if you execute agent.run("Draw me a picture of rivers and lakes."), it will call the AI ​​graphic tool to help you generate an image:


You can also call multiple AI tools at the same time.

For example, if you execute agent.run("Draw me a picture of the sea then transform the picture to add an island"), it will call the "Wen Sheng Diagram" and "Tu Sheng Diagram" tools to help you generate Corresponding image:


chat() is suited to carrying out tasks step by step through conversation.

For example, first call the text-to-image tool to generate a picture of rivers and lakes: agent.chat("Generate a picture of rivers and lakes")


Then apply an image-to-image modification to this picture: agent.chat("Transform the picture so that there is a rock in there")
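Put together, a minimal sketch of that chat flow (chat() keeps conversational state, so the second request modifies the image produced by the first; the save path is illustrative):

# Step 1: generate an image through conversation.
picture = agent.chat("Generate a picture of rivers and lakes")

# Step 2: ask for a modification; the agent remembers the previous result.
picture = agent.chat("Transform the picture so that there is a rock in there")
picture.save("rivers_lakes_rock.png")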


The AI models to be called can be set up by yourself, or you can use the default set that comes with Hugging Face.

A set of default AI models has been set up

Currently, Transformers Agents ships with a set of default AI models, implemented by calling the following models from the Transformers library:

1. Visual document understanding model Donut. Given a file in image format (including images converted from PDF), it can answer questions about the file.

For example, if you ask "Where will the TRRF Scientific Advisory Committee meeting be held?" Donut will give the answer:


2. Text question answering model Flan-T5. Given a long article and a question, it can answer various text questions and help you with reading comprehension.

3. Zero-shot visual language model BLIP. It can directly understand the content of an image and provide a text description for it.

4. Multi-modal model ViLT. It can understand and answer questions about a given image.

5. Multi-modal image segmentation model CLIPSeg. Given an image and a text prompt, it can segment (mask) the content specified by the prompt.

6. Automatic speech recognition model Whisper. It can automatically recognize the speech in a recording and transcribe it to text.

7. Speech synthesis model SpeechT5, used for text-to-speech.

8. BART, a denoising autoencoder language model. In addition to automatically classifying a piece of text, it can also produce text summaries.

9. 200-language translation model NLLB. In addition to common languages, it can also translate less common ones, including Lao and Kamba.

By calling the above AI models, tasks including image question answering, document understanding, image segmentation, speech transcription, translation, image captioning, text-to-speech, and text classification can all be completed.
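As a hedged example of how these default tools are reached, non-text inputs are passed to run() as extra keyword arguments and referenced in the prompt; the file paths below are placeholders, and the exact prompt wording is up to you:

from PIL import Image

# Document question answering (Donut): pass the scanned page as an input.
document = Image.open("scanned_page.png")  # placeholder path
answer = agent.run(
    "Answer the question in `question` about the `document`.",
    question="Where will the TRRF Scientific Advisory Committee meeting be held?",
    document=document,
)

# Speech recognition (Whisper): transcribe a local recording.
# Assumes the speech-to-text tool accepts an audio file path.
transcript = agent.run("Transcribe the `audio` for me.", audio="recording.wav")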

In addition, Hugging Face has slipped in some "extras": tools outside the Transformers library, including downloading text from web pages, text-to-image, image-to-image, and text-to-video:


These models can be called not only individually but also in combination. For example, if you ask the large model to "generate and describe a good-looking photo of a beaver", it will call the text-to-image and image-captioning AI models in turn.

Of course, if you don't want to use these default AI models and would rather build a more useful "tool kit" of your own, you can also set one up by following the documented steps.
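For reference, the Transformers documentation describes a Tool interface for this; a rough sketch of a custom tool (the tool name, description, and upper-casing logic here are purely illustrative) might look like:

from transformers import HfAgent, Tool

class UpperCaseTool(Tool):
    # The name and description are what the large model reads when
    # deciding which tool to call, so keep them short and precise.
    name = "text_uppercaser"
    description = "This tool takes a text as input and returns it in upper case."
    inputs = ["text"]
    outputs = ["text"]

    def __call__(self, text: str) -> str:
        return text.upper()

# Register the custom tool alongside the default toolbox.
agent = HfAgent(
    "https://api-inference.huggingface.co/models/bigcode/starcoder",
    additional_tools=[UpperCaseTool()],
)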

Regarding Transformers Agents, some netizens have pointed out that it looks a bit like a "replacement" for LangChain agents:

Have you tried these two tools? Which one do you think is more useful?

Reference links:
[1] https://twitter.com/huggingface/status/1656334778407297027
[2] https://huggingface.co/docs/transformers/transformers_agents


Source: 51cto.com