For humans, understanding the information in a picture is trivial: we can tell what an image means almost without thinking. Take the picture below, in which the charger the phone is plugged into is clearly wrong. A human spots the problem at a glance, but for AI this remains very difficult.
The emergence of GPT-4 has begun to make such problems tractable. It can quickly point out what is wrong in the picture: a VGA cable being used to charge an iPhone.
In fact, GPT-4's appeal goes far beyond this. Even more exciting is generating websites directly from hand-drawn sketches: scribble a layout on scratch paper, take a photo, send it to GPT-4, and let it write the website code from the diagram. Whoosh, GPT-4 writes the web page code.
Unfortunately, this GPT-4 feature is not yet open to the public, so there is no way to try it firsthand. Some people couldn't wait, however: a team from King Abdullah University of Science and Technology (KAUST) has built a GPT-4-like product called MiniGPT-4. The researchers, Zhu Deyao, Chen Jun, Shen Xiaoqian, Li Xiang, and Mohamed H. Elhoseiny, are all from KAUST's Vision-CAIR research group.
## MiniGPT-4: chatting about images made easy
How well does MiniGPT-4 work? Let's look at a few examples. (For the best experience, English input is recommended.) First, consider MiniGPT-4's ability to describe images. For the picture on the left, MiniGPT-4 answers roughly: "The picture depicts a cactus growing on a frozen lake. There are huge ice crystals around the cactus, and there are snow-capped peaks in the distance..." If you then ask, "Could this scenario happen in the real world?", MiniGPT-4 replies that such a scene is uncommon in the real world and explains why.
Next, let's look at MiniGPT-4's visual question answering. Asked "What's wrong with this plant? What should I do?", MiniGPT-4 not only identified the problem, noting that leaves with brown spots may indicate a fungal infection, but also gave treatment steps:
From these few examples, MiniGPT-4's look-at-a-picture-and-chat capability is already quite strong. And that's not all: MiniGPT-4 can also build websites from sketches. For example, ask MiniGPT-4 to produce a web page from the draft diagram on the left. After receiving the instruction, MiniGPT-4 outputs the corresponding HTML code, which renders into the requested website:
With MiniGPT-4, writing advertising copy for a picture becomes simple. Asked to write ad copy for the cup on the left, MiniGPT-4 accurately picked out the sleepy-cat pattern on the cup, noting it would suit coffee lovers and cat lovers alike, and also described the cup's material:
MiniGPT-4 can also generate recipes based on a picture, turning you into a kitchen expert:
Explain the popular meme:
Write a poem based on the picture:
It is also worth mentioning that a MiniGPT-4 demo is now open and available online, so you can try it yourself (English input is recommended):
Demo address: https://0810e8582bcad31944.gradio.live/
Once the project was released, it attracted widespread attention from netizens. For example, one user asked MiniGPT-4 to explain the objects in a picture. More test reports from netizens follow below:
## Method Introduction
The authors believe that GPT-4's advanced multimodal generation capabilities stem mainly from its advanced large language model (LLM). To study this hypothesis, they propose MiniGPT-4, which uses a single projection layer to align a frozen visual encoder with a frozen LLM (Vicuna). MiniGPT-4 consists of a pre-trained ViT-plus-Q-Former visual encoder, a single linear projection layer, and the Vicuna large language model. Only the linear layer needs to be trained to align visual features with Vicuna.
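To make the architecture concrete, here is a minimal PyTorch sketch of this design. The class name, constructor arguments, and dimensions (`vis_dim`, `llm_dim`) are illustrative assumptions, not the project's actual API; the point is that every component except the linear projection stays frozen.

```python
import torch
import torch.nn as nn

class MiniGPT4Sketch(nn.Module):
    """Illustrative sketch: frozen ViT + Q-Former, frozen Vicuna,
    one trainable linear projection in between."""
    def __init__(self, vision_encoder, q_former, llm, vis_dim=768, llm_dim=4096):
        super().__init__()
        self.vision_encoder = vision_encoder  # pre-trained ViT, frozen
        self.q_former = q_former              # pre-trained Q-Former, frozen
        self.llm = llm                        # Vicuna, frozen
        # The only trainable component: maps Q-Former output features
        # into Vicuna's word-embedding space.
        self.proj = nn.Linear(vis_dim, llm_dim)
        # Freeze everything except the projection layer.
        for module in (self.vision_encoder, self.q_former, self.llm):
            for p in module.parameters():
                p.requires_grad = False

    def encode_image(self, image):
        with torch.no_grad():
            feats = self.vision_encoder(image)  # patch features
            queries = self.q_former(feats)      # compressed query tokens
        return self.proj(queries)               # aligned with LLM embeddings
```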
MiniGPT-4 was trained in two stages. The first, conventional pre-training stage took about 10 hours on 4 A100 GPUs, using approximately 5 million aligned image-text pairs. After this stage the model could understand images, but its text-generation quality degraded noticeably.
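A rough sketch of what this first stage looks like in training code, under the same assumptions as the architecture sketch above. The `dataloader`, learning rate, and `llm_lm_loss` helper are hypothetical placeholders for a standard language-modeling loss over the caption tokens; they are not the paper's exact configuration.

```python
import torch

model = MiniGPT4Sketch(vision_encoder, q_former, llm)  # from the sketch above
# Only the projection layer has requires_grad=True, so only it is optimized.
optimizer = torch.optim.AdamW(
    [p for p in model.parameters() if p.requires_grad],
    lr=1e-4,
)

for image, caption_ids in dataloader:  # ~5M aligned image-text pairs
    img_embeds = model.encode_image(image)
    # Prepend projected image tokens to the caption and backpropagate the
    # frozen LLM's language-modeling loss through the projection layer.
    loss = llm_lm_loss(model.llm, img_embeds, caption_ids)  # hypothetical helper
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```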
To solve this problem and improve usability, the researchers proposed a novel way to create high-quality image-text pairs using the model itself together with ChatGPT. From this process they built a small but high-quality dataset (3,500 pairs in total).
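The curation step could be sketched as follows: the stage-1 model drafts a long description, and ChatGPT cleans it up. The `describe()` helper and the prompt wording are assumptions for illustration; the paper's exact prompts differ. The snippet uses the pre-1.0 `openai` client interface.

```python
import openai  # pre-1.0 client interface

def curate_pair(image, stage1_model):
    # Hypothetical helper: the stage-1 model drafts a detailed description.
    draft = stage1_model.describe(image)
    # ChatGPT polishes the draft into a clean caption.
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": ("Fix grammar and remove repetition or unclear "
                        "details in this image description:\n" + draft),
        }],
    )
    return image, response.choices[0].message.content
```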
The second, fine-tuning stage trains on this dataset using conversation templates, significantly improving generation reliability and overall usability. This stage is computationally cheap: it completes in about 7 minutes on a single A100 GPU.
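For illustration, the conversation-style formatting for stage 2 looks roughly like this; the exact template tokens below are a paraphrase of the style described by the authors, not a verbatim copy of the project's template.

```python
# Illustrative conversation template for stage-2 fine-tuning.
# <Img>...</Img> marks where the projected image tokens are spliced in.
TEMPLATE = (
    "###Human: <Img><ImageHere></Img> "
    "Describe this image in detail. "
    "###Assistant: {description}"
)

def format_sample(description):
    """Format one curated image-text pair as a training conversation."""
    return TEMPLATE.format(description=description)
```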
Other related work: the project also builds on open-source codebases including BLIP-2, LAVIS, and Vicuna.