
With the 8-billion-parameter OtterHD, a Chinese team from Nanyang Technological University brings you the experience of counting the camels in 'Along the River During the Qingming Festival'

WBOY
Release: 2023-11-27 14:49:15

Want to know how many camels are in 'Along the River During the Qingming Festival'? Take a look at this multimodal model that supports ultra-high-definition input.

Recently, a Chinese team from Nanyang Technological University built OtterHD, an 8-billion-parameter multimodal large model based on Fuyu-8B.


Paper address: https://arxiv.org/abs/2311.04219

Unlike traditional models constrained by fixed-size visual encoders, OtterHD-8B can handle flexible input sizes, ensuring its versatility across a variety of inference needs.

At the same time, the team proposed a new benchmark, MagnifierBench, which carefully evaluates an LMM's ability to discern the minute details and spatial relationships of small objects in large images.

Experimental results show that OtterHD-8B significantly outperforms comparable models when directly processing high-resolution inputs.


Demonstrations

As shown below, when asked how many camels appear in a section of 'Along the River During the Qingming Festival', the model answers correctly even though the input image is 2446×1766 pixels.


Faced with the apple-counting problem that once stumped GPT-4V, the model correctly counted 11 apples in the image.



In addition to the high-definition input examples shown in the paper, we also ran some tests of our own. In the figure below, we ask the model to assume the user is a PhD from the University of Cambridge and to explain what the picture means.

The model's answer accurately identified the black hole and white hole in the picture, recognized it as a tunnel-like structure, and then gave a detailed explanation.


In the chart below, the model is asked to explain the energy-share situation. It successfully identifies the several energy types shown in the figure and accurately presents their proportions over time.


The next figure is a flow chart for changing a light bulb. The model accurately understands the flow chart and gives detailed step-by-step guidance.


OtterHD-8B: an 8-billion-parameter instruction-tuned model

Built on Fuyu-8B, OtterHD-8B is, notably, the first open-source instruction-tuned large multimodal model trained on inputs of up to 1024×1024.

Additionally, at inference time it can be further extended to larger resolutions (such as 1440×1440).

Training details

In preliminary experiments, the team found that Fuyu performed poorly on certain benchmarks that require responses in specific instruction formats, which resulted in very weak model performance on MME and MMBench.

To address these issues, the team instruction-tuned the Fuyu model on 370K mixed samples, following an instruction template similar to that of LLaVA-1.5 to standardize the format of model answers.
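
As a rough illustration of such a template (the system prompt, role tags, and "<image>" placeholder below are assumptions; the article only states that a LLaVA-1.5-like format is used):

```python
# Minimal sketch of a LLaVA-1.5-style instruction template.
def format_sample(question: str, answer: str) -> str:
    system = ("A chat between a curious user and an artificial intelligence "
              "assistant. The assistant gives helpful, detailed answers.")
    # "<image>" marks where image tokens would be spliced in (assumed).
    return f"{system} USER: <image>\n{question} ASSISTANT: {answer}"

print(format_sample("How many apples are in the picture?", "There are 11 apples."))
```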

In the training phase, all datasets are organized into instruction/response pairs, aggregated into a unified dataloader, and uniformly sampled to ensure representativeness.
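
A minimal sketch of what such a unified dataloader could look like, assuming "uniformly sampled" means uniform over source datasets (the class and helper names are hypothetical):

```python
import random
from torch.utils.data import Dataset

class MixedInstructionDataset(Dataset):
    """Aggregates several instruction/response datasets behind one interface."""

    def __init__(self, datasets):
        self.datasets = datasets  # each yields (instruction, response) pairs

    def __len__(self):
        return sum(len(d) for d in self.datasets)

    def __getitem__(self, idx):
        # Sample the source uniformly so every dataset stays represented,
        # regardless of its size (one reading of "uniformly sampled").
        source = random.choice(self.datasets)
        return source[random.randrange(len(source))]
```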

To improve the performance of the modeling code, the team adopted FlashAttention-2 and the fused operators from the FlashAttention library.

With the help of Fuyu's simplified architecture, as shown in Figure 2, these modifications significantly improved GPU utilization and throughput.
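
For reference, this is roughly what a FlashAttention-2 call looks like with the open-source flash-attn package; the article does not show the team's actual integration, so treat this as a sketch:

```python
import torch
from flash_attn import flash_attn_func  # flash-attn >= 2.x

# q, k, v have shape (batch, seq_len, n_heads, head_dim) in fp16/bf16 on GPU.
q = torch.randn(1, 2048, 64, 64, dtype=torch.bfloat16, device="cuda")
k = torch.randn_like(q)
v = torch.randn_like(q)

# Causal attention computed without materializing the full
# seq_len x seq_len score matrix, which is what saves memory and time.
out = flash_attn_func(q, k, v, causal=True)
```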


Specifically, with the team's method, full-parameter training completes in 3 hours per epoch on 8×A100 GPUs, while LoRA fine-tuning takes only 1 hour per epoch.

The model is trained with the AdamW optimizer, using a batch size of 64, a learning rate of 1×10^-5, and a weight decay of 0.1.
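
In PyTorch, that configuration corresponds to something like the following (the model here is a stand-in, and the batch size lives in the dataloader rather than the optimizer):

```python
import torch

model = torch.nn.Linear(512, 512)  # stand-in for the actual OtterHD model

# Hyperparameters as reported: learning rate 1e-5, weight decay 0.1.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5, weight_decay=0.1)
```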

MagnifierBench: a fine-grained evaluation benchmark

The human visual system naturally perceives the details of objects within its field of view, but the benchmarks currently used to test LMMs place no particular focus on assessing this capability.

With the advent of the Fuyu and OtterHD models, input image resolution can, for the first time, be extended to a much larger range.

To this end, the team created a new benchmark, MagnifierBench, covering 166 images with a total of 283 question-answer pairs, based on the Panoptic Video Scene Graph Generation (PVSG) dataset.

The PVSG dataset consists of video data featuring many cluttered, complex scenes, especially first-person videos of household chores.

During the annotation phase, the team carefully examined every question-answer pair in the dataset, eliminating those that involved large objects or that could be answered from common-sense knowledge alone. For example, the fact that most remote controls are black makes such a question easy to guess, so it was excluded, while questions about red or yellow remote controls were kept.

As shown in Figure 3, the question types in MagnifierBench include recognition, counting, and color-related questions, among others. An important criterion for the dataset is that questions must be complex enough that even the annotator has to view the image full-screen, or even zoom in, to answer accurately.


Compared with short answers, LMMs in conversational settings are better at generating extended answers. The benchmark therefore evaluates two question formats:

- Multiple-choice questions

Here the model faces several options to choose from. To guide it to answer with a single letter (such as A, B, or C), the team prepends the question with an instruction to answer directly with the letter of the given choices. In this setting, only an answer that exactly matches the correct option is counted as correct.

- Open-ended questions

Providing multiple options simplifies the task, since random guessing already has a 25% chance of being correct. Moreover, it does not reflect the real-life scenarios chat assistants face, as users typically do not supply predefined options. To eliminate this potential bias, the team also asked the model each question in a straightforward, open-ended manner with no options in the prompt.
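
A sketch of how the two protocols could be implemented; the prompt wording below is an assumption, while the exact-match rule follows the description above:

```python
def build_mc_prompt(question: str, options: dict) -> str:
    # Prepend an instruction to answer with the option's letter (wording assumed).
    listed = "\n".join(f"{letter}. {text}" for letter, text in options.items())
    return ("Answer with the option's letter from the given choices directly.\n"
            f"{question}\n{listed}")

def score_mc(model_answer: str, correct_letter: str) -> bool:
    # Only an answer that exactly matches the correct option counts.
    return model_answer.strip() == correct_letter

def build_open_prompt(question: str) -> str:
    # Open-ended protocol: the question alone, no options to guess from.
    return question
```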

Experimental analysis

The research results show that although many models achieve high scores on established benchmarks such as MME and POPE, their performance on MagnifierBench is often unsatisfactory. OtterHD-8B, by contrast, performs well on MagnifierBench.

To further explore the effect of increasing resolution and to test OtterHD's generalization to different, possibly higher, resolutions, the team ran experiments with OtterHD-8B using fixed or dynamic resolutions. Along the x-axis, increasing resolution means more image tokens are sent to the language decoder, providing more image detail.

Experimental results show that as the resolution increases, performance on MagnifierBench improves accordingly.

As resolution increases, the ratio of image tokens to text tokens grows, because the average number of text tokens stays the same. This change highlights the importance of resolution for LMMs, especially for tasks that require complex visual association.
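
A back-of-the-envelope calculation illustrates this, assuming Fuyu-style 30×30-pixel patches with one row-separator token per patch row (an assumption about the tokenization, not a figure from the article):

```python
import math

def image_tokens(width: int, height: int, patch: int = 30) -> int:
    cols = math.ceil(width / patch)
    rows = math.ceil(height / patch)
    return rows * cols + rows  # patch tokens plus one newline token per row

for side in (448, 1024, 1440):
    print(f"{side}x{side} -> {image_tokens(side, side)} image tokens")
# 448x448 -> 240, 1024x1024 -> 1260, 1440x1440 -> 2352; since text tokens
# stay roughly constant, the image:text ratio grows with resolution.
```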


Furthermore, the performance difference between fixed and dynamic training methods highlights the advantages of dynamic resizing, especially in preventing overfitting at specific resolutions.

The dynamic strategy also has the advantage of letting the model adapt to higher resolutions (e.g., 1440×1440) even though they were never seen during training.
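
The dynamic strategy can be pictured as randomly varying the training resolution per batch; the candidate sizes below are illustrative, not the paper's exact schedule:

```python
import random
from PIL import Image

# Illustrative candidate resolutions (an assumption); the model trains on
# inputs up to 1024x1024 and is evaluated at up to 1440x1440.
CANDIDATE_SIZES = [448, 512, 768, 1024]

def resize_dynamic(img: Image.Image) -> Image.Image:
    side = random.choice(CANDIDATE_SIZES)  # new target size each call/batch
    return img.resize((side, side), Image.BICUBIC)
```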

Some comparisons

[Figures: comparisons with other models]

Conclusion

Based on the innovative architecture of Fuyu-8B, the research team proposed the OtterHD-8B model, which can effectively handle images of various resolutions, freeing it from the fixed-resolution input constraint of most LMMs.

At the same time, OtterHD-8B delivers excellent performance when processing high-resolution images.

This is especially evident on the new MagnifierBench benchmark, whose purpose is to evaluate an LMM's ability to recognize details in complex scenes, underscoring the importance of more flexible support for different resolutions.
