


UC Berkeley develops a large general visual reasoning model, with three senior scholars joining forces on the research
How far can we go with visual (pixel) models alone? A new paper from UC Berkeley and Johns Hopkins University explores this question and demonstrates the potential of large vision models (LVMs) on a variety of CV tasks.
In recent times, large language models (LLMs) such as GPT and LLaMA have become popular around the world.
Building large vision models (LVMs) is a problem of great interest. What do we need to achieve it?
The ideas provided by vision-language models such as LLaVA are interesting and worth exploring, but from the animal kingdom we already know that visual ability and language ability are not dependent on each other. For example, many experiments have shown that the visual world of non-human primates is very similar to that of humans, even though their language systems are nothing like ours.
A recent paper explores the answer to a different question: how far can we go with pixels alone? The paper was written by researchers from the University of California, Berkeley, and Johns Hopkins University.
Paper link: https://arxiv.org/abs/2312.00785
Project homepage: https://yutongbai.com/lvm.html
The key LLM features that the researchers try to emulate in LVMs are: 1) scaling with the growth of data, and 2) flexibly specifying tasks through prompts (in-context learning).
They specify three main components: data, architecture, and loss function.
In terms of data, the researchers want to take advantage of the remarkable diversity of visual data. They start with just unannotated raw images and videos, then leverage the various annotated visual data sources produced over the past few decades (including semantic segmentation, depth reconstruction, keypoints, multi-view 3D objects, etc.). They define a common format, the "visual sentence", to represent these different annotations without requiring any meta-knowledge beyond pixels. The total size of the training set is 1.64 billion images/frames.
In terms of architecture, the researchers use a large transformer (3 billion parameters) trained on visual data represented as token sequences, with a learned tokenizer that maps each image to a string of 256 vector-quantized tokens.
Regarding the loss function, the researchers draw inspiration from the natural language community, where masked token modeling has "given way" to sequential autoregressive prediction. Once images, videos, and annotated images can all be represented as sequences, the model can be trained to minimize the cross-entropy loss for predicting the next token.
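To make the training objective concrete, here is a minimal PyTorch sketch of next-token cross-entropy over a visual-token sequence (the `model` interface and function names are illustrative assumptions, not the paper's code):

```python
import torch.nn.functional as F

def next_token_loss(model, tokens):
    """Autoregressive cross-entropy: predict token t+1 from tokens up to t.

    tokens: LongTensor of shape (batch, seq_len) holding the discrete
    visual-token ids produced by the image tokenizer.
    """
    inputs, targets = tokens[:, :-1], tokens[:, 1:]  # shift targets one step
    logits = model(inputs)                           # (batch, seq_len - 1, vocab)
    return F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),         # flatten batch and time
        targets.reshape(-1),
    )
```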
Through this extremely simple design, the researchers demonstrated the following noteworthy behaviors:
As model size and data size increase, the model automatically demonstrates appropriate scaling behavior;
Many different vision tasks can now be solved by designing suitable prompts at test time. While the results do not match those of custom, specially trained models, the fact that a single vision model can solve so many tasks is very encouraging;
Supervised data contributes significantly to performance on a variety of vision tasks;
There are already signs of general visual reasoning capability when processing out-of-distribution data and performing new tasks, but further research is still needed.
The co-author of the paper, Yutong Bai, a fourth-year CS doctoral student at Johns Hopkins University and a visiting doctoral student at Berkeley, tweeted to promote their work.
Image source: the Twitter account https://twitter.com/YutongBAI1002/status/1731512110247473608
Among the paper's authors, the last three are senior UC Berkeley scholars in the CV field. Professor Trevor Darrell is the founding co-director of BAIR, the Berkeley Artificial Intelligence Research lab; Professor Jitendra Malik won the 2019 IEEE Computer Pioneer Award; and Professor Alexei A. Efros is especially well known for his nearest-neighbor research. From left to right: Trevor Darrell, Jitendra Malik, Alexei A. Efros.
Method introduction
The paper uses a two-stage approach: 1) train a large visual tokenizer (operating on single images) that can convert each image into a sequence of visual tokens; 2) train an autoregressive transformer model on visual sentences, each represented as a sequence of tokens. The method is shown in Figure 2.
Image Tokenization
To apply a Transformer model to images, typical approaches either divide the image into patches and treat them as a sequence, or use a pretrained image tokenizer, such as VQVAE or VQGAN, to aggregate image features into a grid of discrete tokens. This paper adopts the latter approach, using a VQGAN model to generate semantic tokens.
The LVM framework includes encoding and decoding mechanisms as well as a quantization layer, with the encoder and decoder built from convolutional layers. The encoder is equipped with multiple downsampling modules that shrink the spatial dimensions of the input, while the decoder is equipped with a matching series of upsampling modules that restore the image to its original size. For a given image, the VQGAN tokenizer generates 256 discrete tokens.
The VQGAN architecture in this paper adopts the implementation details proposed by Chang et al. and follows their setup. Specifically, the downsampling factor is f=16 and the codebook size is 8192. This means that for a 256×256 image, the VQGAN tokenizer generates 16×16=256 tokens, each of which can take 8192 different values. In addition, the tokenizer was trained on a 1.5B-image subset of the LAION-5B dataset.
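The token count follows directly from the downsampling factor. A small illustrative sketch of the arithmetic (the function name is ours, not the paper's):

```python
def vqgan_token_count(height: int, width: int, downsample_factor: int = 16) -> int:
    """Number of discrete tokens a VQGAN with downsampling factor f emits."""
    return (height // downsample_factor) * (width // downsample_factor)

assert vqgan_token_count(256, 256) == 256  # a 16 x 16 grid of tokens
# Each token indexes a codebook of 8192 entries, i.e. 13 bits per token,
# so a 256 x 256 image is represented by 256 ids drawn from 8192 values.
```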
Visual sentence sequence modeling
After using VQGAN to convert images into discrete tokens, the paper concatenates the tokens of multiple images into a one-dimensional sequence and treats the visual sentence as a unified sequence. Importantly, visual sentences receive no special handling: no special tokens are used to indicate a specific task or format.
The function of visual sentences is to format different kinds of visual data into a unified image-sequence structure.
Implementation details. After tokenizing each image in a visual sentence into 256 tokens, the paper concatenates them into a 1D token sequence. On visual token sequences, the Transformer model works essentially the same way as an autoregressive language model, so the authors adopt LLaMA's Transformer architecture.
The model uses a context length of 4096 tokens, similar to language models. A [BOS] (beginning of sentence) token is added at the start of each visual sentence and an [EOS] (end of sentence) token at the end, and sequences are concatenated during training to improve efficiency.
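Putting the pieces together, a visual sentence is just a flat sequence of ids. Below is a minimal sketch of the assembly step, assuming hypothetical special-token ids and a `tokenize` function that returns 256 ids per image (both are our assumptions for illustration):

```python
BOS_ID, EOS_ID = 8192, 8193  # hypothetical ids placed after the 8192-entry codebook
CONTEXT_LEN = 4096           # context window used in the paper

def make_visual_sentence(images, tokenize):
    """Concatenate per-image token lists into one 1D training sequence.

    images: iterable of images; tokenize(img) -> list of 256 token ids.
    No task- or format-specific special tokens are inserted between images.
    """
    ids = [BOS_ID]
    for img in images:
        ids.extend(tokenize(img))  # 256 ids per image
    ids.append(EOS_ID)
    assert len(ids) <= CONTEXT_LEN, "a sentence must fit the context window"
    return ids
```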
The models were trained on the entire UVDv1 dataset (420 billion tokens), in four sizes: 300 million, 600 million, 1 billion, and 3 billion parameters.
Experimental results
The study conducted experiments to evaluate the model's scaling behavior and its ability to understand and solve a variety of tasks.
Scaling
As shown in Figure 3, the study first examined the training loss of LVMs of different sizes.
As shown in Figure 4 below, the larger model achieves lower perplexity on all tasks, indicating that the model's overall performance transfers to a range of downstream tasks.
As shown in Figure 5, each data component has an important impact on downstream tasks. LVM not only benefits from more data, but also improves with the diversity of the dataset.
Sequential prompts
To test LVM's ability to understand various prompts, the study first evaluated LVM on sequential reasoning tasks, where the prompt is very simple: provide the model with a sequence of 7 images and ask it to predict the next image. The experimental results are shown in Figure 6 below:
The study also treats a list of items from a given category as a sequence and has LVM predict images of the same category. The experimental results are shown in Figure 15 below:
So, how much context is needed to accurately predict subsequent frames?
The study evaluated the model's frame-generation perplexity given contextual prompts of varying lengths (1 to 15 frames). The results show that perplexity improves as the number of frames increases: it drops markedly from 1 to 11 frames and then stabilizes (62.1 → 48.4). The specific data is shown in Figure 7 below.
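In sketch form, test-time frame prediction is ordinary autoregressive continuation: feed the tokens of the context frames, decode 256 new tokens, and map them back to pixels. The model API, `tokenize`, and `detokenize` below are illustrative assumptions:

```python
BOS_ID = 8192  # hypothetical special-token id, as in the earlier sketch

def predict_next_frame(model, context_frames, tokenize, detokenize,
                       tokens_per_image=256):
    """Prompt with K context frames, then generate one frame's worth of tokens."""
    ids = [BOS_ID]
    for frame in context_frames:                 # e.g. 1 to 15 prompt frames
        ids.extend(tokenize(frame))
    for _ in range(tokens_per_image):            # generate exactly one image
        logits = model(ids)                      # per-position next-token scores
        ids.append(int(logits[-1].argmax()))     # greedy choice at last position
    return detokenize(ids[-tokens_per_image:])   # decode 256 ids back to pixels
```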
Analogy prompts
The study also tested LVM's more advanced interpretation capabilities by evaluating more complex prompt structures, namely analogy prompts.
Figure 8 below shows qualitative results of analogy prompting on a number of tasks:
Comparison with visual prompting shows that sequential LVM outperforms previous methods on almost all tasks.
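An analogy prompt is built the same way as any other visual sentence: a few (input, annotation) example pairs followed by a query input, after which the model is expected to emit the query's annotation. An illustrative sketch (all names are our assumptions):

```python
BOS_ID = 8192  # hypothetical special-token id, as in the earlier sketches

def make_analogy_prompt(example_pairs, query, tokenize):
    """Few-shot analogy prompt: (x1, y1), (x2, y2), ..., then the query x.

    example_pairs: (input image, annotation image) pairs, e.g. a photo and
    its segmentation map. The model should continue the sequence with the
    tokens of the annotation corresponding to `query`.
    """
    ids = [BOS_ID]
    for x, y in example_pairs:   # demonstrate the task by example
        ids.extend(tokenize(x))
        ids.extend(tokenize(y))
    ids.extend(tokenize(query))  # the model completes the query's annotation
    return ids
```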
Composite tasks. Figure 9 shows the results of combining multiple tasks in a single prompt.
Other prompts
The researchers also tried giving the model various prompts it had never seen before, to observe how well it generalizes. Figure 10 below shows some such prompts that work well.
Figure 11 below shows some prompts that are difficult to describe in words; on such tasks, LVM may eventually outperform LLM.
On non-verbal human IQ tests, Figure 13 shows preliminary qualitative results for a typical visual reasoning question.
Read the original article for more details.
