
# New work by Andrew Ng's team: multi-modal and many-shot in-context learning, quickly adapting to new tasks without fine-tuning

Jun 19, 2024, 08:58 PM

The AIxiv column is where this site publishes academic and technical content. Over the past few years, it has received more than 2,000 submissions covering top laboratories at major universities and companies around the world, effectively promoting academic exchange and dissemination. If you have excellent work you would like to share, please feel free to submit it or contact us for coverage. Submission email: liyazhou@jiqizhixin.com; zhaoyunfeng@jiqizhixin.com

This study evaluates state-of-the-art multimodal foundation models with many-shot in-context learning on 10 datasets, revealing sustained performance improvements. Batched queries significantly reduce per-example latency and inference cost without sacrificing performance. These findings demonstrate that leveraging a large set of demonstration examples enables rapid adaptation to new tasks and domains without traditional fine-tuning.


  • Paper address: https://arxiv.org/abs/2405.09798
  • Code address: https://github.com/stanfordmlgroup/ManyICL

## Background

In recent research on multimodal foundation models, in-context learning (ICL) has proven to be an effective way to improve model performance.

However, foundation models are constrained by context length, and multimodal models in particular require many visual tokens to represent images, so existing work has been limited to providing only a few examples in context.

Excitingly, recent technological advances have greatly increased model context lengths, opening up the possibility of in-context learning with many more examples.

Against this backdrop, the latest work from Andrew Ng's Stanford team, ManyICL, evaluates state-of-the-art multimodal foundation models on in-context learning from the few-shot regime (fewer than 100 examples) to the many-shot regime (up to about 2,000 examples). Testing on datasets spanning multiple domains and tasks, the team verified that many-shot in-context learning significantly improves model performance, and explored the impact of batched queries on performance, cost, and latency.
Figure: Comparison of many-shot ICL with zero-shot and few-shot ICL.

## Overview of Methods

This study selected three state-of-the-art multimodal foundation models: GPT-4o, GPT-4(V)-Turbo, and Gemini 1.5 Pro. Because GPT-4o performed best, the team focuses on GPT-4o and Gemini 1.5 Pro in the main text; results for GPT-4(V)-Turbo appear in the appendix. For data, the team collected 10 datasets spanning different domains (including natural imaging, medical imaging, remote sensing imaging, and molecular imaging) and tasks (including multi-class, multi-label, and fine-grained classification), and conducted extensive experiments on them.

Figure: Benchmark dataset summary.

To test the impact of increasing the number of examples on model performance, the research team gradually increased the number of examples provided in context, up to nearly 2,000. Given the high cost and latency of many-shot learning, the team also explored the effect of batching queries, where a batch query processes multiple queries in a single API call.
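The prompt construction described above can be sketched as follows. This is an illustrative mock-up, not the team's actual ManyICL code: the `build_many_shot_prompt` helper and the `<image:...>` placeholder syntax are assumptions made here; a real client would attach base64-encoded images or file references through the provider's API.

```python
# Illustrative sketch of a many-shot multimodal prompt: N demonstration
# (image, label) pairs are followed by a batch of queries that the model
# answers in a single API call. Image payloads are placeholder strings.

def build_many_shot_prompt(demos, queries):
    """demos: list of (image_ref, label) pairs; queries: list of image_refs."""
    parts = ["You are an image classifier. Answer each query with a label."]
    # Many-shot demonstrations: up to ~2,000 labeled examples in context.
    for i, (image, label) in enumerate(demos, 1):
        parts.append(f"Example {i}: <image:{image}> Label: {label}")
    # Batch query: multiple test images answered in one request.
    for j, image in enumerate(queries, 1):
        parts.append(f"Query {j}: <image:{image}> Label:")
    return "\n".join(parts)

prompt = build_many_shot_prompt(
    demos=[("cat01.jpg", "cat"), ("dog07.jpg", "dog")],
    queries=["test01.jpg", "test02.jpg"],
)
```

The same demonstration set is reused across all queries in the batch, which is exactly what lets the fixed context cost be shared.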

## Experimental Results

### Many-shot in-context learning performance

Overall performance: many-shot in-context learning with nearly 2,000 examples outperforms few-shot learning on all datasets. Gemini 1.5 Pro improves consistently and log-linearly as the number of examples increases, while GPT-4o's performance is less stable.


Data efficiency: the study measured each model's in-context learning data efficiency, i.e., how quickly the model learns from examples. Gemini 1.5 Pro shows higher data efficiency than GPT-4o on most datasets, meaning it learns from examples more effectively.
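The log-linear trend can be turned into a simple data-efficiency number by fitting accuracy against the logarithm of the shot count; the slope indicates how much accuracy is gained per multiplicative increase in examples. A minimal sketch with illustrative accuracy values (not figures from the paper):

```python
import math

def log_linear_slope(shots, accuracies):
    """Least-squares slope of accuracy vs. log(shot count).
    A larger slope means the model gains more from each added (log) example."""
    xs = [math.log(s) for s in shots]
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(accuracies) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, accuracies))
    var = sum((x - mean_x) ** 2 for x in xs)
    return cov / var

# Illustrative accuracies at 10, 100, and 1000 shots (assumed, not measured):
slope = log_linear_slope([10, 100, 1000], [0.60, 0.70, 0.80])
# Here the slope corresponds to about +0.10 accuracy per 10x more examples.
```

Comparing this slope across models is one way to say that one model is more data-efficient in context than another.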


### Impact of batch queries

Overall performance: under the optimal demonstration set size, combining multiple queries into a single request does not degrade performance in either the zero-shot or many-shot setting. Notably, single queries perform poorly on many datasets in the zero-shot setting, and batching queries can even improve performance there.


Performance improvement in the zero-shot setting: for some datasets (such as UCMerced), batch queries significantly improve zero-shot performance. The research team attributes this mainly to domain calibration, class calibration, and self-ICL.


### Cost and latency analysis

Although many-shot in-context learning requires processing a longer input context at inference time, batching queries significantly lowers per-example latency and inference cost. For example, on the HAM10000 dataset, batching 350 queries with Gemini 1.5 Pro reduced per-example latency from 17.3 seconds to 0.54 seconds and per-example cost from $0.842 to $0.0877.
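The savings follow from amortization: the fixed cost of sending the many-shot context is paid once per request, so batching B queries divides it by B. A hypothetical sketch; the split between fixed context cost and marginal per-query cost below is assumed for illustration, not taken from the paper:

```python
def per_example_cost(context_cost, per_query_cost, batch_size):
    """Total cost of one request, split across the queries it answers."""
    return (context_cost + per_query_cost * batch_size) / batch_size

# Hypothetical split: $0.80 to send the many-shot context once, plus
# $0.042 of marginal cost per query (both values assumed for illustration).
unbatched = per_example_cost(0.80, 0.042, batch_size=1)   # context re-sent for every query
batched = per_example_cost(0.80, 0.042, batch_size=350)   # context shared by 350 queries
```

With these assumed numbers, the unbatched cost is $0.842 per example while the batched cost falls to roughly $0.044, showing how the fixed context term vanishes as the batch grows.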


## Conclusion

The results show that many-shot in-context learning can significantly improve the performance of state-of-the-art multimodal foundation models. Gemini 1.5 Pro in particular showed continued gains across multiple datasets, allowing it to adapt to new tasks and domains more effectively without traditional fine-tuning.

Second, batching queries can reduce inference cost and latency while achieving similar or even better model performance, showing great potential for practical applications.

Overall, this research by Andrew Ng's team opens a new path for applying multimodal foundation models, especially for rapid adaptation to new tasks and domains.
