
Google and MIT's latest research shows: obtaining high-quality data is not difficult, large models are the solution

WBOY
Release: 2024-01-14 20:30:25

Obtaining high-quality data has become a major bottleneck in training today's large models.

A few days ago, the New York Times sued OpenAI, seeking billions of dollars in damages. The complaint lists numerous pieces of evidence of plagiarism by GPT-4.

The New York Times even called for the destruction of GPT and almost all other large models.


Many big names in the AI industry have long believed that "synthetic data" may be the best solution to this problem.


Earlier, a Google team also proposed RLAIF, a method that uses an LLM in place of humans for preference labeling, with results that are no worse than human annotation.


Now, researchers from Google and MIT have found that representations learned from data generated by large models can match those of strong models trained on real data.

This new method, called SynCLR, learns visual representations entirely from synthetic images and synthetic captions, without any real data.


Paper address: https://arxiv.org/abs/2312.17742

Experimental results show that the representations learned with SynCLR transfer to ImageNet as well as those of OpenAI's CLIP.


Learning from generative models

The most effective current methods for learning "visual representations" rely on large-scale real-world datasets. However, collecting real data involves many difficulties.

To reduce the cost of data collection, the researchers in this paper ask:

Is synthetic data, sampled from off-the-shelf generative models, a viable path toward large-scale datasets for training state-of-the-art visual representations?


In contrast to learning directly from data, the Google researchers call this paradigm "learning from models." As a source of data for building large-scale training sets, models have several advantages:

- They provide new ways to control the data via latent variables, conditioning variables, and hyperparameters.

- Models are also easier to share and store (since models are easier to compress than data), and they can produce an unlimited number of data samples.

A growing body of literature explores these properties, as well as other advantages and disadvantages of generative models as a data source for training downstream models.

Some of these methods use a hybrid approach, i.e. mixing real and synthetic datasets, or require one real dataset in order to generate another, synthetic one.

Other methods try to learn representations from purely "synthetic data" but lag far behind the best-performing models.

The method the researchers propose in this paper uses a generative model to redefine the granularity of visual classes.

As shown in Figure 2, four images were generated using two prompts: "A golden retriever, wearing sunglasses and a beach hat, rides a bike" and "A cute golden retriever sits in a house made of sushi."


Traditional self-supervised methods (such as SimCLR) treat each of these images as a different class, pushing the embeddings of different images apart without explicitly accounting for the semantics shared between images.

At the other extreme, supervised learning methods (i.e. SupCE) treat all of these images as a single class (such as "golden retriever"), ignoring semantic nuances such as the dog riding a bicycle in one pair of images and sitting in a sushi house in the other.

In contrast, the SynCLR approach treats descriptions as classes, i.e. one visual class per description.

In this way, the images can be grouped by the two concepts "riding a bicycle" and "sitting in a sushi house."

This kind of granularity is difficult to mine from real data, because collecting multiple images for a given description is not trivial, especially as the number of descriptions grows.

However, the text-to-image diffusion model fundamentally has this capability.

By simply conditioning on the same description and using different noise inputs, a text-to-image diffusion model can generate different images that match the same description.

Specifically, the authors study the problem of learning visual encoders without real image or text data.

The method relies on three key resources: a language generative model (g1), a text-to-image generative model (g2), and a curated list of visual concepts (C).

Pre-processing includes three steps:

(1) Use g1 to synthesize a large set of image descriptions T covering the various visual concepts in C;

(2) For each caption in T, use g2 to generate multiple images, ultimately producing an extensive synthetic image dataset X;

(3) Train on X to obtain the visual representation encoder f.

The authors use Llama-2 7B and Stable Diffusion 1.5 as g1 and g2, respectively, because of their fast inference speed.
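The three-step pipeline can be sketched as follows. This is a minimal illustration with placeholder generators standing in for Llama-2 (g1) and Stable Diffusion (g2); the function names, templates, and toy outputs are assumptions for illustration, not the authors' code.

```python
import random

def g1_describe(concept: str, rng: random.Random) -> str:
    """Placeholder for the language model (g1): turn a visual concept
    into an image caption. A real pipeline would prompt Llama-2 7B."""
    templates = [
        f"a photo of a {concept}",
        f"a {concept} in a natural outdoor scene",
        f"a close-up picture of a {concept}",
    ]
    return rng.choice(templates)

def g2_generate(caption: str, n_images: int, rng: random.Random) -> list:
    """Placeholder for the text-to-image model (g2): produce n_images
    'images' per caption. Real code would run Stable Diffusion 1.5 with
    different noise seeds; here each image is just a (caption, seed) pair."""
    return [(caption, rng.randrange(10**9)) for _ in range(n_images)]

def build_synthetic_dataset(concepts, captions_per_concept=2,
                            images_per_caption=4, seed=0):
    """Steps (1)-(3): synthesize captions T, then images X, grouped by
    caption so a contrastive learner can later treat each caption as
    one visual class."""
    rng = random.Random(seed)
    captions = [g1_describe(c, rng)
                for c in concepts
                for _ in range(captions_per_concept)]           # step (1): T
    dataset = {cap: g2_generate(cap, images_per_caption, rng)
               for cap in captions}                             # step (2): X
    return dataset                                              # step (3) trains f on X

data = build_synthetic_dataset(["golden retriever", "airplane"])
print(len(data))  # number of distinct captions
```

Grouping the samples by caption here is what later lets the loss treat "one description = one visual class."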

Synthetic descriptions

To harness the power of text-to-image models to generate large sets of training images, one first needs a collection of descriptions that not only describe images accurately but are also diverse enough to cover a wide range of visual concepts.

To this end, the authors developed a scalable method for creating such a large set of descriptions, leveraging the in-context learning capabilities of large models.

The following shows three examples of synthetic templates.


The following are in-context descriptions generated with Llama-2. The researchers randomly sampled three in-context examples for each inference run.
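The prompt for caption synthesis can be assembled by sampling three in-context examples per run, roughly as below. The example pairs and prompt wording are illustrative assumptions; only the "sample three in-context examples at random" structure comes from the paper.

```python
import random

# Hypothetical (concept -> caption) pairs used as in-context examples.
EXAMPLE_POOL = [
    ("sea lion", "a sea lion barking on a rocky pier at sunset"),
    ("espresso machine", "a chrome espresso machine steaming milk in a cafe"),
    ("maple tree", "a maple tree with bright red leaves beside a country road"),
    ("ferris wheel", "a ferris wheel lit up at night over a carnival"),
    ("violin", "a violin resting on sheet music in a sunlit room"),
]

def build_prompt(concept: str, rng: random.Random) -> str:
    """Sample three in-context examples and append the target concept,
    mimicking how an LLM like Llama-2 is prompted to emit a caption."""
    shots = rng.sample(EXAMPLE_POOL, k=3)
    lines = [f"{c} => {caption}" for c, caption in shots]
    lines.append(f"{concept} =>")  # the model completes this line
    return "\n".join(lines)

prompt = build_prompt("golden retriever", random.Random(42))
print(prompt)
```

Resampling the shots on every call keeps the generated captions from collapsing onto a single style.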


Synthetic images

For each text description, the researchers start the reverse diffusion process from different random noise, producing a variety of images.

In this process, the classifier-free guidance (CFG) scale is a key factor.

The higher the CFG scale, the better the sample quality and the text-image consistency; the lower the scale, the greater the sample diversity, i.e. the closer the samples are to the model's original conditional distribution of images given the text.
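Classifier-free guidance combines conditional and unconditional noise predictions, with the scale w extrapolating toward the condition. A minimal numpy sketch of the standard CFG update (not the authors' code):

```python
import numpy as np

def cfg_noise(eps_uncond: np.ndarray, eps_cond: np.ndarray, w: float) -> np.ndarray:
    """Classifier-free guidance: w = 1 recovers the plain conditional
    prediction; larger w pushes samples harder toward the text
    condition (better alignment, less diversity)."""
    return eps_uncond + w * (eps_cond - eps_uncond)

eps_u = np.array([0.0, 0.0])   # unconditional noise prediction
eps_c = np.array([1.0, -1.0])  # text-conditional noise prediction
print(cfg_noise(eps_u, eps_c, 1.0))  # equals eps_c
print(cfg_noise(eps_u, eps_c, 7.5))  # amplified toward the condition
```

In a sampler this combined prediction replaces the raw conditional one at every denoising step.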


Representation learning

In the paper, the representation learning method is built on StableRep.

The key component of the proposed method is a multi-positive contrastive learning loss, which works by aligning (in embedding space) images generated from the same description.
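A numpy sketch of a multi-positive contrastive loss in the spirit of StableRep: images sharing a caption are positives for one another, so the target over each similarity row is a uniform distribution over same-caption entries rather than a single one-hot index. This is an illustrative reimplementation, not the paper's code.

```python
import numpy as np

def multi_positive_loss(z: np.ndarray, caption_ids: np.ndarray,
                        tau: float = 0.1) -> float:
    """z: (n, d) L2-normalized embeddings; caption_ids: (n,) ints, where
    images with the same id were generated from the same caption.
    Cross-entropy between softmax similarities and the normalized
    multi-positive target, excluding self-comparisons."""
    sim = z @ z.T / tau
    np.fill_diagonal(sim, -np.inf)                 # never contrast with self
    logp = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    pos = (caption_ids[:, None] == caption_ids[None, :]).astype(float)
    np.fill_diagonal(pos, 0.0)
    target = pos / pos.sum(axis=1, keepdims=True)  # uniform over positives
    return float(-(target * np.where(target > 0, logp, 0.0)).sum(axis=1).mean())

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
z /= np.linalg.norm(z, axis=1, keepdims=True)
ids = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # two captions, 4 images each
print(round(multi_positive_loss(z, ids), 4))
```

When embeddings collapse perfectly onto their caption clusters, the loss approaches log(k-1) for k images per caption, its minimum under this normalization.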

In addition, various techniques from other self-supervised learning methods were also combined in the study.

Comparable to OpenAI’s CLIP

In the experimental evaluation, the researchers first conducted an ablation study to evaluate the effectiveness of various designs and modules within the pipeline, and then continued to expand the amount of synthetic data.

The following figure is a comparison of different description synthesis strategies.

The researchers report ImageNet linear-evaluation accuracy and average accuracy on 9 fine-grained datasets. Each entry uses 10 million descriptions with 4 images per description.
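Linear evaluation ("linear probing") freezes the encoder and fits only a linear classifier on its features. A self-contained numpy sketch on toy features, standing in for ImageNet-scale evaluation (the data and hyperparameters here are illustrative assumptions):

```python
import numpy as np

def linear_probe(feats, labels, n_classes, lr=0.5, steps=200):
    """Fit a linear softmax classifier on frozen features by gradient
    descent and return its accuracy on the same data (standing in for
    held-out linear-evaluation accuracy)."""
    n, d = feats.shape
    W = np.zeros((d, n_classes))
    onehot = np.eye(n_classes)[labels]
    for _ in range(steps):
        logits = feats @ W
        logits -= logits.max(axis=1, keepdims=True)   # numerical stability
        p = np.exp(logits)
        p /= p.sum(axis=1, keepdims=True)
        W -= lr * feats.T @ (p - onehot) / n          # cross-entropy gradient
    return (np.argmax(feats @ W, axis=1) == labels).mean()

rng = np.random.default_rng(0)
# Toy 'frozen encoder' features: one Gaussian cluster per class.
feats = np.concatenate([rng.normal(-1, 0.3, size=(50, 8)),
                        rng.normal(+1, 0.3, size=(50, 8))])
labels = np.array([0] * 50 + [1] * 50)
print(linear_probe(feats, labels, n_classes=2))
```

Because only the linear layer is trained, this protocol measures how linearly separable the frozen representation already is.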


The following table is a comparison of ImageNet linear evaluation and fine-grained classification.

Despite using only synthetic data, SynCLR achieved comparable results to OpenAI’s CLIP and DINO v2 models.


The following table compares SynCLR and CLIP on the same synthetic data. It can be seen that SynCLR is significantly better than CLIP.

The specific setting generates 4 images per caption. SynCaps-150M yields better representations for both SynCLR and CLIP.


The PCA visualization is shown below. Following DINO v2, the researchers computed PCA between patches of images in the same set and colored the patches according to their first 3 components.
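The patch-PCA visualization can be sketched as: stack patch features from a set of images, compute a PCA, and map the first three components to RGB. A minimal numpy version on random stand-in features (the real pipeline would use ViT patch tokens, as in DINO v2):

```python
import numpy as np

def patch_pca_rgb(patch_feats: np.ndarray) -> np.ndarray:
    """patch_feats: (n_patches, d) features pooled across a set of
    images. Project onto the top-3 principal components and rescale
    each component to [0, 1] for use as an RGB color per patch."""
    centered = patch_feats - patch_feats.mean(axis=0)
    # SVD gives the principal directions without forming the covariance.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    proj = centered @ vt[:3].T                     # (n_patches, 3)
    lo, hi = proj.min(axis=0), proj.max(axis=0)
    return (proj - lo) / (hi - lo + 1e-8)

rng = np.random.default_rng(0)
feats = rng.normal(size=(196, 64))   # e.g. 14x14 patches, 64-dim features
rgb = patch_pca_rgb(feats)
print(rgb.shape)
```

Computing the PCA jointly over a set of related images is what makes matching parts (wheels, wings) receive similar colors across images.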

Compared with DINO v2, SynCLR's maps are more accurate for the car and airplane images, though slightly worse in some other cases.


Figures 6 and 7 show, respectively, ImageNet linear accuracy at different training scales and fine-grained classification accuracy at different model-parameter scales.


Why learn from generative models?

One compelling reason is that generative models can stand in for hundreds of datasets simultaneously, providing a convenient and efficient way to curate training data.

In summary, the latest paper investigates a new paradigm of visual representation learning - learning from generative models.

Without using any actual data, SynCLR learns visual representations that are comparable to those learned by state-of-the-art general-purpose visual representation learners.

Source: 51cto.com