
16 first-author papers in three years: former Google research scientist Yi Tay officially announces a new model, with 21B parameters rivaling Gemini Pro and GPT-3.5

王林
Release: 2024-02-15 18:45:28
The team's new model matches Gemini Pro and GPT-3.5 across multiple benchmarks.

If you often read papers on large AI models, Yi Tay should be a familiar name. As a former senior research scientist at Google Brain, Yi Tay contributed to many well-known large language models and multimodal models, including PaLM, UL2, Flan-U-PaLM, LaMDA/Bard, ViT-22B, PaLI, and MUM.

According to his personal page, during more than three years at Google Brain, Yi Tay co-authored a total of about 45 papers and was first author on 16 of them. His first-author papers include UL2, U-PaLM, DSI, Synthesizer, Charformer, and Long Range Arena.
Like many Transformer authors who left Google to start their own ventures, Yi Tay announced his departure from Google in March last year and co-founded a company called Reka, where he serves as chief scientist, focusing on large language models.

Just now, Yi Tay announced that they have released a new model:
"Very happy Share with you Reka Flash, a new 21B multi-modal model with SOTA performance that is comparable to Gemini Pro and GPT 3.5 on language and visual benchmarks. We started from scratch with relatively limited resources Training this model... At the same time, our largest and most powerful model Reka-Core is also about to be completed. You can look forward to our next work."

Reka Flash: an efficient multimodal language model

Reka Flash has 21B parameters and was trained entirely from scratch. Its performance rivals that of much larger models: Reka Flash is competitive with Gemini Pro and GPT-3.5 on numerous language and vision benchmarks.

The Reka team also introduced a more compact variant, Reka Edge, which has only 7B parameters and is more efficient, allowing it to run in resource-constrained settings (e.g., on-device, locally).

It is worth mentioning that both models are in public beta, and interested readers can try them now.

Trial address: https://chat.reka.ai/auth/login

The Reka team also announced that its largest and most powerful model, Reka Core, will be available to the public in the coming weeks.

As for open-sourcing the models, the team says that is still under consideration.
Evaluation: Language

The evaluation benchmarks include MMLU (knowledge-based question answering), GSM8K (reasoning and mathematics), HumanEval (code generation), and GPQA (Google-proof graduate-level question answering).
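Reka has not released its evaluation code, but benchmarks like GSM8K are conventionally scored by extracting the final answer from the model's generated reasoning and comparing it against the reference with exact match. Below is a minimal sketch of that scoring convention; the function names and sample outputs are purely illustrative.

```python
import re

def extract_final_answer(completion: str) -> str | None:
    # GSM8K-style scoring conventionally keys on the last number in the
    # model's chain-of-thought output, ignoring intermediate steps.
    numbers = re.findall(r"-?\d+(?:\.\d+)?", completion.replace(",", ""))
    return numbers[-1] if numbers else None

def exact_match_accuracy(completions: list[str], references: list[str]) -> float:
    # Fraction of examples whose extracted answer matches the gold answer.
    correct = sum(
        extract_final_answer(c) == ref.strip()
        for c, ref in zip(completions, references)
    )
    return correct / len(references)

# Hypothetical model outputs and gold answers
preds = ["12 apples plus 30 apples is 42. The answer is 42.", "The total is 7."]
golds = ["42", "8"]
print(exact_match_accuracy(preds, golds))  # 0.5
```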

The results show that Reka Flash performs very well on these benchmarks: it beats Gemini Pro on MMLU and GPQA, and delivers competitive results on GSM8K and HumanEval. Furthermore, on these evaluations Reka Flash significantly outperforms many larger models (e.g., Llama 2 70B, Grok-1, GPT-3.5).
Evaluation: Multilingual Reasoning

Reka Flash was trained on text in over 32 languages, making it a strong multilingual model. The researchers compared different models on multilingual benchmarks covering multilingual commonsense reasoning, causal reasoning, and question answering. The results show that Reka Flash outperforms Llama 2 70B and Mixtral on all of these tasks.
Evaluation: Vision and Video

The study also evaluated Reka Flash on multimodal benchmarks, including visual question answering (MMMU, VQA-v2), video captioning (VATEX), and video question answering (Perception Test). The results show that Reka Flash is competitive with Gemini Pro on all four benchmarks.
The study also conducted a series of human evaluations of chat models built on Reka Flash, considering two settings: 1) a text-only chat model and 2) a multimodal chat model. Following the method of Askell et al., the researchers computed Elo scores and overall win rates, roughly as sketched below.
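The exact evaluation pipeline is not public; the following is a minimal sketch of how Elo ratings and win rates are conventionally derived from pairwise human preference judgments. The model names and hyperparameters here are illustrative, not Reka's actual settings.

```python
import random
from collections import defaultdict

def compute_elo(battles, k=4, init=1000, scale=400, passes=10, seed=0):
    # battles: (model_a, model_b, winner) with winner in {"a", "b", "tie"}.
    # Online Elo is order-sensitive, so average over several shuffled passes.
    rng = random.Random(seed)
    battles = list(battles)
    totals = defaultdict(float)
    for _ in range(passes):
        ratings = defaultdict(lambda: init)
        rng.shuffle(battles)
        for a, b, winner in battles:
            # Expected score of model a given the current rating gap.
            expected_a = 1 / (1 + 10 ** ((ratings[b] - ratings[a]) / scale))
            score_a = {"a": 1.0, "b": 0.0, "tie": 0.5}[winner]
            ratings[a] += k * (score_a - expected_a)
            ratings[b] += k * ((1 - score_a) - (1 - expected_a))
        for model, rating in ratings.items():
            totals[model] += rating / passes
    return dict(totals)

def win_rate(battles, model):
    # Overall win rate over battles involving the model; ties count half.
    scores = [
        1.0 if (w == "a" and a == model) or (w == "b" and b == model)
        else 0.5 if w == "tie" else 0.0
        for a, b, w in battles if model in (a, b)
    ]
    return sum(scores) / len(scores)

# Hypothetical preference log between two chat models
log = [("reka-flash", "model-x", "a")] * 6 + [("reka-flash", "model-x", "b")] * 4
print(compute_elo(log), win_rate(log, "reka-flash"))  # win rate: 0.6
```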

Text-only chat: the researchers benchmarked against leading models such as GPT-4, Claude 2.1, and Gemini Pro (API version), and also compared the Reka Edge, Mistral 7B, and Llama 2 7B chat models.

The human evaluation results show that Reka Flash achieves competitive results, outperforming GPT-3.5 Turbo, Claude, Mixtral, and Gemini Pro. Reka Edge leads the other two 7B models and approaches the performance of Claude Instant 1.2.
Evaluation: Multimodality

The study also compared Reka Flash with multimodal language models including GPT-4V, Gemini Pro, LLaVA-1.6, IDEFICS 80B, and Adept Fuyu-8B. The results show that Reka Flash outperforms all of these models except GPT-4V. Reka Edge also ranked well, surpassing the Mistral 7B-based LLaVA-1.6 7B and approaching the performance of Gemini Pro.
Reka Edge: a 7B-parameter model

Reka Edge is a more compact 7B model designed for local deployment and latency-sensitive applications. On language evaluation tasks, the study reports comparisons with similarly sized models (i.e., Mistral 7B and Llama 2 7B). The results show that Reka Edge outperforms both on standard language benchmarks.
Summary

The Reka team says its goal is to build the most advanced multimodal language models, and with the release of Reka Flash and Reka Edge, the initial milestones on its AI roadmap have been reached. You can look forward to the team's next research.

Reference link: https://reka.ai/reka-flash-an-efficient-and-capable-multimodal-language-model/


Source: jiqizhixin.com