
The first open-source model to surpass GPT-4o! Llama 3.1 leaked: 405 billion parameters, with download links and model card included

Jul 23, 2024, 08:51 PM

Get your GPU ready!


Llama 3.1 has finally appeared, but the source is not Meta itself.

Today, news of a leak of the new Llama large model went viral on Reddit. In addition to the base model, the leak includes benchmark results for the 8B, 70B, and largest 405B-parameter versions.


The picture below shows the comparison results of each version of Llama 3.1 with OpenAI GPT-4o and Llama 3 8B/70B. As you can see, even the 70B version surpasses GPT-4o on multiple benchmarks.

[Figure: benchmark comparison of Llama 3.1 (8B/70B/405B) with OpenAI GPT-4o and Llama 3 8B/70B]

Reportedly, the 8B and 70B models in version 3.1 are distilled from the 405B model, so they show significant performance improvements over the previous generation.
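The leak does not say which distillation recipe was actually used, so purely as an illustration, here is a minimal sketch of classic logit-level knowledge distillation in PyTorch; the vocabulary size, batch size, and temperature below are made-up values.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # Soften both distributions and push the student toward the teacher via KL divergence.
    s = F.log_softmax(student_logits / temperature, dim=-1)
    t = F.softmax(teacher_logits / temperature, dim=-1)
    return F.kl_div(s, t, reduction="batchmean") * temperature ** 2

# Toy logits of shape (batch, vocab); in practice the teacher would be the frozen large model.
student = torch.randn(4, 32000)
teacher = torch.randn(4, 32000)
print(distillation_loss(student, teacher).item())
```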

Some netizens said this is the first time an open-source model has surpassed closed-source models such as GPT-4o and Claude 3.5 Sonnet, reaching SOTA on multiple benchmarks.

At the same time, the Llama 3.1 model card was also leaked, revealing further details (the date marked in the model card indicates that it is based on a July 23 release).

Someone summarized the following highlights:


  • The model was trained on 15T+ tokens from publicly available sources, with a pre-training data cutoff of December 2023;
  • Fine-tuning data includes publicly available instruction fine-tuning datasets (unlike Llama 3) as well as 25 million synthetic samples;
  • The model supports multiple languages, including English, French, German, Hindi, Italian, Portuguese, Spanish, and Thai.

Although the leaked GitHub link currently returns a 404, some netizens have shared download links (however, to be safe, it is recommended to wait for the official announcement tonight).

But this is, after all, a model at the hundred-billion-parameter scale, so please make sure you have enough hard disk space before downloading.
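As a rough back-of-the-envelope estimate, assuming 16-bit weights at 2 bytes per parameter (actual checkpoint formats and download sizes may differ):

```python
# 405 billion parameters at 2 bytes each (bf16/fp16).
params = 405e9
bytes_per_param = 2
print(f"~{params * bytes_per_param / 1e9:.0f} GB")  # ~810 GB for the 405B weights alone
```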

The following are the key points from the Llama 3.1 model card:


Basic model information


The Meta Llama 3.1 multilingual large language model (LLM) collection is a set of pre-trained and instruction-fine-tuned generative models in 8B, 70B, and 405B sizes (text input/text output). The Llama 3.1 instruction-fine-tuned text-only models (8B, 70B, 405B) are optimized for multilingual dialogue use cases and outperform many available open-source and closed-source chat models on common industry benchmarks.

Model architecture: Llama 3.1 is an autoregressive language model with an optimized Transformer architecture. The fine-tuned versions use supervised fine-tuning (SFT) and reinforcement learning from human feedback (RLHF) to align with human preferences for helpfulness and safety.

Supported languages: English, German, French, Italian, Portuguese, Hindi, Spanish and Thai.
It can be inferred from the model card that the context length of the Llama 3.1 series is 128K. All model versions use Grouped Query Attention (GQA) to improve inference scalability.
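To make the GQA idea concrete, here is a minimal, illustrative PyTorch sketch; the head counts and dimensions are toy values, not the actual Llama 3.1 configuration.

```python
import torch
import torch.nn.functional as F

def grouped_query_attention(q, k, v):
    # q: (batch, n_heads, seq, head_dim); k, v: (batch, n_kv_heads, seq, head_dim)
    n_heads, n_kv_heads = q.shape[1], k.shape[1]
    group = n_heads // n_kv_heads
    # Each key/value head is shared by `group` query heads, shrinking the KV cache.
    k = k.repeat_interleave(group, dim=1)
    v = v.repeat_interleave(group, dim=1)
    scores = q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5
    return F.softmax(scores, dim=-1) @ v

# Toy shapes: 8 query heads sharing 2 key/value heads.
q = torch.randn(1, 8, 16, 64)
k = torch.randn(1, 2, 16, 64)
v = torch.randn(1, 2, 16, 64)
print(grouped_query_attention(q, k, v).shape)  # torch.Size([1, 8, 16, 64])
```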


Intended use

Intended use cases: Llama 3.1 is intended for commercial and research use in multiple languages. The instruction-tuned text-only models are suitable for assistant-like chat, while the pre-trained models can be adapted to a variety of natural language generation tasks.

The Llama 3.1 model collection also supports using its outputs to improve other models, including through synthetic data generation and distillation. The Llama 3.1 Community License allows these use cases.

Llama 3.1 was trained on a broader set of languages than the 8 officially supported ones. Developers may fine-tune Llama 3.1 models for languages beyond those 8, provided they comply with the Llama 3.1 Community License Agreement and Acceptable Use Policy; in such cases they are responsible for ensuring that Llama 3.1 is used with those other languages in a safe and responsible manner.

Software and hardware infrastructure
First, the training factors: Llama 3.1 was pre-trained using a custom training library and Meta's custom-built GPU cluster and production infrastructure; fine-tuning, annotation, and evaluation were also performed on production infrastructure.

Second, the training energy consumption: Llama 3.1 training used a cumulative total of 39.3M GPU hours of compute on H100-80GB hardware (700W TDP). Here, training time is the total GPU time required to train each model, and power consumption is the peak power capacity of each GPU device, adjusted for power usage efficiency.
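A quick sanity check of what those numbers imply, treating the 700W TDP as an upper bound for every GPU hour (i.e. before the power-efficiency adjustment mentioned above):

```python
# 39.3M H100-80GB GPU hours at the 700 W TDP ceiling.
gpu_hours = 39.3e6
tdp_kw = 0.7
energy_gwh = gpu_hours * tdp_kw / 1e6   # kWh -> GWh
print(f"~{energy_gwh:.1f} GWh")          # ~27.5 GWh upper bound on training energy
```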

Greenhouse gas emissions from training: estimated total location-based greenhouse gas emissions during Llama 3.1 training are 11,390 tonnes of CO2e. Since 2020, Meta has maintained net-zero greenhouse gas emissions across its global operations and matched 100% of its electricity use with renewable energy, so total market-based greenhouse gas emissions during the training period were 0 tonnes of CO2e.

The methodology used to determine training energy use and greenhouse gas emissions can be found in the paper below. Because Meta is releasing these models publicly, others will not need to incur these training energy and emissions costs.

Paper address: https://arxiv.org/pdf/2204.05149

Training data
Overview: Llama 3.1 was pre-trained on approximately 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 25 million synthetically generated examples.
Data freshness: The cutoff for the pre-training data is December 2023.

Benchmark scores

In this section, Meta reports the results of the Llama 3.1 models on standard automatic benchmarks. For all evaluations, Meta uses its internal evaluation library.


Safety risk considerations

The Llama research team is committed to providing the research community with valuable resources for studying the robustness of safety fine-tuning, and to providing developers with safe and powerful off-the-shelf models suitable for a variety of applications, reducing the workload of developers deploying safe AI systems.
 
The research team adopted a multi-faceted data collection approach, combining human-generated data from vendors with synthetic data to mitigate potential safety risks. The team developed many classifiers based on large language models (LLMs) to carefully select high-quality prompts and responses, strengthening data quality control.
 
It is worth noting that Llama 3.1 pays close attention to the model's refusals of benign prompts and to refusal tone. The research team introduced borderline and adversarial prompts into the safety data strategy, and revised safety data responses to follow tone guidelines.

The Llama 3.1 models are not designed to be deployed in isolation; they should be deployed as part of a broader AI system, with additional "safety guardrails" provided as needed. Developers should deploy system-level safety measures when building agentic systems.

Note that this release introduces new capabilities, including a longer context window, multilingual input and output, and possible developer integrations with third-party tools. When building with these new capabilities, in addition to the best practices that generally apply to all generative AI use cases, the following issues deserve particular attention:

Tool use: As with standard software development, developers are responsible for integrating the LLM with the tools and services of their choice. They should define clear policies for their use cases and assess the integrity of the third-party services they use, so as to understand the safety and security limitations of this capability.

Multilingual: In addition to English, Llama 3.1 supports 7 languages: French, German, Hindi, Italian, Portuguese, Spanish, and Thai. Llama may be able to output text in other languages, but that text may not meet the safety and helpfulness performance thresholds.

The core values of Llama 3.1 are openness, inclusiveness, and helpfulness. It is meant to serve everyone and to be applicable to a wide range of use cases, and is therefore designed to be accessible to people with different backgrounds, experiences, and perspectives. Llama 3.1 centers on users and their needs, without inserting unnecessary judgment or normativity, while recognizing that even content that may seem problematic in some contexts can serve valuable purposes in others. Llama 3.1 respects the dignity and autonomy of all users, and in particular the values of free thought and expression that power innovation and progress.
 
However, Llama 3.1 is a new technology, and like any new technology, its use carries risks. The testing conducted to date has not covered, and could not cover, every scenario. Therefore, as with all LLMs, Llama 3.1's potential outputs cannot be predicted in advance, and in some cases the model may respond to user prompts in ways that are inaccurate, biased, or otherwise objectionable. Before deploying any application of the Llama 3.1 models, developers should therefore perform safety testing and fine-tuning tailored to the model's specific application.

Model card source: https://pastebin.com/9jGkYbXY
References:
https://x.com/op74185001874720387418520374185203743720372727203838372370383838383838
https://x.com/iScienceLuvr/status/1815519917715730702
https://x.com/mattshumer_/status/1815444612414087294
