
OpenAI Quietly Releases GPT-4o Long Output, a New Large Language Model With a Massively Extended Output Size

Jul 31, 2024 am 09:11 AM

OpenAI is reportedly facing a cash crunch, but that isn’t stopping the preeminent generative AI company from continuing to release a steady stream of new models and updates.


OpenAI has quietly announced a new variation of its GPT-4o large language model, dubbed GPT-4o Long Output. This new model boasts a massively extended output size, capable of generating up to 64,000 tokens of output compared to the original GPT-4o's 4,000 tokens. This marks a 16-fold increase in output capacity.

Tokens, as a quick refresher, are the numerical units an LLM works with behind the scenes: representations of concepts, grammatical constructions, and combinations of letters and numbers, organized according to their semantic meaning.

The word “Hello” is one token, for example, but so too is “hi.” You can see an interactive demo of tokens in action via OpenAI’s Tokenizer here. Machine learning researcher Simon Willison also has a great interactive token encoder/decoder.
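To see that mapping concretely, here is a minimal sketch using OpenAI's open-source tiktoken library, assuming a recent release that maps "gpt-4o" to the o200k_base encoding:

```python
# Minimal sketch: counting tokens with OpenAI's tiktoken library.
# Assumes tiktoken is installed and recognizes the "gpt-4o" model name.
import tiktoken

encoding = tiktoken.encoding_for_model("gpt-4o")  # resolves to the o200k_base encoding

for text in ["Hello", "hi", "GPT-4o Long Output can emit up to 64,000 tokens."]:
    tokens = encoding.encode(text)
    print(f"{text!r} -> {len(tokens)} token(s): {tokens}")
```

Short words like "Hello" and "hi" each come back as a single token, while longer sentences break into many.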

This new model is designed to meet customer demand for longer output contexts, with an OpenAI spokesperson telling VentureBeat: “We heard feedback from our customers that they’d like a longer output context. We are always testing new ways we can best serve our customers’ needs.”

The model is currently in a multi-week alpha testing phase, during which OpenAI will gather data on how effectively the extended output meets user needs.

This enhanced capability is particularly advantageous for applications that require detailed and extensive output, such as code editing and writing improvement. By offering longer outputs, GPT-4o Long Output can provide more comprehensive and nuanced responses, which can significantly benefit these use cases.
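In practice, requesting a longer completion looks like any other Chat Completions call, just with a larger output cap. The sketch below is a hypothetical illustration: the model name "gpt-4o-64k-output-alpha" is a placeholder assumed here and may differ, and access is limited to OpenAI's alpha partners.

```python
# Hypothetical sketch of requesting a long completion via the OpenAI Python SDK.
# The model name is a placeholder for the alpha model, assumed for illustration only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-64k-output-alpha",  # placeholder name, not confirmed by OpenAI
    messages=[
        {"role": "system", "content": "You are a meticulous code reviewer."},
        {"role": "user", "content": "Rewrite this module with full inline comments..."},
    ],
    max_tokens=64000,  # the extra headroom that distinguishes Long Output from standard GPT-4o
)

print(response.choices[0].message.content)
```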

Since its launch, GPT-4o has offered a maximum context window of 128,000 tokens, that is, the number of tokens the model can handle in a single interaction, counting both input and output tokens. For GPT-4o Long Output, that maximum context window remains 128,000.

So how can OpenAI increase the number of output tokens 16-fold, from 4,000 to 64,000, while keeping the overall context window at 128,000?

It comes down to simple arithmetic: the original GPT-4o released in May had a total context window of 128,000 tokens, but any single output message was capped at 4,000.

Similarly, the newer GPT-4o mini has a total context of 128,000 tokens, but its maximum output was raised to 16,000 tokens.

That means GPT-4o users can provide up to 124,000 tokens of input in a single interaction and receive at most 4,000 tokens of output from the model. They can also send more input tokens and accept fewer output tokens, as long as the combined total stays within the 128,000-token ceiling.

For GPT-4o mini, users can provide up to 112,000 tokens of input in order to receive up to 16,000 tokens of output.

For GPT-4o Long Output, the total context window is still capped at 128,000 tokens. However, users can now provide up to 64,000 tokens of input in exchange for up to 64,000 tokens back as output. That is useful when a user, or a developer building an application on top of the model, wants to prioritize longer LLM responses at the expense of input length.

Either way, the user or developer has to make a trade-off: within the fixed 128,000-token total, do they sacrifice some input tokens in favor of longer output? For those who need longer answers, GPT-4o Long Output now offers that option.
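The budgeting itself is just subtraction. As a rough sketch, assuming the 128,000-token window is split exactly between input and reserved output with no overhead for system messages or formatting, the maximum input is the window minus the output ceiling:

```python
# Rough sketch of the input/output token budget described above.
# Assumes the context window is shared exactly between input and reserved output.
CONTEXT_WINDOW = 128_000

MAX_OUTPUT = {
    "gpt-4o (May release)": 4_000,
    "gpt-4o mini": 16_000,
    "gpt-4o long output": 64_000,
}

for name, max_output in MAX_OUTPUT.items():
    max_input = CONTEXT_WINDOW - max_output
    print(f"{name}: up to {max_input:,} input tokens when reserving {max_output:,} for output")
```

Running this reproduces the figures above: 124,000, 112,000, and 64,000 input tokens respectively.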

OpenAI has also announced pricing for the new GPT-4o Long Output model.

Compared with the regular GPT-4o pricing of $5 per million input tokens and $15 per million output tokens, or the new GPT-4o mini at $0.15 per million input tokens and $0.60 per million output tokens, the new model is priced fairly aggressively, continuing OpenAI's recent push to make powerful AI affordable and accessible to a broad base of developers.
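For a sense of scale, here is a minimal sketch that prices a maximal single call on the two models whose rates are quoted above (GPT-4o Long Output's own per-token rates are not listed here, so they are omitted):

```python
# Minimal sketch: cost of one maximal call at the rates quoted in the article.
# Rates are USD per 1 million tokens.
def call_cost(input_tokens, output_tokens, input_rate, output_rate):
    return (input_tokens / 1_000_000) * input_rate + (output_tokens / 1_000_000) * output_rate

# GPT-4o: $5 / 1M input, $15 / 1M output; maximal call of 124k in, 4k out
gpt4o = call_cost(124_000, 4_000, 5.00, 15.00)

# GPT-4o mini: $0.15 / 1M input, $0.60 / 1M output; maximal call of 112k in, 16k out
gpt4o_mini = call_cost(112_000, 16_000, 0.15, 0.60)

print(f"GPT-4o maximal call:      ${gpt4o:.2f}")       # about $0.68
print(f"GPT-4o mini maximal call: ${gpt4o_mini:.2f}")  # about $0.03
```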

For now, access to this experimental model is limited to a small group of trusted partners. The spokesperson added: “We’re conducting alpha testing for a few weeks with a small number of trusted partners to see if longer outputs help their use cases.”

Depending on how this testing phase goes, OpenAI may consider expanding access to a wider customer base.

The ongoing alpha test will provide valuable insight into the practical applications and potential benefits of the extended-output model.

If feedback from that initial group of partners is positive, broader availability could follow.
