
OpenAI introduces fine-tuning for its large language model (LLM), designed to improve its enterprise performance

WBOY
Release: 2024-08-27 00:41:09

ChatGPT maker OpenAI has announced the introduction of fine-tuning for its large language model (LLM), which is designed to improve its enterprise performance.

According to the announcement, the fine-tuning feature for GPT-4o will allow developers to tweak the model to fit the needs of their organizations. Developers can customize the LLM with their custom data sets, and early results show a marked performance improvement.

“Fine-tuning enables the model to customize structure and tone of responses, or to follow complex domain-specific instructions,” reads OpenAI’s official statement. “Developers can already produce strong results for their applications with as little as a few dozen examples in their training data set.”

Fine-tuning training is priced at $25 per million tokens, while inference for fine-tuned models costs $3.75 per million input tokens and $15 per million output tokens. These fees are expected to contribute to OpenAI's revenue and could form a large chunk of its earnings as more enterprises tailor the LLM to suit their individual needs.

To spur adoption, OpenAI says it will offer one million training tokens each day for free to organizations until September 23, with users of GPT-4o mini receiving 2 million free tokens per day.
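Putting the quoted figures together, a rough cost estimate looks like the following. This is illustrative arithmetic based only on the prices and the GPT-4o free tier mentioned above, not an official OpenAI calculator; the function names and the example token counts are hypothetical.

```python
# Prices quoted in the article (USD per 1 million tokens).
TRAINING_PER_M = 25.00   # fine-tuning training
INPUT_PER_M = 3.75       # inference, input tokens
OUTPUT_PER_M = 15.00     # inference, output tokens

# Free tier per the announcement: 1M training tokens/day for GPT-4o
# (2M/day for GPT-4o mini) until September 23.
FREE_TRAINING_TOKENS_PER_DAY = 1_000_000

def training_cost(tokens: int,
                  free_tokens: int = FREE_TRAINING_TOKENS_PER_DAY) -> float:
    """Training cost in USD after subtracting one day's free allotment."""
    billable = max(tokens - free_tokens, 0)
    return billable / 1_000_000 * TRAINING_PER_M

def inference_cost(input_tokens: int, output_tokens: int) -> float:
    """Inference cost in USD for a fine-tuned model."""
    return (input_tokens / 1_000_000 * INPUT_PER_M
            + output_tokens / 1_000_000 * OUTPUT_PER_M)

# Example: a 3M-token training run in one day (2M billable tokens),
# followed by 500k input / 200k output tokens of inference.
print(training_cost(3_000_000))          # 2M billable tokens at $25/M
print(inference_cost(500_000, 200_000))  # $1.875 input + $3.00 output
```

Under these assumptions, the 3M-token run costs $50 after the free allotment, and the sample inference workload about $4.88.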

Prior to the announcement, several firms participated in early studies with OpenAI to test the practicality of the fine-tuning feature. Some notable use cases include the following:

Cosine’s Genie, an AI software engineering assistant that relies on GPT-4o, demonstrated impressive results in writing code, spotting bugs, and building new features.

An AI solutions firm, Distyl, ranked first after using a fine-tuned GPT-4o in studies that explored text-to-SQL benchmarks, racking up accuracies of over 70% across all metrics.

According to OpenAI, fine-tuned models will provide users with the same levels of data privacy as ChatGPT, and the company is rolling out new security measures to protect enterprise data.

“We’ve also implemented layered safety mitigations for fine-tuned models to ensure they aren’t being misused,” said OpenAI. “For example, we continuously run automated safety evals on fine-tuned models and monitor usage to ensure applications adhere to our usage policies.”

A streak of upgrades

OpenAI has been bullish in rolling out upgrades for its artificial intelligence (AI) offerings, teasing users with an AI-powered search engine at the tail end of July. In April, the company announced an upgrade designed to make the chatbot more conversational while reducing the usage of verbose language in responses.

The firm previously confirmed the development of a new AI detection tool with 99.9% accuracy levels after its previous attempt had a sputtering start. However, it says it will adopt a cautious approach for a commercial launch to avoid pitfalls associated with next-gen technologies.

“We believe the deliberate approach we’ve taken is necessary given the complexities involved and its likely impact on the broader ecosystem beyond OpenAI,” said a company executive.

