
OpenAI offers new fine-tuning and customization options

王林
Release: 2024-04-19 15:19:09

Fine-tuning plays a vital role in building valuable artificial intelligence tools. Refining a pre-trained model on a more targeted data set greatly improves its grasp of domain-specific language and lets users build specialized knowledge into the model for specific tasks.


While this process may take time, it is often three times more cost-effective than training a model from scratch. This value is reflected in OpenAI’s recent announcement of an expansion of its custom model program and various new features for its fine-tuning API.

New features of the self-service fine-tuning API

OpenAI first launched the self-service fine-tuning API for GPT-3.5 Turbo in August 2023, and it was enthusiastically received by the AI community. OpenAI reports that thousands of organizations have used the API to train tens of thousands of models, for example to generate code in a specific programming language, summarize text into a specific format, or create personalized content based on user behavior.
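As a concrete illustration of that workflow, here is a minimal sketch of preparing training data for the fine-tuning API. The chat-style JSONL schema (one JSON object per line with a `messages` array) is the format the API expects for chat models; the example content itself is hypothetical.

```python
import json

# Hypothetical training examples for a code-summarization fine-tune.
# Each example is one full chat exchange the model should learn to reproduce.
examples = [
    {
        "messages": [
            {"role": "system", "content": "Summarize the code in one sentence."},
            {"role": "user", "content": "def add(a, b):\n    return a + b"},
            {"role": "assistant", "content": "Adds two numbers and returns the sum."},
        ]
    },
    # ... more examples (OpenAI requires at least 10 in a training file)
]

def write_training_file(path, rows):
    """Write examples as JSON Lines: one training example per line."""
    with open(path, "w", encoding="utf-8") as f:
        for row in rows:
            f.write(json.dumps(row, ensure_ascii=False) + "\n")

write_training_file("train.jsonl", examples)
```

The resulting file is then uploaded with `purpose="fine-tune"` before a fine-tuning job can reference it.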

Since the API's launch in August 2023, the job-matching and recruitment platform Indeed has used it with significant success. To match job seekers with relevant openings, Indeed sends personalized recommendations to users. By fine-tuning GPT-3.5 Turbo, Indeed produced more accurate explanations in those recommendations and cut the number of tokens per message by 80%. This allowed the company to increase the number of messages it sends to job seekers each month from less than 1 million to approximately 20 million.

The new fine-tuning API features build on this success and aim to improve the experience for future users:

Epoch-based checkpoint creation: Automatically generates a complete fine-tuned model checkpoint at every training epoch, reducing the need for subsequent retraining, especially in cases of overfitting.

Comparative Playground: A new side-by-side playground UI for comparing model quality and performance, allowing human evaluation of the outputs of multiple models or fine-tuning snapshots against a single prompt.

Third-party integrations: Support for integrations with third-party platforms (starting with Weights & Biases), enabling developers to share detailed fine-tuning data with the rest of their stack.

Comprehensive validation metrics: Ability to calculate metrics such as loss and accuracy for the entire validation data set to better understand model quality.

Hyperparameter configuration: Ability to configure available hyperparameters from the dashboard (not just through the API or SDK).

Fine-tuning dashboard improvements: including the ability to configure hyperparameters, view more detailed training metrics, and rerun jobs from previous configurations.
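Several of the features above (configurable hyperparameters, per-epoch checkpoints, validation metrics) surface through the fine-tuning API itself. The following is a minimal sketch using the official `openai` Python SDK; the file IDs are placeholders, and the network calls are guarded so the sketch only contacts the API when a key is configured.

```python
import os

# Explicit hyperparameters for the fine-tuning job. These keys
# (n_epochs, batch_size, learning_rate_multiplier) are the ones the
# fine-tuning API exposes; the values here are illustrative.
hyperparameters = {
    "n_epochs": 3,
    "batch_size": 8,
    "learning_rate_multiplier": 0.1,
}

# Guarded network calls: only executed when an API key is present.
if os.environ.get("OPENAI_API_KEY"):
    from openai import OpenAI

    client = OpenAI()
    job = client.fine_tuning.jobs.create(
        model="gpt-3.5-turbo",
        training_file="file-abc123",    # placeholder: an uploaded JSONL file ID
        validation_file="file-def456",  # placeholder: enables validation metrics
        hyperparameters=hyperparameters,
    )

    # One checkpoint is saved per training epoch; each exposes a model
    # snapshot that can be evaluated or deployed independently.
    checkpoints = client.fine_tuning.jobs.checkpoints.list(job.id)
    for cp in checkpoints.data:
        print(cp.step_number, cp.fine_tuned_model_checkpoint)
```

Passing a `validation_file` is what allows the API to report loss and accuracy over the full validation set, as described above.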

Building on past success, OpenAI believes these new features will give developers more fine-grained control over their fine-tuning efforts.

Assisted fine-tuning and custom training models

OpenAI has also expanded the custom model program it announced at DevDay in November 2023. One major addition is assisted fine-tuning, a way of applying valuable techniques beyond the fine-tuning API, such as additional hyperparameters and various parameter-efficient fine-tuning (PEFT) methods, at larger scale.

SK Telecom is an example of this service's potential. The telecom operator has more than 30 million subscribers in South Korea and wanted a customized artificial intelligence model that could act as a telecom customer-service expert.

By fine-tuning GPT-4 in collaboration with OpenAI to focus on Korean telecom-related conversations, SK Telecom improved conversation-summary quality by 35% and intent-recognition accuracy by 33%. When comparing the fine-tuned model against generalized GPT-4, satisfaction scores also rose from 3.6 to 4.5 out of 5.

OpenAI also introduced the ability to build custom models for companies that need deep fine-tuning on domain-specific knowledge. A partnership with legal AI company Harvey demonstrates the value of this offering. Legal work involves reading large volumes of dense documents, and Harvey wanted to use LLMs (large language models) to synthesize information from those documents and present it to lawyers for review. However, much of the law is complex and context-dependent, so Harvey worked with OpenAI to build a custom-trained model that incorporates new knowledge and reasoning methods into the base model.

Harvey partnered with OpenAI and added the equivalent of 10 billion tokens of data to custom-train this case-law model. By adding the contextual depth needed to make informed legal judgments, the resulting model improved factual responses by 83%.

AI tools are never a cure-all. Customizability is at the heart of this technology's usefulness, and OpenAI's work on fine-tuning and custom-trained models should expand the set of organizations that benefit from it.


Source: 51cto.com