OpenAI has fully opened the GPT-3.5 Turbo, DALL-E and Whisper APIs
According to news on July 10, OpenAI announced yesterday that it has fully opened the GPT-3.5 Turbo, DALL-E and Whisper APIs to all developers, to help them improve model processing efficiency. OpenAI also said it is developing follow-up features for GPT-4 and GPT-3.5 Turbo, which are planned for release in the second half of this year.
OpenAI revealed that all AI models currently called through the API have been upgraded to GPT-4 by default, so existing users can use it without switching.
Note: The Whisper API is a speech-to-text AI model that can recognize speech in the user's audio, video and other media and convert it into text.
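For developers who want to try it, the following is a minimal sketch of calling the Whisper API with the openai Python SDK (the 0.x series current at the time of writing); the API key and audio file name are placeholders, not values from the announcement.

import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; set your own key

# Transcribe a local audio file with the Whisper model ("whisper-1").
with open("meeting.mp3", "rb") as audio_file:  # placeholder file name
    transcript = openai.Audio.transcribe("whisper-1", audio_file)

print(transcript["text"])  # the recognized text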
▲ Image source: OpenAI official website
OpenAI stated that it is continuously improving the Chat Completions API, with the main goal of improving its computational efficiency. It plans to retire the older models of the Completions API in January 2024, six months from now.
Note: The Completions API is a natural language processing API that can be used for a variety of text-generation tasks, such as summarization, translation, article writing and self-service Q&A.
OpenAI says developers can still call the Completions API, but starting today it will mark the older Completions models as "legacy" in its developer documentation. OpenAI will focus its resources on the Chat Completions API going forward and will no longer release new models through the Completions API.
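To illustrate the difference between the two APIs, here is a sketch of the same summarization request written first against the legacy Completions API and then against the Chat Completions API, using the openai Python SDK (0.x series); the model names and prompt are illustrative examples, not taken from OpenAI's announcement.

import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

prompt = "Summarize in one sentence: OpenAI has opened several APIs to all developers."

# Legacy Completions API call (these models will be marked as "legacy").
legacy = openai.Completion.create(
    model="text-davinci-003",  # example legacy model
    prompt=prompt,
    max_tokens=60,
)
print(legacy["choices"][0]["text"])

# Equivalent Chat Completions API call, where OpenAI says it will focus future work.
chat = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
)
print(chat["choices"][0]["message"]["content"])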
IT House learned after further inquiries that OpenAI expects to shut down the old models on January 4, 2024, at which point the Edits API will also be deactivated. Users of the Edits API and its associated models must migrate from the old models (text-davinci-edit-001 or code-davinci-edit-001) to GPT-3.5 Turbo by early January next year; OpenAI has also published a comparison table of the old and new models.
▲ Image source: OpenAI
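To give a sense of what such a migration might look like in practice, here is a sketch (illustrative only, not taken from OpenAI's comparison table) that moves the same editing task from the Edits API to a GPT-3.5 Turbo chat request; the instruction and input text are made-up examples.

import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# Old style: Edits API with text-davinci-edit-001 (scheduled for deactivation).
edit = openai.Edit.create(
    model="text-davinci-edit-001",
    input="He have gone to the store yesterday.",
    instruction="Fix the grammar.",
)
print(edit["choices"][0]["text"])

# New style: the same task expressed as a Chat Completions request to GPT-3.5 Turbo.
chat = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "Fix the grammar of the user's text and return only the corrected text."},
        {"role": "user", "content": "He have gone to the store yesterday."},
    ],
)
print(chat["choices"][0]["message"]["content"])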