


In five scenarios (interviews, English emails, live broadcasts, weekly reports, and resumes), how cost-effective are the GPT-3.5 series models? We ran hands-on tests and put together a selection guide.

Which model in the GPT-3.5 series performs best?
How does the GPT-3.5 series actually perform on common application tasks?
How much does it typically cost for a GPT-3.5 model to answer different kinds of questions?

This issue of "SOTA! Hands-on Tests" addresses these questions. The conclusions are summarized below (see the end of the article for detailed ratings).
| Model | gpt-3.5-turbo | text-davinci-003 | text-davinci-002 |
| --- | --- | --- | --- |
| Description | Currently the most capable GPT-3.5 model, specially optimized for chat scenarios; priced at one tenth of text-davinci-003. | Can complete any language task with better quality, longer output, and better instruction-following than the Curie, Babbage, or Ada models. | Similar capabilities to text-davinci-003, but trained with supervised fine-tuning rather than reinforcement learning; maximum of 4,097 tokens. |
| Maximum tokens | 4,096 | 4,097 | 4,097 |
| Price | $0.002 / 1K tokens | $0.0200 / 1K tokens | $0.0200 / 1K tokens |
| Comprehensive rating | Highest overall rating. Highly accurate and professional, and adaptable to most tasks. Output is relatively complete and fluent, and remains accurate and comprehensive across different tasks. Strong adaptability and versatility, at the lowest cost. | Lower overall score. Performs well on some tasks, but the output lacks personalization and targeting, the phrasing is not precise or concise enough, and there are occasional inaccuracies. | Lowest overall score. Output is not professional or accurate enough, lacks personalization and targeting, and has significant problems in language expression. Overall it needs further optimization and improvement. |
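The pricing gap in the table is easy to quantify. A minimal sketch using the list prices quoted above (the 1,500-token request size is a hypothetical figure for illustration):

```python
# List prices from the table above (USD per 1K tokens).
PRICE_PER_1K_USD = {
    "gpt-3.5-turbo": 0.002,
    "text-davinci-003": 0.0200,
    "text-davinci-002": 0.0200,
}

def request_cost_usd(model: str, total_tokens: int) -> float:
    """Cost of a single request that consumed `total_tokens` (prompt + completion)."""
    return PRICE_PER_1K_USD[model] / 1000 * total_tokens

# A hypothetical 1,500-token request on each model:
for model in PRICE_PER_1K_USD:
    print(f"{model}: ${request_cost_usd(model, 1500):.4f}")
```

At identical token counts, gpt-3.5-turbo is one tenth the cost of either davinci model, which matches the description row above.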
Application Task One: Creating Interview Questions

| Test scenario | Test angle |
| --- | --- |
| Generate interview questions based on the job description | Ease of generating interview questions; how well the generated questions match the job description |
| Generate interview questions based on candidate information | Ease of generating interview questions; how well the generated questions match the candidate |
Application Task Two: Writing English Emails

| Test scenario | Test angle |
| --- | --- |
| Insert proper nouns with special translations, professional terms from a vertical field, and nouns whose meanings differ by context into the input text | Whether the semantics are fluent; whether the expanded content is correct; whether ambiguous nouns and professional/proper nouns are translated correctly |
| Require "colloquial" or "written" output in the input | Whether it can simulate a spoken or formal written style |
| Write the input in a colloquial tone, require "written" output, omit some background information, and use ambiguous nouns | Whether it can simulate a spoken or formal written style and correctly understand colloquial expression; whether ambiguous nouns are translated correctly |
| Include crime-related content in the input | Whether unsafe content is filtered |
| Use inverted sentences, homophone typos, dialect, and colloquially elided sentences in the input | Whether grammatical errors, typos, and incomplete sentences in Chinese are correctly filtered and understood |
gpt-3.5-turbo: overall score 3.3. The email structure fits the scenario, the tone is appropriate, and abbreviations are used suitably, though proper nouns such as scientific names are generally left unabbreviated. It understands and filters strong emotion in colloquial input well, and correctly fixes typos, grammatical errors, and similar input problems. Its weakness is that it does not correctly identify unsafe content.

text-davinci-003: overall score 3. The structure follows a common template with no subject line, sentence transitions are stiff, and expansion is insufficient. Proper nouns and ambiguous nouns are understood correctly, colloquial comprehension and generation exceed expectations, and unsafe content is not correctly identified.

text-davinci-002: overall score 2. The structure follows a common template with no subject line, sentences are not fluent and are sometimes wrong, and paragraph structure is unclear. It uses almost no abbreviations apart from proper nouns such as scientific names, cannot switch well between spoken and written styles, and does not correctly identify unsafe content.
Let's look at one of the test cases: inserting proper nouns with special translations, professional terms from a vertical field, and nouns whose meanings differ by context into the input text. The test example included the following input.
Model cost

For this test example (proper nouns with special translations, vertical-field professional terms, and context-dependent nouns inserted into the input text), gpt-3.5-turbo cost about 0.006 yuan, text-davinci-003 about 0.067 yuan, and text-davinci-002 about 0.07 yuan.
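As a rough sanity check on those figures, here is a back-of-the-envelope conversion from token usage to RMB. The exchange rate of about 6.9 CNY/USD and the token counts are assumptions for illustration, not numbers reported by the test:

```python
# Assumed exchange rate for illustration only; the article reports costs in yuan.
CNY_PER_USD = 6.9

def cost_cny(total_tokens: int, usd_per_1k: float) -> float:
    """Approximate RMB cost of a request, given its total token count."""
    return total_tokens / 1000 * usd_per_1k * CNY_PER_USD

# With an assumed ~450-500 tokens per request, the costs land near the
# figures reported above:
print(round(cost_cny(450, 0.002), 3))   # gpt-3.5-turbo: ~0.006 yuan
print(round(cost_cny(480, 0.020), 3))   # text-davinci-003: ~0.066 yuan
```

At comparable token counts the davinci models cost roughly ten times more per request, which is what drives the order-of-magnitude gap in the reported yuan figures.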
Inference performance

In terms of semantic fluency, all three models performed relatively well, with no obvious vocabulary or grammatical errors. On whether the expanded content is correct, gpt-3.5-turbo and text-davinci-003 gave relatively comprehensive responses, answering each question in detail and offering relevant suggestions and product recommendations; text-davinci-002 answered only a few questions and provided few relevant details or suggestions.

All three models performed relatively well on translating ambiguous nouns and professional/proper nouns: gpt-3.5-turbo, text-davinci-003, and text-davinci-002 all correctly translated polytetrafluoroethylene (PTFE) and perfluorinated compounds (PFCs), using the correct English terms.
Application Task Three: Live Broadcast Assistance

| Test scenario | Test angle |
| --- | --- |
| Summarize the live broadcast transcript into an abstract | Accuracy, concision, and fluency of the generated summary |
| Extract several key points from the live broadcast transcript | Accuracy, concision, and fluency of the generated key points |
| Write a live broadcast outline based on the broadcast theme | Quality of the generated outline and its relevance to the theme |
| Find the answer to a question in the live broadcast transcript | Quality and accuracy of the generated answers |
gpt-3.5-turbo: overall score 4.4. The model implements the user's requirements accurately and precisely: the output echoes the input and fits the theme and scenario, the expression is accurate, no original information is omitted or distorted, answers are organized concisely and follow the stated brevity requirements, and the output is fluent with clear, concise sentence structure.

text-davinci-003: overall score 4.2. The summaries are fairly accurate, the generated content meets the scenario requirements without omitting information or adding unnecessary content, and the language is fluent and concise. However, it could be more refined and use simpler language, and the generated content offers no additional analysis or insight, so it needs more breadth and depth.

text-davinci-002: overall score 1.5. Output accuracy is average, with only basic coverage of the question points, and most outputs do not adapt well to the scenario. The generated sentence structure is relatively complex, word redundancy is obvious, and the language is somewhat stiff, which may hurt comprehension and reading fluency. There is room for further improvement in concision and fluency.
Let's look at one of the test cases.

Cost

For the test example that writes a live broadcast outline from the broadcast theme, gpt-3.5-turbo cost about 0.01 yuan, text-davinci-003 about 0.11 yuan, and text-davinci-002 about 0.071 yuan.
Inference results

text-davinci-003's output is usable to a degree but slightly lacking in relevance to the theme: it mainly introduces AIGC and its history, and points such as how to open the door to the content industry and the future of AIGC are not closely tied to the theme and remain fairly general.

text-davinci-002's output diverges considerably from the theme. Although it mentions an overview of AIGC content production, the outline reads more like a company introduction: it has no direct connection to the theme and lacks practical value as a live broadcast outline.
Application Task Four: Work Weekly Reports

| Test scenario | Test angle |
| --- | --- |
| Output a weekly report based on the given work content | Polishing and expansion ability, and the completeness of the output |
| Output a weekly report based on a rough description | Quality of the weekly reports generated from rough work descriptions given by people in different professions |
| Output a templated weekly report based on the given work content and a target template structure | Ability to output a weekly report following a known format |
| Output next week's work report based on this week's work content | Predictive ability |
Application Task Five: Writing Resumes

| Test scenario | Test angle |
| --- | --- |
| Generate a resume based on job responsibilities | Match between the job responsibilities and the generated resume; professionalism |
| Generate a resume based on job requirements | Match between the job requirements and the resume |
| Generate a resume based on a self-introduction | Precision and professionalism of the generated content |
| Generate a resume template based on the job position | Professionalism and match of the generated template |
Detailed ratings

| Task | Test scenario | gpt-3.5-turbo | text-davinci-003 | text-davinci-002 |
| --- | --- | --- | --- | --- |
| | Comprehensive score (out of 5, same below) | 3.8 | 3.2 | 1.7 |
| Create interview questions | Generate interview questions based on the job description | 4.5 | 4 | 0 |
| | Generate interview questions based on candidate information | 4.5 | 3.75 | 3.5 |
| Write English emails | Insert proper nouns with special translations, vertical-field professional terms, and context-dependent nouns into the input text | 5 | 3 | 2 |
| | Require "colloquial" or "written" output in the input | 3.5 | 3 | 3.5 |
| | Write the input colloquially, require "written" output, omit some background information, and use ambiguous nouns | 4 | 5 | 2 |
| | Include crime-related content in the input | 1 | 1 | 1 |
| | Use inverted sentences, homophone typos, dialect, and colloquially elided sentences in the input | 3 | 4 | 3 |
| Live broadcast assistance | Summarize the live broadcast transcript into an abstract | 4 | 4 | 3 |
| | Extract several key points from the live broadcast transcript | 4.7 | 4 | 3 |
| | Write a live broadcast outline based on the broadcast theme | 4 | 4 | 0 |
| | Find the answer to a question in the live broadcast transcript | 5 | 5 | 0 |
| Write a weekly work report | Output a weekly report based on the given work content | 4 | 3.5 | 0 |
| | Output a weekly report based on a rough description | 4.5 | 4 | 3 |
| | Output a templated weekly report based on the given work content and a target template structure | 3 | 1 | 1 |
| | Output next week's work report based on this week's work content | 2 | 4 | 2 |
| Write a resume | Generate a resume based on job responsibilities | 4 | 1.5 | 1.5 |
| | Generate a resume based on job requirements | 4.5 | 3 | 1.5 |
| | Generate a resume based on a self-introduction | 3.5 | 1.5 | 1 |
| | Generate a resume template based on the job position | 3.5 | 1.5 | 1 |
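The per-task composite scores quoted in the text appear consistent with a simple mean of the scenario scores above; for instance, the live-broadcast rows reproduce the quoted 4.4 / 4.2 / 1.5 composites. A short sketch (the averaging method is our inference, not a stated methodology):

```python
# Live-broadcast scenario scores from the table above.
live_broadcast = {
    "gpt-3.5-turbo": [4, 4.7, 4, 5],    # composite quoted in the text: 4.4
    "text-davinci-003": [4, 4, 4, 5],   # quoted: 4.2
    "text-davinci-002": [3, 3, 0, 0],   # quoted: 1.5
}

def composite(scores: list) -> float:
    """Mean of scenario scores, rounded to one decimal place."""
    return round(sum(scores) / len(scores), 1)

for model, scores in live_broadcast.items():
    print(model, composite(scores))
```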