How to call the GPT-3.5 interface in Python


Calling the GPT-3.5 interface mainly covers four parts: installing openai, replacing api_requestor.py, the interface calling instructions (including the messages field), and a sample program.

1 openai installation

The Python openai library can be installed directly with pip install openai. If openai is already installed but a later call reports that ChatCompletion cannot be found, upgrade it with "pip install -U openai".
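A quick way to check whether the installed version already exposes ChatCompletion (a minimal sketch, assuming openai is importable):

import openai

# Older openai releases lack ChatCompletion; upgrade with "pip install -U openai" if this prints False
print(hasattr(openai, "ChatCompletion"))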

2 api_requestor.py replacement

After the Python openai package is installed, the api_requestor.py file is generated in the library directory of the Python environment, at "site-packages\openai\api_requestor.py", as shown below. Replace this file; reply api35 in the public account Lele Sensing School to obtain the replacement file.

Windows:
C:\ProgramData\Anaconda3\Lib\site-packages\openai\api_requestor.py
or
C:\ProgramData\Anaconda3\envs\xxx\Lib\site-packages\openai\api_requestor.py
Linux:
/root/miniconda3/lib/pythonxx/site-packages/openai/api_requestor.py
or
/root/miniconda3/envs/xxx/lib/pythonxx/site-packages/openai/api_requestor.py
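If you are unsure which environment's copy is actually in use, a minimal sketch like the following (assuming openai is importable) prints where api_requestor.py lives:

import os
import openai

# api_requestor.py sits next to the package's __init__.py in site-packages
path = os.path.join(os.path.dirname(openai.__file__), "api_requestor.py")
print(path)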

3 Interface calling instructions

The interface calling method remains unchanged and is consistent with openai's own calling method. The input mainly contains the following eight parameters.

(1) model: the model name, gpt-3.5-turbo or gpt-3.5-turbo-0301.

(2) messages: the question or content to be completed, described in detail below.

(3) temperature: controls the randomness of the result; 0.0 makes the output essentially fixed, while a higher value such as 0.9 produces more random output.

(4) max_tokens: the maximum number of tokens in the returned answer. A Chinese character usually counts as about two tokens, so a max_tokens of 100 allows an answer of roughly 50 Chinese characters. The ChatGPT API allows at most 4096 tokens for the question and answer combined, so max_tokens can be set at most to 4096 minus the number of tokens in the question.

(5) top_p: set to 1.

(6) frequency_penalty: set to 0.

(7) presence_penalty: set to 0.

(8) stream: controls whether the output is streamed continuously or returned in full.

Note that stream is an addition to the usual input parameters; it controls whether the output is streamed.
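A minimal sketch of a complete call with the parameters above (assuming the pre-1.0 openai Python library and a valid API key; the prompt is only an example):

import openai

openai.api_key = "YOUR_API_KEY"  # placeholder, use your own key

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello, please introduce yourself."}],
    temperature=0.0,
    max_tokens=100,
    top_p=1,
    frequency_penalty=0,
    presence_penalty=0,
    stream=False,
)

# With stream=False the complete answer is returned at once
print(response.choices[0].message["content"])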

If the value of stream is False, the complete text is returned in one response and can be read through response.choices[0].message['content']. However, the more words there are, the longer the wait before anything is returned; the waiting time can be estimated from the streaming rate of about 4 words per second. If the value of stream is True, the returned result is a Python generator that must be read by iteration, with each chunk carrying its text in choices[0].delta; the speed averages about 4 words per second (134 words in 33 seconds, 157 words in 39 seconds). A sketch of reading the streamed result follows.
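The following is a minimal, non-authoritative sketch (pre-1.0 openai library, API key assumed to be configured; the question is only an example):

import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Write two sentences about spring."}],
    stream=True,
)

answer = ""
for chunk in response:                      # the generator yields chunks as they arrive
    delta = chunk.choices[0].delta          # each chunk carries an incremental delta
    answer += delta.get("content", "")      # the first and last chunks may carry no text
print(answer)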

4 messages

Each entry in messages consists of two parts, role and content, as shown below:

  model="gpt-3.5-turbo",
  messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Who won the world series in 2020?"},
        {"role": "assistant", "content": "The Los Angeles Dodgers won the World Series in 2020."},
        {"role": "user", "content": "Where was it played?"}
    ]

In the gpt-3.5-turbo model, there are three roles: system, assistant, and user. The system role essentially tells ChatGPT what role it should answer in; the specific role and the question content are given in content. The main difference of gpt-3.5-turbo-0301 is that it pays more attention to the question content and does not pay special attention to the role part. The gpt-3.5-turbo-0301 model is valid until June 1st, while gpt-3.5-turbo will continue to be updated.

The assistant and user roles likewise identify who is speaking; their content can simply carry the answer or the question of interest.
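A minimal sketch of keeping a multi-turn conversation going by appending each reply back into messages (pre-1.0 openai library assumed; the helper name ask is only illustrative):

import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

messages = [{"role": "system", "content": "You are a helpful assistant."}]

def ask(question):
    # Record the user turn, call the API with the full history, then record the reply
    messages.append({"role": "user", "content": question})
    response = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
    answer = response.choices[0].message["content"]
    messages.append({"role": "assistant", "content": answer})
    return answer

print(ask("Who won the world series in 2020?"))
print(ask("Where was it played?"))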

5 Sample program

[Figure: GPT-3.5 API calling effect]
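As a stand-in for the figure above, here is a minimal sketch of a full sample program (pre-1.0 openai library assumed, question chosen arbitrarily) that streams an answer and prints it as it arrives:

import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Briefly explain what a token is."}
    ],
    temperature=0.0,
    max_tokens=200,
    top_p=1,
    frequency_penalty=0,
    presence_penalty=0,
    stream=True,
)

for chunk in response:
    # Print each fragment as soon as it arrives
    print(chunk.choices[0].delta.get("content", ""), end="", flush=True)
print()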
