With the arrival of language generation models, will school homework become obsolete? Recently, New York City education officials sparked controversy when they announced a ban on students using ChatGPT in public schools.
Content automatically generated by a language model "references" existing works, and its output speed is almost unlimited. Concerns about it have spread to the AI academic community itself: ICML, one of the world's leading machine learning conferences, recently announced a ban on submitting papers containing content generated by ChatGPT and similar systems, to avoid "unintended consequences."
In response, OpenAI, the creator of ChatGPT, has announced that it is working hard to develop "mitigations" to help people detect text automatically generated by AI.
"We released ChatGPT as a research preview, hoping to learn from real-world use. We believe this is a critical part of developing and deploying capable, safe AI systems. We will continue to learn from feedback and lessons learned," a company spokesperson said. "OpenAI has always called for transparency when using AI-generated text. Our terms of use require users to be upfront with the people their content is intended for when using our API and creative tools... We look forward to working with educators on effective solutions that help teachers and students alike find ways to benefit from AI."
If algorithms emerge that can reliably distinguish human writing from machine-generated content, the way generative models are used in academia may change. Schools could more effectively restrict AI-generated papers, and if attitudes shift toward expecting these tools to help students, perhaps AI assistance will gradually become accepted as a way to improve the efficiency of work and study.
For now, there is still a long way to go. While AI-generated text may look impressive at academic conferences and in news about schools banning machine-written essays, it often lacks the genuine understanding and logic of real human writing.
While tools like GPT-3 and ChatGPT surprise people with shockingly detailed answers, and some dispassionate experts say this proves the models can encode knowledge, when they get things wrong their answers are often outrageous. Pomona College economics professor Gary Smith reminds us not to be fooled.
In a column, Gary Smith showed several examples of GPT-3's inability to reason and answer questions effectively: "If you try GPT-3, your initial reaction may be astonishment. It seems like you are having a real conversation with a very smart person. Dig deeper, however, and you quickly discover that while GPT-3 can string words together in convincing ways, it does not know what those words mean."
"Predicting that the word down is likely to follow the word fall requires no understanding of what either word means. Through purely statistical calculation, the AI learns that these words often go together. As a result, GPT-3 can easily make statements that are confidently asserted yet completely wrong."
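Smith's point can be illustrated with a toy bigram model (a deliberately simplified sketch; real language models are vastly more sophisticated, but the principle of prediction-by-co-occurrence is the same). The model predicts that "down" follows "fall" purely from frequency counts, with no notion of what either word means:

```python
from collections import Counter, defaultdict

# Toy corpus: the model only ever sees which words follow which.
corpus = (
    "leaves fall down in autumn . "
    "prices fall down after a crash . "
    "children fall down and get up again ."
).split()

# Count bigram frequencies: next_counts[w] tallies the words observed after w.
next_counts = defaultdict(Counter)
for w, nxt in zip(corpus, corpus[1:]):
    next_counts[w][nxt] += 1

def predict_next(word):
    """Return the statistically most frequent successor of `word`."""
    return next_counts[word].most_common(1)[0][0]

print(predict_next("fall"))  # "down" -- pure co-occurrence, no semantics
```

The prediction is correct here, but the same mechanism will happily emit fluent nonsense whenever the statistics point the wrong way, which is exactly Smith's complaint.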
In November 2022, OpenAI released ChatGPT, a newer model improved upon GPT-3. Nonetheless, it still suffers from the same problems as all existing language models.
There was a time when AI-generated text was obviously fake at a glance, but since the arrival of ChatGPT, telling the difference has become increasingly difficult.
In the education world, the ChatGPT debate revolves around the possibility of cheating. Search Google for "ChatGPT essay writing" and you'll find numerous examples of educators, journalists, and students testing the waters by using ChatGPT for homework and standardized essay tests.
A Wall Street Journal columnist used ChatGPT to write a passing AP English paper, while a Forbes reporter used it to complete two college essays in 20 minutes. Dan Gillmor, a professor at Arizona State University, recalled in an interview with the Guardian that he gave ChatGPT one of his student assignments and found the AI-generated paper could earn a good grade.
Some developers have already built a detection tool for ChatGPT-generated content: GPTZero. Simply paste text into the input box, and within a few seconds it returns an analysis indicating whether the article was written by ChatGPT or by a human.
Netizens commented: students all over the world must be in tears.
GPTZero's author is Edward Tian, a Princeton University student who wrote the tool during part of his vacation.
Let's look at the detection process, first using a piece of reporting from the New Yorker as an example (certain to be human-written):
Then test another piece of content, this one generated by ChatGPT:
GPTZero works by analyzing certain attributes of the text. The first is perplexity: the randomness of the text to the model, or the degree to which the language model "likes" the text. The second is burstiness: machine-written text tends to show a more uniform and constant level of perplexity across its length, whereas human-written text varies much more.
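As a rough illustration of these two signals (a minimal sketch using an add-one-smoothed unigram model; GPTZero's actual implementation uses a real language model and is not public), perplexity can be computed per sentence, and burstiness taken as the spread of those per-sentence perplexities:

```python
import math
from collections import Counter

# Tiny reference corpus standing in for the detector's language model.
reference = "the cat sat on the mat . the dog sat on the rug .".split()
freqs = Counter(reference)
total = len(reference)
vocab = len(freqs) + 1  # +1 reserves mass for unseen words (add-one smoothing)

def perplexity(sentence):
    """Per-word perplexity under the smoothed unigram model.

    Lower values mean the model 'likes' (finds predictable) the text."""
    words = sentence.lower().split()
    log_p = sum(math.log((freqs[w] + 1) / (total + vocab)) for w in words)
    return math.exp(-log_p / len(words))

def burstiness(sentences):
    """Standard deviation of per-sentence perplexity.

    Uniform perplexity (low burstiness) is a hint of machine-written text;
    human writing tends to swing between predictable and surprising sentences."""
    ppls = [perplexity(s) for s in sentences]
    mean = sum(ppls) / len(ppls)
    return (sum((p - mean) ** 2 for p in ppls) / len(ppls)) ** 0.5
```

A detector in this spirit would flag a document whose sentences all sit at a similarly low perplexity, while a human essay would show a jagged, bursty profile.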
GPTZero: "Students, I'm sorry! Professors, you're welcome!"
According to The Guardian, OpenAI is currently developing a way to "watermark" ChatGPT's output so that readers can discover hidden patterns in the AI's word choices. In a talk at the University of Texas, OpenAI visiting researcher Scott Aaronson said the company is working on a system to combat cheating by "statistically watermarking the output." The technology would work by subtly adjusting ChatGPT's specific word choices in a way that is unnoticeable to readers but statistically detectable to anyone looking for signs of machine-generated text.
"We actually have a working prototype of the watermarking solution," Aaronson added. "It seems to perform well - as a rule of thumb, a few hundred words seems to be enough to get a signal: yes, this text comes from GPT."
Despite the concerns, applications built on ChatGPT are spreading rapidly. In many scenarios, people don't want to talk to a chatbot that cannot understand simple queries, and ChatGPT, which can converse about anything, can solve this problem. Toronto-based Ada has partnered with OpenAI to apply GPT-3.5, the large model behind ChatGPT, to customer service chatbots, completing 4.5 billion customer service interactions.
According to The Information, Microsoft has also signed an exclusive licensing agreement with OpenAI and plans to integrate the technology into the Bing search engine.
ChatGPT is getting ever better at imitating real people, and the battle to tell human from machine will continue.