This is outrageous!
Papers that students worked hard to write were fed into ChatGPT by their professor for "testing," and then judged to be plagiarized?
The professor failed half the class over it, and the school then withheld their diplomas?
Recently, just such a ridiculous thing happened at Texas A&M University.
To check whether students had cheated on their papers, a professor named Jared Mumm pasted their submissions into ChatGPT.
He told the students: I will copy and paste your papers into ChatGPT, and it will tell me whether it generated them.
"I will run everyone's last three assignments through it twice, at two separate times. If ChatGTP claims your work both times, I will give you a zero."
Clearly, Professor Mumm, who has no computer-science background, knows nothing about how ChatGPT works.
In fact, ChatGPT cannot recognize AI-generated content, not even text it wrote itself.
He couldn't even spell ChatGPT correctly, writing "Chat GPT" and "chat GPT" instead.
As a result, more than half of the class failed the course because ChatGPT irresponsibly "claimed" their papers.
Worse still, the school withheld the diplomas of most of the students who had already graduated.
Of course, Professor Mumm was not entirely merciless: he gave the whole class the opportunity to redo the assignment.
After receiving the email above, several students wrote to Mumm to protest their innocence, providing timestamped Google Docs as evidence that they had not used ChatGPT.
But Professor Mumm simply ignored these emails, leaving only this response in several students' grading software: "I don't grade AI-generated shit."
Still, there has been some redress: one student has reportedly been "acquitted" and received an apology from Mumm.
To complicate matters, however, two students came forward and admitted that they had indeed used ChatGPT this semester.
This suddenly made it much harder for the students who had not used ChatGPT to prove their innocence...
In response, Texas A&M University's College of Business said it is investigating the incident, but that no student has failed the class or been barred from graduating over the issue.
The school stated that Professor Mumm is talking to students one-on-one to determine whether, and to what extent, AI was used in their assignments. Individual students' diplomas will be withheld until the investigation is complete.
The students, meanwhile, said they have not received their diplomas.
Currently, the incident is still under investigation.
The question, then, is: can ChatGPT actually tell whether an article was written by itself?
To find out, we showed ChatGPT the content of the professor's email and asked for its view:
Right off the bat, ChatGPT said that it has no ability to verify the originality of a piece of content or whether it was generated by AI.
"This teacher seems to misunderstand how AI like me works. While AI can generate text based on prompts, it cannot determine whether another text was generated by AI ."
That said, this did not stop netizens who love a good stunt.
They came up with a scheme best described as "giving the professor a taste of his own medicine" to teach Professor Mumm a lesson.
First of all, when fed the professor's own email, ChatGPT claimed to have written it itself.
Immediately afterwards, netizens copied Professor Mumm's approach:
they took an excerpt that looked like it came from an academic paper and asked ChatGPT whether it had written it.
This time, although ChatGPT did not claim to have written the passage itself, it was "basically certain" that the content came from an AI.
Several characteristics, it said, were consistent with AI-generated content:
1. The text is coherent and follows a clear structure, moving from the general to the specific.
2. Sources and numerical data are cited accurately.
3. Terminology is used correctly, which is characteristic of a typical AI model; GPT-4, for example, is trained on a wide range of texts, including scientific literature.
So where did this content actually come from?
Here comes the interesting part: it turned out to be an excerpt from Professor Mumm's own doctoral dissertation!
Are AI detectors any more reliable? Since ChatGPT cannot check whether a piece of content was generated by AI, what can?
Dedicated "AI detectors," of course, were created precisely for this purpose, promising to fight magic with magic.
Among the many AI detectors, the best known is GPTZero, created by Edward Tian, a Chinese undergraduate at Princeton; it is free and claims strong results.
Just copy and paste in some text, and GPTZero will point out which paragraphs were generated by AI and which were written by humans.
In principle, GPTZero relies mainly on two indicators: "perplexity" (how predictable the text is to a language model) and "burstiness" (how much that perplexity varies from sentence to sentence).
In each test, GPTZero also highlights the sentence with the highest perplexity, that is, the sentence that reads most like human writing.
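GPTZero's actual implementation is not public, but the two metrics can be sketched with an open language model. The snippet below is a minimal illustration only: it assumes the Hugging Face transformers library and GPT-2 weights, and the perplexity and burstiness functions are our own stand-ins, not GPTZero's code.

```python
# Minimal sketch (not GPTZero's code): estimating "perplexity" and "burstiness" with GPT-2.
# Assumes the Hugging Face `transformers` library and `torch` are installed.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Perplexity under GPT-2: lower means the model finds the text more predictable."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # With labels equal to input_ids, the model returns the mean next-token cross-entropy.
        loss = model(**enc, labels=enc["input_ids"]).loss
    return torch.exp(loss).item()

def burstiness(sentences: list[str]) -> float:
    """Standard deviation of per-sentence perplexity; human writing tends to vary more."""
    scores = [perplexity(s) for s in sentences]
    mean = sum(scores) / len(scores)
    return (sum((s - mean) ** 2 for s in scores) / len(scores)) ** 0.5

sentences = [
    "The report summarizes quarterly revenue across all regions.",
    "Grandma's recipe calls for a pinch of stubbornness and two cups of luck.",
]
print(perplexity(" ".join(sentences)), burstiness(sentences))
```

A detector built along these lines would flag text whose perplexity is uniformly low and whose burstiness is small; deciding exactly where to draw those thresholds is the hard, error-prone part.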
But this method is far from completely reliable. GPTZero claims a low false-positive rate, yet in actual testing, someone fed the U.S. Constitution into GPTZero and it was judged to have been written by AI.
As for the ChatGPT reply quoted above, GPTZero judged that it was most likely written entirely by a human.
The consequence is that teachers who do not understand how these tools work, and who refuse to listen, will inadvertently wrong many students, Professor Mumm being a case in point.
So, if we encounter this situation, how should we prove our innocence?
Some netizens suggested repeating the "U.S. Constitution experiment": feed articles written before ChatGPT existed into an AI detector and see what comes out.
However, logically speaking, even if it can be proven that the AI detector is indeed unreliable, students cannot directly prove that their paper was not generated by AI.
We asked ChatGPT how to get out of this bind, and this is what it said.
"Let teachers understand the working methods and limitations of AI", well, ChatGPT discovered Huadian.
The only practical answer we can think of at the moment: either write the paper right under the professor's nose, record your screen every time you work on it, or simply livestream the whole process to the professor.
Even OpenAI's own official ChatGPT detector achieves a true-positive rate of only 26%.
The company also issued a statement to manage expectations: "We really don't recommend using this tool in isolation, because we know it can be wrong, as is true of using AI for any kind of assessment."
There are already plenty of detectors on the market: GPTZero, Turnitin, GPT-2 Output Detector, Writer AI, Content at Scale AI, and so on, but their accuracy is far from satisfactory.
So, why is it so difficult for us to detect whether a piece of content is generated by AI?
Eric Wang, Turnitin's vice president of AI, explained that software-based detection of AI writing rests on statistics: statistically, what distinguishes AI from humans is that AI text is extraordinarily, consistently average.
"A system like ChatGPT is like an advanced version of autocomplete, looking for the next most likely word to write. That's actually why it reads so naturally .AI writing is the most likely subset of human writing."
Turnitin's detector tries to "identify situations where the writing is too consistently average." The problem is that human writing can sometimes look average, too.
In economics, mathematics, and lab reports, students tend to follow a fixed writing style, which makes their work more likely to be mistaken for AI writing.
Even more interesting, a recent paper from a Stanford research team found that GPT detectors are far more likely to flag papers written by non-native English speakers as AI-generated. English essays written by Chinese speakers, for example, were judged to be AI-generated as much as 61% of the time.
Paper address: https://arxiv.org/pdf/2304.02819.pdf
The researchers took 91 TOEFL essays from a Chinese education forum and 88 essays by American eighth-grade students from a Hewlett Foundation dataset in the United States, and ran them through seven mainstream GPT detectors. The figures reported are "misjudgment" rates: essays clearly written by humans but classified as AI-generated. The American students' essays were misjudged at most 12% of the time, while the Chinese students' essays were misjudged more than half the time, in some cases as often as 76%.
The researchers concluded that because non-native speakers' writing tends to be less idiomatic and lower in perplexity, it is easily misjudged.
Clearly, judging whether an author is human or AI on the basis of perplexity alone is unreasonable.
Or, are there other reasons behind it?
On this point, NVIDIA scientist Jim Fan said that detectors will remain unreliable for a long time to come: AI will only grow more powerful and write in an increasingly human-like manner.
It’s safe to say that these little language model quirks will become less common over time.
I wonder if this will be good news or bad news for the students.