
A new detection tool that can identify scientific texts generated by AI is released, claiming an accuracy rate of over 99%

By 王林 (Wang Lin)
Published: 2023-06-10 15:06:03

IT House News, June 8: Earlier this year, Som Biswas, a radiologist at the University of Tennessee Health Science Center in the United States, attracted attention by publishing an article in the journal Radiology titled "ChatGPT and the Future of Medical Writing," written with the assistance of the AI chatbot ChatGPT. He disclosed that he had used and edited ChatGPT-generated text in an effort to raise awareness of the technology's utility, and said he went on to publish 16 journal articles with ChatGPT's help over the following four months. Some journal editors report receiving large numbers of articles written with ChatGPT.


To deal with this situation, Heather Desaire, a professor of chemistry at the University of Kansas, and her team developed a new AI detection tool that can efficiently and accurately distinguish whether a scientific text was written by a human or generated by ChatGPT. Their results were published in the journal Cell Reports Physical Science.

Professor Desaire said she and her team first analyzed 64 "Perspectives" articles from the journal Science, a commentary format that reviews and evaluates current research. They then analyzed 128 articles generated by ChatGPT on the same research topics. Comparing the two sets, they identified 20 features that can indicate whether the author of a scientific text is human.

They found that human scientists and ChatGPT differ markedly in paragraph complexity, sentence length, punctuation, and vocabulary. ChatGPT uses symbols such as parentheses, dashes, question marks, semicolons, and capital letters less often than human scientists do. Human scientists are also more likely to use hedging words such as "however," "although," and "but." And while ChatGPT's sentence lengths tend to be fairly uniform, human scientists mix short and long sentences in their writing.
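The article does not list the team's actual 20 features, but the categories it names (sentence-length variability, punctuation frequency, hedging words) can be sketched as simple stylometric measurements. The function and feature names below are illustrative assumptions, not the authors' implementation:

```python
import re
import statistics

def stylometric_features(text):
    """Compute a few example stylometric features of the kind the
    article describes (a sketch, not the paper's actual feature set)."""
    sentences = [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    words = text.split()
    n_words = max(len(words), 1)
    return {
        # Human writing reportedly mixes short and long sentences,
        # so sentence-length spread is a discriminating signal.
        "sentence_len_stdev": statistics.stdev(lengths) if len(lengths) > 1 else 0.0,
        "mean_sentence_len": statistics.mean(lengths) if lengths else 0.0,
        # Punctuation the article says humans use more often than ChatGPT.
        "punct_per_word": sum(text.count(c) for c in "()-?;") / n_words,
        # Hedging words the article says are more common in human prose.
        "hedges_per_word": sum(
            w.lower().strip(".,;") in {"however", "although", "but"}
            for w in words
        ) / n_words,
    }

human_like = ("Results were promising; however, caveats remain (sample size). "
              "But is that enough? Although short, this matters.")
features = stylometric_features(human_like)
print(features)
```

In the study, feature vectors like this one were reportedly fed to an off-the-shelf classifier (XGBoost) trained to separate the two authorship classes.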

Based on these 20 features, they trained their detection tool using XGBoost, an off-the-shelf machine learning algorithm. Testing it on 180 articles, they found it was very good at judging whether a scientific article had been written by a human or by ChatGPT. "This method has an accuracy of over 99%," Professor Desaire said, adding that it performs much better than existing tools, which are trained on a broad range of text types rather than specifically on scientific texts.

Professor Desaire said the tool can help journal editors cope with the large number of ChatGPT-written submissions by letting them prioritize which articles deserve review. She added that it could be adapted to other areas, such as screening student writing for plagiarism, as long as it is trained on the appropriate language: "Once you identify useful features, you can adapt it for any domain you want."

IT House notes that not everyone considers this AI detection tool very useful. Dr. Vitomir Kovanović of the Centre for Change and Complexity in Learning (C3L) at the University of South Australia said the comparison made by Professor Desaire's team was unrealistic, because it pitted only 100% AI-generated text against 100% human-written text, without accounting for collaboration between humans and AI. When scientists use ChatGPT, he said, there is usually some degree of human-machine collaboration, such as the scientist editing the AI-generated text; this is necessary because ChatGPT occasionally makes errors and can even generate false references. Because the researchers compared only the two extreme cases, their reported success rate was inflated.

Dr. Lingqiao Liu of the University of Adelaide’s Machine Learning Institute also believes that in the real world, the accuracy of such AI detection tools may be reduced, leading to more misclassifications. Dr. Liu, an expert in developing algorithms to detect AI-generated images, said: "Methodologically, this is fine, but there are certain risks in using it."

On the other hand, Dr. Liu pointed out that people could also instruct ChatGPT to write in a particular style, so that even text written entirely by AI would pass detection. Some commentators have even spoken of an "arms race" between those trying to make machines write more like humans and those trying to expose people using the technology for nefarious purposes.

Dr. Kovanović believes this competition is pointless given the technology's momentum and potential positive impact. Since AI detection has not yet reached a decisive level of reliability, he suggested, we should invest our energy in making better use of AI instead. He also opposes using anti-plagiarism software to check whether college students used AI in their writing, arguing that it puts unnecessary pressure on students.


Source: sohu.com