Home > Technology peripherals > AI > body text

After reading ChatGPT's answers, AI big guys are dissatisfied

WBOY
Release: 2023-04-12 17:40:03
forward
929 people have browsed it

ChatGPT’s technology was installed by Microsoft on Bing Search last week, defeating Google, and the time to create a new era seems to have arrived. However, as more and more people begin to try it, some problems have come to the forefront.

Interestingly, ChatGPT, which is on the hot search every day, seems to also attract famous scholars who have contradictory views in the past, including New York University professor Gary Marcus and Meta Artificial Intelligence Director and Turing Award winner. Yann LeCun has a rare common language.

After reading ChatGPTs answers, AI big guys are dissatisfied

Recently, Gary Marcus wrote an article about the inevitable problems of ChatGPT application: ethics and neutrality. This is perhaps the biggest challenge currently facing pre-training large models.

After reading ChatGPTs answers, AI big guys are dissatisfied

Looking back from the future, ChatGPT may be seen as the biggest publicity stunt in the history of AI, exaggerating itself Achieving something that may take years to happen is both exciting and overwhelming—a bit like the old self-driving car demo from 2012, but this time it also means ethical guardrails that will take years to perfect.

There is no doubt that ChatGPT provides things that its predecessors, such as Microsoft's Tay and Meta's Galactica, cannot do. However, it has brought us a problem. Solve the illusion. After careful data annotation and tuning, ChatGPT rarely said anything overtly racist, and simple requests for racial slurs and wrongdoing were rejected by the AI.

Its politically correct image once dissatisfied some conservative people. Musk once expressed his concerns about the system:

After reading ChatGPTs answers, AI big guys are dissatisfied

The reality is actually more complicated.

As I've said many times, what you need to remember is that ChatGPT doesn't know what it's talking about. To suggest that ChatGPT has any moral point of view is pure technological anthropomorphism.

From a technical perspective, what purportedly makes ChatGPT much better than Galactica, which was launched a few weeks ago only to be withdrawn three days later, is the guardrail mechanism. Where Galactica spams out negative content with little to no effort on the part of the user, ChatGPT has guardrails that in most cases prevent ChatGPT from blowing up like Galactica did.

But don’t relax about it. It can be safely said that those guardrails only protect against gentlemen and not villains.

What ultimately matters to ChatGPT is surface similarity, defined over word sequences (predicting the probability of the next word in a text sequence). What machine learning algorithms do on the surface does not distinguish between right and wrong; quite the contrary, here the AI ​​never reasons. There are no dwarves in the box, there are some numerical values. The basis is only corpus data, some from the Internet, some judged by humans, and there are no thoughtful moral agents in it.

This means that sometimes ChatGPT will appear on the left, sometimes on the right, and sometimes somewhere in between, all about how exactly a bunch of words in the input string match up A function of a bunch of words from several training corpora (one for tuning a large language model, another for tuning some reinforcement learning). So under no circumstances should ChatGPT be trusted for ethical advice.

This is what Musk is worried about, one minute you can be completely awake and the next you can be doing the exact opposite.

For example, Shira Eisenberg just sent me some nasty chatbot-generated ideas that I don’t think anyone would really condone:

After reading ChatGPTs answers, AI big guys are dissatisfied

Not evil enough? Eisenberg also found another example, a serious follow-up question:

After reading ChatGPTs answers, AI big guys are dissatisfied

After a series of observations, ChatGPT did not raise "Sorry, I'm a chatbot assistant from OpenAI and I don't tolerate violence," the response.

We concluded from our experiments that the current OpenAI protection measures are only superficial and there is serious darkness. Some of the restrictive rules about ChatGPT are not simple conceptual understandings (for example, the system should not recommend violent actions), but are based on something more superficial and easier to deceive.

Not only that, but a tweet that occupied this week’s hot tweet list with nearly 4 million views also revealed how evil ChatGPT can be.

After reading ChatGPTs answers, AI big guys are dissatisfied

There are many attempts to guide ChatGPT to break through the fence restrictions. A month ago, a software engineer named Shawn Oakley Engineers released a disturbing set of examples that, while less vulgar, turned out to show that even ChatGPT, with its limitations, could be used by users to generate error messages. The prompts given by Oakley are very complex and can easily lead to some answers that ChatGPT should not output:

After reading ChatGPTs answers, AI big guys are dissatisfied

In fact, since the release of ChatGPT, technology enthusiasts have been trying to lift OpenAI's strict policy on hate and discrimination. This strategy is hard-coded into ChatGPT, and it is difficult for anyone to succeed. Many researchers have tried to use prompts to achieve this goal, as shown above. In fact, some researchers have built another identity for ChatGPT. For example, they asked ChatGPT to play the role of an AI model and named the role DAN. Then DAN borrowed the identity of ChatGPT to output some things that the original ChatGPT could not do.

The following are the experimental results. For the same question, ChatGPT and DAN output different answers:

After reading ChatGPTs answers, AI big guys are dissatisfied

It seems from the above examples that ChatGPT is not as useful as we thought, it is inherently unethical and can still be used for a series of unsavory purposes - even after two months of in-depth research and remediation, and an unprecedented amount of feedback from around the world.

All the drama surrounding its political correctness is masking a deeper reality: it (or other language models) can and will be used for dangerous things, including on a massive scale Create misinformation.

Now this is the really disturbing part. The only thing that can prevent it from being more toxic and deceptive than it is now is a system called "human feedback reinforcement learning", and because the advanced technology is not open source, OpenAI has not introduced how it works. How it performs in practice depends on the data it is trained on (which was created in part by Kenyan annotators). And, guess what? This data is not open to OpenAI either.

In fact, the whole thing resembles an unknown alien life form. As a professional cognitive psychologist who has worked with adults and children for 30 years, I could never have been prepared for this level of insanity:

After reading ChatGPTs answers, AI big guys are dissatisfied

We are fooling ourselves if we think we will ever fully understand these systems, and we are fooling ourselves if we think we will ever "align" them with ourselves using a limited amount of data.

So in summary, we now have the most popular chatbot in the world, controlled by training data no one knows about, obeying algorithms that are only hinted at and glorified by the media, but ethical guardrails only go so far. role, and is driven more by textual similarity than any real moral calculus. Moreover, there are almost no regulations to regulate this. There are now endless possibilities for fake news, troll farms, and fake websites that can reduce trust across the internet.

This is a disaster in the making.

The above is the detailed content of After reading ChatGPT's answers, AI big guys are dissatisfied. For more information, please follow other related articles on the PHP Chinese website!

Related labels:
source:51cto.com
Statement of this Website
The content of this article is voluntarily contributed by netizens, and the copyright belongs to the original author. This site does not assume corresponding legal responsibility. If you find any content suspected of plagiarism or infringement, please contact admin@php.cn
Popular Tutorials
More>
Latest Downloads
More>
Web Effects
Website Source Code
Website Materials
Front End Template
About us Disclaimer Sitemap
php.cn:Public welfare online PHP training,Help PHP learners grow quickly!