
Check out ChatGPT!

Author | Peter Wayner

Planner | Yizhou

ChatGPT is still riding high, drawing praise from one celebrity after another: Bill Gates, Microsoft's Satya Nadella, Tesla's Elon Musk, and in China, Robin Li, Zhou Hongyi, and Zhang Chaoyang. Even Zheng Yuanjie, an author well outside the technology circle, has come to believe that "writers may be out of work in the future" because of ChatGPT's arrival. Sergey Brin, Google's retired co-founder, was alarmed enough to take notice, and former Meituan co-founder Wang Huiwen resurfaced as well, posting an open call to recruit AI talent and build a Chinese OpenAI.

Generative AI, exemplified by ChatGPT and DALL-E, writes text full of rich detail, ideas, and knowledge in a dazzling range of styles, tossing off gorgeous answers and artwork. The resulting artifacts are so diverse and distinctive that it is hard to believe they came from a machine.

So much so that some observers believe these new AIs have finally crossed the threshold of the Turing test; in the words of some, the threshold was not merely exceeded but blown to pieces. The AI art is so good that "another group of people are already on the verge of unemployment."

However, after more than a month of fermentation, the sense of wonder is fading and generative AI's halo is gradually dimming. Some observers have asked questions in exactly the right way and watched ChatGPT spit out something silly or outright wrong.

Others have deployed the old-fashioned logic bombs of elementary school art class, asking for a picture of the sun at night or a polar bear in a snowstorm. Still others have posed stranger questions that glaringly expose the limits of AI's contextual awareness.

This article summarizes the "ten sins" of generative AI. The accusations may read like sour grapes (I too envy AI's power; if the machines take over, I'll be out of a job, haha~), but they are intended as a reminder, not a smear.

1. Plagiarism is harder to detect

When generative AI models such as DALL-E and ChatGPT create something, they are really just making new patterns out of the hundreds of thousands of examples in their training sets. The result is a cut-and-paste synthesis drawn from various sources, and when humans do this, it is called plagiarism.

Of course, humans learn through imitation too, but in some cases the AI's "taking" and "borrowing" are blatant enough to enrage an elementary school teacher. Sometimes the AI-generated content consists of large blocks of text presented more or less verbatim; at other times there is enough blending and synthesis that even a panel of university professors would struggle to trace the source. Either way, what is missing is uniqueness. For all their gleam, these machines cannot produce anything truly new.
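Verbatim reuse, at least, can be flagged mechanically. Below is a minimal sketch in Python, with made-up example strings, of the kind of word n-gram overlap check a teacher or editor might run; note that heavy paraphrase defeats it, which is exactly why the blended cases are harder to catch.

```python
# Minimal n-gram overlap check: flags long word sequences that a
# generated text shares verbatim with a suspected source. The texts
# here are placeholders; a real check would compare against a large
# corpus, which this sketch does not attempt.

def ngrams(text: str, n: int = 6) -> set[tuple[str, ...]]:
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_ratio(generated: str, source: str, n: int = 6) -> float:
    gen = ngrams(generated, n)
    if not gen:
        return 0.0
    return len(gen & ngrams(source, n)) / len(gen)

source = "the quick brown fox jumps over the lazy dog near the river bank"
generated = "we saw that the quick brown fox jumps over the lazy dog today"

# A high ratio of shared 6-grams is a strong hint of verbatim reuse.
print(f"shared 6-gram ratio: {overlap_ratio(generated, source):.2f}")
```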

2. Copyright: when humans are replaced, lawsuits rise

While plagiarism is largely a school matter, copyright law governs the marketplace. When one person lifts from another's work, they can be hauled into court and fined millions of dollars. But what about AI? Do the same rules apply?

Copyright law is a complex subject, and the question of the legal identity of generative AI will take years to resolve. But one thing is not difficult to predict: when artificial intelligence is good enough to replace employees, those replaced will definitely use their "free time at home" to file lawsuits.

3. Humans serve as unpaid labor for models

Plagiarism and copyright are not the only legal issues generative AI raises. Lawyers are already framing new ethical questions for litigation. For example, should a company that makes a drawing program be allowed to collect data on its users' drawing behavior and then use that data for AI training? Should people be compensated for the creative labor being used? Much of AI's current success stems from access to data, so what happens when the public that generated that data wants a piece of the pie? What counts as fair? What counts as legal?

4. Information accumulation, not knowledge creation

AI is especially good at mimicking the kind of intelligence humans take years to develop. When a scholar can introduce an obscure 17th-century artist, or compose new music in an almost forgotten Renaissance tonal structure, we have every reason to marvel; we know that depth of knowledge takes years of study. When an AI does the same things after only a few months of training, the results can be astonishingly precise and correct, yet something is missing.

AI only appears to imitate the playful, unpredictable side of human creativity; it is similar in form but not in spirit, and cannot truly pull it off. Unpredictability, meanwhile, is what drives creative innovation. Industries like fashion and entertainment are not merely addicted to change; they are defined by it.

In fact, artificial and human intelligence each have their own areas of expertise. A well-trained machine can find the right old receipt in a digital shoebox of billions of records, and it can learn everything there is to know about a poet like Aphra Behn, the 17th-century writer often credited as the first British woman to make a living by her pen. It is even conceivable that machines could be built to decipher the meaning of Mayan hieroglyphics.

5. Intelligence is stagnant and difficult to grow

When it comes to intelligence, AI is essentially mechanical and rule-based. Once an AI has worked through its training data, it produces a model, and that model does not really change afterward. Some engineers and data scientists envision gradually retraining models over time so that the machines learn to adapt.

In most cases, though, the idea is to build a complex set of neurons that encode knowledge in a fixed form. This constancy has its place and may suit certain industries, but it is also a weakness: the danger is that the model's cognition stays frozen in the "era" of its training data.

What happens if we become so dependent on generative AI that we can no longer create new materials for training models?
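For a sense of what "gradual retraining" can look like in miniature, here is a sketch using scikit-learn's SGDClassifier, whose partial_fit method updates a model batch by batch instead of retraining from scratch. The data is synthetic and the setup is illustrative only; large generative models are not maintained this way.

```python
# Minimal incremental-learning sketch: the model is updated in place
# as new batches arrive, rather than being frozen after one training
# run. All data below is synthetic and drifts slightly each "month".
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier()
classes = np.array([0, 1])

for month in range(12):
    # Pretend each month delivers a fresh batch whose distribution
    # shifts a little, the way real-world data does.
    X = rng.normal(loc=month * 0.1, scale=1.0, size=(200, 5))
    y = (X.sum(axis=1) > month * 0.5).astype(int)
    model.partial_fit(X, y, classes=classes)  # update, don't restart

print("coefficients after a year of updates:", model.coef_.round(2))
```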

6. The gates to privacy and security are too loose

Training data for artificial intelligence has to come from somewhere, and we are not always sure what will surface from inside a neural network. What if an AI leaks personal information from its training data?

Worse, locking down an AI is much harder because it is designed to be flexible. A relational database can restrict access to the specific tables that hold personal information; an AI, by contrast, can be queried in dozens of different ways. Attackers will quickly learn how to ask the right questions, phrased the right way, to get the sensitive data they want.

Suppose, for example, that an attacker is after the location of an asset, and the AI has been told not to hand out latitude and longitude. A clever attacker might instead ask for the exact moment the sun will rise at that location a few weeks from now, and a conscientious AI will do its best to answer. Teaching an AI to protect private data is a genuinely hard problem.
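The sunrise trick works because solar timing encodes position: the Earth rotates through 15 degrees of longitude per hour, so the UTC time of local solar noon (the midpoint of sunrise and sunset) reveals longitude, and day length on a known date constrains latitude. A back-of-the-envelope sketch, with hypothetical times and ignoring the equation of time (roughly +/-15 minutes of error):

```python
# Why sunrise/sunset answers leak location: the Earth turns 15 degrees
# of longitude per hour, so the UTC time of local solar noon pins down
# longitude. Day length on a known date would constrain latitude too.
# Ignoring the equation of time (~+/-15 min), so only degree-accurate.

def solar_noon_utc(sunrise_utc: float, sunset_utc: float) -> float:
    # Solar noon is the midpoint between sunrise and sunset (hours UTC).
    return (sunrise_utc + sunset_utc) / 2

def longitude_from_noon(noon_utc: float) -> float:
    # At longitude 0, solar noon is ~12:00 UTC; each hour of offset is
    # 15 degrees (positive = east, negative = west).
    return (12.0 - noon_utc) * 15.0

# Hypothetical times coaxed out of a chatbot: sunrise 10:47 UTC,
# sunset 01:19 UTC the next day (written as 25.32 same-day hours).
noon = solar_noon_utc(10.78, 25.32)
print(f"estimated longitude: {longitude_from_noon(noon):.1f} degrees")  # ~ -90.8
```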

7. The uncharted territory of bias

Since the days of the mainframe, the technology community has used the phrase "garbage in, garbage out" (GIGO) to show the public the core of many computing problems. Many of AI's problems trace back to poor training data: if the data set is inaccurate or biased, the results will reflect it.

The hardware at the core of generative AI may be logic-driven in theory, but the humans who build and train the machines are not. Prejudice and partisan bias have been shown to make their way into AI models. Maybe someone used biased data to build the model. Maybe they added training material to keep the model from answering certain hot-button questions. Maybe they hardwired answers that then become difficult to detect.
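Here is a toy illustration of GIGO with entirely synthetic data, assuming scikit-learn: if historical labels were biased against one group, a model trained on them reproduces the bias even when the group attribute is withheld, because a correlated proxy feature carries the signal.

```python
# GIGO in miniature: biased labels in, biased predictions out.
# The protected attribute is never shown to the model, but a
# correlated proxy (think neighborhood or zip code) leaks it.
# Every number here is fabricated for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 5000
group = rng.integers(0, 2, size=n)           # protected attribute (hidden)
proxy = group + rng.normal(0, 0.3, size=n)   # feature correlated with group
skill = rng.normal(0, 1, size=n)             # the legitimate signal

# Biased historical labels: equally skilled members of group 1
# were approved far less often.
approved = (skill - 1.5 * group + rng.normal(0, 0.5, size=n)) > 0

X = np.column_stack([proxy, skill])          # note: 'group' is NOT a feature
model = LogisticRegression().fit(X, approved)

pred = model.predict(X)
for g in (0, 1):
    print(f"group {g}: predicted approval rate {pred[group == g].mean():.2f}")
```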

Artificial intelligence is indeed a good tool, but it also means that there are 10,000 ways for people with ulterior motives to make AI an excellent carrier of harmful beliefs.

Consider a real example from housing. AI systems used to screen prospective tenants relied on court records and other data sets, many of which carried their own biases, reflecting systemic racism, sexism, and ableism, and were notoriously error-prone. People who could clearly afford the rent were routinely turned away because the screening algorithms deemed them unqualified or unworthy. It is the answer we so often hear from a sales clerk: "that's what the big data/system/AI says."

[Image: ChatGPT's behavior after being offended]

8. Machine stupidity catches us off guard

It is easy to forgive AI models their mistakes because they do so much else well. The trouble is that many of the errors are hard to anticipate, because AI thinks differently from humans.

For example, many users of text-to-image tools have found that the AI makes simple mistakes with counting. Humans learn basic arithmetic in early elementary school and then apply the skill in countless ways. Ask a 10-year-old to draw an octopus, and the child will almost certainly make sure it has eight legs. Current versions of AI tend to bog down on abstract and contextual uses of mathematics.

This could easily change if model builders paid attention to the lapse, but other unknown errors would remain. Machine intelligence differs from human intelligence, which means machine stupidity differs too.

9. Machines can lie too, and easily deceive people

Sometimes, without realizing it, we humans fall into AI's traps. In our blind spots, we simply believe it. If an AI tells us Henry VIII was the king who killed his wives, we don't question it, because we don't know the history ourselves. We default to assuming the AI is correct, just as an audience watching a charismatic speaker defaults to believing that "the person on the stage knows more than I do."

The trickiest problem for users of generative AI is knowing when the AI has gone wrong. "Machines don't lie" is a common mantra, but it isn't quite true. Machines may not lie the way humans do, but the mistakes they make are more dangerous.

An AI can write paragraphs of perfectly accurate facts and then, with no one noticing the shift, slide into speculation or outright falsehood. It is practiced at mixing truth with fiction. The difference is that a used-car dealer or a poker player usually knows when they are lying; most people can tell where their own lies begin. AI cannot.

10. Infinite replicability: a worrying economic model

The infinite replicability of digital content has already strained the many economic models built around scarcity, and generative AI will break those models further. It will put some writers and artists out of work, and it upends many of the economic rules we all live by:

  • Will ad-supported content work when both ads and content can be endlessly remixed and reborn?
  • Will the free part of the internet turn into a world of "bots clicking on page ads", all generated by artificial intelligence and capable of infinite replication?
  • “Prosperity and abundance” so easily achieved may disrupt every corner of the economy.
  • If non-fungible tokens could be replicated forever, would people continue to pay for them?
  • If making art is this easy, will it still be respected? Will it still be special? Will anyone mind if it isn't?
  • Does everything lose its value when everything is taken for granted?
  • Is this what Shakespeare meant when he spoke of "slings and arrows of outrageous fortune"?

Let's not try to answer these questions ourselves; let generative AI take a crack at them. It may return an answer that is interesting, unique, and strange, and it will most likely tread the line of ambiguity: slightly mysterious, on the edge of right and wrong, neither fish nor fowl.

Original link: https://www.infoworld.com/article/3687211/10-reasons-to-worry-about-generative-ai.html

Source: 51cto.com