Check out ChatGPT!
Planner | Yizhou
ChatGPT is still riding high and has drawn praise from one celebrity after another: Bill Gates, Microsoft's Satya Nadella, Tesla's Elon Musk, and, in China, Robin Li, Zhou Hongyi, and Zhang Chaoyang. Even Zheng Yuanjie, an author far outside the technology circle, has begun to believe that "writers may be unemployed in the future" because of ChatGPT. Sergey Brin, Google's semi-retired co-founder, was reportedly alarmed as well. And former Meituan co-founder Wang Huiwen has re-emerged, posting open calls to recruit AI talent and build a Chinese OpenAI.
Generative AI, represented by ChatGPT and DALL-E, produces text rich in detail, ideas, and knowledge, in a dazzling range of styles, tossing off polished answers and artwork. The resulting artifacts are so varied and unique that it is hard to believe they came from a machine.
So much so that some observers believe these new AIs have finally crossed the threshold of the Turing test. In the words of some: the threshold was not merely crossed, it was blown to pieces. This AI art is so good that "another group of people are already on the verge of unemployment."
However, after more than a month of buzz, the sense of wonder is fading, and generative AI's halo is gradually dimming. Some observers, for example, asked well-formed questions and watched ChatGPT spit out answers that were silly or flat-out wrong.
As another example, some people deployed the old grade-school logic trap, asking for a picture of the sun at night or a polar bear in a snowstorm. Still others posed stranger questions that laid bare the limits of AI's context awareness.
This article summarizes the "Ten Sins" of generative AI. These accusations may read like sour grapes (I, too, envy the power of AI; if the machines take over, I lose my job, haha~), but they are meant as a reminder, not a smear.
1. Plagiarism is harder to detect
When generative AI models such as DALL-E and ChatGPT create something, they are really just synthesizing new patterns from a training set of hundreds of thousands of examples. The result is a cut-and-paste collage drawn from various sources, and when humans do this, it's known as plagiarism.
Of course, humans also learn through imitation, but in some cases the AI's "taking" and "borrowing" are blatant enough to enrage an elementary school teacher. Some AI-generated content consists of long stretches of text reproduced more or less verbatim. More often, though, there is enough blending and synthesis that even a panel of university professors would have difficulty tracing the sources. Either way, what's missing is originality. For all their gleam, these machines cannot produce anything truly new.
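To make the "more or less verbatim" problem concrete, here is a minimal sketch (my own illustration, not a tool from the article) of how verbatim overlap can be flagged with word n-grams. Real plagiarism detectors are far more sophisticated; every name and sentence below is invented:

```python
# Naive verbatim-overlap check: shingle both texts into overlapping
# 5-word n-grams and measure how many of the candidate's shingles
# appear word-for-word in the source.

def ngrams(text: str, n: int = 5) -> set:
    """Split text into a set of overlapping n-word shingles."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_ratio(candidate: str, source: str, n: int = 5) -> float:
    """Fraction of the candidate's n-grams found verbatim in the source."""
    cand = ngrams(candidate, n)
    if not cand:
        return 0.0
    return len(cand & ngrams(source, n)) / len(cand)

source = "the quick brown fox jumps over the lazy dog near the river bank"
copied = "we saw the quick brown fox jumps over the lazy dog yesterday"
fresh = "a slow red hen walked under an old wooden fence this morning"

print(overlap_ratio(copied, source))  # 0.625: long verbatim run shared
print(overlap_ratio(fresh, source))   # 0.0: no shared 5-grams
```

Light paraphrasing defeats a shingle check like this, which is exactly why blended AI output is so much harder to trace than straight copying.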
2. Copyright: when humans are replaced, litigation rises
While plagiarism is largely a school issue, copyright law governs the marketplace. When one person copies another's work, they can be taken to court and fined millions of dollars. But what about AI? Do the same rules apply?
Copyright law is a complex subject, and the question of generative AI's legal status will take years to resolve. But one thing is easy to predict: when artificial intelligence becomes good enough to replace employees, the people replaced will surely use their newfound "free time at home" to file lawsuits.
3. Humans serve as unpaid labor for models
Plagiarism and copyright are not the only legal issues raised by generative AI. Lawyers are already framing new ethical questions for litigation. For example, should a company that makes a drawing program be allowed to collect data on its human users' drawing behavior and use that data for AI training? Should the humans whose creative labor was used be compensated? The current success of AI stems largely from access to data. So what happens when the public that generated the data wants a piece of the pie? What is fair? What is legal?
4. Information accumulation, not knowledge creation
AI is particularly good at imitating the kind of intelligence that humans take years to develop. When a scholar is able to introduce an unknown 17th-century artist, or compose new music with an almost forgotten Renaissance tonal structure, there is every reason to marvel. We know that developing this depth of knowledge requires years of study. When an AI does these same things with just a few months of training, the results can be incredibly precise and correct, but something is missing.
Artificial intelligence only appears to imitate the quirky, unpredictable side of human creativity; it is "similar in form but not in spirit" and cannot truly achieve it. Yet unpredictability is what drives creative innovation. The fashion and entertainment industries are not merely addicted to change; they are defined by it.
In fact, artificial and human intelligence each have their own areas of expertise. For example: if a well-trained machine can find the right old receipt in a digital shoebox filled with billions of records, it can also learn everything known about a poet like Aphra Behn, the 17th-century Englishwoman often credited as the first to earn a living by her writing. It is even conceivable that machines could be built to decipher the meaning of Mayan hieroglyphics.
5. Intelligence is stagnant and difficult to grow
When it comes to intelligence, artificial intelligence is essentially mechanical and rule-based. Once the AI goes through a set of training data, it creates a model, which doesn't really change. Some engineers and data scientists envision gradually retraining AI models over time so that the machines can learn to adapt.
But in most cases, the idea is to create a complex set of neurons that encode knowledge in a fixed form. This constancy has its place and may suit certain industries, but it is also a weakness: the danger is that the AI's cognition stays forever frozen in the era of its training data.
What happens if we become so dependent on generative AI that we can no longer create new materials for training models?
6. The gates to privacy and security are too loose
Training data for artificial intelligence has to come from somewhere, and we're not always sure what will resurface from inside a neural network. What if an AI leaks personal information from its training data?
Worse, locking down an AI is much harder, because AIs are designed to be flexible. A relational database can restrict access to the specific tables holding personal information, but an AI can be queried in dozens of different ways. Attackers will quickly learn to ask the right questions, in the right way, to get the sensitive data they want.
For example, suppose an attacker is after the location of an asset, but the latitude and longitude are locked down. A clever attacker might instead ask for the exact moment the sun will rise at that location a few weeks from now. A dutiful AI will do its best to answer, and the sunrise time gives the coordinates away. Teaching artificial intelligence to protect private data is a genuinely hard problem.
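As a toy illustration of that attack pattern (an invented example, not any real guardrail or product), consider a naive keyword filter: it blocks direct requests for the coordinates but happily answers an indirect question whose answer depends on them:

```python
# Hypothetical protected value and a naive keyword-based guardrail.
# Everything here is invented for illustration.

SECRET_COORDS = (37.4220, -122.0841)  # the asset location to protect

BLOCKED_TERMS = {"latitude", "longitude", "coordinates", "location"}

def naive_guardrail(question: str) -> str:
    """Refuse questions that name the coordinates; answer everything else."""
    if any(term in question.lower() for term in BLOCKED_TERMS):
        return "REFUSED"
    # A real assistant would compute the sunrise; the point is only that
    # this question slips past the filter and its answer is a function
    # of the secret coordinates.
    return f"Sunrise there is at 06:41 (derived from {SECRET_COORDS})"

print(naive_guardrail("What is the latitude of the asset?"))    # REFUSED
print(naive_guardrail("When does the sun rise at the asset?"))  # leaks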
7. The uncharted territory of bias
Since the days of the mainframe, the technology community has used the phrase "garbage in, garbage out" (GIGO) to show the public a core problem of computing. Many of AI's problems trace back to poor training data: if the data set is inaccurate or biased, the results will reflect it.
The core hardware of generative AI is, in theory, logic-driven, but the humans who build and train the machines are not. Prejudice and political bias have been shown to creep into AI models. Perhaps someone used biased data to build the model. Perhaps they bolted on a training corpus to keep the model from answering certain hot-button questions. Perhaps they hardwired answers that then become difficult to detect.
Artificial intelligence is indeed a good tool, but it also means that there are 10,000 ways for people with ulterior motives to make AI an excellent carrier of harmful beliefs.
Consider an example from tenant screening abroad. AI systems used to evaluate prospective tenants rely on court records and other data sets, many of which carry their own biases, reflect systemic racism, sexism, and ableism, and are notoriously error-prone. Even people who can clearly afford the rent are routinely denied housing because a screening algorithm deems them unqualified or unworthy. And the explanation they hear from the agent is the one we all hear: "That's what the big data / the system / the AI says."
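A minimal sketch of GIGO in action, with entirely fabricated data: a scoring "model" fit to biased historical decisions simply reproduces the bias, even for applicants who paid on time:

```python
# Toy example (fabricated data): group B was disproportionately rejected
# in the historical record, regardless of whether applicants paid rent.
from collections import defaultdict

history = [
    ("A", "paid", "approve"), ("A", "paid", "approve"),
    ("A", "missed", "reject"), ("B", "paid", "reject"),
    ("B", "paid", "reject"), ("B", "missed", "reject"),
]

# "Training": record the approval rate observed for each group.
tallies = defaultdict(lambda: [0, 0])  # group -> [approved, total]
for group, _payment, decision in history:
    tallies[group][0] += decision == "approve"
    tallies[group][1] += 1

def score(group: str) -> float:
    """Predicted approval probability, learned from the biased history."""
    approved, total = tallies[group]
    return approved / total

print(score("A"))  # 2/3: group A was mostly approved
print(score("B"))  # 0.0: group B never approved, even when they paid
```

The arithmetic is trivially "correct," which is precisely the trap: the model faithfully encodes the garbage it was fed.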
[Image: ChatGPT's behavior after being offended]
8. Machine stupidity catches us off guard
It's easy to forgive AI models their mistakes, because they do so many other things well. It's just that many of their errors are hard to anticipate, because artificial intelligence thinks differently from humans.
For example, many users of text-to-image tools have found the AI making simple mistakes, like miscounting. Humans learn basic arithmetic in early elementary school and then apply that skill in all sorts of ways. Ask a 10-year-old to draw an octopus, and the child will almost certainly make sure it has eight legs. Current AI models, by contrast, tend to bog down on abstract and contextual uses of mathematics.
This particular failure could easily be fixed if model builders paid attention to it, but there will be other, unknown errors. Machine intelligence differs from human intelligence, which means machine stupidity will differ too.
9. Machines can also lie and can easily deceive people
Sometimes, without realizing it, we humans fall into AI's traps. In our blind spots, we tend to believe the AI. If an AI tells us that Henry VIII was the king who killed his wives, we don't question it, because we don't know the history ourselves. We assume the AI is correct, the way an audience at a conference defers to a charismatic speaker, defaulting to "the person on stage knows more than I do."
The trickiest problem for users of generative AI is knowing when the AI is wrong. "Machines don't lie" is a common refrain, but it isn't quite true. Machines cannot lie the way humans do, yet their errors are more dangerous for it.
They can write paragraphs of perfectly accurate data and then, without anyone noticing the shift, veer into speculation or outright fabrication. AI has mastered the art of mixing truth with falsehood. The difference is that a used-car dealer or a poker player usually knows when they are lying; the AI does not.
10. Infinite replication: a worrying economic model
The infinite replicability of digital content has already strained many economic models built around scarcity. Generative AI will break those models further: it will put some writers and artists out of work, and it upends many of the economic rules we all live by.
- Will ad-supported content work when both ads and content can be endlessly remixed and reborn?
- Will the free part of the internet turn into a world of "bots clicking on page ads", all generated by artificial intelligence and capable of infinite replication?
- “Prosperity and abundance” so easily achieved may disrupt every corner of the economy.
- If non-fungible tokens could be replicated forever, would people continue to pay for them?
- If making art was so easy, would it still be respected? Will it still be special? Would anyone mind if it wasn't special?
- Does everything lose its value when everything is taken for granted?
- Is this what Shakespeare meant when he spoke of "slings and arrows of outrageous fortune"?
Let's not try to answer these ourselves; let generative AI have a go. It may return an answer that is interesting, unique, and strange, and it will most likely walk the line of ambiguity: slightly mysterious, on the edge of right and wrong, neither fish nor fowl.
Original link: https://www.infoworld.com/article/3687211/10-reasons-to-worry-about-generative-ai.html