10 Reasons Why Generative AI Is Worrying
Generative AI models like ChatGPT are so astounding that some now claim AI can be not only as good as humans, but often smarter. They conjure up wonderful works of art in dazzling styles. They can write texts full of details, ideas, and knowledge. The resulting artifacts are so diverse and seemingly so unique that it's hard to believe they came from a machine. We are only beginning to discover all that generative AI can do.
Some observers believe these new artificial intelligences have finally crossed the threshold of the Turing test. Others argue that the threshold has not really been passed, merely overhyped. Either way, the output is impressive enough that some workers already feel on the verge of unemployment.
Once people get used to it, though, the aura of generative AI will fade. It has become something of a fashion to ask questions in just the right way to make these intelligent machines say something stupid or wrong. Some use the old logic bombs popular in elementary school art class, such as asking for a picture of the sun at night or a polar bear in a snowstorm. Others make bizarre requests that expose the limits of AI's contextual awareness, also known as common sense. Those who pay attention can start to map the patterns by which generative AI fails.
This article proposes ten shortcomings or pitfalls of generative artificial intelligence. The list may read as a bit of sour grapes, coming from a writer who stands to lose his job if the machines are allowed to take over. Call me a little guy rooting for team human; I just hope humanity shows some heroism in its struggle with the machines. Still, shouldn't we all be a little worried?
1. Plagiarism
When generative AI models like DALL-E and ChatGPT were first created, they were really just making new patterns from the millions of examples in their training sets; the results are a cut-and-paste synthesis drawn from a variety of sources. When a human does this, it is called plagiarism.
Of course, humans learn through imitation too. In some cases, though, the borrowing is so obvious it would make an elementary school teacher uneasy. Some AI-generated content consists of large chunks of text reproduced more or less word for word. Other times there is enough blending or synthesis that even a panel of university professors would struggle to trace the sources. Either way, what's missing is uniqueness. For all their shine, these machines are not capable of producing truly new work.
2. Copyright
While plagiarism is largely a concern of schools, copyright law applies in the marketplace. When one person plagiarizes another's work, they risk being dragged into court and fined potentially millions of dollars. But what about AIs? Do the same rules apply to them?
Copyright law is a complex topic, and the legal status of generative AI will take years to resolve. But remember this: when AI starts producing work that looks good enough to put humans out of a job, some of those humans will surely use their new spare time to file lawsuits.
3. Unpaid labor
Plagiarism and copyright are not the only legal issues raised by generative AI. Lawyers are already dreaming up new ethical questions to litigate. For example, should a company that makes a drawing program be allowed to collect data on its human users' drawing behavior, then use that data to train an AI? Should humans be compensated for the use of that creative labor? The success of the current generation of AI stems largely from access to data. So what happens when the people who generated the data want a piece of the pie? What is fair? What counts as legal?
4. Information is not knowledge
AI is particularly good at mimicking the kind of intelligence that takes humans many years to develop. When a scholar profiles an obscure 17th-century artist or writes new music in the tonal structures of an almost forgotten Renaissance style, we have good reason to be impressed: we know it takes years of study to develop that depth of knowledge. When an AI does these same things after only a few months of training, the results can be dazzlingly accurate and correct, yet missing some key ingredient.
If a well-trained machine can find the right old receipt in a digital shoebox filled with billions of records, it can also learn everything there is to know about a poet like Aphra Behn. You might even believe machines could be built to decode the meaning of Mayan hieroglyphics. AI may appear to imitate the playful, unpredictable side of human creativity, but it can't really do that. Yet unpredictability is what drives creative innovation. An industry like fashion is not just obsessed with change but defined by it. Artificial intelligence has its place, but so does good old, hard-won human intelligence.
5. Intelligence Stagnates
When it comes to intelligence, AI is mechanical and rule-based by nature. Once an AI processes its training data, it creates a model, and that model doesn't really change. Some engineers and data scientists envision gradually retraining models over time so the machines can learn to adapt. But in most cases, the idea is to create a complex set of neurons that encodes knowledge in a fixed form. Constancy has its place and may suit certain industries. The danger with AI is that it remains forever stuck in the zeitgeist of its training data. What happens when we humans become so reliant on generative AI that we can no longer produce new material to train the models on?
6. Privacy and Security
AI training data has to come from somewhere, and we are not always sure what will turn up inside the neural network. What if an AI leaks personal information from its training data? Worse, locking down an AI is much harder, because AIs are designed to be so flexible. A relational database can restrict access to a specific table containing personal information, but an AI can be queried in dozens of different ways. Attackers will quickly learn how to ask the right questions, in the right way, to get the sensitive data they want. Suppose, for example, that the latitude and longitude of a certain asset are locked down. A clever attacker might ask for the exact time the sun will rise at that location in a few weeks. A dutiful AI will try to answer. We don't yet know how to teach AI to protect private data.
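The sunrise trick works because solar timing encodes coordinates. As a minimal sketch of the arithmetic (the function name and figures here are illustrative, not from the article): the Earth rotates 15 degrees of longitude per hour, so if an AI reveals even the UTC time of solar noon at a "protected" location, the longitude falls out directly.

```python
# Sketch: why a "harmless" solar-timing answer leaks coordinates.
# The Earth rotates 15 degrees of longitude per hour, so the UTC time
# of solar noon at a secret site pins down its longitude.

def longitude_from_solar_noon(solar_noon_utc_hours: float) -> float:
    """Solar noon later than 12:00 UTC means the site lies west of the
    prime meridian; each hour of delay equals 15 degrees of longitude."""
    return (12.0 - solar_noon_utc_hours) * 15.0

# Solar noon at 17:00 UTC puts the asset near 75 degrees west.
print(longitude_from_solar_noon(17.0))  # -75.0
```

Sunrise times are messier than solar noon because they also depend on latitude and date, but that only helps the attacker: sunrise answers for a few different dates constrain latitude as well.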
7. Unperceived bias
The earliest mainframe programmers coined the acronym GIGO, for "garbage in, garbage out," which shows they recognized this core computing problem from the very beginning. Many problems with AI stem from poor training data: if the data set is inaccurate or biased, the results are bound to reflect it.
The hardware at the heart of generative AI may be as logic-driven as Spock, but the humans who build and train the machines are not. Prejudice and favoritism have been shown to find their way into AI models. Maybe someone used biased data to build the model. Maybe they added overrides to stop the model from answering particular hot-button questions. Maybe they hard-coded answers that are then difficult to detect. Humanity has found many ways to ensure that artificial intelligence becomes an excellent vehicle for our harmful beliefs.
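GIGO needs no bug in the code to operate. As a toy sketch (the corpus and the skew are invented for illustration): a "model" that simply predicts the most frequent completion seen in training will faithfully reproduce whatever imbalance its data contains, even though every line of the program is logically correct.

```python
from collections import Counter

# Toy GIGO sketch: a "model" that predicts the completion it saw most
# often in training. The corpus below is invented and deliberately
# skewed 9-to-1; the code is correct, yet the output inherits the skew.
corpus = ["the engineer is he"] * 9 + ["the engineer is she"] * 1

def predict_completion(training_lines):
    """Return the most frequent final word across the training lines."""
    counts = Counter(line.split()[-1] for line in training_lines)
    return counts.most_common(1)[0][0]

print(predict_completion(corpus))  # he
```

Real models are vastly more complex, but the principle scales: the bias lives in the data distribution, not in any inspectable line of logic, which is what makes it so hard to perceive.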
8. The Stupidity of Machines
It's easy to forgive AI models their mistakes because they do so much else so well. But many of the errors are hard to anticipate, because AI thinks differently from humans. For example, many users of text-to-image tools have found that AI gets fairly simple things wrong, like counting. Humans learn basic arithmetic in elementary school and then apply the skill in endless ways. Ask a 10-year-old to sketch an octopus, and the child will almost certainly make sure it has eight legs. Current AIs tend to founder on abstract and contextual uses of mathematics. That could easily change if model builders devoted attention to this lapse, but there will be other lapses. Machine intelligence differs from human intelligence, which means machine stupidity will differ too.
9. Human Gullibility
Sometimes without realizing it, we humans tend to fill in AI's gaps. We supply missing information or plug in answers. If an AI tells us Henry VIII was the king who killed his wives, we don't question it, because we don't know that history ourselves. We simply assume in advance that the AI is right, just as we do when cheering for a charismatic star. If a statement sounds confident, the human mind tends to accept it as true and correct.
The trickiest problem for users of generative AI is knowing when the AI is wrong. Machines can't lie the way humans do, but that makes them more dangerous: they can produce paragraphs of perfectly accurate data, then veer into speculation or even outright slander, without anyone noticing the shift. A used-car dealer or a poker player usually knows when they are lying, and most have tells that give them away. AIs do not.
10. Infinite abundance
Digital content can be copied infinitely, which has already strained many economic models built around scarcity. Generative AI will strain those models even more. It will put some writers and artists out of work; it will also upend many of the economic rules we rely on to survive. Can ad-supported content still work when both the ads and the content can be endlessly remixed and regenerated? Will the free part of the internet shrink into a world of bots clicking on ads on web pages, all crafted and infinitely replicated by generative AI?
This easy abundance could disrupt every corner of the economy. Will people keep paying for non-fungible tokens if they can be copied forever? If making art is so easy, will it still be respected? Will it still be special? And if it isn't special, will anyone care? Does everything lose value when it is taken for granted? Is this what Shakespeare meant by "the slings and arrows of outrageous fortune"? Let's not try to answer that ourselves. Let's ask generative AI for the answer. It will be interesting, strange, and ultimately trapped mysteriously in some netherworld between right and wrong.
Source: www.cio.com