Table of Contents
Complaint details
A debate in the AI community

Complaint calls for GPT-4 to be banned: OpenAI meets none of the FTC's AI standards

Apr 07, 2023, 09:31 PM

A few days ago, Elon Musk, Yoshua Bengio, and others signed an open letter calling on all AI labs to immediately pause the training of AI models more powerful than GPT-4. Now, someone wants to halt the already-released GPT-4.

This time it is the non-profit Center for Artificial Intelligence and Digital Policy (CAIDP) taking aim at GPT-4. CAIDP has asked the U.S. Federal Trade Commission (FTC) to investigate OpenAI and to bar the company from further releases of GPT-4.

Complaint (PDF): https://cdn.arstechnica.net/wp-content/uploads/2023/03/CAIDP-FTC-Complaint-OpenAI-GPT-033023.pdf

CAIDP filed the complaint with the FTC because it believes that "the consumer product GPT-4 released by OpenAI is biased, deceptive, and poses a risk to privacy and public safety. The model's outputs cannot be proven or reproduced, and no independent assessment was conducted prior to deployment."

CAIDP calls for independent oversight and evaluation of all commercial artificial intelligence products in the United States, and for ensuring that the necessary "safeguards" are in place to protect consumers, businesses, and the commercial market.

The FTC has previously stated that the use of artificial intelligence should be "transparent, explainable, fair, and empirically sound, while fostering accountability." CAIDP contends that "OpenAI's GPT-4 meets none of these requirements."

It has only been two weeks since GPT-4's release, and people are already deeply divided over this kind of powerful AI model. On one side, those who want to stop models like GPT-4 believe they pose growing risks to information security and even to human society; on the other, some believe this is a prime moment for AI to flourish and that technological progress should be accelerated.

Interestingly, OpenAI CEO Sam Altman posted a new tweet, "Stay calm in the center of the storm," which may be his response to the recent calls to suspend research on GPT-4 and similar models.

On the question of risk, OpenAI said at release time that it had asked outside experts to assess the potential risks posed by GPT-4. Nevertheless, CAIDP spelled out in its FTC filing which rules it believes GPT-4 violates.

CAIDP argues that GPT-4 seriously undermines commercial fairness, stating: "The commercial release of GPT-4 violates Section 5 of the FTC Act, the FTC's established guidance on the use and advertising of AI products, and emerging norms for AI governance."

In addition, OpenAI did not disclose any technical details of GPT-4, which is another reason CAIDP filed its complaint with the FTC.

CAIDP said: "OpenAI disclosed no details about GPT-4's architecture, model size, hardware, computing resources, training techniques, dataset construction, or training methods. It is common practice in the research community to document the training data and training techniques of large language models, but OpenAI chose not to do so for GPT-4. Generative AI models in particular are not ordinary consumer products: they may exhibit unexpected behaviors in use that the releasing company has not yet discovered."

Complaint details

Specifically, CAIDP's complaint points out a range of potential risks in GPT-4 and the related model ChatGPT.

For example, OpenAI acknowledged in the "GPT-4 System Card" that GPT-4 may reinforce and reproduce specific biases and worldviews, including harmful stereotypes and demeaning associations for certain marginalized groups. CAIDP also cited an OpenAI blog post stating that the similar large model ChatGPT sometimes responds to harmful instructions or exhibits biased behavior.

In the complaint submitted to the FTC, CAIDP states that "OpenAI released GPT-4 to the public for commercial use despite fully understanding the risks." The complaint also alleges that the GPT-4 System Card provides no details about the safety checks OpenAI conducted during testing, nor about any steps OpenAI has taken to protect children, which raises concerns about GPT-4's use by children.

CAIDP also pointed to concerns raised by the European consumer organization BEUC: if ChatGPT is used for consumer credit or insurance scoring, could it produce unfair and biased outcomes? That tweet was likewise cited in CAIDP's complaint.

In addition, on the cybersecurity front, ChatGPT can be used for phishing, creating fake text, or generating malicious code. On the privacy front, CAIDP noted reports this month that OpenAI exposed users' private chats to other users.

In another case, an AI researcher described how ChatGPT could be used to take over someone's account, view their chat history, and access their billing information without their knowledge. OpenAI has since fixed the vulnerability.

CAIDP also noted that GPT-4 can produce text responses from image inputs, a capability with major implications for personal privacy and personal autonomy, since it allows users to link personal images to detailed personal data. OpenAI is understood to have suspended the image-to-text feature, though its actual status is hard to confirm.

CAIDP believes the FTC should prohibit OpenAI from further commercial deployment of GPT models, require independent evaluation of GPT products before deployment and throughout the GPT AI life cycle, require OpenAI to comply with the FTC's AI guidance, and establish a publicly accessible incident-reporting mechanism for GPT-4, similar to the FTC's mechanism for reporting consumer fraud.

CAIDP also urged the FTC to publish further standards to serve as "baseline standards for products in the generative AI market."

A debate in the AI community

Over the past two days, thousands of people have signed a petition to pause the development of large AI models beyond GPT-4. Now CAIDP has asked the FTC to investigate OpenAI and bar it from releasing further commercial versions of GPT-4. In just a day or two, discussion has exploded, with prominent AI figures and experts responding publicly, some opposed and some in favor.

On pausing the development of large AI models beyond GPT-4, Turing Award winner Yoshua Bengio, Tesla CEO (and OpenAI co-founder) Elon Musk, New York University professor emeritus Gary Marcus, and UC Berkeley professor Stuart Russell are all in favor; they have signed the open letter calling for a pause on giant AI experiments. Notably, Marc Rotenberg, president and founder of CAIDP, also signed the open letter.

Open letter: https://futureoflife.org/open-letter/pause-giant-ai-experiments/

However, Yann LeCun, who has long been critical of ChatGPT, publicly stated that he would not sign the open letter and disagrees with its content.

Thomas G. Dietterich, professor emeritus at Oregon State University, said on Twitter: "I didn't sign it either. The letter is filled with scary rhetoric and ineffective or non-existent policy prescriptions. There are important technical and policy issues, and people are working on them." LeCun publicly replied, "I agree."

Andrew Ng also publicly opposed the petition, saying: "GPT-4 has many new applications in education, health care, food, and other areas, and will help many people. Unless governments step in, a pause that prevents all teams from scaling LLMs is unenforceable and unrealistic. Moreover, asking governments to pause emerging technologies they don't understand is anti-competitive, sets a terrible precedent, and is awful policy."

Yuandong Tian later echoed Andrew Ng's view, saying he would not sign a moratorium, and that once this kind of thing starts there is no way to stop or reverse the trend; it is an inevitability of evolution. Instead, we should look ahead from a different perspective, understand LLMs better, adapt to them, and harness their power.

Yi Tay, who recently announced his departure from Google Brain, where he was a senior researcher, said: "If people who discuss LLMs randomly on the internet are banned for 6 months, I will sign."
