What you need to know about general-purpose artificial intelligence
Recently, there has been increasing discussion of generative artificial intelligence tools, especially following the release of several large language models and image generators (such as DALL-E and Midjourney).
These releases have once again put general-purpose AI (GPAI) under the spotlight and revived open questions such as whether GPAI should be regulated.
Before exploring those questions further, it helps to understand what GPAI means, when the term was introduced, and how it is defined.
What is general-purpose artificial intelligence?
Two years ago, in April 2021, the European Commission published its proposal for the AI Act. The original proposal exempted providers of general-purpose AI from complying with a number of its obligations and liability standards.
The reasoning was that those obligations apply only to high-risk AI systems, which the proposal identifies and explains on the basis of their intended purpose and context of use.
Article 28 reinforces this position: it provides that GPAI developers become responsible under the regulation only when they substantially modify or adapt an AI system for a high-risk use.
But according to recent reports, the European Parliament is also considering certain obligations for the original providers of general-purpose AI.
The basic purpose of the EU AI Act is to classify and categorize the various actors in the chain involved in developing and deploying systems that use artificial intelligence.
Here are five considerations to guide the regulation of general-purpose AI
The AI Act's approach to general-purpose AI will set the regulatory tone for addressing GPAI's harms. With the recent surge of public interest in generative AI, there is a risk that regulatory positions become overfitted to today's issues.
Newer products like ChatGPT, DALL-E 2, and Bard are not the whole problem; they are just the tip of the iceberg.
General-purpose AI is a vast category
The first thing to understand is that general-purpose AI is a vast category, so it is logical to apply the term to a wide range of technologies rather than limiting it to chatbots and LLMs.
To ensure that the EU AI Act is future-proof, it must operate at this larger scale. First, a proper definition of GPAI should cover the many techniques ("tasks") that can serve as the basis for other AI systems.
The Council of the EU defines it as:
"An AI system that is intended by the provider to perform generally applicable functions such as image and speech recognition, audio and video generation, pattern detection, question answering, translation and others; a general-purpose AI system may be used in a plurality of contexts and be integrated into a plurality of other AI systems."
General-purpose AI can cause a wide range of harms
Although these risks cannot be fully resolved at the application layer, we cannot deny that they affect a broad range of applications and actors. A general approach to AI regulation should take into account the current state of the technology, its applications, and how it works.
For example, general-purpose models risk generating anti-democratic discourse, such as hate speech targeting sexual, racial, and religious minorities. The danger is that these models entrench the narrow or distorted perspectives present in their underlying training data.
General-purpose AI should be governed throughout the product lifecycle
For the governance of general-purpose AI to account for the diversity of stakeholders, it must cover the entire product lifecycle, not just the application layer. The earliest stages of development are critical: the companies building these models must take responsibility for the data they use and the architectural decisions they make. That includes how data is collected, cleaned, and annotated, and how models are built, tested, and evaluated. The current structure of the AI supply chain effectively allows actors to profit from distant downstream applications while avoiding any corresponding liability, because there is no oversight at the development layer.
A standard legal disclaimer is not enough
Developers of general-purpose AI should not be able to exclude their liability with a boilerplate legal disclaimer. That approach creates a dangerous loophole: it releases the original developer from all responsibility and shifts it onto downstream actors who lack the means to manage every risk. The Council's general approach does contain such an exception, which would let GPAI developers absolve themselves of liability as long as they exclude all high-risk uses in their instructions and ensure the system cannot be misused.
Engaging non-industry actors, civil society, and researchers in broader consultation
Documenting a fundamental, unified set of practices for evaluating general-purpose AI models, and generative models in particular, across a variety of harms is an ongoing area of research. To avoid superficial box-ticking exercises, regulation should discourage narrow assessment approaches.
General-purpose AI systems must undergo meticulous scrutiny, verification, and auditing before they are deployed or made available to the public. Recent proposals to bring general-purpose models within the scope of the AI Act either defer the development of specific standards to the future (to be decided by the Commission) or attempt to set them out in the wording of the Act itself.
For example, the distribution of potential impacts may differ depending on whether a model is built and scrutinized by the broader community or by a small, closed group.
The EU AI Act is poised to become the first comprehensive AI law, and one day it may serve as a unified standard across countries. That is why it is crucial to get this regulation right and turn it into a global template that everyone can follow.
The above is the detailed content of What you need to know about general-purpose artificial intelligence. For more information, please follow other related articles on the PHP Chinese website!
