
IEEE Explainable AI Architecture Standard P2894 Officially Released

王林
Release: 2024-04-10 13:25:15

Explainable AI (XAI) is an emerging branch of artificial intelligence that analyzes the logic behind each decision an AI system makes, and it is one of the core concerns for the sustainable development of AI. With the advent of the large-model era, models are becoming increasingly complex, and attention to interpretability is of great significance for improving the transparency, security, and reliability of AI systems.

Explainable AI international standard IEEE P2894 released, opening the AI "black box"

Recently, the IEEE Standards Association officially released P2894 (Guide for an Architectural Framework for Explainable Artificial Intelligence), a standard on explainable AI architecture. IEEE is the world's largest non-profit professional and technical society; it is recognized as an authority in academia and international standardization and has formulated more than 900 current industrial standards.

Standard original text link: https://www.php.cn/link/b252e54edce965ac4408effd7ce41fb7

The explainable AI architecture standard released this time provides the industry with a technical blueprint for building, deploying, and managing machine learning models while meeting the requirements for transparent and trustworthy AI through the adoption of various explainable AI methods. The standard defines the architectural framework and application guidelines for explainable AI, including the description and definition of explainable AI, the classification of explainable AI methods and the application scenarios each category is suited to, and methods for evaluating the accuracy, privacy, and security of explainable AI systems.
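The standard text itself is prose rather than code, but to make the notion of a "classified explainable AI method" concrete, the following is a minimal sketch of one widely used post-hoc technique, permutation feature importance. It is only an illustration of one method family, not anything prescribed by IEEE P2894; the model, metric, and data names are hypothetical placeholders.

```python
import numpy as np

def permutation_importance(model, X, y, metric, n_repeats=5, seed=0):
    """Post-hoc explanation: how much does shuffling each feature hurt the model's score?"""
    rng = np.random.default_rng(seed)
    baseline = metric(y, model.predict(X))          # score on unshuffled data
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            rng.shuffle(X_perm[:, j])               # break the link between feature j and the labels
            drops.append(baseline - metric(y, model.predict(X_perm)))
        importances[j] = np.mean(drops)             # larger score drop => more important feature
    return importances

# Hypothetical usage: any fitted model exposing .predict(X), e.g. a classifier `clf`
# accuracy = lambda y_true, y_pred: np.mean(y_true == y_pred)
# scores = permutation_importance(clf, X_test, y_test, accuracy)
```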

As early as June 2020, more than 20 companies and institutions, including WeBank, Huawei, JD.com, Baidu, Yitu, Hisense, the CETC Big Data Research Institute, the Institute of Computing Technology of the Chinese Academy of Sciences, China Telecom, China Mobile, China Unicom, the Shanghai Computer Software Technology Development Center, ENN Group, China Asset Management, and Sinovation Ventures, drew on business scenarios in finance, retail, smart cities, and other fields to deepen their understanding of AI security norms and explainability. Together with the IEEE Standards Association, they established the explainable AI working group and organized its first meeting that same month. Dr. Fan Lixin, chief scientist of artificial intelligence at WeBank, serves as chairman of the working group, and Dr. Chen Yixin, a professor at the University of Washington in the United States, serves as vice chairman. The working group has since held multiple meetings, and the final standard was officially released by the IEEE Standards Association in February 2024.

Dr. Fan Lixin, chairman of the standards working group, said: "Explainability is an important issue that cannot be ignored at the current stage of AI development, but the relevant industry standards and normative documents are still imperfect. The formulation of this standard has absorbed cutting-edge practical experience from leading companies and research institutions in fields such as finance, communications, retail, and the Internet, and it is believed that it will provide a valuable reference for the wider implementation of AI."

Standards related to trusted federated learning and trustworthy AI will be released one after another, focusing on AI data security and privacy protection

"Data Dr. Fan Lixin introduced that the explainable AI system architecture standard released this time is also "trusted federated learning" An important milestone in the research and implementation of the new paradigm. "Trusted federated learning" is a distributed machine learning paradigm that can meet the needs of users and supervision. In this paradigm, privacy protection, model performance, and algorithm efficiency are the core triangular cornerstones. Together with the two pillars of model decision-making interpretability and model supervisability, they form a more secure and trustworthy federated learning. 》This is an article introducing the new paradigm of "Trusted Federated Learning". In this paradigm, privacy protection, model performance, and algorithm efficiency are the core triangular cornerstones. Together with the two pillars of model decision-making interpretability and model supervisability, they form a more secure and trustworthy federated learning. This paradigm can meet the needs of all aspects and is a new distributed machine learning method. This article introduces the importance and components of this paradigm.

Trusted federated learning plays a key role in promoting the safe circulation of data elements. The "Data Elements" Three-Year Action Plan (2024-2026) issued by the National Data Administration proposes to "create a safe and trusted circulation environment, deepen the application of technologies such as privacy computing and federated learning, enhance the credibility, controllability, and measurability of data utilization, and promote the compliant and efficient circulation and use of data." As a compliant data circulation approach built on privacy computing, federated learning, and related technologies, trusted federated learning can enhance the credibility, controllability, and measurability of data utilization, promote the compliant and efficient circulation and use of data, and thereby maximize the value of data.

As industry and academia pay increasing attention to federated learning and trustworthy artificial intelligence, multiple trusted federated learning and trustworthy AI standards approved by the IEEE Standards Association will also be released one after another. Among them, the draft of IEEE P2986 (Recommended Practice for Privacy and Security for Federated Machine Learning), a standard on the privacy and security architecture of federated learning, has been completed and is expected to be officially released soon. This standard is the first in the industry to propose assessment methods for the privacy risk level and security risk level of federated learning. Specifically, it covers common faults and countermeasures in federated machine learning, privacy and security requirements for federated machine learning, and privacy and security assessment guidelines for federated machine learning.

In addition, building on IEEE P2986, the trusted federated learning standard IEEE P3187 (Guide for Framework for Trustworthy Federated Machine Learning), which focuses more on the trustworthiness, explainability, optimization, and supervisability of federated learning, has also completed its initial review. The standard proposes the framework and characteristics of trusted federated learning, sets specific constraints on how these characteristics are realized, and introduces solutions for implementing trusted federated learning.

Large models, AI agents, and federated learning: building trustworthy artificial intelligence in the era of large models

Recently, China Telecom and WeBank also jointly initiated the establishment of the IEEE P3427 (Standard for Federated Machine Learning of Semantic Information Agents) working group, covering a federated learning standard for semantic information agents. Topics the standard plans to address include the role definitions, incentive mechanisms, and semantic communication of different semantic agents in a semantic cognitive network based on federated machine learning; human-understandable representations of the semantic information held by semantic agents; and secure and efficient information interaction between semantic agents. The working group plans to launch standard development at the end of March 2024 and is currently recruiting experts from various industries to join, jointly improve the standard, and promote industry development.

The successive release of relevant industry standards will further promote cross-industry and cross-field technical cooperation and innovation, open the "black box" of AI, and promote the safe and efficient circulation of data elements. Highly accurate and highly interpretable artificial intelligence will help technology achieve widespread, responsible, and effective application for the benefit of humanity.

Source: jiqizhixin.com