
Three ways researchers can properly understand AI decision-making

王林
Release: 2023-04-12 10:43:11

Understanding how artificial intelligence makes decisions matters to researchers, policymakers, and the general public. The deep learning models used by cutting-edge AI companies and academic labs have become so complex that even the researchers who build them have difficulty understanding the decisions those models make. Fortunately, there are ways to learn more.

This was most clearly illustrated in one well-known Go match, in which data scientists and professional Go players were often puzzled by the AI's in-game decisions because it played unconventional moves that were not considered the strongest.


To better understand the models they build, AI researchers have developed three main interpretation methods. All three are local explanation methods: each explains a single, specific decision rather than the behavior of the model as a whole, which would be challenging given the model's scale.


Feature Attribution

With feature attribution, the model indicates which parts of the input were important to a specific decision. For an X-ray, researchers can see a heat map of the individual pixels the model considered most important to its decision.

This kind of feature-attribution explanation makes it possible to check for spurious correlations, for example, whether the highlighted pixels fall on a watermark or on the actual tumor.
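As a concrete illustration, here is a minimal sketch of one common feature-attribution technique, occlusion: hide one region of the input image at a time and measure how much the model's score drops. The article does not name a specific attribution method, and the model_predict function and patch size below are assumptions made only for illustration.

# A minimal sketch of occlusion-based feature attribution.
# model_predict is a hypothetical function that returns the probability of the
# positive class (e.g. "tumor present") for an image given as a 2D numpy array.
import numpy as np

def occlusion_map(image, model_predict, patch=8):
    """Slide a gray patch over the image; the drop in the model's score when a
    region is hidden indicates how important that region was to the decision."""
    base_score = model_predict(image)
    heat = np.zeros_like(image, dtype=float)
    h, w = image.shape
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            occluded = image.copy()
            occluded[y:y+patch, x:x+patch] = image.mean()  # hide this region
            heat[y:y+patch, x:x+patch] = base_score - model_predict(occluded)
    return heat  # high values mark the pixels the model relied on most

Plotting the returned map over the X-ray shows at a glance whether the model is looking at the tumor or at something irrelevant, such as a watermark.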

Counterfactual explanation

When a model makes a decision, we may be left wondering why it decided one way rather than another. Because AI is deployed in high-stakes settings such as criminal justice, insurance, and mortgage lending, understanding why an application was rejected, and what would have to change, can help people get approved the next time they apply.

The benefit of the counterfactual approach is that it tells you exactly how the input would need to change to flip the decision, which has practical uses: for someone whose mortgage application was rejected, the explanation tells them what they would need to change to get the outcome they want.
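The sketch below shows the idea on a toy linear (logistic regression) lending model: find the smallest change to the applicant's features that moves them across the decision boundary. The weights, feature names, and applicant values are illustrative assumptions, not taken from the article.

# A minimal counterfactual-explanation sketch for a linear scoring model.
# All numbers below are made up for illustration.
import numpy as np

w = np.array([0.8, 0.5, -1.2])          # weights: income, savings, debt ratio
b = -1.0
features = ["income (10k)", "savings (10k)", "debt ratio"]
applicant = np.array([3.0, 1.0, 2.5])   # this applicant is rejected

score = w @ applicant + b               # negative score means rejection
# Smallest (L2) change that moves the applicant onto the decision boundary,
# scaled slightly past it so the new point is clearly approved.
delta = -(score / (w @ w)) * w * 1.05
counterfactual = applicant + delta

for name, old, new in zip(features, applicant, counterfactual):
    print(f"{name}: {old:.2f} -> {new:.2f}")
# The printed changes tell the applicant what would have flipped the decision.

Real counterfactual methods add constraints, for example only changing features the applicant can actually act on, but the core question is the same: what is the smallest realistic change that flips the outcome?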

Sample Importance

Sample-importance interpretation requires access to the data the model was trained on. If researchers notice what they believe is an error, they can run a sample-importance analysis to see which training examples most influenced that decision and whether the model was fed flawed data that led it astray.
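One simple way to estimate sample importance is leave-one-out retraining: drop each training example in turn, retrain, and see how much the prediction being examined changes. The sketch below uses a small scikit-learn model and synthetic data, all of which are assumptions for illustration; practical influence-function methods approximate this without retraining from scratch.

# A minimal leave-one-out sample-importance sketch on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 5))
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)
x_test = rng.normal(size=(1, 5))        # the single prediction we want to explain

full_model = LogisticRegression().fit(X_train, y_train)
base_prob = full_model.predict_proba(x_test)[0, 1]

influence = []
for i in range(len(X_train)):
    mask = np.arange(len(X_train)) != i            # drop one training sample
    m = LogisticRegression().fit(X_train[mask], y_train[mask])
    influence.append(base_prob - m.predict_proba(x_test)[0, 1])

# The training samples whose removal changes the prediction most are the ones
# the model leaned on; inspect those for labeling or data errors.
top = np.argsort(np.abs(influence))[::-1][:5]
print("most influential training samples:", top)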


Source: 51cto.com