
Problems with the interpretability of neural networks: revisiting the critique of NNs from thirty years ago


1 Explainable AI (XAI)

As deep neural networks (DNNs) are used to make decisions that closely affect people's interests, such as loan approvals, job applications, and court bail, or even life-or-death decisions such as a sudden stop on the highway, it is crucial to explain these decisions rather than merely produce a prediction score.

Research in explainable artificial intelligence (XAI) has recently focused on the concept of counterfactual examples. The idea is simple: first, create counterfactual examples that yield the desired output and feed them into the original network; then, read off the hidden-layer units to explain why the network produced some other output. More formally:

"The fraction p is returned because the variable V has the value (v1, v2, ...) associated with it. If V has the value (v′1, v ′2, ...), and all other variables remain unchanged, the score p' will be returned."

The following is a more specific example:

"You were refused a loan because your annual income was £30,000. If your income was £45,000 you would get a loan."

However, a recent paper by Browne and Swift [1] (hereafter B&W) showed that counterfactual examples are merely slightly more meaningful adversarial examples, generated by applying small, imperceptible perturbations to the input that lead the network to misclassify it with high confidence.
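As a rough illustration of the kind of perturbation meant here, the sketch below applies a fast-gradient-sign-style step to a toy logistic classifier. The random weights, the input, and the step size are made up for illustration; this is not code from B&W's paper.

```python
import numpy as np

# Sketch of an adversarial perturbation (FGSM-style) on a toy logistic classifier.
# Weights and input are random, illustrative values.

rng = np.random.default_rng(0)
w = rng.normal(size=100)          # toy "trained" weights
x = rng.normal(size=100)          # an input the model scores confidently

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

p_before = sigmoid(w @ x)

# The gradient of the logit w.x with respect to the input is just w, so a small
# step against its sign (in the direction that lowers the current class score)
# flips the prediction while changing each feature only slightly.
eps = 0.15
x_adv = x - eps * np.sign(w) * np.sign(p_before - 0.5)

p_after = sigmoid(w @ x_adv)
print(f"score before: {p_before:.3f}, after tiny perturbation: {p_after:.3f}")
print(f"max per-feature change: {np.max(np.abs(x_adv - x)):.3f}")
```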

Furthermore, counterfactual examples "explain" what some features would have to be in order to obtain the correct prediction, but they "do not open the black box"; that is, they do not explain how the algorithm actually works. The article goes on to argue that counterfactual examples do not provide a solution to interpretability, and that "there is no explanation without semantics".

In fact, the article even makes a stronger suggestion:

1) either we find a way to extract the semantics that are assumed to exist in the hidden layers of the network, or

2) we admit that we have failed.

Walid S. Saba himself is pessimistic about (1); in other words, he regretfully admits our failure. His reasons follow.

2 The "Ghost" of Fodor and Pylyshyn

Although the author fully agrees with B&W's view that "there is no explanation without semantics", he believes that the hope of interpreting the semantics of hidden-layer representations in deep neural networks, so as to produce satisfactory explanations for deep learning systems, cannot be realized, for reasons outlined more than thirty years ago by Fodor and Pylyshyn [2].

Walid S. Saba then argues: before explaining where the problem lies, we need to note that purely extensional models (such as neural networks) cannot model systematicity and compositionality, because they do not admit symbolic structures with a derivable syntax and a corresponding semantics.

Thus, the representations in neural networks are not really "symbols" that correspond to anything interpretable; rather, they are distributed, correlated, continuous numerical values that by themselves do not mean anything that can be explained conceptually.

In simpler terms, the subsymbolic representations in neural networks do not by themselves refer to anything that humans can conceptually understand (a single hidden unit cannot represent any object of metaphysical significance). Rather, it is a collection of hidden units that together typically represents some salient feature (e.g., a cat's whiskers).

But this is exactly why neural networks cannot achieve interpretability: the combination of several hidden features is not determinable. Once the combination has been performed (through some linear combination function), the individual units are lost (as we show below).

3 Interpretability is "reverse reasoning", and DNNs cannot do reverse reasoning

The author then discusses why Fodor and Pylyshyn reached the conclusion that NNs cannot model systematic (and therefore interpretable) inference [2].

In symbolic systems there are well-defined compositional semantic functions that compute the meaning of a compound expression from the meanings of its constituents. But this composition is reversible: one can always recover the (input) constituents that produced a given output, precisely because in a symbolic system one has access to a "syntactic structure" that records how the components were assembled. None of this is true in NNs. Once vectors (tensors) are combined in an NN, their decomposition cannot be determined (there are infinitely many ways a vector, or even a scalar, can be decomposed!).
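To make the contrast concrete, the sketch below restates this argument in Python (it is not code from either paper): a symbolic composition keeps a syntactic structure from which its constituents can always be read back, whereas a vector composition carries no record of the addends that produced it. The predicate "likes" and all the vectors are purely illustrative choices.

```python
import numpy as np

# Symbolic composition: the compound keeps its syntactic structure,
# so the constituents (and hence the meaning) can always be recovered.
compound = ("likes", "John", "classic rock")      # (predicate, arg1, arg2)
predicate, arg1, arg2 = compound                  # reverse the composition
assert (predicate, arg1, arg2) == ("likes", "John", "classic rock")

# Vector composition: once constituents are combined (here by addition),
# the result carries no record of which vectors produced it.
v_a = np.array([0.2, 0.7, 0.1])
v_b = np.array([0.5, 0.1, 0.4])
combined = v_a + v_b                              # ≈ [0.7, 0.8, 0.5]

# Infinitely many other pairs yield exactly the same combined vector:
other_a = np.array([0.1, 0.3, 0.3])
other_b = combined - other_a
assert np.allclose(other_a + other_b, combined)   # decomposition is not unique
```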

To illustrate why this is the core of the problem, let us consider B&W's proposal to extract semantics from DNNs in order to achieve interpretability. B&W's proposal is to follow guidelines of the following form:

The input image is labeled "building" because hidden neuron 41435, which normally activates for hubcaps, has an activation value of 0.32. If the activation value of hidden neuron 41435 were 0.87, the input image would be labeled "car".

To understand why this does not lead to interpretability, just note that requiring neuron 41435 to have an activation of 0.87 is not enough. For simplicity, assume that neuron 41435 has only two inputs, x1 and x2. What we have now is shown in Figure 1 below:

[Figure 1: a single hidden neuron with inputs x1 and x2, weights w1 and w2, and output z = f(w1·x1 + w2·x2)]

Now assume that our activation function f is the popular ReLU, and that it produces an output of z = 0.87. This means that for the values of x1, x2, w1 and w2 shown in the table below, an output of 0.87 is obtained.

[Table: multiple combinations of x1, x2, w1 and w2 can produce the value 0.87]

Looking at the table, it is easy to see that there are infinitely many linear combinations of x1, x2, w1 and w2 that produce an output of 0.87. The important point here is that composition in NNs is irreversible, so no meaningful semantics can be captured from any neuron or any collection of neurons (a short code sketch at the end of this section illustrates this numerically).

In keeping with B&W's slogan "no semantics, no explanation", we state that no explanation can ever be obtained from an NN. In short, there is no semantics without compositionality, there is no explanation without semantics, and DNNs cannot model compositionality. This can be formalized as follows:

1. Without semantics, there is no explanation [1]

2. Without reversible compositionality, there is no semantics [2]

3. Compositionality in DNNs is irreversible [2]

=> DNNs cannot be explained (no XAI)

End.

Incidentally, the fact that compositionality in DNNs is irreversible has consequences beyond the inability to produce interpretable predictions, especially in fields that require higher-level reasoning, such as natural language understanding (NLU).

In particular, such a system cannot explain how a child can learn to interpret an infinite number of sentences from only templates like (### ### ###), where "John", "the neighbor's girl", "the boy who always comes here wearing a T-shirt", etc. are all possible instantiations of one ###, and "classic rock", "fame", "Mary's grandmother", "running on the beach", etc. are all possible instantiations of another ###.

Because such systems have no "memory" and their composition cannot be reversed, in theory they would need countless examples to learn this simple structure. [Editor's note: this point was precisely Chomsky's challenge to structural linguistics, which gave rise to the transformational-generative grammar that has influenced linguistics for more than half a century.]
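As noted above, here is a minimal numeric sketch of the irreversibility point (an illustration of the argument with made-up values, not B&W's code or the original table): for a single ReLU unit z = max(0, w1·x1 + w2·x2), infinitely many combinations of inputs and weights yield exactly the same activation 0.87, so the activation alone determines none of them.

```python
def relu(z):
    return max(0.0, z)

def unit_output(x1, x2, w1, w2):
    """A single hidden unit: z = relu(w1*x1 + w2*x2)."""
    return relu(w1 * x1 + w2 * x2)

# Three of infinitely many (x1, x2, w1, w2) combinations with the same output.
combinations = [
    (1.0, 1.0, 0.50, 0.37),
    (2.0, 0.5, 0.30, 0.54),
    (0.1, 3.0, 1.20, 0.25),
]

for x1, x2, w1, w2 in combinations:
    print(f"x1={x1}, x2={x2}, w1={w1}, w2={w2} -> z={unit_output(x1, x2, w1, w2):.2f}")

# In fact, for any fixed x1, x2 and w1 we can solve for a w2 that gives 0.87,
# so the activation value 0.87 cannot be inverted to recover its causes.
x1, x2, w1 = 5.0, 2.0, -0.1
w2 = (0.87 - w1 * x1) / x2
assert abs(unit_output(x1, x2, w1, w2) - 0.87) < 1e-9
```

Every row prints z = 0.87, which is exactly the point: knowing the output of the unit tells us nothing about which inputs and weights produced it.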

Finally, the author emphasizes that more than thirty years ago Fodor and Pylyshyn [2] raised a criticism of NNs as a cognitive architecture: they showed why NNs cannot model systematicity, productivity, and compositionality, all of which are necessary for talking about anything "semantic". This is a compelling criticism that has never been adequately answered.

As the need to solve the problem of explainability in AI becomes critical, we must revisit that classic paper, because it shows the limits of equating statistical pattern recognition with progress in artificial intelligence.
