
Summary of the collaboration between the University of Chinese Academy of Sciences and Capital Normal University: Revealing how "white box" tensor networks can improve the interpretability and efficiency of quantum machine learning



Editor|Ziluo

Deep machine learning has achieved remarkable success in various fields of AI, but achieving high interpretability and high efficiency at the same time remains a serious challenge.

The tensor network (TN), a mature mathematical tool that originated in quantum mechanics, has demonstrated unique advantages in the development of efficient "white box" machine learning schemes.

Recently, Ran Shiju of Capital Normal University and Su Gang of the University of Chinese Academy of Sciences, drawing inspiration from quantum mechanics, reviewed an innovative TN-based approach that offers a promising solution to the long-standing challenge of reconciling interpretability and efficiency in deep machine learning.

On the one hand, the interpretability of TN ML can be grounded in a solid theoretical foundation based on quantum information and many-body physics. On the other hand, high efficiency can be achieved through the powerful expressive ability of TN and the advanced computational techniques developed in quantum many-body physics. With the rapid development of quantum computers, TN is expected to produce novel schemes that can run on quantum hardware in the near future, moving toward "quantum AI".

The review, titled "Tensor Networks for Interpretable and Efficient Quantum-Inspired Machine Learning," was published in Intelligent Computing on November 17, 2023.


Paper link: https://spj.science.org/doi/10.34133/icomputing.0061

Deep learning models, especially neural network (NN) models, are often called "black boxes" because their decision-making processes are complex and difficult to explain. Neural networks are currently the most powerful deep learning models; a prime example of their power is GPT. However, even GPT faces serious issues, such as robustness and privacy protection, due to its lack of explainability.

The lack of explainability may lead to a lack of trust in these models' predictions and decisions, thus limiting their practical application in important fields.

Tensor networks, grounded in quantum information and many-body physics, provide a "white box" approach to ML. The researchers said: "Tensor networks play a crucial role in connecting quantum concepts, theories and methods with ML and effectively implementing tensor network-based ML."

Powerful "white box" mathematical tools from quantum physics

With the rapid development of classical and quantum computing, TN provides new ideas for overcoming the dilemma between interpretability and efficiency. A TN is defined as the contraction of multiple tensors, and its network structure determines how the tensors are contracted.
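
To make the definition concrete, below is a minimal sketch (our own illustration, not code from the paper) that contracts a small matrix product state, the simplest TN, into the full tensor it represents; the tensor names, the dimensions, and the use of numpy.einsum are assumptions made for illustration only.

# Illustrative sketch: a 3-site MPS with physical dimension d and bond
# dimension D, contracted into the full 3-index tensor it represents.
# The network structure is encoded in the einsum subscripts: the shared
# (bond) indices a and b are summed over, i.e. "contracted".
import numpy as np

d, D = 2, 4                      # physical and bond dimensions (arbitrary choices)
A1 = np.random.rand(d, D)        # left boundary tensor
A2 = np.random.rand(D, d, D)     # bulk tensor
A3 = np.random.rand(D, d)        # right boundary tensor

psi = np.einsum('ia,ajb,bk->ijk', A1, A2, A3)
print(psi.shape)                 # (2, 2, 2): the full many-body tensor

In a genuine MPS calculation the full tensor is never formed; the point of the TN is precisely to avoid this exponentially large object.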

Figure 1 shows diagrammatic representations of three types of TN: the matrix product state (MPS), the tree TN, and the projected entangled pair state (PEPS).


Figure 1: Diagrammatic representations of three types of TN: (A) MPS, (B) tree TN, and (C) PEPS. (Source: paper)

TN has achieved remarkable success in the field of quantum mechanics as an efficient representation of the state of large-scale quantum systems. In TN theory, states satisfying the entanglement entropy area law can be efficiently approximated by a TN representation with finite bond dimensions.

MPS-based algorithms, including the density matrix renormalization group (DMRG) and time-evolving block decimation (TEBD), show significant efficiency in simulating states that obey the area law of entanglement entropy. In addition, MPS can also represent many artificially constructed states that are widely used in quantum information processing and computation, such as the Greenberger–Horne–Zeilinger (GHZ) state and the W state.
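
As a concrete illustration of the last point, the following sketch (our own example, not the paper's code) writes the N-qubit GHZ state as a bond-dimension-2 MPS and contracts it back to a dense state vector as a check; the tensor layout and boundary vectors are illustrative conventions.

# GHZ state (|00...0> + |11...1>)/sqrt(2) as an MPS with bond dimension 2.
import numpy as np

N = 5
# Bulk tensor A[left_bond, physical, right_bond]: it copies the physical
# index onto both bonds, so only the all-0 and all-1 configurations survive.
A = np.zeros((2, 2, 2))
A[0, 0, 0] = 1.0
A[1, 1, 1] = 1.0
left = np.array([1.0, 1.0]) / np.sqrt(2.0)   # left boundary vector
right = np.array([1.0, 1.0])                 # right boundary vector

# Contract the chain site by site.
psi = np.einsum('a,apb->pb', left, A)        # first site: one physical index
for _ in range(N - 1):
    psi = np.einsum('...a,apb->...pb', psi, A)
psi = np.einsum('...a,a->...', psi, right)

dense = psi.reshape(-1)
print(dense[0], dense[-1])                   # both are about 0.7071; all other entries are 0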

PEPS represents states that obey the area law in two and higher dimensions and has achieved great success in the study of higher-dimensional quantum systems. In summary, the area law of entanglement entropy provides an intrinsic explanation for the representational and computational power of TN in simulating quantum systems, and this explanation also applies to TN ML. Furthermore, TN as a "white box" numerical tool (the Born machine), analogous to the (classical) probabilistic models of ML, can be explained by Born's quantum probability interpretation (also known as the Born rule).
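
The Born-machine idea can be sketched in a few lines (an illustrative example under our own conventions, not the paper's implementation): an MPS plays the role of the wavefunction Psi, and the probability assigned to a discrete sample x is |Psi(x)|^2 up to normalization. The brute-force normalization below is only a check; in practice it is obtained efficiently by contracting the TN with its conjugate.

# Born machine sketch: p(x) = |Psi(x)|^2 / Z with Psi given by a random MPS.
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
N, d, D = 6, 2, 3
# MPS tensors A[i][bond_left, physical, bond_right]; boundary bonds have size 1.
mps = [rng.normal(size=(1 if i == 0 else D, d, 1 if i == N - 1 else D))
       for i in range(N)]

def amplitude(x):
    """Contract the MPS with the product basis state |x_1 x_2 ... x_N>."""
    m = mps[0][:, x[0], :]
    for i in range(1, N):
        m = m @ mps[i][:, x[i], :]
    return m[0, 0]

# Normalization Z = sum over all configurations of |Psi(x)|^2 (brute force).
Z = sum(amplitude(x) ** 2 for x in product(range(d), repeat=N))

x = (0, 1, 1, 0, 1, 0)
print("p(x) =", amplitude(x) ** 2 / Z)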


Figure 2: MPS (in tensor-train form) can efficiently represent or formulate a large number of mathematical objects. (Source: paper)

Technical advances in quantum-inspired machine learning

Thanks to its solid theory and effective methods, TN provides a new way to resolve the dilemma between interpretability and efficiency in machine learning. Currently, two intertwined lines of research are under discussion:

  1. How does quantum theory serve as a mathematical basis for TN ML interpretability?
  2. How do the TN methods of quantum mechanics and quantum computing technology produce efficient TN ML schemes?

In the article, the researchers survey recent encouraging progress in quantum-inspired ML from the perspectives of feature mapping, modeling, and ML on quantum computers. These advances are closely related to the advantages of TN in improving efficiency and interpretability. These ML approaches are often called "quantum inspired" because their theories, models, and methods originate from, or are inspired by, quantum physics. However, more effort is needed to develop a systematic framework for interpretability based on quantum physics.
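
As one example of the feature-mapping perspective, the sketch below shows a quantum-inspired feature map that is common in the TN ML literature (the specific cosine/sine map and the helper names are our illustrative assumptions, not necessarily the map emphasized in the review): each scalar feature is encoded as a normalized two-component vector, so a sample becomes a product state that a TN model can contract with directly.

# Quantum-inspired feature map: x in [0, 1] -> a normalized "qubit-like" 2-vector.
import numpy as np

def feature_map(x):
    """Encode a scalar feature as a normalized 2-component vector."""
    return np.array([np.cos(np.pi * x / 2.0), np.sin(np.pi * x / 2.0)])

def product_state(sample):
    """Encode a whole sample as a list of local vectors (a rank-1 product state).

    A TN model (e.g. an MPS classifier) contracts its tensors with these
    local vectors instead of forming the exponentially large tensor product.
    """
    return [feature_map(x) for x in sample]

sample = np.array([0.1, 0.8, 0.5])
print([v.round(3) for v in product_state(sample)])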

The following table summarizes the main TN ML methods and their relationship to efficiency and interpretability.


Tensor Networks that Strengthen Classical Machine Learning

As a basic mathematical tool, the application of TN in ML is not limited to schemes that follow the quantum probabilistic interpretation. Given that TN can be used to efficiently represent and simulate the partition functions of classical stochastic systems, such as the Ising and Potts models, the relationship between TN and Boltzmann machines has been extensively studied.
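
The partition-function connection can be made concrete with a toy example (our own sketch, not code from the paper): for a one-dimensional classical Ising chain, the Boltzmann weights form a chain of 2x2 transfer matrices, and contracting this simple TN reproduces the partition function obtained by brute-force enumeration.

# Partition function of an open 1D Ising chain as a tensor-network contraction.
import numpy as np
from itertools import product

N, J, beta = 8, 1.0, 0.7
spins = np.array([1.0, -1.0])

# Transfer matrix T[s, s'] = exp(beta * J * s * s') is one tensor of the network.
T = np.exp(beta * J * np.outer(spins, spins))

# TN contraction: boundary vectors of ones, with N - 1 transfer matrices in between.
Z_tn = np.ones(2) @ np.linalg.matrix_power(T, N - 1) @ np.ones(2)

# Brute force over all 2^N spin configurations, for comparison.
Z_bf = sum(np.exp(beta * J * sum(s[i] * s[i + 1] for i in range(N - 1)))
           for s in product(spins, repeat=N))

print(Z_tn, Z_bf)   # the two values agree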

TN is also used to enhance NNs and to develop novel ML models, independent of any probabilistic interpretation.

On the same basis, model compression methods have been proposed that decompose the variational parameters of an NN into a TN, or that directly express the variational parameters as a TN. The latter may not require an explicit decomposition process: the NN parameters are not recovered as full tensors but are represented directly in tensor-train (TT) form, as matrix product operators, or as deep TNs. Nonlinear activation functions have also been added to TNs to improve their ML performance, generalizing TNs from multilinear models to nonlinear models.
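
To illustrate the compression idea (a hedged sketch with assumed shapes and truncation rank, not the specific methods proposed in the works being reviewed): a dense weight matrix is reshaped into a higher-order tensor and split into two tensor-train-like cores by a truncated SVD, and the layer is then applied through the small cores rather than the full matrix.

# Compressing a 64x64 dense layer into two tensor-train-like cores via truncated SVD.
import numpy as np

rng = np.random.default_rng(1)
out_dims, in_dims = (8, 8), (8, 8)               # view the 64x64 layer as a 4-index tensor
W = rng.normal(size=(np.prod(out_dims), np.prod(in_dims)))

# Group the indices as (o1, i1) x (o2, i2) and factorize across that cut.
W4 = W.reshape(out_dims + in_dims)               # (o1, o2, i1, i2)
W4 = W4.transpose(0, 2, 1, 3)                    # (o1, i1, o2, i2)
mat = W4.reshape(out_dims[0] * in_dims[0], out_dims[1] * in_dims[1])

rank = 16                                        # truncation rank ("bond dimension")
U, S, Vt = np.linalg.svd(mat, full_matrices=False)
core1 = (U[:, :rank] * S[:rank]).reshape(out_dims[0], in_dims[0], rank)
core2 = Vt[:rank].reshape(rank, out_dims[1], in_dims[1])

def apply_layer(x):
    """Apply the compressed layer to an input vector of length 64."""
    x2 = x.reshape(in_dims)                                  # (i1, i2)
    y = np.einsum('oir,rpj,ij->op', core1, core2, x2)        # (o1, o2)
    return y.reshape(-1)

x = rng.normal(size=np.prod(in_dims))
print(np.linalg.norm(apply_layer(x) - W @ x) / np.linalg.norm(W @ x))  # relative truncation error

With a full tensor-train decomposition into many cores, the same idea scales to much larger layers without ever storing the dense weight matrix.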

Conclusion

The dilemma between efficiency and interpretability in artificial intelligence, and in deep machine learning in particular, has long been a concern. In this regard, the authors review the encouraging progress made with TN, an interpretable and efficient quantum-inspired machine learning approach.

The "N ML butterfly" in Figure 3 lists TN Advantages in ML. For quantum-inspired ML, the advantages of TN can be summarized in two key aspects: quantum theory for interpretability and quantum methods for improved efficiency. On the one hand, TN enables us to apply statistics and quantum theory (e.g., entanglement theory) to build probabilistic frameworks for interpretability that may go beyond what can be described by classical information or statistical theory. On the other hand, powerful quantum mechanical TN algorithms and greatly enhanced quantum computing technology will enable quantum-inspired TN ML methods to be highly efficient on both classical and quantum computing platforms.


Figure 3: "TN ML butterfly" summarizes 2 unique advantages: interpretability based on quantum theory (left wing) and efficiency based on quantum methods (right wing). (Source: paper)

In particular, the recent significant progress around GPT has brought an unprecedented surge in model complexity and computing power, which presents new opportunities and challenges for TN ML. In the face of emerging GPT-style AI, explainability is becoming increasingly valuable, not only for improving research efficiency but also for enabling better applications and safer control. In the current NISQ era and the coming era of genuine quantum computing, TN is rapidly growing into an important mathematical tool for exploring quantum artificial intelligence, from the perspectives of theory, models, algorithms, software, hardware, and applications.

Reference content:

https://techxplore.com/news/2023-11-tensor-networks-efficiency-quantum-inspired-machine.html
