
Quantum CNN has high test accuracy on data sets, but has limitations

Apr 14, 2023, 02:10 PM
deep learning quantum

In the 2022 Nobel Prizes announced on October 4, three scientists, Alain Aspect, John F. Clauser, and Anton Zeilinger, won the physics prize for their work on quantum entanglement, drawing outside attention to, and discussion of, the field of quantum research.

Among these areas, research investment in quantum computing in particular has increased significantly in recent years, and people have begun to explore using quantum methods to displace classical computing techniques in fields such as security and network communications.

Some researchers believe that the core of quantum computing lies in "solving classically hard problems with computationally cheaper techniques". With deep learning and quantum computing research developing in parallel in recent years, many researchers have also begun to pay attention to the intersection of the two fields: quantum deep learning.

Recently, Holly Emblem, Head of Insights at Xbox Game Studio Rare, surveyed the current state of quantum deep learning research and applications in a new article, "Quantum Deep Learning: A Quick Guide to Quantum Convolutional Neural Networks", focusing on the advantages and limitations of quantum convolutional neural networks (QCNNs) compared with classical computing methods.

1 The difference between classical computing and quantum computing

First, an important concept concerning the difference between classical and quantum computing. When a program is executed on a classical computer, a compiler converts the program statements into binary bits, and each bit represents either a 1 or a 0 at any given time. In quantum computing, by contrast, a qubit can "hover" between states: it collapses to one of its two basis states, 1 or 0, only when it is measured.

This property is called superposition, and it plays a crucial role in quantum computing tasks. Through superposition, quantum computers can perform tasks in parallel without requiring a fully parallel architecture or GPUs. The reason is that when each superposed state corresponds to a different value, an operation performed on the superposition is performed on all of those states at the same time.

Here is an example of a superposition of quantum states:

|ψ⟩ = a|0⟩ + b|1⟩, where |a|² + |b|² = 1

The state space of a superposition grows exponentially with the number of qubits. The coefficients a and b are probability amplitudes, which give the probability of the state projecting onto |0⟩ or |1⟩ once a measurement is performed. Superposition states are created by applying quantum logic gates.
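As a minimal sketch of what the amplitudes mean, the following Python snippet (an illustration, with the equal superposition a = b = 1/√2) simulates repeated measurements and checks that the observed frequencies match the Born-rule probabilities:

```python
import numpy as np

# Single-qubit superposition |psi> = a|0> + b|1>, with |a|^2 + |b|^2 = 1.
# a = b = 1/sqrt(2) is the equal superposition a Hadamard gate makes from |0>.
a = b = 1 / np.sqrt(2)
amplitudes = np.array([a, b])
probabilities = np.abs(amplitudes) ** 2  # Born rule: [0.5, 0.5]

# "Measuring" collapses the qubit to 0 or 1 with these probabilities.
rng = np.random.default_rng(seed=0)
outcomes = rng.choice([0, 1], size=1000, p=probabilities)
print(f"P(0) = {probabilities[0]:.2f}, observed frequency = {np.mean(outcomes == 0):.3f}")
```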


Caption: An IQM quantum computer in Espoo, Finland (image by Ragsxl)

2 Entanglement and Bell states

Superposition is very important in quantum physics, and another key principle is entanglement.

Entanglement refers to an interaction between two or more particles created in such a way that the quantum state of each particle can no longer be described independently of the others, even at a distance. When particles are entangled, measuring one particle immediately determines the correlated outcome that will be found when its entangled partner is measured (the particles have no independent local state).

With an understanding of qubits and entanglement in place, we can discuss Bell states. The following are the maximally entangled states of two qubits:

|00⟩ → 1/√2 (|00⟩ + |11⟩) = |β00⟩

|01⟩ → 1/√2 (|01⟩ + |10⟩) = |β01⟩

|10⟩ → 1/√2 (|00⟩ − |11⟩) = |β10⟩

|11⟩ → 1/√2 (|01⟩ − |10⟩) = |β11⟩

A quantum circuit can be used to create Bell states:


Caption: Bell state circuit, from Perry's The Temple of Quantum Computing

In the Bell state circuit shown, two qubits are taken as input and a Hadamard gate followed by a CNOT gate is applied to create an entangled Bell state.
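As a sketch, the same Hadamard-plus-CNOT circuit can be written in Qiskit (the framework is our choice for illustration; the article does not specify one):

```python
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

# Bell state circuit: Hadamard on qubit 0, then CNOT with qubit 0 as control.
qc = QuantumCircuit(2)
qc.h(0)      # |00> -> (|00> + |10>)/sqrt(2)
qc.cx(0, 1)  # -> (|00> + |11>)/sqrt(2) = |beta_00>

# Amplitudes of ~0.707 on |00> and |11>; measuring one qubit
# determines the outcome found on the other.
print(Statevector.from_instruction(qc))
```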

Bell states have already been used to develop a range of quantum computing applications; among them, Hegazy, Bahaa-Eldin, and Dakroury have proposed that Bell states and superdense coding can be used to achieve "unconditional security".

3 Convolutional Neural Network and Quantum Convolutional Neural Network

François Chollet points out in Deep Learning with Python that convolutional neural networks (CNNs) are popular in tasks such as image classification because of their ability to build hierarchies of patterns, for example representing lines first and then edges composed of those lines. This allows a CNN to build on information between layers and represent complex visual data.

CNNs contain convolutional layers, made up of filters that "slide" across the input and produce "feature maps" in which patterns in the input can be detected. A CNN can also use pooling layers to reduce the size of these feature maps, thereby reducing the resources required for learning (a minimal classical example is sketched after the figure below).


Caption: Convolutional neural network demonstrated by Cecbur
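To make the convolution and pooling steps concrete, here is a minimal classical CNN for 28x28 grayscale images (MNIST-sized) in Keras; the layer sizes and training settings are illustrative choices, not taken from the article:

```python
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Input(shape=(28, 28, 1)),
    # Filters "slide" across the input and produce feature maps.
    layers.Conv2D(32, kernel_size=3, activation="relu"),
    # Pooling shrinks the feature maps, reducing the resources needed to learn.
    layers.MaxPooling2D(pool_size=2),
    layers.Conv2D(64, kernel_size=3, activation="relu"),
    layers.MaxPooling2D(pool_size=2),
    layers.Flatten(),
    layers.Dense(10, activation="softmax"),  # 10 digit classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```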

Having defined the classical CNN, we can explore how the quantum convolutional neural network (QCNN) draws on these traditional methods and extends them.

Garg and Ramakrishnan suggest that a common approach to developing quantum neural networks is a "hybrid" one, which introduces so-called "quantum convolutional layers", transformations based on random quantum circuits, as add-on components in a classical CNN.

Shown below is a hybrid QCNN developed by Yanxuan Lü and colleagues and tested on the MNIST handwritten-digit dataset:

In the paper "A Quantum Convolutional Neural Network for Image Classification", the researchers use quantum circuits and entanglement as part of a classical model that takes input images and generates predictions as output.


In this method, the QCNN takes image data as input and encodes it into a quantum state |x⟩, then transforms it with quantum convolutional and pooling layers to extract features; finally, a fully connected layer built from strongly entangling circuits performs the classification, and predictions are obtained through measurement.

Optimization is handled through stochastic gradient descent (SGD), which reduces the difference between the training labels and the QCNN's predicted labels. Turning to the quantum circuits themselves, the quantum convolutional layer is built from the following gates: rotation operators and CNOT gates.
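As a hedged sketch of what such a layer can look like, here is a two-qubit convolution unit written in PennyLane (an assumed framework choice), combining parameterized rotation gates with a CNOT; it is an illustrative ansatz, not the exact circuit from the paper:

```python
import pennylane as qml

def quantum_conv_unit(params, wires):
    # One two-qubit "convolution" unit: parameterized rotations plus a CNOT.
    # params is a (2, 3) array of Euler angles; illustrative, not the paper's circuit.
    qml.Rot(*params[0], wires=wires[0])  # general single-qubit rotation
    qml.Rot(*params[1], wires=wires[1])
    qml.CNOT(wires=wires)                # entangle the pair
```

Like a classical filter sliding across pixels, the same parameterized unit would be applied to each neighbouring pair of qubits.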

The pooling layer measures a subset of the qubits, and the results determine whether single-qubit gates are applied to neighbouring qubits:
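A minimal sketch of such a pooling unit, using PennyLane's mid-circuit measurement support; the choice of a conditional Pauli-X on the neighbouring qubit is an assumption for illustration, not the paper's exact gate:

```python
import pennylane as qml

def quantum_pool_unit(wires):
    # Pooling: measure the first qubit; the outcome conditionally flips its neighbour.
    # This halves the active qubits, much as classical pooling shrinks a feature map.
    m = qml.measure(wires[0])                # mid-circuit measurement
    qml.cond(m, qml.PauliX)(wires=wires[1])  # apply X only when the outcome is 1
```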

The fully connected layer consists of "universal single-qubit gates" and CNOT gates that generate entangled states. To compare the QCNN with other methods, the researchers simulated the QCNN on the MNIST dataset. Following the typical approach, they created a training/test split and developed a QCNN consisting of the following layers (a sketch of such a stack follows the list):

  • 2 quantum convolutional layers
  • 2 quantum pooling layers
  • 1 quantum fully connected layer
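As a hypothetical assembly of these layers, the sketch below wires the quantum_conv_unit and quantum_pool_unit helpers from above into an 8-qubit PennyLane model, with qml.AngleEmbedding standing in for the |x⟩ encoding and qml.StronglyEntanglingLayers playing the role of the strongly entangling fully connected layer; all shapes and hyperparameters are illustrative:

```python
import numpy as np
import pennylane as qml

n_qubits = 8
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def qcnn(x, conv_params, fc_params):
    # Encode the classical input vector into the quantum state |x>.
    qml.AngleEmbedding(x, wires=range(n_qubits))

    # Two rounds of convolution + pooling: 8 -> 4 -> 2 active qubits.
    active = list(range(n_qubits))
    for layer in range(2):
        pairs = [(active[i], active[i + 1]) for i in range(0, len(active) - 1, 2)]
        for a, b in pairs:
            quantum_conv_unit(conv_params[layer], wires=[a, b])
        for a, b in pairs:
            quantum_pool_unit(wires=[a, b])
        active = [b for _, b in pairs]  # keep the unmeasured qubit of each pair

    # "Fully connected" layer: strongly entangling circuit on the remaining qubits.
    qml.StronglyEntanglingLayers(fc_params, wires=active)

    # The expectation value on one qubit serves as the (binary) class score.
    return qml.expval(qml.PauliZ(active[0]))

conv_params = np.random.uniform(0, np.pi, size=(2, 2, 3))
fc_params = np.random.uniform(
    0, np.pi, size=qml.StronglyEntanglingLayers.shape(n_layers=1, n_wires=2)
)
x = np.random.uniform(0, np.pi, size=n_qubits)  # stand-in for encoded pixel data
print(qcnn(x, conv_params, fc_params))
```

In practice the parameters would then be trained with SGD, for example with PennyLane's GradientDescentOptimizer minimizing a cost over the training labels, mirroring the optimization step described above.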

This QCNN reached a test-set accuracy of 96.65% on the dataset; for comparison, according to results collected on Papers with Code, the best classical CNN accuracy on this dataset reaches 99.91%.

It should be noted that only two of the MNIST classes were classified in this experiment, which limits how fully its performance can be compared with that of other MNIST models.

4 Feasibility Assessment and Summary

Although researchers have developed QCNN methods, a key issue in the field right now is that the hardware required to implement these theoretical models does not yet exist. In addition, hybrid approaches face challenges in testing methods that introduce quantum convolutional layers into classical CNN computations.

If one advantage of quantum computing is that it can solve "classically intractable problems with computationally cheaper techniques", then an important aspect of such solutions is the "quantum speedup". Some researchers believe that the advantage of quantum machine learning over classical implementations is that quantum algorithms are expected to deliver polynomial or even exponential speedups.

However, one limitation of the QCNN method shown above is that for algorithms (such as this QCNN) that must repeatedly encode and decode classical data and perform measurements, the gains from "quantum speedup" are limited; and there is currently little guidance on how to design optimal encoding/decoding schemes, and protocols requiring minimal measurements, so as to benefit from "quantum speedup".

Entanglement has proven to be an important property for quantum machine learning. The QCNN research discussed in this article uses strongly entangling circuits, which generate entangled states, as its fully connected layer, enabling the model to make predictions. Beyond that, entanglement also assists deep learning models in other areas: it has been used to extract important features from images, and entanglement within datasets may mean that models can learn from smaller training sets than previously expected.

This article has compared classical and quantum deep learning methods, discussed a QCNN that uses quantum layers (including strongly entangling circuits) to generate predictions, analyzed the benefits and limitations of quantum deep learning, and introduced the broader application of entanglement in machine learning. This means we can begin to think about the next steps for quantum deep learning, especially the application of QCNNs in more fields. Quantum hardware is also improving steadily, with companies such as PsiQuantum even setting the goal of developing a quantum processor with one million qubits.

As research in the fields of deep learning and quantum computing continues, we can expect to see further developments in quantum deep learning.
