


New research from Harvard and Berkeley overturns Google's "quantum supremacy"! Beautiful in theory, but useless in practice
The term "quantum supremacy" has been in the spotlight for nearly four years.
In 2019, Google physicists announced that they had achieved quantum supremacy with a 53-qubit machine, a significant symbolic milestone.
According to the paper published in Nature, the quantum system completed a calculation in just 200 seconds, whereas Summit, the most powerful supercomputer at the time, would have needed about 10,000 years for the same computation.
What is quantum supremacy?
The so-called "quantum hegemony", or "quantum advantage" (hereinafter referred to as "quantum supremacy"), means that the tasks that quantum computers can complete are beyond the scope of any feasible classical algorithm.
Even if these tasks are placed on the most advanced traditional supercomputers, the long calculation time (often thousands of years) will make the algorithm lose its practical significance.
Interestingly, Google's 2019 result only declared that quantum supremacy had been achieved; it did not pin down exactly where quantum computers decisively surpass classical ones.
This is a difficult question to answer, because today's quantum computers are plagued by errors that can accumulate and undermine their performance and stability.
In fact, more than the question of whether supremacy has been reached, what scientists really want to know is another question: as quantum computers grow larger and larger, can classical algorithms keep up?
"We hope that eventually the quantum side will pull away entirely and end this competition for good," said Scott Aaronson, a computer scientist at the University of Texas at Austin.
Most researchers speculate that the answer is no.
That is, classical algorithms will one day be completely unable to keep pace with quantum computing; so far, however, no one has been able to prove this rigorously and in full generality. One way to settle the question definitively is to identify the conditions under which quantum computing gains a "lasting advantage" over classical computing.
Now, this question seems to have a preliminary answer:
In short: quantum computers produce errors, and if error correction cannot keep up, those errors break the idealized "quantum supremacy" and allow classical algorithms to catch up with quantum ones.
Recently, in a preprint posted on arXiv, a joint team of researchers from Harvard University, the University of California, Berkeley, and the Hebrew University of Jerusalem in Israel took a big step toward confirming this conclusion.
They demonstrated that dedicated error correction is a necessary condition for durable quantum supremacy in random circuit sampling, the very task underpinning Google's claim from a few years ago. At the current level of quantum error correction, quantum supremacy does not, in practice, exist.
There is no more "golden zone" for quantum supremacy
To prove this conclusion, the researchers developed a classical algorithm that can simulate random circuit sampling experiments in the presence of errors.
Random circuit sampling starts with an array of qubits that are manipulated at random using operations called "quantum gates." Some quantum gates cause pairs of qubits to become entangled, meaning they share a quantum state that cannot be described individually.
Applying these gates repeatedly in multi-layer circuits drives the qubits into increasingly complex entangled states.
[Figure] Left: random circuit sampling under ideal conditions. Right: random circuit sampling in the presence of noise.
To read out this quantum state, the researchers measure all the qubits in the array. Measurement collapses the qubits' collective quantum state into a random string of ordinary bits: 0s and 1s.
The number of possible outcomes grows rapidly with the number of qubits in the array. In Google's 2019 experiment, 53 qubits gave nearly 10 quadrillion (2^53 ≈ 9 × 10^15) possible outcomes.
The experiment repeats these measurements over many random circuits to build up a probability distribution over the outcomes.
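To make the procedure concrete, here is a minimal sketch, not the researchers' code, of ideal (noiseless) random circuit sampling on a handful of qubits, using a brute-force statevector simulation in Python with NumPy; the gate choices, circuit sizes, and shot count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_single_qubit_gate():
    # Haar-random 2x2 unitary via QR decomposition of a complex Gaussian matrix.
    z = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    q, r = np.linalg.qr(z)
    phases = np.diag(r) / np.abs(np.diag(r))
    return q * phases  # fix the phase ambiguity of the QR decomposition

CZ = np.diag([1, 1, 1, -1]).astype(complex)  # entangling controlled-Z gate

def apply_gate(state, gate, targets, n):
    # Apply `gate` to the qubits listed in `targets` of an n-qubit statevector.
    psi = state.reshape([2] * n)
    k = len(targets)
    psi = np.moveaxis(psi, targets, list(range(k)))
    psi = (gate @ psi.reshape(2**k, -1)).reshape([2] * n)
    psi = np.moveaxis(psi, list(range(k)), targets)
    return psi.reshape(-1)

def sample_random_circuit(n=5, depth=6, shots=2000):
    state = np.zeros(2**n, dtype=complex)
    state[0] = 1.0                          # start in |00...0>
    for _ in range(depth):
        for q in range(n):                  # layer of random single-qubit gates
            state = apply_gate(state, random_single_qubit_gate(), [q], n)
        for q in range(0, n - 1, 2):        # layer of entangling CZ gates
            state = apply_gate(state, CZ, [q, q + 1], n)
    probs = np.abs(state) ** 2              # Born rule: measurement probabilities
    probs /= probs.sum()
    return rng.choice(2**n, size=shots, p=probs)  # sampled bitstrings (as integers)

samples = sample_random_circuit()
counts = np.bincount(samples, minlength=2**5)   # n = 5 qubits -> 32 bitstrings
print(counts / counts.sum())                    # empirical output distribution
```

Note that the statevector holds 2^n amplitudes, so this brute-force approach only works for small n; the whole supremacy debate is about what happens once n reaches the dozens.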
The quantum supremacy question is this: is it hard, or even impossible, to reproduce this probability distribution with a classical algorithm that uses no entanglement at all?
In 2019, Google researchers argued that the answer is yes for error-free quantum circuits: simulating an ideal random circuit sampling experiment with classical algorithms is genuinely hard.
From the perspective of computational complexity, as the number of qubits increases, the cost of classical algorithms grows exponentially, while the cost of the quantum computation grows only polynomially.
Once n becomes large enough, an algorithm that is exponential in n falls hopelessly behind any algorithm that is polynomial in n.
This is the difference we mean when we say a problem is hard for a classical computer but easy for a quantum computer: the best classical algorithms take exponential time, while a quantum computer can solve the problem in polynomial time.
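As a rough illustration of that gap, the snippet below compares an exponential cost of 2^n against a polynomial cost of n^3; the exponents are illustrative assumptions, not figures from any particular algorithm.

```python
# Toy comparison of exponential vs. polynomial scaling in the number of qubits n.
# The specific costs (2**n and n**3) are illustrative assumptions.
for n in (10, 20, 40, 53, 80):
    print(f"n = {n:3d}   2^n = {2**n:.3e}   n^3 = {n**3:>8,d}")
```

At n = 53 the exponential term is already around 9 × 10^15, while the cubic term is only about 150,000.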
However, the 2019 paper did not consider the impact of errors caused by imperfect quantum gates, which left a hole in its conclusion: can random circuit sampling without error correction still deliver quantum supremacy?
In fact, once you account for errors that creep into the entangled state and accumulate, the difficulty of simulating random circuit sampling experiments with classical algorithms drops sharply. And if the cost of classical simulation falls to the same polynomial level as the quantum computation, quantum supremacy no longer exists.
The new paper shows that if the circuit depth is held constant, say at a very shallow 3 layers, then even as the number of qubits grows, not much entanglement builds up and the output remains classically simulable.
On the other hand, if the circuit depth is increased to keep pace with the growing number of qubits, the cumulative effect of quantum gate errors dilutes the complexity created by entanglement, and the output again becomes easier to simulate with classical algorithms.
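A back-of-the-envelope sketch of why depth hurts: with roughly n·d gates in an n-qubit, depth-d circuit and a per-gate error rate ε, and treating errors as independent, the circuit fidelity falls off as about (1 − ε)^(n·d); once the fidelity is tiny, the output is dominated by noise. The error rate and the independence assumption below are illustrative, not values from the paper.

```python
# Rough fidelity estimate F ≈ (1 - eps)^(n * d) under an assumed per-gate
# error rate eps; a tiny F means the signal is mostly drowned out by noise.
def circuit_fidelity(n_qubits: int, depth: int, eps: float = 0.005) -> float:
    return (1 - eps) ** (n_qubits * depth)

for depth in (5, 10, 20, 40, 80):
    print(f"53 qubits, depth {depth:2d}: fidelity ≈ {circuit_fidelity(53, depth):.4f}")
```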
There is a "golden zone" between the two, that is, the window in which quantum hegemony can continue to survive, that is, the range where traditional algorithm simulations cannot keep up with quantum entanglement.
Before this paper was published, the belief was that quantum supremacy persisted in a certain intermediate depth range even as the number of qubits increased.
At those circuit depths, classical simulation was thought to be hard at every step, even though the output steadily degrades due to accumulating quantum errors.
This new paper almost eliminates this "golden zone".
The paper derives a classical algorithm for simulating noisy random circuit sampling and proves that its running time is a polynomial function of the time required to run the corresponding quantum experiment, rather than an exponential one.
This result establishes a tight theoretical link between the speed of the classical approach to random circuit sampling and that of the quantum approach: quantum supremacy may hold in theory, but in practice it almost does not exist.
The word "almost" is needed because the new algorithm's basic assumptions break down for some shallower circuits, leaving a small, unexplored gap.
However, few researchers hold out hope of achieving quantum supremacy in this gap. Even Bill Fefferman, a computer scientist at the University of Chicago and a co-author of a 2019 paper supporting Google's supremacy argument, said: "I think the chance is quite small."
It is fair to say that, by the strict standards of computational complexity theory, random circuit sampling no longer yields quantum supremacy.
At the same time, faced with this conclusion, researchers agree on how critical quantum error correction will be to the long-term success of quantum computing. As Fefferman put it: "At the end of our research, we found that quantum error correction is the solution."