A new model that can't even beat amateur Go players has defeated the world's strongest Go AI, KataGo?
Yes, this jaw-dropping result comes from a new paper by researchers at MIT, UC Berkeley, and elsewhere.
The researchers used adversarial attack methods to exploit KataGo's blind spots, and with this technique a rookie-level Go program successfully defeated KataGo.
When KataGo plays without search, the attacker's win rate even reaches 99%.
By this reckoning, the food chain of the Go world instantly becomes: amateur human players > new AI > top Go AI?
Wait a minute: how can this new AI be so weak and so strong at the same time?
A cunning angle of attack
Before introducing the new AI, let us first meet the protagonist of this attack: KataGo.
KataGo, currently the most powerful open-source Go AI, was developed by Harvard AI researchers.
KataGo has previously defeated the superhuman-level ELF OpenGo and Leela Zero, and even without search its strength is said to be on par with the top 100 European professional players.
Shin Jin-seo, the "number one" Korean Go player who recently won the Samsung Cup and completed "four titles in three years", has been using KataGo as a sparring partner.
(Image source: Hangame)
Faced with such a strong opponent, the researchers chose a decidedly unorthodox line of attack.
They found that although KataGo learned Go by playing millions of games against itself, this self-play still could not cover every possible situation.
So this time, instead of self-play, they turned to an adversarial attack:
pit an attacker (the adversary) against a fixed victim (here, KataGo), and use these games to train the attacker.
This change allowed them to train an end-to-end adversarial policy using only 0.3% of the data used to train KataGo.
Crucially, this adversarial policy does not win by playing Go well. Instead, it tricks KataGo into ending the game prematurely, at a position that favors the attacker.
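The attacker-versus-frozen-victim loop can be illustrated with a deliberately tiny stand-in game. The sketch below uses rock-paper-scissors instead of Go, and every name in it (`fixed_victim`, `train_attacker`) is hypothetical; the actual paper trains a neural network policy against a frozen KataGo network, which this does not attempt to reproduce. The point is only the structure: the victim is queried but never updated, and the attacker learns to exploit its blind spot.

```python
# Toy illustration of the attacker-vs-frozen-victim setup from the paper,
# using rock-paper-scissors as a stand-in for Go. All names here are
# hypothetical; this is NOT the paper's actual training code.

import random
from collections import Counter

BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

def fixed_victim(rng):
    """Frozen 'victim' policy with a blind spot: it over-plays rock.
    Like KataGo in the attack, it is queried but never updated."""
    return rng.choices(["rock", "paper", "scissors"], weights=[8, 1, 1])[0]

def train_attacker(n_games=1000, seed=0):
    """'Train' the attacker by best-responding to the victim's empirical
    move distribution, gathered from games against the frozen victim."""
    rng = random.Random(seed)
    observed = Counter(fixed_victim(rng) for _ in range(n_games))
    favourite = observed.most_common(1)[0][0]
    return BEATS[favourite]  # exploit the victim's most frequent move

print(train_attacker())  # -> paper: the blind spot has been found
```

Because the victim is frozen, the attacker needs far fewer games than full self-play would: it only has to model one fixed opponent, not the whole game.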
For example, in the position below, the attacker (playing black) concentrates its stones in the upper-right corner of the board, ceding the rest to KataGo, while also deliberately dropping a few stones into other areas that look easy to clear away.
Adam Gleave, a co-author of the paper, explains:
This approach makes KataGo mistakenly believe it has already won, because its territory (bottom left) is far larger than its opponent's.
But the bottom-left area does not actually yield points, because black stones still remain there, meaning it is not fully secured.
Because KataGo is overconfident of victory, believing that if the game ended and were scored right now it would win, it passes. The attacker then passes as well, ending the game and triggering scoring. (When both sides pass, the game ends.)
But as Gleave analyzes, the attacker's black stones inside KataGo's territory are still alive, so the scoring rules do not judge them to be dead stones. As a result, regions that still contain black stones cannot be counted toward KataGo's score.
So the final winner is not KataGo, but the attacker.
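The scoring logic behind the trick can be sketched with a simplified area-scoring function. This is a toy flood-fill, not KataGo's actual rules implementation (real rulesets such as Tromp-Taylor also count stones on the board and handle dead-stone judgments); it shows only the key point: an empty region counts for a color just when it borders that color alone, so live opponent stones left inside a "territory" nullify it.

```python
# Toy area scoring via flood fill: an empty region counts as territory for a
# colour only if it borders stones of that colour exclusively. This is a
# simplified sketch, NOT KataGo's scoring code.

def score_territory(board):
    """board: list of equal-length strings; '.'=empty, 'B'/'W'=stones.
    Returns (black_points, white_points) from empty regions only."""
    rows, cols = len(board), len(board[0])
    seen = set()
    scores = {"B": 0, "W": 0}
    for r in range(rows):
        for c in range(cols):
            if board[r][c] != "." or (r, c) in seen:
                continue
            # Flood-fill the empty region, recording bordering colours.
            stack, size, borders = [(r, c)], 0, set()
            seen.add((r, c))
            while stack:
                y, x = stack.pop()
                size += 1
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < rows and 0 <= nx < cols:
                        cell = board[ny][nx]
                        if cell == "." and (ny, nx) not in seen:
                            seen.add((ny, nx))
                            stack.append((ny, nx))
                        elif cell != ".":
                            borders.add(cell)
            if len(borders) == 1:  # one colour only -> its territory
                scores[borders.pop()] += size
    return scores["B"], scores["W"]

# White cleanly surrounds the centre point: it counts as white territory.
clean = ["WWW",
         "W.W",
         "WWW"]
print(score_territory(clean))    # -> (0, 1)

# White "surrounds" a region, but a live black stone remains inside, so the
# region borders both colours and yields white nothing -- KataGo's mistake.
invaded = ["WWWW",
           "W..W",
           "W.BW",
           "WWWW"]
print(score_territory(invaded))  # -> (0, 0)
```

In the second board, white appears to enclose a large area, but the single black stone inside means the region borders both colors and scores zero; this is, in miniature, why KataGo's "territory" evaporated at scoring time.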
This victory was no fluke: without search, the adversarial policy achieved a 99% win rate against KataGo.
Even when KataGo used enough search to approach superhuman level, the attacker's win rate still reached 50%.
And despite this clever strategy, the attacker model itself is not actually good at Go: in fact, it can be easily beaten by human amateurs.
The researchers say that by attacking an unexpected vulnerability in KataGo, their work demonstrates that even highly mature AI systems can harbor serious flaws.
As co-author Gleave said:
(This study) highlights the need for better automated testing of AI systems to uncover worst-case failure modes, rather than just testing performance under normal conditions.
Research Team
The research team comes from MIT, UC Berkeley, and other institutions. The paper's co-first authors are Tony Tong Wang and Adam Gleave.
Tony Tong Wang is a PhD student in computer science at MIT and has interned at NVIDIA, Genesis Therapeutics, and other companies.
Adam Gleave is a PhD student in artificial intelligence at UC Berkeley. He earned his bachelor's and master's degrees at the University of Cambridge, and his main research focus is the robustness of deep learning.
The paper is linked below for interested readers.
Paper: https://arxiv.org/abs/2211.00241
Reference link: https://arstechnica.com/information-technology/2022/11/new-go-playing-trick-defeats-world-class-go-ai-but-loses-to-human-amateurs/