


With an 84% win rate against humans, DeepMind AI reaches the level of human experts in Stratego for the first time
DeepMind has made a new breakthrough in game AI, this time in the board game Stratego.
In game AI, progress is often demonstrated through board games, which allow researchers to measure and evaluate how humans and machines develop and execute strategies in controlled environments. For decades, the ability to plan ahead has been key to AI's success in perfect-information games such as chess, checkers, shogi, and Go, as well as imperfect-information games such as poker and Scotland Yard.
Stratego has become one of the next frontiers of AI research. A visualization of the game's stages and mechanics is shown in Figure 1a below. The game poses two main challenges.
First, Stratego's game tree has 10^535 possible states, more than the well-studied imperfect-information game No-Limit Texas Hold'em (10^164 states) and Go (10^360 states).
Second, acting in Stratego requires reasoning over 10^66 possible deployments for each player at the start of the game, whereas poker has only 10^3 possible pairs of starting hands. Perfect-information games such as Go and chess have no private deployment phase and thus avoid this source of complexity.
As a result, neither state-of-the-art model-based planning techniques for perfect-information games nor imperfect-information search techniques that decompose the game into independent subgames can currently be applied to Stratego.
For these reasons, Stratego provides a challenging benchmark for studying strategic interaction at scale. Like most board games, it tests the ability to make relatively slow, thoughtful, and logical decisions in a sequential manner. Because the game's structure is so complex, the AI research community has made little progress, and artificial intelligence has only reached the level of human amateur players. Developing an agent that learns an end-to-end policy to make optimal decisions under Stratego's imperfect information, from scratch and without human demonstration data, therefore remains one of the grand challenges of AI research.
Recently, in a new paper, DeepMind researchers proposed DeepNash, an agent that learns Stratego in a model-free way through self-play, without human demonstrations. DeepNash defeated previous SOTA AI agents and reached the level of expert human players in the game's most complex variant, Stratego Classic.
Paper address: https://arxiv.org/pdf/2206.15378.pdf.
The core of DeepNash is a principled, model-free reinforcement learning algorithm that the researchers call Regularized Nash Dynamics (R-NaD). DeepNash combines R-NaD with a deep neural network architecture and converges to a Nash equilibrium, meaning it learns to play well under its own incentives and is robust to opponents trying to exploit it.
Figure 1b below gives a high-level overview of the DeepNash method. The researchers systematically compared its performance against various SOTA Stratego bots and against human players on the Gravon gaming platform. DeepNash defeated all current SOTA bots with a win rate of more than 97% and competed strongly with human players, achieving an 84% win rate and ranking in the top 3 on Gravon's 2022 and all-time leaderboards.
The researchers say this is the first time an AI algorithm has reached human expert level in a complex board game without using any search method in the learning algorithm, and also the first time an AI has reached human expert level at Stratego.
Method Overview
DeepNash learns to play Stratego end to end, including strategically placing its pieces on the board at the start of the game (see Figure 1a). During the game-play phase, the researchers combined deep RL with game-theoretic methods; the agent aims to learn an approximate Nash equilibrium through self-play.
The work takes a path orthogonal to search-based approaches and proposes a new method that combines model-free reinforcement learning in self-play with a game-theoretic algorithmic idea, Regularized Nash Dynamics (R-NaD).
The model-free part means the method builds no explicit opponent model to track the opponent's possible states. The game-theoretic part steers the reinforcement-learning agent's behavior toward a Nash equilibrium. The main advantage of this combination is that there is no need to explicitly model the private state from the public state. An additional challenge is making this model-free reinforcement learning approach, combined with R-NaD, strong enough through self-play to compete with expert human players, something that had not been achieved before. The combined DeepNash method is shown in Figure 1b above.
Regularized Nash Dynamics Algorithm
The R-NaD learning algorithm at the heart of DeepNash is based on the idea of regularization to achieve convergence. R-NaD relies on three key steps, shown in Figure 2b below: a reward transformation, a dynamics step, and an update of the regularization policy.
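As a rough illustration of these three steps, the following is a minimal sketch of the R-NaD idea on a tiny two-player zero-sum matrix game (rock-paper-scissors). The simple mirror-descent-style dynamics, step sizes, and regularization strength are illustrative assumptions; this is not DeepMind's actual implementation, which trains deep neural networks on Stratego itself.

```python
# Minimal R-NaD-style sketch on rock-paper-scissors (illustrative only).
import numpy as np

A = np.array([[0.0, -1.0, 1.0],
              [1.0, 0.0, -1.0],
              [-1.0, 1.0, 0.0]])          # row player's payoff matrix
eta, lr, inner_steps, outer_iters = 0.2, 0.1, 2000, 20

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

pi_x = np.array([0.6, 0.3, 0.1])          # row player's policy
pi_y = np.array([0.2, 0.5, 0.3])          # column player's policy
reg_x, reg_y = np.ones(3) / 3, np.ones(3) / 3   # regularization policies

for _ in range(outer_iters):
    # Step 1 (reward transformation): penalize deviation from the
    # regularization policy with a KL-style term added to the reward.
    # Step 2 (dynamics): run the regularized dynamics toward the
    # fixed point of the transformed game.
    for _ in range(inner_steps):
        qx = A @ pi_y - eta * (np.log(pi_x) - np.log(reg_x))
        qy = -A.T @ pi_x - eta * (np.log(pi_y) - np.log(reg_y))
        pi_x = softmax(np.log(pi_x) + lr * qx)   # mirror-descent update
        pi_y = softmax(np.log(pi_y) + lr * qy)
    # Step 3 (update): the fixed point becomes the new regularization policy.
    reg_x, reg_y = pi_x.copy(), pi_y.copy()

print("approximate Nash policies:", pi_x.round(3), pi_y.round(3))
```

In this toy setting the iterates drift toward the uniform policy, the Nash equilibrium of rock-paper-scissors; repeatedly re-anchoring the regularization policy at the latest fixed point is what drives convergence.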
DeepNash consists of three components: (1) the core training component, R-NaD; (2) fine-tuning of the learned policy to reduce the residual probability of the model taking highly unlikely actions; and (3) test-time post-processing to filter out low-probability actions and correct errors.
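For the third component, a minimal sketch of the kind of test-time filtering described above is shown below. The threshold value and interface are illustrative assumptions, not the paper's exact procedure.

```python
# Drop near-zero-probability actions from a policy vector and renormalize.
import numpy as np

def filter_low_prob_actions(policy: np.ndarray, threshold: float = 0.05) -> np.ndarray:
    """Zero out actions whose probability falls below the threshold."""
    filtered = np.where(policy >= threshold, policy, 0.0)
    if filtered.sum() == 0.0:              # fall back to the original best action
        filtered[np.argmax(policy)] = 1.0
    return filtered / filtered.sum()

# Example: residual 2% mass on an implausible action is removed before sampling.
print(filter_low_prob_actions(np.array([0.70, 0.28, 0.02])))
```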
DeepNash's network consists of a U-Net backbone with residual blocks and skip connections, plus four heads. The first head outputs the value function as a scalar, while the remaining three heads encode the agent's policy by outputting probability distributions over its actions during deployment and game play. The structure of the input observation tensor is shown in Figure 3.
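A toy sketch of this four-headed layout is given below in PyTorch. The board size, channel counts, action-space sizes, head semantics, and the small residual torso (standing in for the full U-Net) are all illustrative assumptions rather than DeepNash's actual configuration.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.relu = nn.ReLU()

    def forward(self, x):
        return self.relu(x + self.conv2(self.relu(self.conv1(x))))

class FourHeadNet(nn.Module):
    def __init__(self, in_channels=82, channels=64, board=10,
                 n_deploy=12, n_select=100, n_move=100):
        super().__init__()
        # Small residual torso as a stand-in for the paper's U-Net backbone.
        self.torso = nn.Sequential(
            nn.Conv2d(in_channels, channels, 3, padding=1), nn.ReLU(),
            ResidualBlock(channels), ResidualBlock(channels),
        )
        flat = channels * board * board
        self.value_head = nn.Linear(flat, 1)            # scalar value
        self.deploy_head = nn.Linear(flat, n_deploy)    # deployment policy
        self.select_head = nn.Linear(flat, n_select)    # piece-selection policy
        self.move_head = nn.Linear(flat, n_move)        # movement policy

    def forward(self, obs):
        h = self.torso(obs).flatten(1)
        return (self.value_head(h),
                torch.softmax(self.deploy_head(h), dim=-1),
                torch.softmax(self.select_head(h), dim=-1),
                torch.softmax(self.move_head(h), dim=-1))

net = FourHeadNet()
value, p_deploy, p_select, p_move = net(torch.randn(1, 82, 10, 10))
```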
Experimental results
DeepNash was also evaluated against several existing Stratego computer programs: Probe, which won the Computer Stratego World Championship in three years (2007, 2008, 2010); Master of the Flag, which won the championship in 2009; Demon of Ignorance, an open-source Stratego implementation; and Asmodeus, Celsius, Celsius1.1, PeternLewis and Vixen, programs submitted to the 2012 Australian University Programming Competition, which PeternLewis won.
As shown in Table 1, DeepNash won the vast majority of games against all of these agents, even though it was never trained against them and relied only on self-play.
Figure 4a below illustrates some of the deployments DeepNash repeats frequently. Figure 4b shows a situation where DeepNash (blue) is behind in material (having lost a 7 and an 8) but ahead in information, because the red opponent's 10, 9, 8 and two 7s have been revealed. The example in Figure 4c shows DeepNash with an opportunity to capture the opponent's 6 with its 9, yet declining the move, probably because it judged protecting the identity of the 9 to be more valuable than the material gain.
In Figure 5a below, the researchers demonstrate positive bluffing, where a player pretends a piece is more valuable than it actually is. DeepNash chases the opponent's 8 with an unrevealed Scout (2), pretending it is a 10. The opponent, believing the piece might be a 10, guides it next to its Spy (which can capture a 10). But when the opponent's Spy tries to capture the piece, it loses to DeepNash's Scout.
The second type is negative bluffing, shown in Figure 5b below. It is the opposite of positive bluffing: the player pretends that a piece is worth less than it actually is.
Figure 5c below shows a more complex bluff, in which DeepNash brings its unrevealed Scout (2) close to the opponent's 10, where it could be interpreted as a Spy. A few moves later this allows Blue to capture Red's 5 with its 7, gaining material and preventing the 5 from capturing the Scout (2), which would have revealed that it is not actually a Spy.
