Policy or Value? Loss Function and Playing Strength in AlphaZero
By an unknown author
Last updated 7 February 2025
![Policy or Value? Loss Function and Playing Strength in AlphaZero](https://d3i71xaburhd42.cloudfront.net/b125c8933d0264b9a103cb8fa80f226f8c9c3cdc/5-Figure3-1.png)
Recently, AlphaZero has achieved outstanding performance in playing Go, Chess, and Shogi. Players in AlphaZero combine Monte Carlo Tree Search with a deep neural network that is trained through self-play. This unified network has a policy head and a value head, and during training AlphaZero minimizes the sum of the policy loss and the value loss. However, it is not clear whether, and under which circumstances, other formulations of the objective function perform better. In this paper, we therefore experiment with combinations of these two optimization targets. Because self-play is computationally intensive, we use small games, which allows us to run multiple test cases. We use a lightweight open-source reimplementation of AlphaZero on two different games, optimizing the two targets independently as well as in different combinations (sum and product).

Our results indicate that, at least for relatively simple games such as 6x6 Othello and Connect Four, optimizing the sum, as AlphaZero does, performs consistently worse than other objectives, in particular optimizing only the value loss. Moreover, we find that care must be taken in computing playing strength: tournament Elo ratings differ from training Elo ratings, and training Elo ratings, though cheap to compute and frequently reported, can be misleading and may introduce bias. It is currently unclear how these results transfer to more complex games, and whether there is a phase transition between our setting and the AlphaZero application to Go, where the sum is seemingly the better choice.
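For concreteness, the objective under study combines two terms: a cross-entropy policy loss between the network's policy output and the MCTS visit distribution, and a squared-error value loss between the network's value prediction and the game outcome. The sketch below illustrates the four optimization targets compared in the paper (sum, product, policy only, value only); the function and variable names are illustrative rather than taken from the paper's code, and AlphaZero's L2 regularization term is omitted.

```python
import numpy as np

def policy_loss(pi_mcts, p_net, eps=1e-12):
    """Cross-entropy between the MCTS visit distribution (pi_mcts)
    and the network's policy output (p_net)."""
    return -np.sum(pi_mcts * np.log(p_net + eps))

def value_loss(z, v):
    """Squared error between the game outcome z (e.g. -1, 0, +1)
    and the network's value prediction v."""
    return (z - v) ** 2

def training_loss(pi_mcts, p_net, z, v, mode="sum"):
    """The four optimization targets compared in the paper.
    'sum' is AlphaZero's default objective."""
    lp = policy_loss(pi_mcts, p_net)
    lv = value_loss(z, v)
    if mode == "sum":      # AlphaZero's default: l_p + l_v
        return lp + lv
    if mode == "product":  # alternative combination: l_p * l_v
        return lp * lv
    if mode == "policy":   # train on the policy loss only
        return lp
    if mode == "value":    # train on the value loss only
        return lv
    raise ValueError(f"unknown mode: {mode}")

# One self-play training position with three legal moves:
pi = np.array([0.7, 0.2, 0.1])  # normalized MCTS visit counts
p = np.array([0.5, 0.3, 0.2])   # network policy output
print(training_loss(pi, p, z=1.0, v=0.6, mode="sum"))
```

Under the sum both heads receive gradients every step; which combination actually yields the strongest player is exactly the empirical question the paper addresses.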
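The Elo caveat concerns measurement rather than training. Both rating variants rest on the standard Elo model sketched below (these are the textbook formulas, not code from the paper); the difference lies in which games feed the updates: training Elo ratings are typically accumulated from games already played during training, while tournament Elo ratings come from a separate round-robin between the trained players, and the paper reports that the cheaper training Elo can be misleading.

```python
def elo_expected(r_a, r_b):
    """Expected score of player A against player B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def elo_update(r_a, r_b, score_a, k=32.0):
    """New rating for A after one game; score_a is 1 (win), 0.5 (draw), 0 (loss)."""
    return r_a + k * (score_a - elo_expected(r_a, r_b))

# Two equally rated players; the winner gains half the K-factor:
print(elo_update(1500.0, 1500.0, score_a=1.0))  # 1516.0
```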