Policy or Value? Loss Function and Playing Strength in AlphaZero-like Self-play
By a mysterious writer
Last updated 28 March 2025

Recently, AlphaZero has achieved outstanding performance in playing Go, Chess, and Shogi. An AlphaZero player combines Monte Carlo Tree Search with a single deep neural network, trained by self-play, that has a policy head and a value head. During training, the optimization minimizes the sum of the policy loss and the value loss. However, it is not clear whether, and under which circumstances, other formulations of the objective function are better. In this paper we therefore run experiments with combinations of these two optimization targets. Self-play is computationally intensive; by using small games, we are able to run many test cases. We use a light-weight open-source reimplementation of AlphaZero on two different games, investigating the two targets independently as well as in different combinations (sum and product). Our results indicate that, at least for relatively simple games such as 6x6 Othello and Connect Four, optimizing the sum, as AlphaZero does, performs consistently worse than other objectives, and in particular worse than optimizing only the value loss. Moreover, we find that care must be taken in computing playing strength: tournament Elo ratings differ from training Elo ratings, and training Elo ratings, although cheap to compute and frequently reported, can be misleading and may introduce bias. It is currently not clear how these results transfer to more complex games, or whether there is a phase transition between our setting and the AlphaZero application to Go, where the sum is seemingly the better choice.
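The loss variants compared here are easy to state concretely. Below is a minimal PyTorch-style sketch, written as an illustration under stated assumptions rather than the paper's actual code: the function `combined_loss`, its argument names, and the `mode` switch are all hypothetical.

```python
import torch.nn.functional as F

def combined_loss(policy_logits, value_pred, pi_target, z_target, mode="sum"):
    """Sketch of the objective variants: 'sum' (AlphaZero's default),
    'product', 'policy' only, and 'value' only.

    policy_logits: raw policy-head output, shape (batch, n_moves)
    value_pred:    value-head output in [-1, 1], shape (batch,)
    pi_target:     MCTS visit-count distribution, shape (batch, n_moves)
    z_target:      game outcome from self-play, shape (batch,)
    """
    # Cross-entropy between the MCTS policy target and the network policy.
    policy_loss = -(pi_target * F.log_softmax(policy_logits, dim=1)).sum(dim=1).mean()
    # Mean squared error between the predicted value and the game outcome.
    value_loss = F.mse_loss(value_pred, z_target)

    if mode == "sum":      # AlphaZero's default objective
        return policy_loss + value_loss
    if mode == "product":  # alternative combination tested in the paper
        return policy_loss * value_loss
    if mode == "policy":   # optimize the policy head alone
        return policy_loss
    if mode == "value":    # optimize the value head alone
        return value_loss
    raise ValueError(f"unknown mode: {mode}")
```

Note that in the 'policy' and 'value' modes only one head receives gradients, so the other head's parameters (beyond the shared trunk) are effectively untrained.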
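The Elo caveat also becomes clearer once the rating computation is spelled out. The snippet below shows the standard Elo update, not code from the paper: a training Elo is commonly chained through such updates against each preceding checkpoint during a single run, so rating errors can accumulate, while a tournament Elo is estimated from a full round-robin among the final players, which is costlier but anchors all ratings against the same pool of opponents.

```python
def expected_score(r_a: float, r_b: float) -> float:
    """Expected score of player A against player B under the Elo model."""
    return 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / 400.0))

def elo_update(r_a: float, r_b: float, score_a: float, k: float = 32.0):
    """Update both ratings after one game; score_a is 1 (win), 0.5 (draw), 0 (loss)."""
    delta = k * (score_a - expected_score(r_a, r_b))
    return r_a + delta, r_b - delta  # zero-sum update: B loses what A gains
```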
