The Shatranj.ai project curriculum is available at lms.shatranj.ai.
Below are brief summaries of the curriculum topics.
Explores how modern chess engines evolved and how open-source engines can be adapted to historical variants.
Introduces reinforcement learning (RL) by solving a small gridworld exactly when the rules are known, then shows why this “all‑knowing” approach breaks for large games like chess.
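Since this module hinges on exact planning with a fully known model, a minimal value-iteration sketch may make the idea concrete; the 4x4 grid, reward scheme, and all names below are illustrative assumptions, not the course's actual code.

```python
# Exact planning on a tiny gridworld with a KNOWN model (value iteration).
N = 4                                          # 4x4 grid, states 0..15
GOAL = N * N - 1                               # bottom-right corner is terminal
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # up, down, left, right
GAMMA = 0.9

def step(state, action):
    """Known model: deterministic next state and reward."""
    r, c = divmod(state, N)
    dr, dc = ACTIONS[action]
    nxt = min(max(r + dr, 0), N - 1) * N + min(max(c + dc, 0), N - 1)
    return nxt, (1.0 if nxt == GOAL else -0.04)

# A value table over ALL states -- exactly what stops scaling to chess,
# whose state space is far too large to enumerate.
V = [0.0] * (N * N)
for _ in range(100):                           # sweeps to (near) convergence
    for s in range(N * N):
        if s == GOAL:
            continue
        V[s] = max(rew + GAMMA * V[nxt]
                   for nxt, rew in (step(s, a) for a in range(len(ACTIONS))))
```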
Moves from planning to learning: the agent starts with no map and learns a policy by trial and error using tabular Q-learning.
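A minimal sketch of that shift to model-free learning, assuming the same kind of tiny gridworld: the agent only observes transitions from an opaque `env_step` and fills in a Q-table by trial and error. Hyperparameters and names are illustrative.

```python
import random

N, GOAL = 4, 15
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

def env_step(s, a):
    """The environment; the agent treats this as a black box."""
    r, c = divmod(s, N)
    dr, dc = ACTIONS[a]
    s2 = min(max(r + dr, 0), N - 1) * N + min(max(c + dc, 0), N - 1)
    return s2, (1.0 if s2 == GOAL else -0.04), s2 == GOAL

Q = [[0.0] * len(ACTIONS) for _ in range(N * N)]
for _ in range(2000):                          # episodes of trial and error
    s, done = 0, False
    while not done:
        # epsilon-greedy: mostly exploit the table, sometimes explore
        a = (random.randrange(len(ACTIONS)) if random.random() < EPS
             else max(range(len(ACTIONS)), key=lambda i: Q[s][i]))
        s2, rew, done = env_step(s, a)
        # Q-learning update: bootstrap from the greedy value of s2
        target = rew + (0.0 if done else GAMMA * max(Q[s2]))
        Q[s][a] += ALPHA * (target - Q[s][a])
        s = s2
```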
Applies Q-learning to a small chess endgame and makes the RL codebase “real” by separating the experiment notebook from the learning and training modules.
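One hypothetical way to picture the notebook/module split described here: the update rule lives in an importable, testable module, and the notebook stays a thin driver. Every file and function name below is an assumption, not the project's actual layout.

```python
# qlearning.py -- hypothetical learning module: pure, importable, testable
def q_update(Q, s, a, reward, s_next, done, alpha=0.5, gamma=0.9):
    """One tabular Q-learning update, with no notebook state involved."""
    target = reward + (0.0 if done else gamma * max(Q[s_next]))
    Q[s][a] += alpha * (target - Q[s][a])
    return Q

# experiment.ipynb -- the notebook only imports and drives the modules:
#   from qlearning import q_update
#   from training import run_episodes
#   Q = run_episodes(KRkEndgameEnv(), episodes=10_000)
```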
Introduces function approximation for RL by replacing the Q-table with a neural network (DQN) and applying it to several small board games.
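A minimal sketch of the DQN idea, assuming a PyTorch setup: a small network maps a board encoding to one Q-value per action, replacing the table, and learns from a replay buffer. Sizes, hyperparameters, and names are illustrative, and the separate target network used in full DQN is omitted for brevity.

```python
import random
from collections import deque

import torch
import torch.nn as nn

STATE_DIM, N_ACTIONS = 64, 16          # e.g. a flattened small board
q_net = nn.Sequential(nn.Linear(STATE_DIM, 128), nn.ReLU(),
                      nn.Linear(128, N_ACTIONS))
opt = torch.optim.Adam(q_net.parameters(), lr=1e-3)
replay = deque(maxlen=10_000)          # transitions: (s, a, r, s2, done)
GAMMA = 0.99

def dqn_update(batch_size=32):
    """One gradient step on a random minibatch from the replay buffer."""
    if len(replay) < batch_size:
        return
    batch = random.sample(replay, batch_size)
    s, a, r, s2, done = (torch.stack([torch.as_tensor(t[i], dtype=torch.float32)
                                      for t in batch])
                         for i in range(5))
    q = q_net(s).gather(1, a.long().unsqueeze(1)).squeeze(1)
    with torch.no_grad():              # TD target (full DQN would compute this
        target = r + GAMMA * (1 - done) * q_net(s2).max(dim=1).values  # with a frozen target net)
    loss = nn.functional.mse_loss(q, target)
    opt.zero_grad()
    loss.backward()
    opt.step()
```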
Builds a complete Qirkat environment and then progresses from random rollouts to full Monte Carlo Tree Search (MCTS) with UCT selection.
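A minimal sketch of one MCTS iteration with UCT selection, written against a hypothetical `game` interface (`legal_moves`, `play`, `is_over`, `result`) standing in for the Qirkat environment; it is not the module's actual code.

```python
import math
import random

class Node:
    def __init__(self, state, parent=None):
        self.state, self.parent = state, parent
        self.children = {}              # move -> child Node
        self.visits, self.wins = 0, 0.0

def mcts_iteration(root, game, c=1.4):
    node = root
    # 1. Selection: descend with UCT while the node is fully expanded
    while node.children and len(node.children) == len(game.legal_moves(node.state)):
        node = max(node.children.values(),
                   key=lambda ch: ch.wins / ch.visits
                   + c * math.sqrt(math.log(node.visits) / ch.visits))
    # 2. Expansion: add one unexplored move, unless the position is terminal
    untried = [m for m in game.legal_moves(node.state) if m not in node.children]
    if untried:
        m = random.choice(untried)
        child = Node(game.play(node.state, m), parent=node)
        node.children[m] = child
        node = child
    # 3. Simulation: random rollout to the end of the game
    state = node.state
    while not game.is_over(state):
        state = game.play(state, random.choice(game.legal_moves(state)))
    result = game.result(state)         # e.g. +1 / 0 / -1 from the root's view
    # 4. Backpropagation (two-player sign alternation per ply omitted for brevity)
    while node is not None:
        node.visits += 1
        node.wins += result
        node = node.parent
```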
Upgrades MCTS into AlphaZero-style search by adding a neural network that supplies a policy prior and a value estimate, then trains through self-play.
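A minimal sketch of that AlphaZero-style upgrade, assuming a `net(state)` that returns move priors and a scalar value: selection switches from UCT to PUCT, and the random rollout is replaced by the value head's estimate. The interface is an assumption, not the course's actual API.

```python
import math

class Node:
    def __init__(self, prior):
        self.prior = prior              # P(s, a) from the policy head
        self.children = {}              # move -> child Node
        self.visits, self.value_sum = 0, 0.0

    def q(self):
        return self.value_sum / self.visits if self.visits else 0.0

def puct_select(node, c_puct=1.5):
    """AlphaZero selection: Q(s,a) + c * P(s,a) * sqrt(N(s)) / (1 + N(s,a))."""
    sqrt_n = math.sqrt(node.visits)
    return max(node.children.items(),
               key=lambda kv: kv[1].q()
               + c_puct * kv[1].prior * sqrt_n / (1 + kv[1].visits))

def expand_and_evaluate(node, state, net, legal_moves):
    """Expand a leaf with the policy prior and return the value head's
    estimate instead of running a random rollout."""
    priors, value = net(state)          # assumed: dict move -> prob, scalar in [-1, 1]
    for m in legal_moves:
        node.children[m] = Node(priors.get(m, 0.0))
    return value
```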
Implements Turkish Checkers and compares classical search (alpha–beta) with MCTS using a reusable match runner and batch simulation logs.
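A hypothetical sketch of the two sides of that comparison: a negamax alpha-beta evaluator and a reusable match runner that alternates colors and writes one JSON line per game. The `game`/agent interfaces (`legal_moves`, `play`, `is_over`, `result`, `evaluate`, `choose_move`) are assumptions, not the project's API.

```python
import json

def alphabeta(game, state, depth, alpha=-float("inf"), beta=float("inf")):
    """Negamax alpha-beta: value of `state` from the side to move."""
    if depth == 0 or game.is_over(state):
        return game.evaluate(state)     # heuristic from the mover's view
    for move in game.legal_moves(state):
        alpha = max(alpha, -alphabeta(game, game.play(state, move),
                                      depth - 1, -beta, -alpha))
        if alpha >= beta:               # cutoff: the opponent avoids this line
            break
    return alpha

def run_match(game, agents, n_games=100, log_path="matches.jsonl"):
    """Alternate colors each game; write one JSON line per finished game."""
    wins = [0, 0]
    with open(log_path, "w") as log:
        for g in range(n_games):
            order = (0, 1) if g % 2 == 0 else (1, 0)
            state, ply = game.initial_state(), 0
            while not game.is_over(state):
                mover = order[ply % 2]
                state = game.play(state, agents[mover].choose_move(game, state))
                ply += 1
            result = game.result(state)  # +1 first mover, -1 second, 0 draw
            if result != 0:
                wins[order[0] if result > 0 else order[1]] += 1
            log.write(json.dumps({"game": g, "plies": ply,
                                  "first": order[0], "result": result}) + "\n")
    return wins
```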