4 Chess AI Techniques That Make Computers OP

Chess AI has revolutionized the game, outsmarting even grandmasters with techniques rooted in advanced algorithms and mathematics. Let’s explore 4 techniques behind their dominance.


1. Minimax Algorithm: The Decision-Making Engine

The Minimax algorithm[1] is at the heart of most chess AIs, simulating sequences of moves down to a fixed depth. It assumes both players act optimally:

  • The Maximizer seeks to maximize their advantage.
  • The Minimizer tries to minimize it.

How It Works:

The recursive formula for Minimax is:

\text{Minimax}(s, d) =
\begin{cases}
E(s) & \text{if } d = 0 \text{ or } s \text{ is terminal} \\
\max_{m} \text{Minimax}(\text{result}(s, m), d - 1) & \text{if the Maximizer is to move} \\
\min_{m} \text{Minimax}(\text{result}(s, m), d - 1) & \text{if the Minimizer is to move}
\end{cases}

where s is the current position, d is the remaining search depth, m ranges over the legal moves in s, and E(s) is an evaluation of the position.

Challenges:

The number of possible positions in chess grows exponentially with search depth. With a branching factor of approximately 35 legal moves per position:

\text{Complexity} = \mathcal{O}(35^d)

where d is the depth of the search.
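As a sketch of the recursion, here is a minimal Minimax over a toy game tree in which leaves are numeric scores and internal nodes are lists of children. The tree representation is purely illustrative, not a real chess interface:

```python
def minimax(node, maximizing):
    """Plain Minimax: leaves are numeric scores, internal nodes are lists of children."""
    if isinstance(node, (int, float)):
        return node  # terminal position: return its evaluation
    scores = [minimax(child, not maximizing) for child in node]
    # The Maximizer picks the largest child score, the Minimizer the smallest.
    return max(scores) if maximizing else min(scores)

# Toy 2-ply tree: the Maximizer chooses a branch, then the Minimizer replies.
tree = [[3, 5], [2, 9]]
best = minimax(tree, maximizing=True)  # the Minimizer would yield 3 or 2; the Maximizer takes 3
```

In a real engine, the leaves would come from the evaluation function of Section 3 and the children from legal move generation.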


2. Alpha-Beta Pruning: Smarter Search

To optimize Minimax, Alpha-Beta Pruning[2] eliminates branches that cannot influence the final decision. It keeps track of two bounds:

  • \alpha: The best score the Maximizer can guarantee so far.
  • \beta: The best score the Minimizer can guarantee so far.

The Pruning Condition:

When:

\alpha \geq \beta

the engine stops exploring that branch, saving valuable computational time.

Improved Efficiency:

This technique reduces the complexity of Minimax from:

\mathcal{O}(b^d) \text{ to approximately } \mathcal{O}(b^{d/2})

where b is the branching factor and d is the depth; the d/2 exponent assumes near-optimal move ordering.
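A minimal sketch of the cutoff, again over a toy tree where leaves are numeric scores and internal nodes are lists of children (an illustrative stand-in for a real board interface). The break implements the \alpha \geq \beta condition above:

```python
def alphabeta(node, alpha, beta, maximizing):
    """Minimax with Alpha-Beta cutoffs; leaves are scores, internal nodes are lists."""
    if isinstance(node, (int, float)):
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)  # best score the Maximizer can guarantee
            if alpha >= beta:          # the pruning condition from the text
                break                  # the Minimizer would never allow this branch
        return value
    value = float("inf")
    for child in node:
        value = min(value, alphabeta(child, alpha, beta, True))
        beta = min(beta, value)        # best score the Minimizer can guarantee
        if alpha >= beta:
            break
    return value

# 2-ply toy tree: after seeing the 2-leaf, the 9-leaf is pruned without being examined.
best = alphabeta([[3, 5], [2, 9]], float("-inf"), float("inf"), True)
```

The pruned search returns the same value a full Minimax would; it simply skips work that cannot change the answer.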


3. Heuristics and Evaluation Functions

Since it’s impractical to evaluate every possible position, chess engines rely on evaluation functions[3] to approximate the strength of a position.

A Simple Evaluation Function:

E(P) = M + \text{(Mobility)} + \text{(King Safety)} + \text{(Pawn Structure)}

Here, M represents the material advantage:

M = 9 \cdot Q + 5 \cdot R + 3 \cdot (N + B) + 1 \cdot P

where Q, R, N, B, and P are the numbers of queens, rooks, knights, bishops, and pawns, respectively, each counted as one’s own pieces minus the opponent’s.
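The material term M can be sketched directly from the formula, assuming piece counts are given as own-minus-opponent differences:

```python
# Classical piece values from the formula above.
PIECE_VALUES = {"Q": 9, "R": 5, "N": 3, "B": 3, "P": 1}

def material(counts):
    """Material term M; counts maps a piece letter -> (own count - opponent's count)."""
    return sum(PIECE_VALUES[p] * counts.get(p, 0) for p in PIECE_VALUES)

# Up a rook but down a pawn: M = 5 - 1 = 4
advantage = material({"R": 1, "P": -1})
```

The remaining terms (mobility, king safety, pawn structure) are engine-specific heuristics layered on top of this material baseline.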


4. Neural Networks and Monte Carlo Tree Search (MCTS)

Modern engines like AlphaZero[4] combine deep learning with Monte Carlo Tree Search (MCTS)[5] for exceptional performance.

Monte Carlo Tree Search:

MCTS doesn’t evaluate the entire game tree. Instead, it uses random simulations and a mathematical formula to guide its search:

UCB = Q_i + C \cdot \sqrt{\frac{\ln(N)}{n_i}}

Where:

  • Q_i: Average reward of node i.
  • C: Exploration parameter.
  • N: Total number of simulations for the parent node.
  • n_i: Simulations for child node i.
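The selection rule can be sketched directly from the formula. The exploration constant C \approx 1.41 (roughly \sqrt{2}) is a common default, not something the formula itself prescribes:

```python
import math

def ucb(q_i, n_i, parent_n, c=1.41):
    """UCB score for child i: average reward plus an exploration bonus."""
    if n_i == 0:
        return float("inf")  # unvisited children are always tried first
    return q_i + c * math.sqrt(math.log(parent_n) / n_i)

# Child "B" has a lower average reward but far fewer visits,
# so its exploration bonus makes it the next node to expand.
children = {"A": (0.6, 90), "B": (0.5, 10)}  # name -> (Q_i, n_i)
parent_n = 100
pick = max(children, key=lambda k: ucb(*children[k], parent_n))
```

This is the exploration–exploitation trade-off in action: rarely visited moves get a bonus so the search does not lock onto early favorites.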

Neural Networks:

Instead of relying on handcrafted heuristics, AlphaZero trains a neural network to predict:

  1. The value of a position.
  2. The probability distribution of moves.
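As an illustrative stand-in only (a real AlphaZero-style network is a deep residual network trained by self-play), the shape of those two outputs is a scalar value in [-1, 1] and a softmax distribution over legal moves. Every name and number below is hypothetical:

```python
import math

def policy_value_stub(features, move_logits):
    """Hypothetical network interface: returns (position value, move probabilities)."""
    # Value head: squash a raw score into [-1, 1] with tanh.
    value = math.tanh(sum(features))
    # Policy head: softmax over one logit per legal move.
    exps = [math.exp(z) for z in move_logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    return value, probs

value, probs = policy_value_stub([0.2, -0.1], [1.0, 0.5, -0.3])
```

In AlphaZero, these two outputs replace handcrafted evaluation and move-ordering heuristics: the value guides backup, and the policy biases which children MCTS explores.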

Final Thoughts

Chess AI combines traditional algorithms like Minimax and Alpha-Beta Pruning with modern techniques like neural networks and MCTS.

For games that rely on randomness, e.g. Monopoly 🏡, one might use probability-aware algorithms like expectimax[6] or MCTS, together with long-term optimization strategies like reinforcement learning (RL).
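To make the contrast concrete, here is a simplified expectimax sketch with only two node types, choice and chance (a full expectiminimax would also keep the opponent’s min layer). The dice-roll trees are invented for illustration:

```python
import statistics

def expectimax(node, choosing):
    """Leaves are scores; choice nodes take the max, chance nodes take the average."""
    if isinstance(node, (int, float)):
        return node
    if choosing:
        return max(expectimax(child, False) for child in node)
    # Chance node: expected value under a uniform distribution over outcomes.
    return statistics.mean(expectimax(child, True) for child in node)

# The player picks between two gambles with expected values 4.0 and 3.5.
best = expectimax([[2, 6], [1, 6]], choosing=True)
```

Where Minimax assumes a worst-case opponent, expectimax averages over chance outcomes, which is why it suits dice-driven games.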

For games like Poker ♥️, which combine imperfect information, randomness, and strategic bluffing, one might look into algorithms like counterfactual regret minimization (CFR)[7].

Footnotes

  1. https://en.wikipedia.org/wiki/Minimax
  2. https://en.wikipedia.org/wiki/Alpha%E2%80%93beta_pruning
  3. https://en.wikipedia.org/wiki/Evaluation_function
  4. https://github.com/google-deepmind/mctx
  5. https://en.wikipedia.org/wiki/Monte_Carlo_tree_search
  6. https://en.wikipedia.org/wiki/Expectiminimax
  7. https://poker.cs.ualberta.ca/publications/NIPS07-cfr.pdf
