AlphaGo-Zero-Gobang

  • Do you like to play Gobang?
  • Do you want to know how AlphaGo Zero works?
  • Check it out!

You can also read my Blog :)

View a Demo

This is a self-play model based on reinforcement learning; the running program is shown below.


Quick Start

python3 MetaZeta.py

Train

We built an AI player that makes its decisions with MCTS, assisted by a residual neural network that predicts where to place the next stone (see the sketch below).

  • How to run: click AI Self-Play (AI 自我对弈), then click Start (开始) in the upper-right corner.
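
The key step in that player is turning the visit counts MCTS accumulates at the root into a move distribution. Below is a minimal sketch of that step; the function name and the `visit_counts` input are illustrative assumptions, not the repository's actual API (the real code lives in `MCTS.py` and `AIplayer.py`):

```python
import numpy as np

def visits_to_policy(visit_counts, temperature=1.0):
    """Convert MCTS root visit counts into move probabilities.

    visit_counts : dict mapping move -> number of MCTS visits (illustrative input)
    temperature  : 1.0 keeps exploration during self-play,
                   a value close to 0 plays (almost) greedily when testing.
    """
    moves = list(visit_counts.keys())
    counts = np.array([visit_counts[m] for m in moves], dtype=np.float64)
    # Softmax over log-visit-counts scaled by 1/temperature (AlphaZero-style).
    logits = np.log(counts + 1e-10) / max(temperature, 1e-3)
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return moves, probs

# During self-play: sample for exploration; during evaluation: play greedily.
moves, probs = visits_to_policy({12: 30, 13: 55, 40: 15}, temperature=1.0)
selfplay_move = np.random.choice(moves, p=probs)
greedy_move = moves[int(np.argmax(probs))]
```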

Test

We can play against the trained AI player to test how well the AI plays.

  • How to run: click Play Against AI (与 AI对战), then click Start (开始) in the upper-right corner.

Environment

  • Ubuntu 18.04.6 LTS
  • tensorflow-gpu==2.6.2

File Structure

filename     type   description
TreeNode.py  MCTS   nodes of the MCTS decision tree
MCTS.py      MCTS   builds the MCTS decision tree
AIplayer.py  MCTS   builds an AI player based on MCTS + NN
Board.py     Board  stores the board information
Game.py      Board  defines the game flow for self-play and play-with-human
PolicyNN.py  NN     constructs a residual neural network
MetaZeta.py  Main   GUI that ties everything together (all in one)

How it works (with code explanation)

1. Board

First, we need to design some rules to describe the information on the board.
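
As one illustration of what "describing the board" can mean, here is a minimal AlphaZero-style sketch that encodes a position as a few binary feature planes for the network. The plane layout (own stones / opponent stones / last move / side to move) and the 8x8 board size are assumptions for illustration; the encoding actually used by the project is in `Board.py`.

```python
import numpy as np

def encode_board(stones, last_move, current_player, size=8):
    """Encode a Gobang position as 4 binary feature planes of shape (4, size, size).

    stones         : dict mapping (row, col) -> player id (1 or 2)
    last_move      : (row, col) of the most recent stone, or None
    current_player : player id (1 or 2) who is about to move
    """
    planes = np.zeros((4, size, size), dtype=np.float32)
    for (r, c), player in stones.items():
        if player == current_player:
            planes[0, r, c] = 1.0   # current player's stones
        else:
            planes[1, r, c] = 1.0   # opponent's stones
    if last_move is not None:
        planes[2, last_move[0], last_move[1]] = 1.0  # highlight the last move
    if current_player == 1:
        planes[3, :, :] = 1.0       # constant plane: which side is to move
    return planes

# Example: player 1 to move after player 2 just played at (3, 4).
state = encode_board({(3, 3): 1, (3, 4): 2}, last_move=(3, 4), current_player=1)
print(state.shape)  # (4, 8, 8)
```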

2. Residual neural network

Next, we need to build a residual neural network (Network structure).
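
A minimal sketch of such a policy-value network in Keras (TensorFlow 2.x). The number of residual blocks, the filter counts, and the 8x8 board size are illustrative assumptions rather than the exact architecture in `PolicyNN.py`; the point is the overall shape: a shared residual tower feeding a policy head (a probability for every intersection) and a value head (a score in [-1, 1]).

```python
import tensorflow as tf
from tensorflow.keras import layers

def residual_block(x, filters=64):
    """Conv -> BN -> ReLU -> Conv -> BN, plus a skip connection."""
    shortcut = x
    x = layers.Conv2D(filters, 3, padding="same", use_bias=False)(x)
    x = layers.BatchNormalization()(x)
    x = layers.ReLU()(x)
    x = layers.Conv2D(filters, 3, padding="same", use_bias=False)(x)
    x = layers.BatchNormalization()(x)
    x = layers.Add()([shortcut, x])
    return layers.ReLU()(x)

def build_policy_value_net(board_size=8, planes=4, blocks=3, filters=64):
    """Residual tower with a policy head (move probabilities) and a value head."""
    inputs = layers.Input(shape=(planes, board_size, board_size))
    x = layers.Permute((2, 3, 1))(inputs)          # to channels-last for Conv2D
    x = layers.Conv2D(filters, 3, padding="same", use_bias=False)(x)
    x = layers.BatchNormalization()(x)
    x = layers.ReLU()(x)
    for _ in range(blocks):
        x = residual_block(x, filters)

    # Policy head: a probability for every board intersection.
    p = layers.Conv2D(2, 1)(x)
    p = layers.Flatten()(p)
    policy = layers.Dense(board_size * board_size, activation="softmax", name="policy")(p)

    # Value head: expected outcome in [-1, 1] from the current player's view.
    v = layers.Conv2D(1, 1)(x)
    v = layers.Flatten()(v)
    v = layers.Dense(64, activation="relu")(v)
    value = layers.Dense(1, activation="tanh", name="value")(v)

    model = tf.keras.Model(inputs, [policy, value])
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
                  loss=["categorical_crossentropy", "mse"])
    return model
```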

3. MCTS ✨✨✨

Then, we need to understand how the AI makes decisions: how it accumulates knowledge about the game, and how it uses what it has learned to choose its moves.
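
The core idea: every tree node stores a prior P (from the policy head), a visit count N, and a mean value Q. Each simulation walks down the tree picking the child with the highest Q + U (the PUCT rule), expands the reached leaf with the network's move priors, and backs the leaf value up the path. A minimal sketch follows, assuming a `policy_value_fn(board)` that returns (move, prior) pairs plus a leaf value, and a board object with `do_move` / `game_over`; these names are illustrative, and the real implementation is in `TreeNode.py` and `MCTS.py`.

```python
import numpy as np

class TreeNode:
    """One node of the MCTS tree: stores prior P, visit count N and mean value Q."""
    def __init__(self, parent, prior):
        self.parent = parent
        self.children = {}        # move -> TreeNode
        self.P = prior            # prior probability from the policy head
        self.N = 0                # visit count
        self.Q = 0.0              # mean action value

    def ucb_score(self, c_puct):
        # PUCT: exploit high Q, explore moves with high prior and few visits.
        U = c_puct * self.P * np.sqrt(self.parent.N) / (1 + self.N)
        return self.Q + U

    def select(self, c_puct):
        return max(self.children.items(), key=lambda kv: kv[1].ucb_score(c_puct))

    def expand(self, move_priors):
        for move, prior in move_priors:
            if move not in self.children:
                self.children[move] = TreeNode(self, prior)

    def backup(self, value):
        # Walk back to the root, flipping the sign at each ply (zero-sum game).
        node = self
        while node is not None:
            node.N += 1
            node.Q += (value - node.Q) / node.N   # running mean of simulation values
            value = -value
            node = node.parent

def playout(root, board, policy_value_fn, c_puct=5.0):
    """One simulation: select down to a leaf, expand it with NN priors, back up the value."""
    node = root
    while node.children:
        move, node = node.select(c_puct)
        board.do_move(move)                       # assumed Board API
    move_priors, leaf_value = policy_value_fn(board)
    if not board.game_over():                     # assumed Board API
        node.expand(move_priors)
    # (A terminal position would use the true game result instead of the NN value.)
    node.backup(-leaf_value)
```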

4. Reinforcement learning

Finally, we need to understand the whole reinforcement-learning process (i.e., self-play).
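
Putting the pieces together, one round of self-play learning is: let the MCTS player play a full game against itself, record (state, MCTS move probabilities, final outcome) for every position, and train the network so that its policy head imitates the search and its value head predicts the winner. A minimal sketch under the same illustrative assumptions as above (`game`, `player`, and `model` are hypothetical objects standing in for `Game.py`, `AIplayer.py`, and `PolicyNN.py`):

```python
import random
from collections import deque
import numpy as np

def self_play_one_game(game, player):
    """Play one self-play game; return (state, mcts_probs, outcome_z) samples."""
    states, probs, players = [], [], []
    game.reset()
    while not game.is_over():
        move, move_probs = player.get_action(game, return_probs=True)
        states.append(game.current_state())       # encoded feature planes
        probs.append(move_probs)                  # MCTS move distribution
        players.append(game.current_player())
        game.do_move(move)
    winner = game.winner()                        # 1 / 2, or None for a draw
    z = np.array([0.0 if winner is None else (1.0 if p == winner else -1.0)
                  for p in players], dtype=np.float32)
    return list(zip(states, probs, z))

def train(model, game, player, iterations=1000, batch_size=512):
    buffer = deque(maxlen=10000)                  # replay buffer of recent games
    for it in range(iterations):
        buffer.extend(self_play_one_game(game, player))
        if len(buffer) >= batch_size:
            batch = random.sample(buffer, batch_size)
            s = np.array([b[0] for b in batch])
            pi = np.array([b[1] for b in batch])
            z = np.array([b[2] for b in batch])
            # Policy head learns the MCTS probabilities, value head learns the outcome.
            model.train_on_batch(s, [pi, z])
```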

About

Meta-Zeta is a reinforcement-learning-based Gobang (five-in-a-row) model. It is mainly a demo for understanding how AlphaGo Zero works: how the neural network guides MCTS in making decisions, and how the system learns through self-play. Source code + tutorial.
