Authors:
Dennis J. N. J. Soemers, Spyridon Samothrakis, Eric Piette, Matthew Stephenson

Venue:
Information Sciences, 2023

Topics:
reinforcement learning, self-play, explainable AI, general game playing, tactics extraction

Links: PDF · ScienceDirect

Abstract

This paper investigates how tactical knowledge can be extracted from agents trained through self-play in general games.

The approach focuses on identifying patterns and decision rules learned by reinforcement-learning agents and transforming them into interpretable tactical knowledge.

The results demonstrate that meaningful and human-understandable tactics can be derived from self-play, contributing to the development of more transparent and explainable AI systems.
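The paper's actual extraction pipeline is not reproduced here; as a loose, hypothetical illustration of the general idea, one could mine candidate tactics as high-confidence "condition → action" rules from self-play trajectories. The feature names, actions, and data below are invented for the sketch and are not from the paper:

```python
from collections import Counter, defaultdict

# Hypothetical self-play records: (active boolean state features, action chosen).
# Feature and action names are illustrative only, not the authors' representation.
trajectories = [
    ({"own_threat_of_3", "center_open"}, "extend_line"),
    ({"own_threat_of_3"}, "extend_line"),
    ({"opp_threat_of_3"}, "block_line"),
    ({"opp_threat_of_3", "center_open"}, "block_line"),
    ({"center_open"}, "play_center"),
]

def mine_tactics(data, min_support=2, min_confidence=0.9):
    """Return (feature, action, confidence) rules with high empirical confidence."""
    feature_counts = Counter()
    pair_counts = defaultdict(Counter)
    for features, action in data:
        for f in features:
            feature_counts[f] += 1
            pair_counts[f][action] += 1
    rules = []
    for f, total in feature_counts.items():
        if total < min_support:
            continue  # too rare to trust
        action, count = pair_counts[f].most_common(1)[0]
        confidence = count / total
        if confidence >= min_confidence:
            rules.append((f, action, confidence))
    return rules

for feature, action, conf in mine_tactics(trajectories):
    print(f"if {feature}: {action}  (confidence {conf:.2f})")
```

In this toy run, the ambiguous feature `center_open` is filtered out (its most frequent action covers only a third of its occurrences), while the two threat features survive as clean, human-readable rules.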

Context

This work lies at the intersection of reinforcement learning and explainable AI, two central themes in modern artificial intelligence.

Within the Ludii framework, it contributes to understanding how general game playing agents acquire knowledge and how this knowledge can be made interpretable.

The paper is particularly relevant for bridging the gap between high-performance AI systems and human-understandable reasoning, aligning with research on human-like AI agents.

Full reference

Soemers, D. J. N. J., Samothrakis, S., Piette, E., & Stephenson, M. (2023). Extracting Tactics Learned from Self-Play in General Games. Information Sciences.

BibTeX

@article{soemers2023tactics,
  author  = {Soemers, Dennis J. N. J. and Samothrakis, Spyridon and Piette, Eric and Stephenson, Matthew},
  title   = {Extracting Tactics Learned from Self-Play in General Games},
  journal = {Information Sciences},
  year    = {2023},
  url     = {https://www.sciencedirect.com/science/article/pii/S0020025522015754}
}