Designing Curriculum for Deep Reinforcement Learning in StarCraft II


Authors

Hao, Daniel
Sweetser Kyburz, Penny
Aitchison, Matthew

Publisher

Springer

Abstract

Reinforcement learning (RL) has proven successful in games, but suffers from long training times compared to other forms of machine learning. Curriculum learning, an optimisation technique that improves a model's ability to learn by presenting training samples in a meaningful order, known as a curriculum, could offer a solution. Curricula are usually designed manually, due to limitations involved in automating curriculum generation. However, as there is a lack of research into effective curriculum design, researchers often rely on intuition, and the resulting performance can vary. In this paper, we explore different ways of manually designing curricula for RL in the real-time strategy game StarCraft II. We propose four generalised methods of manually creating curricula and verify their effectiveness through experiments. Our results show that all four of our proposed methods can improve an RL agent's learning process when used correctly. We demonstrate that using subtasks, or modifying the state space of the tasks, is the most effective way to create training samples for StarCraft II. We found that utilising subtasks during training consistently accelerated the learning process of the agent and improved the agent's final performance.
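The core idea described above, presenting training tasks in a meaningful order rather than at random, can be illustrated with a minimal toy sketch. This is not the paper's method; the task names, difficulty scores, and skill model are hypothetical, chosen only to show why an easy-to-hard ordering can help when an agent only benefits from tasks near its current ability.

```python
# Toy sketch of curriculum learning (illustrative only, not the paper's method).
# Tasks, difficulty values, and the skill model below are all hypothetical.

def build_curriculum(tasks):
    """Order training tasks by their assigned difficulty score (easy first)."""
    return sorted(tasks, key=lambda t: t["difficulty"])

def train(agent_skill, task_sequence, gain=1.0):
    """Toy training loop: the agent only learns from a task if its current
    skill is within reach of the task's difficulty; otherwise the task is
    too hard and contributes nothing."""
    for task in task_sequence:
        if agent_skill + 1.0 >= task["difficulty"]:
            agent_skill += gain * task["difficulty"]
    return agent_skill

# Hypothetical StarCraft II-style mini-tasks with made-up difficulty scores.
tasks = [
    {"name": "DefeatRoaches", "difficulty": 3.0},
    {"name": "MoveToBeacon", "difficulty": 1.0},
    {"name": "CollectMinerals", "difficulty": 2.0},
]

curriculum = build_curriculum(tasks)
skill_with_curriculum = train(0.0, curriculum)   # easy-to-hard ordering
skill_without = train(0.0, tasks)                # original (unordered) sequence
```

In this toy model the curriculum-ordered agent clears every task because each one raises its skill enough to attempt the next, while the unordered agent skips the hardest task it meets too early and ends with a lower final skill, mirroring the kind of benefit the abstract attributes to well-designed curricula.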

Book Title

AI 2020: Advances in Artificial Intelligence
33rd Australasian Joint Conference, AI 2020, Canberra, ACT, Australia, November 29–30, 2020, Proceedings

Access Statement

Open Access
