A major limitation of most reinforcement learning methods is low sample efficiency: to achieve reasonable performance, the algorithms typically need a huge amount of data. One way to address this is to use expert demonstrations, which serve as good examples of behavior for the agent.
We will define the problem of learning not from a single expert but from many of them, and discuss how it can be solved for discrete state spaces. We will also discuss how demonstrations can be combined with deep reinforcement learning methods; more specifically, how state-of-the-art DQNs can benefit from such data.
Speaker: Nikita Sazanovich.
Presentation language: Russian.
Date and Time: November 26th, 18:30-20:00.
Place: Times, room 204.