The human brain is able to accumulate knowledge and reuse it in novel tasks. In machine learning and reinforcement learning, this kind of knowledge transfer remains an open problem.
At the seminar we will discuss various approaches to continual (lifelong) learning, focusing on deep learning methods such as PG-ELLA, Policy Distillation, Learning without Forgetting, and Pseudo-Rehearsal.
 Ammar et al. "Online multi-task learning for policy gradient methods." ICML (2014). http://proceedings.mlr.press/v32/ammar14.pdf
 Rusu et al. "Policy distillation." arXiv preprint (2015). https://arxiv.org/pdf/1511.06295
 Li & Hoiem. "Learning without forgetting." IEEE Transactions on Pattern Analysis and Machine Intelligence (2018). https://arxiv.org/pdf/1606.09282
 Atkinson et al. "Pseudo-Rehearsal: Achieving Deep Reinforcement Learning without Catastrophic Forgetting." arXiv preprint (2018). https://arxiv.org/pdf/1812.02464
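To give a flavor of one of the methods above: in Policy Distillation (Rusu et al., 2015), a trained teacher's Q-values are turned into a sharpened action distribution, and the student is trained to match it via KL divergence. Below is a minimal NumPy sketch of that loss; all names, shapes, and the example values are illustrative, not the paper's actual implementation.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(teacher_q, student_logits, temperature=0.01):
    """KL(teacher || student): the teacher's Q-values are sharpened by a
    low temperature into a near-deterministic target distribution, which
    the student policy is trained to match."""
    p = softmax(teacher_q, temperature)   # teacher target distribution
    q = softmax(student_logits)           # student policy
    return float(np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12))))

# Illustrative check: the loss shrinks as the student matches the teacher.
teacher_q = np.array([2.0, 1.0, 0.5])
far = distillation_loss(teacher_q, np.array([0.0, 0.0, 5.0]))
near = distillation_loss(teacher_q, teacher_q / 0.01)
assert near < far
```

In practice this loss is minimized over states sampled from the teacher's replay memory, which is what lets a single small student network absorb several task-specific teachers.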
Speaker: Azat Tagirdzhanov.
Presentation language: Russian.
Date and Time: April 2nd, 18:30-20:00.
Place: Times, room 204.
Videos from previous seminars are available at http://bit.ly/MLJBSeminars
6 April 2020 — Agent57: Outperforming the Atari Human Benchmark
23 March 2020 — Dream To Control
16 March 2020 — Why does hierarchical learning (sometimes) work?
2 March 2020 — Model-Based RL for Atari games
17 February 2020 — A Survey and Critique of Multiagent Deep Reinforcement Learning