The human brain can accumulate knowledge and reuse it in novel tasks. In machine learning and reinforcement learning, such knowledge transfer remains an open problem.
At the seminar we will discuss various approaches to continual (lifelong) learning, focusing on deep learning methods such as PG-ELLA, Policy Distillation, Learning without Forgetting, and Pseudo-Rehearsal.
Ammar et al. "Online multi-task learning for policy gradient methods." ICML (2014). http://proceedings.mlr.press/v32/ammar14.pdf
Rusu et al. "Policy distillation." arXiv preprint (2015). https://arxiv.org/pdf/1511.06295
Li & Hoiem. "Learning without forgetting." IEEE Transactions on Pattern Analysis and Machine Intelligence (2018). https://arxiv.org/pdf/1606.09282
Atkinson et al. "Pseudo-Rehearsal: Achieving Deep Reinforcement Learning without Catastrophic Forgetting." arXiv preprint (2018). https://arxiv.org/pdf/1812.02464
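To give a flavor of one of the approaches above: Policy Distillation (Rusu et al., 2015) trains a student network to match a teacher's action distribution via a KL divergence, with a low temperature sharpening the teacher's Q-values. A minimal NumPy sketch, assuming a Q-value teacher; function names and the toy inputs are ours, not from the paper:

```python
import numpy as np

def softmax(z, tau=1.0):
    """Numerically stable softmax with temperature tau."""
    z = np.asarray(z, dtype=float) / tau
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(teacher_q, student_q, tau=0.01):
    """Mean KL(teacher || student) over action distributions.

    A small tau sharpens the teacher's Q-values toward its greedy
    policy, as suggested in the Policy Distillation paper.
    """
    p = softmax(teacher_q, tau)   # sharpened teacher policy
    q = softmax(student_q)        # student policy
    return float(np.sum(p * (np.log(p) - np.log(q)), axis=-1).mean())
```

The student is then trained by gradient descent on this loss over states sampled from the teacher's replay memory; with identical inputs and tau=1.0 the loss is zero, and it grows as the student's policy diverges from the teacher's.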
Speaker: Azat Tagirdzhanov.
Presentation language: Russian.
Date and Time: April 2nd, 18:30-20:00.
Place: Times, room 204.
Videos from previous seminars are available at http://bit.ly/MLJBSeminars
18 May 2020: The AI Economist
11 May 2020: Self-Tuning Deep Reinforcement Learning
27 April 2020: Sample Efficiency in RL
20 April 2020: Silly rules improve the capacity of agents to learn stable enforcement and compliance behaviors
13 April 2020: AlphaGo to MuZero. The computer's victory over humans in intellectual games.