Reward shaping in reinforcement learning
Reinforcement learning tasks are typically formulated around some kind of reward function. This function determines how effectively our algorithms train and what the optimal policy turns out to be.
At the seminar, we will look at examples of how the reward function can be modified to improve an algorithm's convergence. We will also discuss various potential functions and how to use them.
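For readers unfamiliar with the topic, here is a minimal sketch of potential-based reward shaping, the standard formulation from Ng et al. (1999), where a term F(s, s') = γΦ(s') − Φ(s) is added to the environment reward; the grid-world distance potential below is a hypothetical illustration, not an example taken from the seminar.

```python
import numpy as np

def shaped_reward(reward, state, next_state, potential, gamma=0.99):
    """Potential-based reward shaping: add F(s, s') = gamma * Phi(s') - Phi(s)
    to the environment reward. This densifies the reward signal while
    preserving the optimal policy."""
    return reward + gamma * potential(next_state) - potential(state)

# Hypothetical example: in a grid world, use negative distance to the goal
# as the potential, so transitions that move toward the goal get a small bonus.
goal = np.array([9, 9])
potential = lambda s: -np.linalg.norm(np.asarray(s, dtype=float) - goal)

r_shaped = shaped_reward(reward=0.0, state=(0, 0), next_state=(1, 1),
                         potential=potential)
print(r_shaped)  # positive: the step toward the goal is rewarded
```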
Speaker: Oleg Svidchenko.
Presentation language: Russian.
Date and Time: March 5th, 18:30-20:00.
Place: Times, room 204.
Videos from previous seminars are available at http://bit.ly/MLJBSeminars
6 April 2020 - Agent57: Outperforming the Atari Human Benchmark
23 March 2020 - Dream To Control
16 March 2020 - Why does hierarchical learning (sometimes) work?
2 March 2020 - Model-Based RL for Atari games
17 February 2020 - A Survey and Critique of Multiagent Deep Reinforcement Learning