Knowledge Distillation

A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions. Unfortunately, making predictions using a whole ensemble of models is cumbersome and may be too computationally expensive to allow deployment to a large number of users, especially if the individual models are large neural nets.
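
To make the ensemble idea concrete, here is a minimal sketch in PyTorch (not part of the seminar materials): predictions are combined by averaging each model's softmax output. The toy models, their sizes, and the random inputs are placeholder assumptions for illustration only.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def ensemble_predict(models, x):
        """Average the class probabilities of several independently trained models."""
        with torch.no_grad():
            probs = [F.softmax(m(x), dim=1) for m in models]
        return torch.stack(probs).mean(dim=0)

    # Toy usage: three small classifiers and a random batch (placeholders only).
    models = [nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 10))
              for _ in range(3)]
    x = torch.randn(8, 20)
    avg_probs = ensemble_predict(models, x)   # shape: (8, 10)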

Knowledge distillation is a model compression method designed to solve this problem: a small "student" model is trained to mimic a larger, pre-trained "teacher" model.
At the next seminar we will discuss this approach, go over the most relevant papers, and walk through some coding examples.
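
As a preview of those examples, below is a minimal sketch of the classic distillation loss from Hinton et al., written in PyTorch: the student is trained on a temperature-softened KL term against the teacher's outputs plus an ordinary cross-entropy term on the hard labels. The toy networks, the temperature T, and the weight alpha are illustrative assumptions, not the exact setup that will be shown at the seminar.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
        """Soft-target KL term plus hard-label cross-entropy (Hinton et al.)."""
        soft_targets = F.softmax(teacher_logits / T, dim=1)
        soft_student = F.log_softmax(student_logits / T, dim=1)
        # Scaling by T^2 keeps the soft-term gradients on the same scale
        # as the hard-label term when a large temperature is used.
        soft_loss = F.kl_div(soft_student, soft_targets,
                             reduction="batchmean") * (T * T)
        hard_loss = F.cross_entropy(student_logits, labels)
        return alpha * soft_loss + (1.0 - alpha) * hard_loss

    # Toy usage: distil a larger "teacher" MLP into a smaller "student" on random data.
    teacher = nn.Sequential(nn.Linear(20, 256), nn.ReLU(), nn.Linear(256, 10))
    student = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 10))
    x, y = torch.randn(64, 20), torch.randint(0, 10, (64,))

    teacher.eval()
    with torch.no_grad():                 # the teacher is frozen during distillation
        teacher_logits = teacher(x)
    loss = distillation_loss(student(x), teacher_logits, y)
    loss.backward()                       # gradients flow only into the student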

Papers:

Cristian Bucila, Rich Caruana, and Alexandru Niculescu-Mizil. Model Compression.


Geoffrey Hinton, Oriol Vinyals and Jeff Dean. Distilling the Knowledge in a Neural Network.


Antonio Polino, Razvan Pascanu and Dan Alistarh. Model compression via distillation and quantization.


Hokchhay Tann, Soheil Hashemi, Iris Bahar and Sherief Reda. Hardware-Software Codesign of Accurate, Multiplier-free Deep Neural Networks.


Asit Mishra and Debbie Marr. Apprentice: Using Knowledge Distillation Techniques To Improve Low-Precision Network Accuracy.


Anubhav Ashok, Nicholas Rhinehart, Fares Beainy and Kris M. Kitani. N2N learning: Network to Network Compression via Policy Gradient Reinforcement Learning.


Lucas Theis, Iryna Korshunova, Alykhan Tejani and Ferenc Huszár. Faster gaze prediction with dense networks and Fisher pruning.

Speaker: Kyryl Truskovskyi.

Presentation language: Russian.

Date and time: March 13th, 18:30-20:00.

Location: Times, room 204.

Videos from previous seminars are available at http://bit.ly/MLJBSeminars