
Interpretation of Decision Trees

At this seminar we will discuss the interpretation of decision trees. Interpretability is an important aspect of machine learning: it helps us understand why a model made a certain decision, which factors were important for that decision, and how to check and improve the model. Decision trees are sometimes considered easy to interpret, and for a model consisting of a single small tree this is true. In practice, however, we usually work with ensembles of thousands of trees, and interpreting them becomes a problem. I will talk about some model-agnostic interpretation methods as well as some methods specific to trees. We will discuss the properties, problems, and limitations of these methods.
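
As a small illustration (not part of the talk materials), the sketch below contrasts a tree-specific interpretation signal, the impurity-based feature importances built into a random forest, with a model-agnostic one, permutation importance from scikit-learn. The dataset, model, and parameters are arbitrary choices made only for the example.

# Minimal sketch: tree-specific vs. model-agnostic importance for an ensemble.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Tree-specific: impurity-based importances, computed from the training data.
impurity_importance = model.feature_importances_

# Model-agnostic: permutation importance, measured on held-out data.
perm = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Print the five features ranked highest by permutation importance.
ranked = sorted(
    zip(X.columns, impurity_importance, perm.importances_mean),
    key=lambda t: -t[2],
)
for name, imp, perm_imp in ranked[:5]:
    print(f"{name:25s} impurity={imp:.3f} permutation={perm_imp:.3f}")

The two rankings often disagree, which is one of the practical issues such a seminar typically touches on.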

Speaker: Igor Labutin.

Presentation language: Russian.

Date and time: February 28th, 20:00-21:30.

Location: Times, room 405.

Videos from seminars will be available at http://bit.ly/MLJBSeminars