Summary: The "interpretability-accuracy trade-off" is often discussed: deep learning, gradient-boosted trees and other powerful machine learning methods can capture complex relationships in the data, but lack transparency and interpretability compared to more traditional methods. In this talk, I'll briefly review a few of the most popular techniques for measuring feature importance in black-box models, with a focus on a novel class of methods stemming from the game-theoretic concept of Shapley values.

Bio: Tadas is a data scientist at Zopa who is passionate about using machine learning and technology to bring more transparency and efficiency to retail finance. Prior to Zopa, he was an early employee of the fintech start-up Oodle Finance. He holds a master's degree in Mathematics & Philosophy from the University of Oxford.
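As a minimal sketch of the game-theoretic idea behind the talk (not the speaker's implementation): a feature's exact Shapley value is its marginal contribution averaged over all coalitions of the other features. The toy additive `value_fn` and weights below are hypothetical, chosen so each feature's Shapley value equals its own weight.

```python
from itertools import combinations
from math import factorial

def shapley_values(value_fn, n_features):
    """Exact Shapley values: each feature's weighted average marginal
    contribution over all coalitions (exponential in n_features, so
    practical libraries use sampling or model-specific shortcuts)."""
    features = list(range(n_features))
    phi = [0.0] * n_features
    for i in features:
        others = [f for f in features if f != i]
        for k in range(len(others) + 1):
            for S in combinations(others, k):
                # Shapley weight |S|! (n - |S| - 1)! / n!
                w = factorial(len(S)) * factorial(n_features - len(S) - 1) / factorial(n_features)
                phi[i] += w * (value_fn(set(S) | {i}) - value_fn(set(S)))
    return phi

# Hypothetical additive "model": the value of a coalition is the sum
# of the weights of the features present in it.
weights = [2.0, -1.0, 0.5]
def value_fn(S):
    return sum(weights[j] for j in S)

phi = shapley_values(value_fn, 3)
# For an additive game, each Shapley value recovers the feature's weight
# (up to floating-point error), and they sum to value_fn(all) - value_fn(empty).
```

For real models the coalition value is usually approximated by marginalising out absent features (as in SHAP), since a black-box model cannot simply drop inputs.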