Shapley regressions: a framework for statistical inference on machine learning models

Staff working papers set out research in progress by our staff, with the aim of encouraging comments and debate.
Published on 08 March 2019

Staff Working Paper No. 784

By Andreas Joseph

Machine learning models often excel in the accuracy of their predictions but are opaque due to their non-linear and non-parametric structure. This makes statistical inference challenging and disqualifies them from many applications where model interpretability is crucial. This paper proposes the Shapley regression framework as an approach for statistical inference on non-linear or non-parametric models. Inference is performed based on the Shapley value decomposition of a model, a pay-off concept from cooperative game theory. I show that universal approximators from machine learning are estimation consistent and introduce hypothesis tests for individual variable contributions, model bias and parametric functional forms. The inference properties of state-of-the-art machine learning models, such as artificial neural networks, support vector machines and random forests, are investigated using numerical simulations and real-world data. The proposed framework is unique in the sense that it is identical to the conventional case of statistical inference on a linear model if the model is linear in parameters. This makes it a well-motivated extension to more general models and strengthens the case for the use of machine learning to inform decisions.
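To illustrate the pay-off concept underlying the framework, the following Python sketch (not taken from the paper; function names and the baseline treatment are illustrative assumptions) computes exact Shapley values by enumerating coalitions of features, with absent features held at a baseline point. It also checks the property highlighted in the abstract: for a model that is linear in parameters, the decomposition reduces to the familiar per-variable contributions b_i * (x_i - baseline_i).

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley decomposition of f(x) relative to a baseline point.

    Absent features are fixed at their baseline values (a common
    simplification of the underlying conditional expectation)."""
    n = len(x)
    idx = list(range(n))
    phi = [0.0] * n
    for i in idx:
        others = [j for j in idx if j != i]
        for k in range(len(others) + 1):
            for S in combinations(others, k):
                # Shapley kernel weight for a coalition of size |S|
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                with_i = [x[j] if (j in S or j == i) else baseline[j] for j in idx]
                without_i = [x[j] if j in S else baseline[j] for j in idx]
                phi[i] += w * (f(with_i) - f(without_i))
    return phi

# Hypothetical linear model: for models linear in parameters, the Shapley
# decomposition recovers the per-variable terms b_i * (x_i - baseline_i).
beta = [2.0, -1.0, 0.5]
f = lambda z: sum(b * v for b, v in zip(beta, z))
x, x0 = [1.0, 3.0, -2.0], [0.0, 0.0, 0.0]
phi = shapley_values(f, x, x0)
```

The contributions sum to f(x) - f(baseline) (the efficiency property), which is what allows the decomposition to be used as the basis of a regression-style inference step.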


The code and data to reproduce all analyses presented in this paper are provided on GitHub.
