From interpretability to inference: an estimation framework for universal approximators

Staff working papers set out research in progress by our staff, with the aim of encouraging comments and debate.
Published on 08 March 2019

Staff Working Paper No. 784

By Andreas Joseph

We present a novel framework for estimation and inference with the broad class of universal approximators. Estimation is based on the decomposition of model predictions into Shapley values. Inference relies on analysing the bias and variance properties of individual Shapley components. We show that Shapley value estimation is asymptotically unbiased, and we introduce Shapley regressions as a tool to uncover the true data generating process from noisy data alone. Ordinary linear regression emerges as a special case of our framework when the model is linear in parameters. We present theoretical, numerical, and empirical results, using the estimation of heterogeneous treatment effects as our guiding example.
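To make the decomposition concrete, the sketch below (not the paper's implementation; all function and variable names are illustrative) computes exact Shapley values for a model's prediction by averaging marginal contributions over feature coalitions, replacing missing features with their background means. For a model that is linear in parameters, each Shapley component reduces to beta_k * (x_k - E[x_k]), which is the sense in which linear regression is the special case mentioned above.

```python
# Minimal sketch, assuming a mean-replacement baseline for "missing" features.
# Names (shapley_values, background, baseline) are illustrative, not from the paper.
import itertools
import math
import numpy as np

def shapley_values(f, x, background):
    """Exact Shapley decomposition of the prediction f(x).

    Features outside a coalition are set to their background means; the
    marginal contribution of feature k is averaged over all coalitions
    with the standard Shapley weights |S|! (n-|S|-1)! / n!.
    """
    n = len(x)
    baseline = background.mean(axis=0)
    phi = np.zeros(n)
    for k in range(n):
        others = [j for j in range(n) if j != k]
        for r in range(n):
            for S in itertools.combinations(others, r):
                w = math.factorial(len(S)) * math.factorial(n - len(S) - 1) / math.factorial(n)
                z_with, z_without = baseline.copy(), baseline.copy()
                z_with[list(S) + [k]] = x[list(S) + [k]]   # coalition plus feature k
                z_without[list(S)] = x[list(S)]            # coalition only
                phi[k] += w * (f(z_with) - f(z_without))
    return phi

# Illustration with a model linear in parameters: phi_k ~= beta_k * (x_k - mean(x_k)).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
beta = np.array([2.0, -1.0, 0.5])
f = lambda z: float(z @ beta)

x = X[0]
phi = shapley_values(f, x, X)
print(phi)                               # close to beta * (x - X.mean(axis=0))
print(phi.sum() + f(X.mean(axis=0)))     # efficiency: components sum to f(x)
print(f(x))
```

The exact computation above scales exponentially in the number of features; in practice Shapley components of flexible learners are approximated (for example by sampling coalitions), and the paper's inference step then analyses the bias and variance of those estimated components.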

This version was updated in December 2024.

Previous versions of this Staff Working Paper were available under the titles ‘Shapley regressions: a framework for statistical inference on machine learning models’ and ‘Parametric inference with universal function approximators’.
