Significance Tests for Neural Networks
Founder, Chairman and Chief Scientist at Infima, Professor at Stanford University
Senior Research Scientist, Upstart
This paper develops a pivotal test to assess the statistical significance of the feature variables in a single-layer feedforward neural network regression model. We propose a gradient-based test statistic and study its asymptotics using nonparametric techniques. Under technical conditions, the limiting distribution is given by a mixture of chi-square distributions. The tests enable one to discern the impact of individual variables on the prediction of a neural network. The test statistic can be used to rank variables according to their influence. Simulation results illustrate the computational efficiency and the performance of the test. An empirical application to house price valuation highlights the behavior of the test using actual data.
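The abstract's idea of ranking input variables by a gradient-based statistic can be illustrated with a minimal sketch: fit a single-hidden-layer network, compute the partial derivatives of the fitted function with respect to each input, and score each feature by its mean squared gradient. This is a simplified proxy in the spirit of the paper's statistic, not the pivotal test itself; the toy data, network size, and training loop below are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression: y depends on x1 and x2 but not x3 (hypothetical setup).
n, d = 500, 3
X = rng.normal(size=(n, d))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + 0.1 * rng.normal(size=n)

# Fit a single-hidden-layer tanh network by full-batch gradient descent on MSE.
H, lr, steps = 8, 0.1, 3000
W1 = rng.normal(scale=0.5, size=(d, H))
b1 = np.zeros(H)
w2 = rng.normal(scale=0.5, size=H)
b2 = 0.0
for _ in range(steps):
    A = np.tanh(X @ W1 + b1)          # hidden activations, shape (n, H)
    resid = A @ w2 + b2 - y           # prediction residuals, shape (n,)
    g_pred = 2.0 * resid / n          # d(MSE)/d(prediction)
    g_w2 = A.T @ g_pred
    g_b2 = g_pred.sum()
    g_Z = np.outer(g_pred, w2) * (1.0 - A**2)  # backprop through tanh
    W1 -= lr * (X.T @ g_Z)
    b1 -= lr * g_Z.sum(axis=0)
    w2 -= lr * g_w2
    b2 -= lr * g_b2

# Input gradients of the fitted network: df/dx_j for each sample (chain rule).
A = np.tanh(X @ W1 + b1)
G = ((1.0 - A**2) * w2) @ W1.T        # shape (n, d)

# Gradient-based importance score per feature: mean squared partial derivative.
scores = (G**2).mean(axis=0)
ranking = np.argsort(scores)[::-1]    # features ordered by estimated influence
```

On this toy data the irrelevant feature x3 receives the smallest score, so the ranking recovers the informative variables; the paper's contribution is the distribution theory (a chi-square mixture limit) that turns such a statistic into a formal significance test.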
This paper is published in the Journal of Machine Learning Research, volume 21 (227), pages 1-29, 2020.