
EPJ C Highlight - A cautionary tale of machine learning uncertainty

Underestimating machine learning uncertainty

By decorrelating the performance of machine learning algorithms from imperfections in the simulations used to train them, researchers could be estimating uncertainties that are lower than their true values.

The Standard Model of particle physics offers a robust theoretical picture of the fundamental particles, and most of the fundamental forces, that make up the universe. All the same, there are several aspects of the universe – from the existence of dark matter to the oscillating nature of neutrinos – which the model can’t explain, suggesting that the mathematical descriptions it provides are incomplete. While experiments so far have been unable to identify significant deviations from the Standard Model, physicists hope that these gaps could start to appear as experimental techniques become increasingly sensitive.

A key element of these improvements is the use of machine learning algorithms, which can automatically improve upon classical techniques by using higher-dimensional inputs and extracting patterns from many training examples. Yet in a new analysis published in EPJ C, Aishik Ghosh at the University of California, Irvine, and Benjamin Nachman at the Lawrence Berkeley National Laboratory, USA, show that researchers using machine learning methods risk underestimating uncertainties in their final results.

In this context, machine learning algorithms can be trained to identify particles and forces within the data collected in experiments such as high-energy collisions at particle accelerators – and to flag new particles that don’t match the theoretical predictions of the Standard Model. To train machine learning algorithms, physicists typically use simulations of experimental data, which are based on advanced theoretical calculations. The trained algorithms can then classify particles in real experimental data.
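To make the workflow concrete, the sketch below (a toy illustration, not the authors’ actual analysis) trains a classifier on simulated “signal” and “background” events and then applies it to stand-in “data”; the features, distributions and library choices are assumptions made purely for demonstration.

```python
# Toy sketch: train on simulated events, then score (pseudo-)data.
# All feature names and distributions are invented for illustration.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 5000

# Simulated "signal" and "background" events with two toy features
# (e.g. an invariant mass and an angular variable).
sig = rng.normal(loc=[1.0, 0.5], scale=[0.3, 0.2], size=(n, 2))
bkg = rng.normal(loc=[0.0, 0.0], scale=[0.5, 0.3], size=(n, 2))

X_train = np.vstack([sig, bkg])
y_train = np.concatenate([np.ones(n), np.zeros(n)])  # 1 = signal, 0 = background

clf = GradientBoostingClassifier().fit(X_train, y_train)

# "Real" data would come from the detector; here we just draw more toy events.
X_data = rng.normal(loc=[0.5, 0.25], scale=[0.5, 0.3], size=(1000, 2))
signal_scores = clf.predict_proba(X_data)[:, 1]  # per-event signal probability
print("mean signal score on data:", signal_scores.mean())
```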

These training simulations may be incredibly accurate, but even so, they can only approximate what is actually observed in a real experiment. As a result, researchers need to estimate the possible differences between their simulations and true nature – giving rise to theoretical uncertainties. In turn, these differences can weaken or even bias a classifier’s ability to identify fundamental particles.
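One common way such a theoretical uncertainty is assigned – again a simplified sketch rather than the exact procedure studied in the paper – is to evaluate the analysis on a nominal simulation and on an alternative simulation (for instance, one using a different model of the underlying physics), and to take the resulting shift as the uncertainty. All numbers below are invented.

```python
# Hedged sketch: assign a theory uncertainty from a nominal vs. alternative simulation.
import numpy as np

def selection_efficiency(scores: np.ndarray, threshold: float = 0.5) -> float:
    """Fraction of events passing a cut on the classifier score."""
    return float(np.mean(scores > threshold))

rng = np.random.default_rng(1)
scores_nominal = rng.beta(5.0, 2.0, size=10_000)      # classifier scores, nominal simulator
scores_alternative = rng.beta(4.5, 2.2, size=10_000)  # same selection, alternative simulator

eff_nominal = selection_efficiency(scores_nominal)
eff_alternative = selection_efficiency(scores_alternative)

# The shift between the two simulators is taken as the theory uncertainty.
theory_uncertainty = abs(eff_nominal - eff_alternative)
print(f"efficiency (nominal)        = {eff_nominal:.3f}")
print(f"efficiency (alternative)    = {eff_alternative:.3f}")
print(f"assigned theory uncertainty = {theory_uncertainty:.3f}")
```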

Recently, physicists have increasingly begun to consider how machine learning approaches could be made insensitive to these estimated theoretical uncertainties. The idea is to decorrelate the performance of the algorithms from imperfections in the simulations. If this could be done effectively, it would allow for algorithms whose uncertainties are far lower than those of traditional classifiers trained on the same simulations. But as Ghosh and Nachman argue, estimating theoretical uncertainties essentially involves well-motivated guesswork – making it crucial for researchers to be cautious about this insensitivity.
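As a rough illustration, one decorrelation-style training objective from the broader literature is sketched below: a penalty that keeps the classifier output stable between the nominal simulation and a systematically varied one. This is an assumed example, not the specific method examined by Ghosh and Nachman, and the cautionary point is precisely that such a penalty only targets the estimated variation, not the unknown true difference from nature.

```python
# Conceptual sketch of a decorrelation-style objective (one approach among several;
# not the paper's specific method). The network is penalised when its output changes
# between nominal and systematically varied simulations of the same events.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()
lam = 1.0  # strength of the decorrelation penalty (hypothetical choice)

def training_step(x_nominal, x_varied, labels):
    """x_nominal / x_varied: same events under nominal vs. varied theory settings.
    labels: float tensor of 0/1 (background/signal)."""
    opt.zero_grad()
    out_nom = net(x_nominal)
    out_var = net(x_varied)
    # Usual classification loss on the nominal simulation ...
    loss_class = bce(out_nom.squeeze(1), labels)
    # ... plus a penalty driving the output to be insensitive to the variation.
    loss_decorr = ((out_nom - out_var) ** 2).mean()
    loss = loss_class + lam * loss_decorr
    loss.backward()
    opt.step()
    return loss.item()

# Example call with toy tensors:
# x_nom = torch.randn(128, 2); x_var = x_nom + 0.05 * torch.randn(128, 2)
# labels = torch.randint(0, 2, (128,)).float()
# training_step(x_nom, x_var, labels)
```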

In particular, the duo argues there is a real danger that these techniques will simply deceive the unsuspecting researcher by reducing only the estimate of the uncertainty, rather than the true uncertainty. A machine learning procedure that is insensitive to the estimated theory uncertainty may not be insensitive to the actual difference between nature and the approximations used to simulate the training data. If physicists aren’t careful, this could lead them to artificially underestimate their theory uncertainties. In high-energy particle collisions, for example, it could cause a classifier to incorrectly confirm the presence of certain fundamental particles.

In presenting this ‘cautionary tale’, Ghosh and Nachman hope that future assessments of the Standard Model that use machine learning will not be caught out by artificially shrunken uncertainty estimates. This would help physicists to ensure the reliability of their results, even as experimental techniques become ever more sensitive. In turn, it could pave the way for experiments that finally reveal long-awaited gaps in the Standard Model’s predictions.

Ghosh, A., Nachman, B. A cautionary tale of decorrelating theory uncertainties. Eur. Phys. J. C 82, 46 (2022). https://doi.org/10.1140/epjc/s10052-022-10012-w
