Understanding from Machine Learning Models

British Journal for the Philosophy of Science 73 (1):109-133 (2022)

Abstract

Simple idealized models seem to provide more understanding than opaque, complex, and hyper-realistic models. However, an increasing number of scientists are going in the opposite direction by utilizing opaque machine learning models to make predictions and draw inferences, suggesting that scientists are opting for models that have less potential for understanding. Are scientists trading understanding for some other epistemic or pragmatic good when they choose a machine learning model? Or are the assumptions behind why minimal models provide understanding misguided? In this paper, using the case of deep neural networks, I argue that it is not the complexity or black box nature of a model that limits how much understanding the model provides. Instead, it is a lack of scientific and empirical evidence supporting the link that connects a model to the target phenomenon that primarily prohibits understanding.


Author's Profile

Emily Sullivan
Utrecht University
