The Limits of Value Transparency in Machine Learning

Philosophy of Science 89 (5):1054-1064 (2022)

Abstract

Transparency has been proposed as a way of handling value-ladenness in machine learning (ML). This article highlights limits to this strategy. I distinguish three kinds of transparency: epistemic transparency, retrospective value transparency, and prospective value transparency. These correspond to different approaches to transparency in ML, including so-called explainable artificial intelligence and governance based on disclosing information about the design process. I discuss three sources of value-ladenness in ML (problem formulation, inductive risk, and specification gaming) and argue that retrospective value transparency is well suited only for dealing with the first, while the third raises serious challenges even for prospective value transparency.

Links

PhilArchive

Author's Profile

Rune Nyrup
Cambridge University
