Philosophical Warnings on the Use of AI in Education: Timely Advice from Plato, Martin Heidegger, Zhuangzi, and C. S. Lewis

Interdisciplinary Journal of Human and Social Studies (forthcoming)

Abstract

The philosophies of Plato, Martin Heidegger, Zhuangzi, and C. S. Lewis contain important warnings about how we use technology, warnings that are relevant to the use of AI in education. Plato cautions us about what is lost when we let technology replace some of our own thinking processes. Far from making us more intelligent, the use of AI in writing falls into the very mistakes Plato warns against: we grow lazy in learning and remembering, and we substitute a bundle of information for the wisdom and comprehension that constitute genuine knowledge. Heidegger advises against using technology merely to create more of a product, thereby reducing humanity to a part of the system of production. When writing with AI, we abandon our responsibility to shepherd our own work and become tools in the machinery of producing a written product, even letting the software guide us rather than the other way around. Zhuangzi teaches us not to follow the patterns set by social convention, yet material written by AI is a distillation of conventional word-patterns. Lewis warns that we abolish part of humanity when we use technology to get what we want without first learning to love what is good. Using AI to get our writing done means sacrificing the essential human love of finding and understanding the truth, instead allowing our own words to be conditioned by the unknown authors of algorithms. In this article I explain these matters and close with some suggestions for how we might use AI more constructively, assuming we are going to use it at all.


Similar books and articles

Society under threat… but not from AI. Ajit Narayanan - 2013 - AI and Society 28 (1): 87-94.
A Plea for (In)Human-centred AI. Matthias Braun & Darian Meacham - 2024 - Philosophy and Technology 37 (3): 1-21.
Invisible Influence: Artificial Intelligence and the Ethics of Adaptive Choice Architectures. Daniel Susser - 2019 - Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society 1.


Author's Profile

Mark J. Boone
Hong Kong Baptist University
