Hume’s Law as Another Philosophical Problem for Autonomous Weapons Systems

Journal of Military Ethics 20 (2):113-128 (2021)

Abstract

This article contends that certain types of Autonomous Weapons Systems (AWS) are susceptible to Hume's Law. Hume's Law highlights the seeming impossibility of deriving moral judgments, if not all evaluative ones, from purely factual premises. If autonomous weapons use factual data from their environments to carry out specific actions, then justifying their ethical decisions may prove intractable in light of this problem. In this article, Hume's original formulation of the no-ought-from-is thesis is evaluated in relation to the dominant views regarding it (viz., moral non-descriptivism and moral descriptivism). Citing the objections raised against these views, it is claimed that, if no clear-cut solution to Hume's is-ought problem presently exists, then the task of grounding the moral judgments of AWS remains unaccounted for.

Author's Profile

Robert James M. Boyles
De La Salle University
