Gender Bias and Conversational Agents: an ethical perspective on Social Robotics

Science and Engineering Ethics 28 (3):1-23 (2022)

Abstract

The increasing spread of conversational agents makes it urgent to tackle the ethical issues linked to their design. Developers frequently include cues in their products that trigger social biases in order to maximize the performance and quality of human-machine interactions. The present paper discusses whether, and to what extent, it is ethically sound to intentionally trigger gender biases through the design of virtually embodied conversational agents. After outlining the complex dynamics involving social biases, social robots, and design, we evaluate the ethics of integrating gender cues into conversational agents, analysing four different approaches to the problem. Finally, we suggest which approach, in our view, has the best chance of reducing the negative effects of biases and discriminatory visions of gender dynamics.


Links

PhilArchive

Similar books and articles

Attributionism and Moral Responsibility for Implicit Bias. Michael Brownstein - 2016 - Review of Philosophy and Psychology 7 (4):765-786.

Analytics

Added to PP
2022-04-21

Downloads
Total: 30 (#756,477)
Last 6 months: 8 (#603,286)


Author's Profile

Fabio Fossa
Politecnico di Milano

Citations of this work

No citations found.
