AI and the falling sky: interrogating X-Risk

Research output: Contribution to journal › Article › peer-review

13 Citations (Scopus)

Abstract

This paper argues that the headline-grabbing nature of existential risk (X-Risk) diverts attention away from immediate artificial intelligence (AI) threats, including fairly disseminating AI risks and benefits and justly transitioning towards AI-centred societies. Section I introduces a working definition of X-Risk, considers its likelihood and explores possible subtexts. It highlights conflicts of interest that arise when tech luminaries lead ethics debates in the public square. Section II flags AI ethics concerns brushed aside by focusing on X-Risk, including AI existential benefits (X-Benefits), non-AI X-Risk and AI harms occurring now. Taking the entire landscape of X-Risk into account requires considering how big risks compare, combine and rank relative to one another. As we transition towards more AI-centred societies, which we, the authors, would like to be fair, we urge embedding fairness in the transition process, especially with respect to groups historically disadvantaged and marginalised. Section III concludes by proposing a wide-angle lens that takes X-Risk seriously alongside other urgent ethics concerns.

Original language: English
Pages (from-to): 811-817
Number of pages: 7
Journal: Journal of Medical Ethics
Volume: 50
Issue number: 12
DOIs
Publication status: Published - 23 Dec 2024
