TY - JOUR
T1 - AI and the falling sky
T2 - interrogating X-Risk
AU - Jecker, Nancy S.
AU - Atuire, Caesar Alimsinya
AU - Bélisle-Pipon, Jean-Christophe
AU - Ravitsky, Vardit
AU - Ho, Anita
N1 - Publisher Copyright:
© 2024 BMJ Publishing Group. All rights reserved.
PY - 2024
Y1 - 2024
N2 - This paper argues that the headline-grabbing nature of existential risk (X-Risk) diverts attention away from immediate artificial intelligence (AI) threats, including fairly disseminating AI risks and benefits and justly transitioning towards AI-centred societies. Section I introduces a working definition of X-Risk, considers its likelihood and explores possible subtexts. It highlights conflicts of interest that arise when tech luminaries lead ethics debates in the public square. Section II flags AI ethics concerns brushed aside by focusing on X-Risk, including AI existential benefits (X-Benefits), non-AI X-Risk and AI harms occurring now. Taking the entire landscape of X-Risk into account requires considering how big risks compare, combine and rank relative to one another. As we transition towards more AI-centred societies, which we, the authors, would like to be fair, we urge embedding fairness in the transition process, especially with respect to groups historically disadvantaged and marginalised. Section III concludes by proposing a wide-angle lens that takes X-Risk seriously alongside other urgent ethics concerns.
AB - This paper argues that the headline-grabbing nature of existential risk (X-Risk) diverts attention away from immediate artificial intelligence (AI) threats, including fairly disseminating AI risks and benefits and justly transitioning towards AI-centred societies. Section I introduces a working definition of X-Risk, considers its likelihood and explores possible subtexts. It highlights conflicts of interest that arise when tech luminaries lead ethics debates in the public square. Section II flags AI ethics concerns brushed aside by focusing on X-Risk, including AI existential benefits (X-Benefits), non-AI X-Risk and AI harms occurring now. Taking the entire landscape of X-Risk into account requires considering how big risks compare, combine and rank relative to one another. As we transition towards more AI-centred societies, which we, the authors, would like to be fair, we urge embedding fairness in the transition process, especially with respect to groups historically disadvantaged and marginalised. Section III concludes by proposing a wide-angle lens that takes X-Risk seriously alongside other urgent ethics concerns.
UR - http://www.scopus.com/inward/record.url?scp=85189896243&partnerID=8YFLogxK
U2 - 10.1136/jme-2023-109702
DO - 10.1136/jme-2023-109702
M3 - Article
AN - SCOPUS:85189896243
SN - 0306-6800
JO - Journal of Medical Ethics
JF - Journal of Medical Ethics
M1 - jme-2023-109702
ER -