Existential risk: building a field and influencing policy

[Image: lightcycles. Credit: Jason A. Samfield/flickr]

The Centre for the Study of Existential Risk (CSER) is dedicated to the study and mitigation of risks that could lead to human extinction or civilisational collapse. CSER’s work has led governments, policymakers and artificial intelligence (AI) businesses worldwide to pay greater attention to existential risk and to introduce measures to reduce it.

Specific research areas include environmental risks, risks from AI, and our obligations to future generations. The work is highly interdisciplinary, but it draws heavily on philosophical research, particularly in ethics, political philosophy and the philosophy of science.

CSER also has particular expertise in ‘structured expert elicitation’ methodologies: consulting experts where data and evidence are insufficient to forecast risks, but in a way that controls for the possibility of expert bias.
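CSER’s specific elicitation protocols are not detailed here, but a common ingredient of structured expert elicitation (used, for example, in Cooke’s classical model) is to weight each expert by their accuracy on calibration questions with known answers, so that over- or under-confident experts contribute less to the pooled forecast. The following is a minimal illustrative sketch, with hypothetical experts and data, not a description of CSER’s actual method:

```python
# Illustrative sketch of performance-weighted expert pooling.
# Experts first answer "seed" questions with known 0/1 outcomes;
# weights reflect accuracy there (lower Brier score -> higher weight).

def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts and 0/1 outcomes."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

def performance_weights(seed_forecasts, seed_outcomes):
    """Weight each expert by inverse Brier score, normalised to sum to 1."""
    raw = [1.0 / (brier_score(f, seed_outcomes) + 1e-9) for f in seed_forecasts]
    total = sum(raw)
    return [r / total for r in raw]

def pooled_estimate(target_forecasts, weights):
    """Weighted linear pool of the experts' forecasts for a target question."""
    return sum(w * f for w, f in zip(weights, target_forecasts))

# Three hypothetical experts, four seed questions with known outcomes.
seeds = [
    [0.9, 0.2, 0.8, 0.1],   # well calibrated
    [0.6, 0.5, 0.5, 0.4],   # uninformative
    [0.99, 0.9, 0.2, 0.6],  # overconfident / miscalibrated
]
outcomes = [1, 0, 1, 0]
w = performance_weights(seeds, outcomes)

# Pool the experts' estimates of an unknown target risk:
# the well-calibrated expert dominates the combined forecast.
estimate = pooled_estimate([0.3, 0.5, 0.8], w)
```

In this toy setting the well-calibrated expert receives most of the weight, which is the bias-control property the prose above alludes to.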

This method resulted in an influential report, The Malicious Use of Artificial Intelligence, which surveys security threats from AI and machine learning and proposes ways to mitigate them. It was also deployed by the economist Professor Sir Partha Dasgupta, Chair of CSER’s Management Board, in organising a major workshop on the ethics of climate change, held with the Pontifical Academy of Sciences and the Pontifical Academy of Social Sciences at the Vatican, and in the workshop’s resulting outputs.

CSER researchers have helped to grow and shape the field of existential risk by advising a range of new non-academic research centres and philanthropic funders on these emerging areas of risk research.

The team has had a significant effect on UK and international policy: it created a new All-Party Parliamentary Group on Future Generations, inspired a campaign for a new UK Future Generations Bill, and changed international norms regarding the publication of AI-technology research and the conduct of risk assessments.