Artificial intelligence governance, ethics and foresight


The Leverhulme Centre for the Future of Intelligence (LCFI) explores the short-term and long-term opportunities and challenges of artificial intelligence (AI). Its research and lobbying activities have resulted in governments, policymakers and AI businesses worldwide introducing measures to improve AI governance and uphold ethical standards in the development of new AI technologies.

Founded in 2016, LCFI has built a new interdisciplinary community of researchers, with strong links to technologists and the policy world, and a clear practical goal: to work together to ensure that we make the best of the opportunities of AI as it develops over the coming decades.

The Centre’s philosophical research also addresses the dangers of AI, ranging from concerns around transparency of algorithms to the potential for AI technologies to undermine core principles of democracy. 

LCFI research led to the inclusion of AI governance in the remit of the UK government’s new Centre for Data Ethics and Innovation – the world’s first national advisory body for AI.  

Researchers at LCFI have also contributed to national and international AI governance documents, including the report of the UK Parliamentary Select Committee on AI and strategies published by the US government and the European Union, as well as to changes in a number of AI company and industry policies.

For instance, in recognition of LCFI’s research expertise, the United Nations (UN) requested that members of LCFI lead one of the four tracks at the UN’s AI for Good Summit, which brought together over 30 UN agencies to discuss global AI policy. As a result, LCFI research set the agenda for discussion among all UN stakeholders in the AI policy debate, focussing specifically on issues around trust in AI technology.

“LCFI – and Stephen Cave especially – provided critical help to my team during its early phase of work to establish the need for a new government body. He helped identify the nature of the ethical challenges associated with AI and heavily shaped CDEI’s [Centre for Data Ethics and Innovation] early programme of work.”

– Cora Govett, Deputy Director, Digital Regulation and Markets, UK government’s Department for Digital, Culture, Media and Sport