Charlotte Trueman
Senior Writer

UK government confirms November global AI summit

News | Aug 24, 2023 | 3 mins
Artificial Intelligence, Technology Industry

The UK government has announced it will be bringing together government officials, AI companies, and researchers to discuss the benefits and risks of AI technology.


The UK government has confirmed it will host a global AI summit on November 1 and 2, bringing together government officials, AI companies, and researchers at Bletchley Park to consider the risks and development of AI technologies and discuss how they can be mitigated through internationally coordinated action.

Bletchley Park is a site in Milton Keynes that became the home of British code breakers during World War II and saw the development of Colossus, the world’s first programmable digital electronic computer, which was used to decrypt messages encrypted with the German Lorenz cipher — an effort credited with shortening the war by as much as two years.

“International collaboration is the cornerstone of our approach to AI regulation, and we want the summit to result in leading nations and experts agreeing on a shared approach to its safe use,” said Technology Secretary Michelle Donelan in comments posted alongside the government’s announcement.

“The UK is consistently recognized as a world leader in AI and we are well placed to lead these discussions,” she said, adding that November’s summit will make sure the technology’s huge benefits can be realized “safely and securely” in the future.

The summit will also build on ongoing work at international forums including the OECD, Global Partnership on AI, Council of Europe, and the UN and standards-development organizations, as well as the recently agreed G7 Hiroshima AI Process, the government said in a statement.

The list of invitees has yet to be announced.

How are governments seeking to regulate AI?

In March, the UK government published a white paper outlining its AI strategy, stating that it would avoid what it called “heavy-handed legislation” and would instead call on existing regulatory bodies to apply current rules to AI applications, rather than drafting new laws.

In the coming months, regulators are expected to start issuing practical guidance to organizations, handing out risk assessment templates, and setting out how to implement the government’s principles of safety, security, robustness, transparency and explainability, fairness, accountability and governance, and contestability and redress.

The European Union approved a draft of its AI Act in June, which would require generative AI systems to comply with transparency requirements by disclosing when content is AI-generated and helping users distinguish deepfake images from real ones. The regulation also proposes a total ban on biometric surveillance in public settings and on so-called “social scoring” systems, which classify people based on their social behavior, socioeconomic status, and personal characteristics.

The US government has yet to publish any concrete plans on how it intends to regulate AI. However, the Biden administration faced criticism in June after it met with AI companies and asked for voluntary, unenforceable, and relatively vague commitments from them, instead of proposing any legally enforceable legislation.


Charlotte Trueman is a staff writer at Computerworld. She joined IDG in 2016 after graduating with a degree in English and American Literature from the University of Kent. Trueman covers collaboration, focusing on videoconferencing, productivity software, the future of work, and issues around diversity and inclusion in the tech sector.
