UNSG Antonio Guterres urges Council to approach AI with urgency

Reading Time: 3 minutes

United Nations Secretary General Antonio Guterres has urged the Security Council to approach Artificial Intelligence (AI) with a sense of urgency, a global lens, and a learner's mindset.

The Council is formally discussing the impacts of the technology for the first time, in an effort to brief its members on the possible implications of AI for international peace and security, and to promote its safe and responsible use.

The UN Chief has emphasised the need for global standards and approaches, calling the UN the ideal place for that to happen.

Impact on lives

It’s the new technological frontier – and while its radical advances might have many unexplored and unknown pitfalls, AI is expected to have dramatic impacts on sustainable development, the world of work and the social fabric of our societies.

Guterres explains, “It is clear that AI will have an impact on every area of our lives – including the three pillars of the United Nations. It has the potential to turbocharge global development, from monitoring the climate crisis to breakthroughs in medical research. It offers new potential to realise human rights, particularly to health and education. But the High Commissioner for Human Rights has expressed alarm over evidence that AI can amplify bias, reinforce discrimination, and enable new levels of authoritarian surveillance.”

He adds, “Today’s debate is an opportunity to consider the impact of Artificial Intelligence on peace and security – where it is already raising political, legal, ethical, and humanitarian concerns.”

Pros and cons

He cautions that, like social media, AI could bring many positive advances but can also be used with malicious intent, and he urges the international community to create guardrails through multilateral efforts to govern the AI domain.

Guterres also earlier announced plans to establish a High-Level Advisory Body on AI that will advise member states on various options, as the global body builds towards a consensus-driven Summit of the Future, which will take place at the UN in September 2024.

This comes as concern grows that AI models can help people to harm themselves and each other on a massive scale with severe implications for peace and security.

“The malicious use of AI systems for terrorist, criminal or state purposes could cause horrific levels of death and destruction, widespread trauma, and deep psychological damage on an unimaginable scale. AI-enabled cyberattacks are already targeting critical infrastructure and our own peacekeeping and humanitarian operations, causing great human suffering. The technical and financial barriers to access are low – including for criminals and terrorists. Both military and non-military applications of AI could have very serious consequences for global peace and security,” adds Guterres.

Collective effort

Meanwhile, experts are warning that the development of AI cannot be left to the private sector.

Jack Clark, Co-Founder of Anthropic and Co-chair of the AI Index Steering Committee at Stanford’s Institute of Human-Centred AI, elaborates, “The governments of the world must come together, develop state capacity and make the development of powerful A.I. systems a shared endeavour across all parts of society, rather than one dictated solely by a small number of firms competing with one another in the marketplace.

“Private sector actors are the ones that have the sophisticated computers and large pools of data and capital resources to build these systems. And therefore, private sector actors seem likely to continue to define the development of these systems. While this will bring huge benefits to humans across the world, it also poses potential threats to peace, security and global stability,” Clark explains.

Calls have also been made for greater regulation to govern these rapidly growing frontier systems, and for a Security Council that is alert to their potential pitfalls in a peace and security arena it already struggles to maintain.