The UK has taken a significant step toward regulating artificial intelligence (AI) by signing the first legally binding international treaty on AI safeguards. Alongside the EU and US, the UK has committed to measures that mitigate the risks posed by AI technologies, including biased decision-making, misinformation, and threats to privacy and the rule of law.
The Framework Convention on Artificial Intelligence, developed by the Council of Europe, is a legally binding agreement intended to ensure that AI technologies adhere to principles such as non-discrimination, safe development, and the protection of human dignity. By signing the treaty, the UK and the other signatories commit to adopting policies that prevent AI systems from making biased decisions or infringing on human rights, for instance when AI is used in job applications, legal decisions, or government services.
Justice Secretary Shabana Mahmood praised AI’s potential to revolutionize public services and the economy but emphasized that these innovations must respect core human values. The treaty underscores that AI systems should protect personal data and operate transparently, giving individuals the right to challenge AI-based decisions and to know when a decision is being made by AI rather than a human.
The treaty covers both public- and private-sector uses of AI. Companies and organizations must assess the impact of their AI technologies on human rights, democracy, and the rule of law, and disclose this information publicly. Governments are also expected to act against the misuse of AI, including banning systems that rely on biased training data or threaten civil liberties.
In the UK, the government will review its current laws to ensure they align with the treaty's principles. A consultation on a new AI bill, which will incorporate the treaty’s guidelines, is underway. Once ratified, the convention will strengthen existing legal measures, making it possible to ban harmful AI applications. The EU’s AI Act, for example, already prohibits AI systems that categorize people based on social behavior or that build facial recognition databases through untargeted scraping of images.