The EU AI Act

“Putting together a comprehensive list of AI risks for all time is a fool’s errand. Given the ubiquitous and rapidly evolving nature of AI and its use, we believe that it is more useful to look at risks from the perspective of vulnerable communities and the commons” (The UN 2023). 

“The European AI strategy and the coordinated plan make clear that trust is a prerequisite to ensure a human-centric approach to AI: AI is not an end in itself, but a tool that has to serve people with the ultimate aim of increasing human well-being” (The European Commission 2019). 

On 9 December 2023, the European Parliament and the Council reached a political agreement on the AI Act. The AI Act is a comprehensive legal framework designed to regulate the use of AI across member states. The final compromise text, approved by the Council of Ministers and the EU Commission, was published on 21 January 2024. On 10-11 April 2024, the European Parliament will hold a plenary vote on the AI Act, followed by formal adoption and a stepwise implementation of, for example, the prohibited AI practices, codes of practice and the EC AI Pact (which supports companies in preparing for the AI Act). The newly established European AI Office will play a key role in implementing the AI Act, fostering the development and use of trustworthy AI, and promoting international cooperation. 

According to the compromise text, the main objective of the AI Act is to “promote the uptake of human centric and trustworthy AI” while ensuring a high level of protection of health, safety, fundamental rights, democracy and the rule of law. This is underlined in Recital 1 and in Article 1 of the compromise text.  

To ensure the introduction of trustworthy AI, the AI Act takes a risk-based approach to AI governance, categorizing AI systems into different risk levels: practices posing unacceptable risk are prohibited outright, while high-risk systems are subject to strict compliance and transparency requirements. It includes specific provisions for high-risk AI systems, requiring human oversight and robust data governance, while also addressing AI's impact on consumer protection and market surveillance. The Act also recommends establishing national authorities to monitor and approve AI solutions, and introducing national or transnational regulatory “sandboxes”, safe testing environments for new AI solutions.  

Key aspects of the AI Act 

  • Risk-Based Approach: The AI Act classifies AI systems into different risk categories, imposing more rigorous regulations on those deemed high-risk to address potential threats and ensure responsible usage. 
  • Legal and Regulatory Framework: The Act establishes a legal structure for AI deployment that aligns with existing EU laws and principles, ensuring that AI technologies are integrated into the EU's legal landscape. 
  • Data Governance: Emphasizing the significance of data quality and privacy, the Act sets forth guidelines for managing data in AI systems, prioritizing user privacy and data integrity. 
  • Transparency and Accountability: The Act mandates comprehensive disclosure regarding AI operations and decision-making processes. This aspect is crucial for maintaining public trust and ensuring that AI systems are understandable and accountable. 
  • Ethical Guidelines: Central to the Act is the emphasis on ethical AI practices that respect human rights and dignity, guiding developers and users in creating and employing AI in a manner that aligns with societal values. 
  • Human Oversight: For high-risk scenarios, the Act insists on significant human oversight, ensuring that AI does not operate autonomously in situations where its decisions could have critical consequences. 
  • Consumer Protection: The Act specifically addresses the impact of AI on consumers, focusing on safeguarding them from potentially harmful AI practices and ensuring fair market practices. 

By addressing these core regulatory topics, the EU's AI Act aims to ensure the safe, transparent, and ethical development and use of AI across the EU. It represents a significant stride in introducing common standards for using AI across member states while upholding ethical and societal values, ultimately striving for a trustworthy and secure AI environment within the EU. 

The AI Act and the Nordics 

A common EU legal framework for AI is, however, not equivalent to having identical solutions across Europe. For the Nordic countries that are not members of the EU, there will be a separate process assessing whether the AI Act is relevant to the EEA Agreement. This process will be carried out after the AI Act has been formally adopted at the EU level.  

The application of AI solutions depends to a large extent on national or local factors, such as the level of digitalisation, the organisation of institutions, national or regional regulations and traditions, and trust in systems. This could turn out to be an advantage for the Nordic countries, and in particular for their public sectors, because of their high degree of digitalisation, comparable institutions and common cultural values with regard to trust in society.  

The AI strategies of the Nordic countries and the EU's AI Act prioritize trustworthy and ethically responsible use of AI, aiming to protect societies, create economic benefits and drive societal welfare. Despite their alignment in overarching goals, there are notable differences in their approaches and levels of detail. The Nordics are characterised by a high degree of trust in government, in the services provided by the public sector, and in society at large. Health and welfare services, education systems and utilities in general are highly digitalised, with abundant and relatively accessible data sources. The Nordic populations are also, relatively speaking, highly digitally literate. The prerequisites in the Nordics for a successful roll-out and implementation of AI could therefore be argued to be good, but equally the potential harm is significant if AI leads to, for example, an erosion of trust, increased social polarisation or the undermining of democracy.