In recent years, artificial intelligence (AI) has gained significant traction. Its applications have expanded rapidly, and the technology is now accessible to virtually anyone with an internet connection. The growth of AI has been fuelled largely by breakthroughs in computing power, such as the introduction of faster, more efficient processors and advanced Graphics Processing Units (GPUs). In parallel, there has been a surge of data, a vital resource for AI systems to learn, adapt, and progress. This widespread availability has catalysed advancements and innovations in diverse sectors globally, marking a new era of technological integration. The United States of America and China have established themselves as leaders and competitors in the race to develop AI technology, while the EU has led the development of regulation over the last few years, emphasizing the need for oversight of AI deployment and development. Technological development and its global accessibility have been so rapid that experts, governments, companies, and lawmakers are now calling for increased regulation; some are even calling for a pause in the development and/or deployment of the technology.
As a result, regulatory frameworks are emerging around the globe. In 2019, the OECD AI Principles were adopted, comprising five value-based principles and five recommendations for OECD countries to promote responsible and trustworthy AI policies (OECD 2019). More recently, in the US, President Joe Biden issued an executive order on safe, secure, and trustworthy AI (The White House 2023), reflecting a growing recognition of the importance of balancing innovation with safety and ethical considerations. The European Union has developed the AI Act, which takes a risk-based approach, establishing obligations for providers and users depending on the level of risk posed by an AI system (The European Parliament 2023). This approach seeks to ensure that AI development aligns with safety standards and ethical principles, particularly in high-risk applications. In November 2023, the UK hosted a global AI Safety Summit, which resulted in the Bletchley Declaration on AI safety, recognising the urgent need to understand and collectively manage potential risks and to ensure that AI is developed and deployed in a safe, responsible way (The AI Safety Summit 2023). There was also agreement on the need for collaboration on testing the next generation of AI models against a range of critical national security, safety, and societal risks.
Safety concerns surrounding AI and digital technologies have also featured in discussions at the Nordic level, most recently at the Nordic Council Session in Oslo in October 2023. The Icelandic Prime Minister Katrín Jakobsdóttir argued in her speech that: “AI will impact all sectors of society, and the technology is developing much faster than political decisions can be made in this area. We must act immediately, because democracy itself is vulnerable. It must be cared for and protected at all costs.” Safety concerns were also raised by high-level Swedish representatives in relation to threats to democratic norms and to the international reputation of Sweden. The need for coordination on the development of ethical and regulatory frameworks for AI at various levels, including the Nordic level, was raised by both the Swedish Prime Minister Ulf Kristersson and the Secretary General of NATO, Jens Stoltenberg.
These regulatory efforts signify a global shift towards a more structured and responsible approach to AI development. They aim to harness the benefits of AI while ensuring its alignment with ethical standards, safety, and societal well-being. As the field of AI continues to evolve, these frameworks are expected to play a crucial role in shaping its future trajectory, balancing innovation with responsibility and oversight. However, the unpredictable nature of technological development, its uptake, and its regulation, together with the geopolitical situation (in which AI plays a significant role), makes future trajectories difficult to foresee.
What is AI and ethical AI?
There is no commonly agreed upon definition of AI. However, in November 2023 the OECD updated its definition to reflect recent scientific developments: “An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.”
Nor is there a unanimous, precise definition of “ethical AI”; rather, different institutions, including the European Commission’s High-Level Expert Group on AI, the OECD, and UNESCO, have published their own definitions, requirements, and principles relating to ethical AI. For instance, in 2019 the EU High-Level Expert Group on AI presented its Ethics Guidelines for Trustworthy Artificial Intelligence, based on seven key requirements that also form the foundation of the EU AI Act proposal (The European Commission 2019):
1. Human agency and oversight
2. Technical robustness and safety
3. Privacy and data governance
4. Transparency
5. Diversity, non-discrimination and fairness
6. Societal and environmental wellbeing
7. Accountability
Recent developments and possible future directions
AI has been under development for a relatively long time. Over the last 10-15 years, deep learning systems have been trained to identify and classify images and objects, to transcribe audio material, and to interpret the meaning of sentences. The technology has now progressed from classification to the prediction and generation of new material, so-called generative AI. Large language models (LLMs), a prominent example of generative AI, harness vast amounts of data from the internet to produce new, high-quality material. These models have become increasingly accessible on a global scale, marking a new era in AI utility and influence.
The future trajectory of AI may include advancements in AI planning and imagination. These capabilities could enable AI systems to simulate complex scenarios, devise creative solutions to problems, and even generate innovative ideas and designs. This evolution could significantly enhance decision-making processes, creative industries, and problem-solving methodologies. Artificial General Intelligence (AGI) is often considered the possible end product of AI advancements, with capabilities similar to those of humans (such as creativity and abstract thinking) but with vastly enhanced performance owing to its ability to access and process immense sets of information at speed.
Concerns and implications
Significant concerns have been raised about the application of this powerful technology, including issues of bias (such as discriminatory, harmful, and sexist content), transparency, privacy, authenticity, and copyright. Harms can be unintended, for example when AI is used for purposes for which it was not designed, or when the technology produces incorrect information. There can also be intended harms caused by malign operations, such as spreading misinformation and using fake identities. As AI comes into use across many areas of society, it may have wide-ranging and disruptive consequences. A commonly discussed area is the labour market: in a recent report from the World Economic Forum, employers predict that 40% of the skills required in the workforce will change in the coming five years and that a quarter of all jobs will be affected by AI, whether created or destroyed (World Economic Forum 2023). See Annex 1 for a more comprehensive list of concerns and implications of AI for society.
As AI continues to evolve and integrate into many aspects of our lives, it is crucial to address these challenges and concerns. Ensuring responsible development and deployment of AI technologies will be key to maximizing their benefits while mitigating risks. This requires a concerted effort from researchers, policymakers, technologists, ethicists, and the broader community to navigate the complex landscape of AI's future impact on society.
Research and innovation and the Nordic context
Research and innovation are essential to build an improved understanding of how AI can be deployed and used responsibly, to understand its societal consequences, to fully appreciate the possible ramifications of misuse, and to take advantage of its significant potential in numerous areas of application. It is critical that the development and implementation of regulatory frameworks and guidelines be informed by research if we are to capitalise on the advantages of AI and mitigate problematic consequences for societies.
Governments and research and innovation funders are reacting to these developments. Notable examples include a number of AI-focused calls issued by Vinnova in 2023; an earmarked budget of DKK 100 million for AI R&I calls organized by Innovation Fund Denmark in 2024; and a five-year budget of NOK 1 billion to the Research Council of Norway for a funding scheme with a mandate covering research on the consequences of AI and other digital technologies for society, digital technology as a research area in itself, and research on how digital technologies can be used for innovation in the private and public sectors.
In this context, NordForsk is delighted that NORDHORC has tasked us with developing a proposal for a research and innovation initiative on AI that addresses shared Nordic concerns and priorities in the area.