Lack of Global Coordination: What Slows Down Global AI Implementation?

As political leaders worldwide embark on bringing Artificial Intelligence (AI) into the public sector, they are confronted with numerous challenges that could slow the successful adoption of this game-changing technology.

In May, the Government Tomorrow Forum’s team surveyed 120 AI experts about the challenges governments face while implementing AI, following up with 15 in-depth interviews on the subject. Each expert was asked to rank eight challenges and identify the three they considered most critical:

  1. Lack of global AI coordination

  2. The complexity of technology

  3. Stakeholder engagement

  4. Social concerns (ethical, legal, and societal)

  5. Citizens’ education

  6. Data Ownership

  7. Cybersecurity

  8. Productivity, skills, and organizational readiness

  9. Others (experts were asked to name a challenge if it was not on the list).

Over 70% of the experts agreed that a lack of global coordination is one of the top three pressing challenges governments face.

Indeed, certain ongoing problems cannot be addressed at the national level. One of the clearest examples is climate change: for all its remarkable efforts in carbon monitoring and recycling, Sweden cannot save the whole world from climate change on its own. The same applies to the regulation of AI: a single virtuous entity, for example, the EU, cannot block the development of harmful practices across the planet.

For instance, it is impossible to reduce, let alone eliminate, AI biases in one part of the world while all sorts of those biases, from racist to misogynistic, are replicated in another. The outputs of intolerant, prejudiced AI systems strongly influence people’s minds, raising new concerns and issues that we have to address as soon as possible.

We discussed strategies for global cooperation on the ethical implementation of AI with our survey participants, leading AI experts and futurists, and outlined the first steps and recommendations.


International Standards

Tackling global issues like digital inequality and AI-fueled misinformation demands international cooperation and standardized regulations. Agreement on the definition of norms, standards, and interests becomes essential. Regulating machine activity at a global scale is part of the same shared task as regulating any other social phenomenon, and it can create tensions and conflicts that take quite different forms from one place to another.


“In the globalized world, AI technology crosses borders effortlessly, but the regulations governing its use do not. Without international coordination, AI applications can be misused or manipulated across jurisdictions, leading to global inequities and tensions. Additionally, the lack of a coordinated approach can hinder the global response to AI-driven threats like misinformation or autonomous weapons. Establishing international standards and cooperative frameworks is crucial for leveraging AI's benefits globally while mitigating risks and ensuring ethical practices are upheld across nations,” comments Patrick Upmann, Interim Manager and Business Consultant at now.digital.

As a result of our research, we outlined the main recommendations for addressing this challenge: 

- Defining common standards and coordination mechanisms for global AI governance. Governments should work together to establish common standards applicable worldwide.

- Facilitating international collaboration while ensuring alignment and stakeholder engagement. Governments should enhance collaboration via international working groups and engage diverse stakeholders, including academia, business, and, in particular, technology companies and citizens.

- Promoting data governance agreements. Governments should work towards agreements that facilitate the secure and ethical sharing of data across borders, which is crucial for the development and deployment of AI applications on a global scale.

- Monitoring and scaling best practices. Governments should keep track of recent developments in AI regulation, watch for emerging best practices, and scale them. The European AI Act, for instance, serves as a prime example of a comprehensive regulatory framework.

All in all, political leaders should take an active part in international discussions on AI to help shape global standards. The European AI Act is a key example of such an endeavor.

Case study: the EU AI Act

The EU AI Act is a regulation put forward by the European Commission to govern artificial intelligence in the European Union. It aims to ensure that AI technologies developed and deployed within the EU are safe, transparent, and in line with the fundamental rights and values of the EU.

The process of developing the EU AI Act began in earnest in April 2018, when the European Commission published its strategy on artificial intelligence. The formal legislative proposal was introduced on April 21, 2021. After nearly three years of negotiations, the Act was approved by the European Parliament in March 2024 and received final approval from the Council of the European Union in May 2024. The legislative process took approximately six years from inception to adoption.

The EU has committed to investing €1 billion annually in AI research and innovation through the Horizon Europe and Digital Europe programs. Over 350 organizations and experts participated in public consultations. Members of the European Parliament (MEPs) reviewed, amended, and voted on the proposal. The Parliament's Committee on the Internal Market and Consumer Protection (IMCO) and the Committee on Civil Liberties, Justice and Home Affairs (LIBE), which jointly led the file, were particularly influential.

Achieving consensus among all 27 EU member states on the AI Act was complex. The agreement was reached through extensive negotiations and compromises that addressed each country's diverse priorities and concerns. Key provisions of the act, such as common objectives, a risk-based approach, human oversight and accountability, data quality requirements, and stakeholder involvement, are establishing a model for international regulatory standards.

Continuous Journey


“The global nature of AI technology and its challenges means that actions taken in one country can have far-reaching implications. International cooperation is essential for harmonizing standards, ensuring interoperability, and addressing cross-border concerns like privacy and data protection,” comments Mathias Lindbro, AI Advisor, AI Strategist at Nextevo, and Founder of Strategic 9 AB.

Artificial intelligence has countless applications, but its rapid development also poses significant threats.

Over the long journey of human evolution, we have learned how to control land by building fences and maintaining armies at our borders. We created personal identification documents, called passports, to control who can enter our territories. We have even learned how to manage business ownership on a global scale: we can buy a small piece of an American or Japanese company while physically sitting in Paris.

However, our world is becoming ever more complex when it comes to regulating AI usage, particularly data ownership. Data is the fuel for AI, and controlling data flows has become one of the most important political questions of our time.

Mathias Lindbro: “Addressing these issues isn’t a one-off task but a continuous journey of improvement. The field of AI is fast-moving, demanding that our strategies be both dynamic and adaptable. To illustrate this point, I’ve attached a model I created, named ‘The Continuous Value-Driven AI Journey’, which captures how I teach that this is not a one-off task.

This model highlights the ongoing engagement, evaluation, and refinement process required to ensure that AI technologies serve the public effectively, ethically, and responsibly. This approach underlines the importance of resilience, foresight, and a commitment to sustained development in AI governance.”


Not a magic wand


Leonardo Quattrucci, Senior Advisor at Apolitical, Angel Investor, and former advisor to the European Commission, states that it is important to establish a local culture of innovation in every region.

AI should not be seen as a magical solution to government problems, nor should new technology be released without control. Instead, a culture of “trust and verification” should be built. This strikes a golden mean: being too cautious, for example, would prevent progress and make it harder for governments to manage AI.

This culture will help to increase the adoption capacity of governments worldwide.

“When a new technology like Generative AI emerges, how quickly can governments acquire the skills and the tools to leverage it? This question is the elephant in the room,” says Leonardo Quattrucci.

As globalization continues to spread, societies and moral norms are becoming increasingly similar. It is essential to discuss potential shared objectives for shaping citizens’ attitudes toward emerging technologies.

Follow us on X (Twitter) and LinkedIn for more insights about government and technology!
