Legal Framework for AI: Where to Start and How to Navigate

In May, Government Tomorrow Forum launched a new series of articles focused on research into AI in the public sphere.

This article examines the challenges of AI regulation. By exploring both the hurdles and the promising pathways that define the road to a robust regulatory framework for AI, we consider the balance between technological innovation and societal impact.

The Evolution of AI Technologies

Today, AI advancements are fueled by progress in technology, data availability, and computational power. Given the progress already made, we can expect AI to become part of our professional and personal lives as well as the public sector. Our study found that several challenges stand in the way of AI implementation in the public sector, particularly those that affect the development of regulatory frameworks.

While AI has the potential to transform human lives, it remains largely unbounded by regulation. We will see more and more attempts to regulate AI; the question is how to do so successfully while mitigating the concerns it raises.

Challenges of AI Regulation

The number one challenge is the rapid pace of technological advancement and the complexity of AI systems. Regulating a constantly changing technology is an enormous task, one that requires officials to think ahead and anticipate the direction AI will take.

In GTF's recent survey, 80% of experts put social concerns at the top: issues with greater implications for the public and society, which form another spectrum of challenges altogether. Bias in algorithms, lack of compliance with ethical norms and legal rules, job displacement due to automation, privacy concerns around data collection, and the overall impact of AI on society must all be taken into account, both in AI advancement and in the regulation process.

“Governments must anticipate and mitigate potential risks associated with AI technologies, including algorithmic bias, privacy infringement, and job displacement. Regulatory frameworks should be flexible, adaptive, and context-specific, balancing innovation with safeguards for human rights, privacy, and fairness.”  - says Michael Charles Borrelli, Director of AI & Partners.

Another challenge is skills. According to Apolitical data, fewer than 50% of officials have received training relevant to the AI policy they are responsible for. Without adequate training, officials will struggle to understand the complexity of AI and the evolving threat landscape they aim to mitigate. Their policies will lack depth and effectiveness, and the knowledge gap could hinder officials' ability to anticipate and respond to emerging threats.

Developing policies with people and ethics in mind, including defining responsible AI and what constitutes a privacy infringement, should be a priority in policymaking.

Key Components of an Effective Legal Framework for AI

Part of the GTF study was to identify the critical components for AI implementation. Here, we summarize the results.

So, what are the components of an effective legal framework for AI?

1. Consistent

Regulation usually stems from the need to protect the public from malicious intent, and AI is no exception. AI's potential for harm is just as great as its potential for benefit. So there needs to be a swift and effective legal framework, one that would, ideally, be consistent worldwide.

2. Fast-changing and flexible

Experts agree that the legal framework cannot be allowed to become obsolete. AI is a technology that will require constant change and development, both in deployment and in regulation. One way to support this is to fund research initiatives focused on understanding and predicting AI development.

“The rapid pace of AI advancement means that regulatory frameworks must be equally agile to effectively govern these technologies without stifling innovation. Regulatory agility is crucial for maintaining a balance between fostering innovation and ensuring public safety and ethical integrity. As AI technologies evolve, so too must our legal and ethical frameworks to address emerging risks and opportunities.” - Bruno Silva, Head of R&D at Muvu Technologies, Professor, PhD Candidate in AI.

3. Transparent 

Transparency is the foundation on which the legal framework is built. Trust between the public and policymakers ensures that the regulatory process produces positive results. In our study, we found that failure to address these concerns can erode public trust and undermine the legitimacy of AI systems.

“Legal frameworks must evolve to address novel challenges posed by AI, ensuring accountability, transparency, and fairness in AI systems' development and deployment. National and regional governments must develop tailored strategies and policies to address AI implementation challenges that reflect local priorities, capabilities, and challenges.” - Michael Charles Borrelli, Director of AI & Partners.

4. Constantly improving

Improvement should begin with education. To make governments competitive in the field of AI, they must ensure organizational readiness and establish skill standards for the government officials responsible for regulation.

“There needs to be a significant up-skilling effort for ALL government employees.” - Dr. Seth Dobrin, CEO of Qantm AI, a Global Transformational Leader and the first Global Chief AI Officer at IBM.

5. Collaborative

One of the most important steps in ensuring regulatory success is stakeholder engagement. To shape regulatory policies that reflect diverse perspectives and address societal concerns, it is crucial to include everyone who can provide a unique and useful point of view. By fostering collaboration between policymakers, industry stakeholders, and academia, governments can create an environment where AI can thrive responsibly and beneficially.

“Engaging various stakeholders, including citizens, businesses, and civil society organizations, is crucial for the successful implementation and acceptance of AI. This requires transparent communication and consideration of diverse perspectives and concerns.” - Patrick Upmann, Head of Responsible AI Consulting Services at now.digital.

Global coordination is also essential to harmonize AI regulations across borders and mitigate regulatory fragmentation. Setting international standards matters not only to the public but also to governments themselves, who benefit from sharing knowledge and working together on future developments.

“Governments should actively engage in international forums and working groups dedicated to AI to help contribute to and influence the development of global standards. The European AI Act serves as a prime example of such efforts, aiming to harmonise AI regulations within the EU and potentially setting a precedent for international standards.” - Mathias Lindbro, AI Advisor, AI Strategist at Nextevo and Founder at Strategic 9 AB.

Creating a regulatory framework for AI is a continuous task that will require collaboration, foresight, and keeping the public’s benefit at the forefront. A proactive regulatory approach, stakeholder engagement, and ethical standards can create an environment in which AI innovation thrives responsibly.



Follow us on X (Twitter) and LinkedIn for more insights about government and technology!
