How Should Governments Tackle Data Privacy Concerns When Adopting AI?

We've started a new series of articles on research into AI in the public sphere. In this article, we look at data privacy and share expert recommendations on how to deal with data and cybersecurity concerns while implementing AI projects.


The development of AI systems depends heavily on the data we feed them: the more data they receive, the more capable they become. “Data ownership in this context focuses more on data availability, protection and consent. AI systems need a lot of data for training and there is a correlation between the amount of data used in training and AI system performance,” comments Neil Oschlag-Michael, AI and Data Governance advisor.

However, as machines become more intelligent, the questions of who owns the data and how it is used are becoming much more important, especially since it is hard for AI systems to automatically distinguish between personal and non-personal data. Sooner or later, AI will collect and process personal data if it is publicly available.

What Are the Citizens' Concerns?

Public views on AI’s impact are still developing, but citizens’ concerns about their data privacy are rising. For example, according to a Pew Research Center survey, more than half of Americans say AI is doing more to hurt than to help people keep their personal information private. While they believe AI can bring positive results in areas such as managing public traffic and improving healthcare services, citizens are concerned about the data they hand over to governments and businesses.

As AI develops rapidly, citizens are becoming targets for criminals who use the technology to mimic an individual’s appearance, speech patterns, and behaviour. The ability to create a digital twin of a person from just a few voice samples raises the risk of financial and other types of crime.

Dr. James Canton, futurist, AI advisor, and CEO of the Institute for Global Futures, believes that “responsible AI at the government level should be the avoidance of the manipulation of citizens with social controls”. Another concern is how governments themselves use data. There is hardly a large city in the world without video surveillance and facial recognition. Where trust between people and government is low, citizens, especially in autocratic countries, are increasingly concerned about how governments will use the power of AI and the data they hold.


How can governments address data privacy and ownership issues and effectively manage AI projects?

Rethinking Data Ownership at the Regulatory Level

Most data privacy regulations around the world are outdated and do not reflect the reality of AI. Governments have to agree on new ways to control data. The biggest issue with data and AI systems is that once data has been fed to an AI system, it is difficult to remove it and ensure it is no longer used without a person’s consent. So the questions of who owns the data and what responsibilities the data owner carries are critical.


Marcus Schueler, Head of Responsible AI Consulting Services at MHP - A Porsche Company, points out the risk of concentrating AI technology and data in a few hands: “The entire global economy is becoming very dependent on a few oligopolists who can virtually dictate the rules of the game”. Governments should not allow a handful of companies to become data monopolists. In today’s reality, data is owned by organisations, and that presents a challenge. Even though a person can formally agree to provide certain data to an organisation and later withdraw that consent, there is no guarantee that AI systems which have already obtained the information will not continue to use it.


“Clear guidelines and regulations are needed to clarify rights and responsibilities concerning data ownership, facilitating secure data sharing, collaboration, and innovation while safeguarding individual privacy and data sovereignty,” comments Michael Charles Borrelli, Director of AI & Partners.

Patrick Upmann, Interim Manager and Business Consultant at now.digital, agrees: “The question of who owns data and who has access to it is central to the implementation of AI. Governments must develop clear guidelines and laws to protect personal data to prevent misuse and maintain public trust.”

It might be necessary to rethink and redefine data ownership at the regulatory level, clarifying the rights and responsibilities of each party involved: citizens, governments, and the businesses that collect and process data.

Cyber Strategy Is Crucial


“Cybersecurity should be prioritized when it comes to operating on and processing private data. At the same time, certain ‘sensitive’ areas require special attention and effort: AI systems are often integrated into critical infrastructure, healthcare, and national security. A breach in any of these areas could have catastrophic consequences, making robust cybersecurity measures essential. The complexity and adaptability of AI systems make them both a tool and a target in cyber warfare, requiring advanced and continuously updated security protocols to protect sensitive information and maintain public safety,” comments Patrick Upmann.

How to address these challenges?

Data privacy and security are critical, and governments should take a leading role in addressing this challenge by establishing clear regulatory frameworks and continuously improving them. Here are the experts’ top recommendations for governments on how to tackle data privacy and ownership issues:

  1. Rethink data ownership: create guidelines that set out clear rights and responsibilities for those who own and work with data.

  2. Align AI data guidelines with existing regulations on data privacy and security, data collection, and processing. The relationship between the EU AI Act and the GDPR is an example of such alignment.

  3. Establish an AI security strategy: develop a strategy that coordinates efforts across different sectors and levels of government to prevent, detect, and respond to cyber threats involving AI technologies.

  4. Inform citizens about data ownership, their rights and responsibilities, and the security measures in place.

  5. Collaborate with the private sector: partner with tech companies and cybersecurity experts to leverage their expertise and resources for enhancing AI security. This collaboration can include sharing best practices, threat intelligence, and technologies.






Follow us on X (Twitter) and LinkedIn for more insights about government and technology!
