Leonardo Quattrucci: “I would like for governments to be seen as adopters of technology, rather than just producers of policies and regulations.”

Leonardo Quattrucci is an Advisor to Apolitical, where he focuses on equipping policymakers to lead in the age of AI through the Government AI Campus.

He has over a decade of experience in building and leading innovative programmes across tech and government, from the European Commission’s in-house think tank to Amazon Web Services’ Center for Quantum Networking. His work has been recognised by Forbes Magazine, the BMW Foundation, the World Economic Forum, and the Aspen Institute — among others. He has a Master of Public Policy from the University of Oxford. Leonardo writes 6 Minutes Well Spent: a newsletter about overlooked perspectives in tech.

  • In the Interview, Leonardo Quattrucci discusses the challenges and strategies surrounding the implementation of AI in government. He emphasizes that AI adoption in the public sector is more about organizational change than technology itself. He points out that while Generative AI is accessible and engaging, it requires users to be clear communicators with a precise question in mind. He advocates for a whole-of-government strategy for AI, integrating it into hiring, enablement, upskilling, and procurement strategies, stressing that AI is shaping a new capability rather than just being a tool.

    Regarding the attention AI receives in upskilling programs for public servants, Quattrucci notes a shortage of AI in government. While civil servants are optimistic and increasingly experimenting with AI, there’s a lack of official support for safe and secure adoption, leading to “Shadow AI.” He praises Singapore and the UK for their AI frameworks but acknowledges challenges in translating guidelines into action. Apolitical’s AI Campus helps public servants assess AI readiness and connects them to a global community for continuous learning.

    Quattrucci then draws parallels between AI capability building and broader trends in digital and data literacy, suggesting that AI skills will soon be integral for all public servants. He cites GovTech Singapore’s Data&AI literacy primer as an exemplary model. In terms of AI training, he urges a critical approach, emphasizing the balance of trust and verification. He warns against overly trusting AI without moderation, referencing a case in the Netherlands where an algorithm falsely labeled citizens as fraudsters, leading to severe consequences.

    Addressing the relationship between a government’s AI capabilities and the AI understanding of its citizens, Quattrucci believes AI is a tool for governments to deliver better and faster services, thus building trust. He advocates for governments to be seen as adopters of technology, showcasing the innovation capabilities of civil servants. Quattrucci envisions a more interactive interface between governments and citizens, where feedback plays a crucial role.

    Finally, on the universal rules for upskilling public servants in AI, Quattrucci highlights the importance of predictability for productivity, the principle of “trust but verify,” and the need for experimentation and error-correction. He underlines that AI should be aligned with ethics, responsibility, and public interest, emphasizing that AI adoption in governments is about managing and integrating it effectively.

Governments need to think of AI as shaping a new capability, rather than shopping for a new tool
— Leonardo Quattrucci

— On the one hand, AI promises simplicity and accessibility to virtually everyone. On the other hand, to use AI to its full potential one seems to require specific skills, like prompt mastery. How can these two extremes be reconciled in the public sector?

Today, the adoption of AI in government is a question of organisational change more than one of technology.

AI in government is not new. Estonia claims use of AI across 100 applications, for instance. What is new is Generative AI and how ubiquitous it is.

Traditionally, the sourcing of a technology and its development have been the task of the IT department or of the digital service. The use of an application depended on the ability of developers to create a simple and delightful product.

Generative AI is different: it is intuitive, accessible, and engaging. At a basic level, you don’t need to be a prompt engineer to begin benefitting from Gen AI. You do need to be a clear communicator with a precise question in mind. In a way, with Generative AI everyone can become an AI lab. That implies that everyone needs to know about what Generative AI is good at and how to use it safely.

That’s why a Generative AI strategy needs to be a whole-of-government strategy. Gen AI tools are so ubiquitous that they could soon show up as basic competencies in role requirements, like using Word or Excel. As Professor Ethan Mollick of the Wharton School often says: “the AI you are using now is the worst AI you will ever use.” So governments’ AI strategy needs to become their hiring, enablement, upskilling, and procurement strategy.

Governments need to think of AI as shaping a new capability, rather than shopping for a new tool.

— Is AI given enough attention today in the existing upskilling programs for public servants and public sector leaders?

There is a shortage of AI in government. That is what data from civil servants tells us. We polled Apolitical members and asked if they are optimistic about the prospects of Generative AI. The majority of respondents were optimistic and enthusiastic about using AI to innovate their work. Not coincidentally, we found that the proportion of them experimenting with Gen AI tools has roughly doubled over the course of 2023.

The problem is that there is no official supply of AI to help civil servants adopt it safely and securely in their everyday jobs. According to another survey by Salesforce, the majority of users try Gen AI in secret because they don’t know which tools are approved and for what use. “Shadow AI” is a missed opportunity to raise the bar of government competence. In the worst case, it is a security risk, because people will use AI ad hoc, outside approved channels.

Some governments have moved ahead of the pack: Singapore and the UK have developed Generative AI frameworks that cover everything from enablement to procurement, from ethics to privacy and security. However, even in these cases, civil servants report a “blank page problem”: they stare at the guidelines, but they don’t know what to do next. That is why at Apolitical we help them assess their AI readiness and learn continuously through our AI Campus. We also provide access to an AI community of practice, made of global public servants that support each other as peers.

At some point, everyone will be working with AI tools, just as most people in government now work in a data-driven, digital environment
— Leonardo Quattrucci


— What are good and bad examples of AI-related capability solutions?

There are important lessons to learn on building AI capability from more general trends in digital and data. We know that these are no longer specialist skills: at some point, everyone will be working with AI tools, just as most people in government now work in a data-driven, digital environment. That means that governments need programs to enable the whole workforce.

GovTech Singapore created a Data&AI literacy primer which has been completed by 60% of its 150,000-strong workforce. They achieved this by designing dedicated learning pathways for different types of public servants, working flexibly with individual departments, and building communities of learners.

We haven’t seen many AI training solutions being implemented yet, because many 2023 learning budgets did not reflect the AI zeitgeist. But one of the principles we emphasise in our AI courses is a critical approach. Think of it through the principle of “trust but verify.”

On the one hand, there are some applications that we can trust the AI more with, based on current technological capabilities. On the other hand, there is the question of verification: how do I know that the responses I get are true? And once I have verified both, there is a question of judgement: do the results I get align with the values and objectives of the organisation?

When you release too much trust without moderation, the consequences can be dire. That was the case in the Netherlands, where an algorithm wrongly labelled citizens as fraudsters, leading to fines, trials, and casualties.

That should not stop innovation because positive examples abound. So, trust, verify, and peel the onion of AI applications as you get good results.

Civil servants report a “blank page problem”: they stare at the guidelines, but they don’t know what to do next.
— Leonardo Quattrucci

— Do you think there is a relationship between the AI-related capabilities of a government and AI-focused skills and understanding of the citizens?

Citizens want better and faster services, the same way that users of Generative AI want better and faster answers. That is not going to change. I believe AI is another tool for governments to deliver in that respect.

There is another point about trust: there is evidence suggesting that the number of interactions between a user and a service builds trust. The peculiarity of Generative AI is its interactivity. So, I would think that the more we adopt Generative AI tools in our everyday life, the more we will want our interactions with government to have the same look and feel. Gen AI can give governments a more responsive user interface.

Finally, I would like for governments to be seen as adopters of technology, rather than just producers of policies and regulations. I am a former civil servant and I struggle with accepting the learned helplessness that so many people feel about government. As I mentioned before, Apolitical’s data shows that there are thousands of talented, optimistic, and enthusiastic civil servants. Generative AI is an opportunity for governments to showcase their innovators.

— How can they, and should they, educate and influence each other?

Generative AI, thanks to its discursive features, can help governments put citizens more squarely at the core of policy delivery. “Who is the citizen? What is the citizen’s essential problem or opportunity? How do I know that?”

A more interactive interface can make it easier for governments to listen to citizens and work backwards from their needs. And citizens should help governments by providing feedback.

— Can there be universal rules regarding upskilling of public servants? What are they / could they be?

I asked ChatGPT. It was a start: it came up with 12 principles, from “Needs Assessment” to “Encourage Innovation.” That’s something that Gen AI is good at: generating lists, ideas, comparative tables. But a good start is not the whole journey, the same way that a cover doesn’t make a book. So, I would reiterate three organisational principles for AI adoption.

First, predictability is productivity. If Gen AI can allow everyone to run their own little AI lab, then everyone needs to be privacy- and security-literate. The specific settings will change from government to government, but the requirement is universal. Competency is necessary but insufficient without civil servants’ confidence that AI is safe to use. And for leaders to be confident, governments need to issue clear frameworks.

Second, trust but verify. There are some applications that the technology is better at than others, like brainstorming or shallow synthesis of information. But application is more than a question of technology readiness. It is a question of ethics, responsibility, and alignment to the public interest. These tasks cannot be delegated to the technology; they remain a human role. It takes both: asking the technology for the best it can offer and aligning it to values and missions.

Third: experiment and error-correct. Fear, uncertainty, and doubt are not going to help governments become more confident and capable users of AI. AI is here to stay. And so are governments. Trialling innovation is the best way to manage it well.

GTF Content Team
