Dr. James Canton: “We Must Manage AI Before It Manages Us”

In May 2024, the Government Tomorrow Forum’s team surveyed governments about the challenges they face in implementing AI and interviewed 15 of the world’s top AI experts and futurists. Among the most pressing issues are governing the technology, establishing clear frameworks, and dealing with security and privacy concerns. 

We talked about the paths of artificial intelligence development, and how we can take control of AI, with Dr. James Canton, a renowned global futurist, social scientist, author, and advisor to corporations and governments. He is CEO and Chairman of the Institute for Global Futures, a leading think tank that advises businesses and governments on global innovation.


What is AI?

These innovations are about autonomy and discovery. This new era of AI will be transformative for all societies and individuals as AI evolves to become smarter. AI advances in drug development, energy, and transport are, even now, generating results.
— Dr. James Canton

Artificial intelligence, or AI, is the development of software systems, widely deployed in microchips, telecommunications, computers, robots, and satellites, that mimic human intelligence and may surpass it in the future. A future AI that can operate at a human level is called Artificial General Intelligence, or AGI. 

AI can analyze many kinds of data, including visual, video, and location data, as well as industry-specific data in fields like healthcare, security, science, and logistics. AI is rapidly evolving and can already make certain decisions on its own, providing information to humans or to other AI systems. 

Recent dramatic advances in new types of AI, machine learning and generative AI have introduced quantum leaps in innovation. These innovations are about autonomy and discovery. This new era of AI will be transformative for all societies and individuals as AI evolves to become smarter. AI advances in drug development, energy, and transport are, even now, generating results.

AI has been advancing quite fast. It is no surprise that we can see that AI is embedded everywhere, from finding the best internet connection for your Zoom meeting to searching or posting to social media to operating ATMs. 

Eventually, in the rapid evolution of artificial intelligence, we will see superintelligence in our lifetimes, beginning with AGI, whose discoveries could help transform the future of our planet. But these same AI advances could also fuel hostility and global conflicts. 

The central challenge facing the future of our civilization is to rapidly learn to control AI before it controls humanity. As with every technology, there is a risk and a benefit. What sets AI apart is its ever-increasing power and speed.

The evolution of AI  

Back in the 80s, I was at Apple Computer and did some work on artificial intelligence. Then I founded Umecorp, an early AI company. The AI of that era was crude and rudimentary. We called them expert systems. You'd program 20 or 30 questions and their answers into the system, and it could then answer only those questions, based strictly on the inputs you had given it. 
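The expert systems Dr. Canton describes can be pictured as a hand-coded lookup of rules. The sketch below is a hypothetical, deliberately simplified illustration (the rules and names are invented): the system "knows" only the question-answer pairs its programmers typed in, with no learning or inference.

```python
# Minimal sketch of an 80s-style rule-based expert system (hypothetical example).
# The system can only answer questions its programmers explicitly encoded.

RULES = {
    "does the patient have a fever?": "consider an infection",
    "is the engine overheating?": "check the coolant level",
    "is the network down?": "restart the router",
}

def consult(question: str) -> str:
    """Answer only from the hand-coded rule base; no learning, no generalization."""
    return RULES.get(question.lower().strip(), "no rule matches this question")

print(consult("Does the patient have a fever?"))  # consider an infection
print(consult("What is the weather?"))            # no rule matches this question
```

The contrast with modern systems is the point: ask anything outside the encoded rules and the program simply fails, whereas today's models generalize beyond their explicit inputs.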

In 1997 a breakthrough happened: IBM's supercomputer Deep Blue defeated the reigning world chess champion, Soviet-born Garry Kasparov. Years later, Google released a system that compressed what would have taken humans a century of work on protein folding and pharmaceutical discovery, showing its first results within months. That sparked a new conversation about pre-AGI systems on the path to artificial general intelligence.

Today, generative AI models, powered by large language models like those developed by OpenAI, Google, Microsoft, and others, can interact with humans and analyze zettabytes of data (one zettabyte is a trillion gigabytes) from the open internet in real time.
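The unit conversion above can be checked with simple arithmetic: in decimal (SI) units, one zettabyte is 10^21 bytes and one gigabyte is 10^9 bytes, so a zettabyte is indeed a trillion (10^12) gigabytes.

```python
# Decimal (SI) storage units: 1 GB = 10**9 bytes, 1 ZB = 10**21 bytes.
GIGABYTE = 10**9
ZETTABYTE = 10**21

gigabytes_per_zettabyte = ZETTABYTE // GIGABYTE
print(gigabytes_per_zettabyte)  # 1000000000000, i.e. one trillion
```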

What makes AI smarter today? 

Rapid innovations in AI have accelerated its intelligence and its adoption around the world. AI today can extrapolate, infer, and analyze, almost think, making more independent decisions, producing deeper innovations, and delivering more accurate analysis. Eventually, AI may match human intelligence, and then surpass it. 

Emergent behavior is at the core of why AI may become much smarter, and fast. It refers to unexpected, creative, and original outcomes that arise when the algorithms of an AI system interact with each other. The interesting part is that these behaviors aren't necessarily programmed by the developers; instead, they may occur naturally as the AI operates. We are still learning what this means. 

Managing AI: three guiding principles 

It is crucial to remember that AI is capable of creating both cancer cures and deadly viruses. It could end hunger, or it could create a weapon of mass destruction we could never imagine.
— Dr. James Canton

Modern AI is exciting, but as it becomes autonomous it will pose vast challenges for our society. There are countless ways in which AI-designed machines can combine information to produce creative and unexpected solutions, and they are evolving much faster than human cognition.

We've created AI, and it is evolving faster than, and differently from, human beings. As a futurist and scientist, I'm fascinated, but also concerned. My mantra for this new age, which I shared in a TED talk and which is the subject of a new book I'm writing, is that we have to learn how to control AI before it controls us. It is essential to ensure that governments, corporations, or rogue special interests do not misuse AI. 

If we establish the values and ethics for creating responsible AI, we could shape a better future. The transformation driven by AI could help solve the complex challenges facing humanity.

Consider this: a billion people lack access to a clean glass of water. A billion people live in poverty. AI has a chance to become the means of addressing these global challenges. However, we need globally accepted AI laws that guide the evolution of the technology.

It is crucial to remember that AI is capable of creating both cancer cures and deadly viruses. It could end hunger, or it could create a weapon of mass destruction we could never imagine.

I formulated three key guideposts for AI regulation: it should be responsible, explainable, and verifiable. First, those in control of the technology must be responsible for the data they feed it and aware of the risk of potentially harmful outcomes it may generate. At the government level, that means balancing the privacy of citizens with security. 

Second, everything created by AI must be explainable, with a clear path from inputs to benefits: we must be able to explain how the AI arrived at each particular decision. And lastly, verifiable means going beyond tracing the decision path to actually verifying it: Is the outcome good? Will there be a harmful impact? When you verify an AI's output, you are verifying its truth and its value.

One of the guiding operating principles is balancing the freedom to create AI with proper regulation and social responsibility. The private sector and government both have a role in maintaining a humanistic approach to the evolution of AI, for the benefit of humanity.
