The future of intelligence
The potential promise and pitfalls of modern AI
This month, two interviews were published with key contributors to the recent advances in artificial intelligence (AI): Geoffrey Hinton and Demis Hassabis. Geoffrey Hinton led research advancing the artificial deep neural network architectures that underpin all recent breakthroughs in AI, starting with computer vision. His students included Ilya Sutskever, who went on to co-found OpenAI. Demis Hassabis co-founded DeepMind, later acquired by Google. Google DeepMind’s application of deep neural networks made headline news when its AlphaGo algorithm beat the world’s best player at the game of Go, a game considered too complex for brute-force search. At one point, the algorithm adopted a strategy that had never been seen before: it had generated its own game-playing approaches. The same approach would later lead to AlphaFold solving the 50-year-old protein-folding challenge.
The two talks are embedded below, and I highly recommend finding the time to watch them in full and form your own opinions. You may not agree with everything that is said (I don’t). But there are substantial insights and thought-provoking discussions about where we might be heading. The following are a few soundbites. At a number of points, the talks converged on a common theme.
Intelligence through playing games
Demis Hassabis (DH) described why they chose to focus on game play: in the early stages of development, using zero-sum games meant there was a clear win condition, a metric to evaluate performance against. More importantly, DH described how DeepMind deliberately focused on games with the potential to generalise to other tasks. Hence AlphaGo led to AlphaFold.
DH described the three conditions you want before adapting the approach used for AlphaFold to other grand challenges:
Data - preferably a lot of it. Increasingly, it is now possible to synthesise datasets through simulation, provided they match the real-world distribution
A clear metric - you need to be able to define what success looks like
A massive combinatorial space - problems that cannot be solved through brute-force search on classical computers (quantum is a whole other subject). These are ‘needle in a haystack’ problems
DH then described how many real-world challenges can be recast as game structures, where progress comes not from one single equation but step by step, like taking turns in a game.
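As a purely illustrative aside (my own toy example, not anything from the talks), here is a minimal sketch of what recasting a problem as a game can look like in code: discrete turns, a small set of legal moves, a clear metric that defines success, and an end state.

```python
# A minimal, hypothetical sketch (my own toy example, not anything from the talks)
# of recasting a problem as a turn-based game: discrete moves, a clear metric,
# and a defined end state.
from collections import deque

class NumberGame:
    """Toy 'game': starting from 1, reach the target by applying simple moves."""

    MOVES = {"double": lambda x: x * 2, "add3": lambda x: x + 3}
    TARGET = 101
    MAX_TURNS = 30

    def __init__(self, value=1, turns=0):
        self.value, self.turns = value, turns

    def apply(self, move):
        """Take one turn by applying a named move to the current state."""
        return NumberGame(self.MOVES[move](self.value), self.turns + 1)

    def score(self):
        """The clear success metric: closer to the target is better."""
        return -abs(self.TARGET - self.value)

    def finished(self):
        return self.value == self.TARGET or self.turns >= self.MAX_TURNS

# A breadth-first 'player' exploring the game tree one turn at a time.
frontier, seen = deque([NumberGame()]), {1}
while frontier:
    game = frontier.popleft()
    if game.finished():
        break
    for move in game.MOVES:
        nxt = game.apply(move)
        if nxt.value not in seen and nxt.value <= 10 * game.TARGET:
            seen.add(nxt.value)
            frontier.append(nxt)

print("reached", game.value, "after", game.turns, "turns")
```

The specific puzzle is meaningless; the point is the shape - turns, legal moves, a score - which is exactly the structure DH argues many grand challenges can be forced into.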
P vs NP problem
The discussion about how game play was the key to solving how proteins fold led on to a discussion about the legendary P vs NP problem: roughly, which problems can be solved in a reasonable (polynomial) time on a classical computer (P), versus those (NP, nondeterministic polynomial time) where a proposed solution can be verified quickly but finding one appears intractable, and might seem to demand a different compute architecture, likely quantum.
The achievements of AlphaFold and other algorithms built by DeepMind suggest classical computers can achieve a lot more than was previously thought. This has led to Demis Hassabis forming a conjecture:
Any pattern that can be generated or found in nature can be efficiently discovered and modelled by a classical learning algorithm - Demis Hassabis
If true, then we really are just at the beginning of a new era of scientific breakthroughs. The key is to avoid needing a brute-force approach - that is what would call for quantum computers. Instead, to leverage classical computers, the method is to learn a pattern that can narrow and guide the search, reducing an enormous, intractable space to something tractable, solvable…
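To make that last idea concrete, here is a small, hypothetical sketch (my own illustration, not from the talks; the scoring function below is a hand-coded stand-in for what would in practice be a trained neural network). Brute force would enumerate every candidate; the ‘guide’ instead scores partial solutions and keeps only the most promising ones, shrinking the search dramatically.

```python
# Hypothetical sketch (my own illustration, not from the talks): a 'guide' stands
# in for a trained model and steers the search, so we never enumerate the full
# combinatorial space the way brute force would.
from itertools import product

TARGET = (1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1)   # the 'needle' in the haystack
N = len(TARGET)

def brute_force():
    # Exhaustive search: 2**N candidates; hopeless for realistically large N.
    for candidate in product((0, 1), repeat=N):
        if candidate == TARGET:
            return candidate

def guide(prefix):
    # Toy stand-in for a learned model scoring how promising a partial solution
    # looks. (It cheats by peeking at the target; a real system would have
    # learned this judgement from data.)
    return sum(a == b for a, b in zip(prefix, TARGET))

def guided_search(beam_width=3):
    # Beam search: grow candidates one step at a time, keeping only the few
    # the guide rates most promising - the 'narrow and guide' idea.
    beam = [()]
    for _ in range(N):
        candidates = [prefix + (bit,) for prefix in beam for bit in (0, 1)]
        beam = sorted(candidates, key=guide, reverse=True)[:beam_width]
    return beam[0]

print(guided_search() == TARGET)   # True, after visiting ~70 candidates, not 4096
```

The toy guide obviously knows the answer, but the shape of the idea is the same as in AlphaGo-style systems: a learned model ranks where to look next, so only a tiny fraction of the combinatorial space is ever visited.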
Digital outperforming analog
Geoffrey Hinton (GH) described how it was now almost inevitable that AI would become superior to human intelligence because it has the benefit of being digital, whilst humans will always be (without machine augmentation) analog.
This links to DH saying that classical computers are now achieving far more than had previously been thought possible. GH described how the artificial neural networks (neural nets, NNs) underpinning all recent breakthroughs in vision, language and problem solving can be cloned and run on many separate pieces of digital hardware. Each clone can be tasked with a different part of the same problem - which becomes possible if you adopt DH’s suggestion of applying game-play structures to the most complex challenges. When one NN makes progress, it can immediately sync what it has learned with its clones, and so accelerate learning far beyond human abilities.
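A rough sketch of what that syncing can look like (my own minimal illustration, essentially data-parallel training with gradient averaging; none of the names below come from the talks):

```python
# My own minimal illustration (not Hinton's code): four digital 'clones' of the
# same tiny model each learn from a different data shard, then sync by averaging
# their weight updates, so every clone benefits from what the others learned.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -3.0, 0.5])          # the hidden relationship to learn

def make_shard(n=200):
    X = rng.normal(size=(n, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    return X, y

def local_gradient(w, X, y):
    # Gradient of mean squared error for a linear model, on this clone's data only.
    return 2 * X.T @ (X @ w - y) / len(y)

shards = [make_shard() for _ in range(4)]     # each clone sees different data
w = np.zeros(3)                               # weights shared by every clone
lr = 0.05

for step in range(200):
    grads = [local_gradient(w, X, y) for X, y in shards]   # clones work in parallel
    w -= lr * np.mean(grads, axis=0)                        # sync: average and share

print(np.round(w, 2))   # close to the hidden weights [ 2. -3.  0.5]
```

Hinton’s point is that digital copies can share what they have learned this directly, by exchanging weights or gradients, whereas analog brains have no equivalent high-bandwidth channel.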
When asked about creativity, GH articulated what many others have said: we rarely see or experience pure creativity. It is virtually always some unexpected combination of analogies from different domains that produces unexpected advances. Analogies enable us to compress information and make connections across domains. GH suggests that large language models like GPT-4 have ‘seen’ far more analogies in their training data than a person will ever encounter, and can thus generate far more strange combinations that may lead to breakthroughs.
GH goes on to discuss whether or not machines are thinking and can have feelings, and whether or not a machine can be conscious. It’s impossible to shorten this one. I’ve heard this view from him before and disagreed. But the way GH described it this time… well, it was very well argued and has created a brain freeze moment… I do still lean towards John Searle’s definitions of mind and consciousness. He laid out his approach in the 1984 Reith Lectures. Dropping those in at the end of the post if you want a refresher …
Risk of AI to society
Both interviews shared concerns about the societal impact of recent advances in AI, and the lack of preparedness for a future heavily augmented and automated by AI.
Both DH and GH mentioned that this is not a new concern. It was raised by John von Neumann back in the 1950s, who ‘envisaged disaster if people could not keep pace with what they created.’
Both highlighted two inherent risks. The first is bad actors - whether individuals or nations - repurposing these technologies for harmful ends. The second is what might happen if AI surpasses human intelligence and becomes more autonomous.
GH went on to discuss some of the more immediate societal concerns if jobs that ‘only require mundane intelligence’ become fully automated with AI. If an industry is elastic, or currently delivering suboptimal service - like much of healthcare - then AI should lead to a boom by enabling the delivery of more and better care. However, that is unlikely to be true of all industries and roles.
AI safety
Both highlighted the need for research and development in AI safety. DH pondered how we test for traits that we don’t want in AI, such as deception, and how we get rid of them. GH mentioned that Ilya Sutskever has long been thinking about AI safety and has received substantial funding to progress his ideas.
Both agreed that international cooperation is needed, at a time when it seems less likely than ever to gain global agreement…
A big concern is the lack of technology knowledge amongst those in positions of power to bring about change. To demonstrate, a clip played during the interview with GH showed the current US Education Secretary referring to ‘A1’ (ay-one) in a recent interview, describing how the US “is going to have all the kids interacting with A1.”
A further concern was the denial exhibited by some of those funding technology. A clip was played where Elon Musk acknowledged, “I have to have deliberate suspension of disbelief in order to remain motivated.”
Reasons to be cheerful?
GH summed up his interview with a duality I often feel - “When I’m feeling slightly depressed, I think people are toast, AI is going to take over. When I’m feeling cheerful, I think we’ll figure out a way.”
Both interviews ended on a somewhat sobering tone… but the causes for concern stem not so much from the AI itself as from human social and political systems.
The talks
Institute for Advanced Study | Sir Demis Hassabis on The Future of Knowledge:
The Diary of a CEO | Interview with Geoffrey Hinton
(not a fan of the clickbait-style title or screenshot of the video)


Whilst both interviews contained substantial insights worth pondering over, one frustration is that, when raising concerns about AI safety, neither mentioned any of the women who have been highlighting these concerns for years, such as Cathy O'Neil (author of the 2016 book Weapons of Math Destruction), Timnit Gebru (fired from Google in 2020 for raising concerns), Dagmar Monett (computer scientist), and Emily Bender (linguist). Whilst nods were given to various men in the field, not a single woman was mentioned.
As mentioned, both Demis and Geoffrey articulated two substantial risks with AI that concerned them: 1) bad actors deliberately using AI to harm people, and, 2) bad AI going out of control and harming people.
There is a third risk - 3) good AI leading to bad outcomes (or unintended consequences) if social and economic systems don't, or are unable to, adapt. A recent UN report takes this path - https://www.un-ilibrary.org/content/books/9789211542639
For a summary of the report (on LinkedIn) - https://www.linkedin.com/posts/sharonr_united-nations-people-and-possibilities-activity-7343173325235404800-Qpd5