Can we have meaning as well as fun? Review of Nick Bostrom’s Deep Utopia

A new book by Nick Bostrom is a major publishing and cultural event. His 2014 book “Superintelligence” helped to wake the world up to the impact of the first Big Bang in AI, the arrival of Deep Learning. Since then we have had a second Big Bang in AI, with the introduction of Transformer systems like GPT-4. Bostrom’s previous book focused on the downside potential of advanced AI. His new one explores the upside. “Deep Utopia” is an easier read than its predecessor, although its author cannot resist using some of the phraseology of professional philosophers, so readers may have...
Artificial Intelligence and Weaponised Nostalgia

The first political party to be called populist was the People’s Party, a powerful but short-lived force in late 19th-century America. It was a left-wing movement which opposed the oligarchies running the railroads, and promoted the interests of small businesses and farms. Populists can be right-wing or left-wing. In Europe they tend to be right-wing and in Latin America they tend to be left-wing. Populist politicians pose as champions for the “ordinary people” against the establishment. They claim that a metropolitan elite has stolen the birthright of the virtuous, “real” people, and they promise to restore it. At the heart...
Government regulation of AI is like pressing a big red danger button

Imagine that you and I are in my laboratory, and I show you a Big Red Button. I tell you that if I press this button, then you and all your family and friends - in fact the whole human race - will live very long lives of great prosperity and great health. Furthermore, the environment will improve, and inequality will fall both in your country and around the world. Of course, I add, there is a catch. If I press this button, there is also a chance that the whole human race will go extinct. I cannot tell...
The Bletchley Park summit on AI safety deserves two and a half cheers

The taboo is broken. The possibility that AI is an existential risk has now been voiced in public by many of the world’s political leaders. Although the question has been discussed in Silicon Valley and other futurist boltholes for decades, no country’s leader had broached it before last month. That is the lasting legacy of the Bletchley Park summit on AI Safety, and it is an important one. It might not be the most important legacy for the man who made the summit happen. According to members of the opposition Labour Party, Britain’s Prime Minister Rishi Sunak was using the...
Arabian moonshots may hold huge implications for the whole world

After Silicon Valley, the United Arab Emirates (UAE) may be the most future-oriented and optimistic place on the planet. Futurism and techno-optimism are natural mindsets in a country which has pretty much invented itself from scratch in two generations. During this period, its people have progressed from a mediaeval lifestyle to being 21st-century metropolitans. So it is unsurprising that the UAE has been quick to spot the enormous future significance of artificial intelligence to all of us, and to pioneer its deployment. It is not just the UAE. The leaders of all six members of the Gulf Cooperation Council...
The legal singularity. With Ben Alarie

The law is a promising area for AI

The legal profession is rarely accused of being at the cutting edge of technological development. Lawyers may not still use quill pens, but they’re not exactly famous for their IT skills. Nevertheless, the profession has a number of characteristics which make it eminently suited to the deployment of advanced AI systems. Lawyers are deluged by data, and commercial law cases can be highly lucrative. One man who knows more about this than most is Benjamin Alarie, a Professor at the University of Toronto Faculty of Law, and a successful entrepreneur. In 2015, he...
What’s new in Longevity? With Martin O’Dea

Martin O’Dea is the CEO of Longevity Events Limited, and the principal organiser of the annual Longevity Summit Dublin. In a past life, O’Dea lectured on business strategy at Dublin Business School. He has been keeping a close eye on the longevity space for more than ten years, and is well placed to speak about how the field is changing. O’Dea sits on a number of boards including the LEV Foundation, which was set up by Aubrey de Grey with a mission to prevent and reverse human age-related disease. O’Dea joined the London Futurists Podcast to discuss what we can...
Investing in AI, With John Cassidy

Kindred Capital

Venture capital is the lifeblood of technology startups, including young companies deploying advanced AI. John Cassidy is a Partner at Kindred Capital, a UK-based venture capital firm. Before he became an investment professional, he co-founded CCG.ai, a precision oncology company which he sold to Dante Labs in 2019. He joined the London Futurists Podcast to discuss how venture capital firms are approaching AI today. Kindred Capital was founded in 2015 by Mark Evans, Russell Buckley, and Leila Zegna. It has raised three funds, each of around $100 million, and is focused on early-stage investments, known in the industry...
The Death of Death. With Jose Cordeiro

An enthusiastic transhumanist

One of the most intriguing possibilities raised by the exponential growth in the power of our technology is that within the lifetimes of people already born, death may become optional. This idea was championed with exuberant enthusiasm by Jose Cordeiro on the London Futurists Podcast. Jose Cordeiro was born in Venezuela, to parents who fled Franco’s dictatorship in Spain. He has closed the circle by returning to Spain (via the USA) while another dictatorship grips Venezuela. His education and early career as an engineer were thoroughly blue chip – MIT, Georgetown University, INSEAD, then Schlumberger and Booz...
AI and professional services. With Shamus Rae

Collar colour

Not long ago, people assumed that repetitive, blue-collar jobs would be the first to be disrupted by advancing artificial intelligence. Since the arrival of generative AI, it looks like white-collar jobs will be impacted first: jobs like accounting, management consulting, and the law. Who would have guessed that lawyers would find themselves at the cutting edge of technology? Shamus Rae is the co-founder of Engine B, a startup which aims to expedite the digitisation of the professional services industry. It is supported by the Institute of Chartered Accountants in England and Wales (the ICAEW) and the main audit...
AI and new styles of learning. With David Giron

The education sector may well be impacted by advanced AI more profoundly than any other. This is partly because of the obvious potential benefit of applying more intelligence to education, and partly because education has resisted so much change in the past.

42 as the meaning of … learning

David Giron is the Director of one of the world's most innovative educational institutions, 42 Codam College in Amsterdam. He was previously the head of studies at Codam's parent school 42 in Paris, which was founded in 2013, so he has now spent 10 years putting the school’s radical ideas into...
AI-developed drug breakthrough. With Alex Zhavoronkov

Healthcare is one of the sectors likely to see the greatest benefits from the application of advanced AI. A number of companies are now using AI to develop drugs faster, cheaper, and with fewer failures along the way. One of the leading members of this group is Insilico Medicine, which has just announced the first AI-developed drug to enter phase 2 clinical trials. Alex Zhavoronkov, co-founder of Insilico Medicine, joined the London Futurists Podcast to explain the significance of this achievement.

Idiopathic Pulmonary Fibrosis

The drug in question is designed to tackle Idiopathic Pulmonary Fibrosis, or IPF. “Fibrosis” means thickening...
The Four Cs: when AIs outsmart humans

Startling progress

On 14 March, OpenAI launched GPT-4. People who follow AI closely were stunned by its capabilities. A week later, the US-based Future of Life Institute published an open letter urging the people who run the labs creating Large Language Models (LLMs) to declare a six-month moratorium, so that the world could make sure this increasingly powerful technology is safe. The people running those labs – notably Sam Altman of OpenAI and Demis Hassabis of Google DeepMind – have called for government regulation of their industry, but they are not declaring a moratorium. What’s all the fuss about? Is...
GPT-4 and education. With Donald Clark

Aristotle for everyone

The launch of GPT-4 in March has provoked concerns and searching questions, and nowhere more so than in the education sector. Last month, the share price of US edutech company Chegg halved when its CEO admitted that GPT technology was a threat to its business model. Looking ahead, GPT models seem to put flesh on the bones of the idea that every student could have a personal tutor as effective as Aristotle, who tutored Alexander the Great. When that happens, students should leave school and university far, far better educated than we were. Donald Clark...
GPT-4 and the EU’s AI Act. With John Higgins

The EU AI Act

The European Commission and Parliament were busily debating the Artificial Intelligence Act when GPT-4 launched on 14 March. The AI Act was proposed in 2021. It does not confer rights on individuals, but instead regulates the providers of artificial intelligence systems, taking a risk-based approach. John Higgins joined the London Futurists Podcast to discuss the AI Act. He is the Chair of Global Digital Foundation, a think tank, and last year he was president of BCS (British Computer Society), the professional body for the UK’s IT industry. He has had a long and distinguished career...
Longevity, a $56 trillion opportunity. With Andrew Scott

In unguarded moments, politicians occasionally wish that retired people would "hurry up and die", on account of the ballooning costs of pensions and healthcare. Andrew J Scott confronts this attitude in his book, “The 100-Year Life”, which has sold a million copies in 15 languages, and was runner-up in both the FT/McKinsey and Japanese Business Book of the Year Awards. Scott joined the London Futurists Podcast to discuss his arguments. Scott is a professor of economics at the London Business School, a Research Fellow at the Centre for Economic Policy Research, and a consulting scholar at Stanford University’s...
How to use GPT-4 yourself. With Ted Lappas

The last few episodes of the London Futurists Podcast have explored what GPT (generative pre-trained transformer) technology is and how it works, and also the call for a pause in the development of advanced AI. In the latest episode, Ted Lappas, a data scientist and academic, helps us to understand what GPT technology can do for each of us individually. Lappas is Assistant Professor at Athens University of Economics and Business, and he also works at Satalia, which was London's largest independent AI consultancy before it was acquired last year by the media giant WPP.

Head start

Lappas uses GPTs...
GPT: to ban or not to ban? That is the question

OpenAI launched GPT-4 on 14th March, and its capabilities were shocking to people within the AI community and beyond. A week later, the Future of Life Institute (FLI) published an open letter calling on the world’s leading AI labs to pause the development of even larger GPT (generative pre-trained transformer) models until their safety can be ensured. Geoff Hinton went so far as to resign from Google in order to be free to talk about the risks. Recent episodes of the London Futurists Podcast have presented the arguments for and against this call for a moratorium. Jaan Tallinn, one of...