Professor Margaret Boden’s talk at the Centre for the Study of Existential Risk

Professor Boden has been in the AI business long enough to have worked with John McCarthy and some of the other founders of the science of artificial intelligence. During her animated and compelling talk to a highly engaged audience at CSER in Cambridge last month, the sparkle in her eye betrayed the fun she still gets from it. The main thrust of her talk was that those who believe that an artificial general intelligence (AGI) may be created within the next century are going to be disappointed. She was at pains to emphasise that the project is feasible in principle, but she...

Technological unemployment

At an event to mark the launch of the Fast Future book, The Future of Business, I gave a short (8-minute) talk on the possibility that automation will create an economic singularity and lead to widespread technological unemployment. If that happens, will we be able to devise an economic system that can cope, perhaps involving some form of Universal Basic Income, and will we be able to get from here to there without social breakdown? Afterwards, Martin Dinov, Computational and Experimental Neuroscience PhD Researcher at Imperial College, gave a commendably clear talk on artificial neural networks. The video is here.  (The sound quality is not...

Future Trends Forum

Last week I was in Madrid, taking part in the 24th meeting of the Future Trends Forum, a think tank set up by Bankinter, a leading Spanish bank. The subject was "The Second Machine Age - organising for prosperity", and this short video was shot during the meeting: [embed]https://www.youtube.com/watch?t=12&v=-U5UIFgzAXM[/embed] The jumping-off point for the meeting was the book of the same name by Erik Brynjolfsson and Andrew McAfee, so a lot of the discussion revolved around automation and the possibility of widespread technological unemployment. The organisers brought together a fantastic group of smart, experienced people who worked together in a very open and collaborative way during the three...

New book: "Surviving AI". Review copies available

I've just finished writing a non-fiction book on artificial intelligence, called Surviving AI. It starts with a brief history of the science and a description of its current state.  It goes on to look at the benefits and risks that AI presents in the short and medium term, with a short story highlighting the improvements to everyday life that are in the pipeline, and discussions of technological unemployment and killer robots. Then it gets into artificial general intelligence - machines with human-level cognition: whether we can create one, and if so when; whether we will like it if we do, and what we should do about...

The Future of Business

I've contributed a chapter to an interesting new book about the future of business. Edited by Rohit Talwar, The Future of Business looks at the social and economic forces, business trends, disruptive technologies, breakthrough developments in science and new ideas that will shape the commercial environment over the next two decades. It contains chapters by 60 authors - established and emerging futurists from around the world - and is grouped into ten sections:
Visions of the Future - What are the global transformations on the horizon?
Tomorrow's Global Order - What are the emerging political and economic transformations that could reshape the environment for society and...

Professor Stuart Russell's talk at the Centre for the Study of Existential Risk

Stuart Russell, professor of computer science at the University of California, Berkeley, gave a clear and powerful talk on the promise and peril of artificial intelligence at the CSER in Cambridge on 15th May. Professor Russell has been thinking for over 20 years about what will happen if we create an AGI – an artificial general intelligence, a machine with human-level cognitive abilities. The last chapter of his classic 1995 textbook Artificial Intelligence: A Modern Approach was called “What if we succeed?” Although he cautions against making naive statements based on Moore's Law, he notes that progress on AI is accelerating in...

The Economist’s curious articles on artificial intelligence

The Economist is famous for its excellence at forecasting the past and its weakness at forecasting the future. Its survey on AI (9th May) is a classic. The explanation of deep learning is outstanding, but the conclusion that we should not worry about superintelligence because today's computers have neither volition nor awareness is, well, less impressive. The magazine's leader seems to agree that the issue deserves attention, saying that "even if the prospect of what Mr Hawking calls “full” AI is still distant, it is prudent for societies to plan for how to cope". But it then goes on to make the outlandish claim that...

Movie review: Ultron – not the new Terminator after all

Film number two in Marvel's Avengers series is every bit as loud and brash as the first outing, and the crashing about is nicely offset by the customary slices of dry wit, mainly from Robert Downey Jr's Iron Man. Director Joss Whedon demonstrates again his mastery of timing and pace in epic movies, with audiences given time to breathe during brief diversions to the burgeoning love interest between Bruce Banner and the Black Widow, and vignettes of Hawkeye's implausibly forgiving family. The film is great fun (especially on an IMAX screen) and does pretty much everything that fans of superhero...

Ultron, the new Terminator?

Avengers: Age of Ultron opens in the UK later this week, and in the US the week after. Apparently Hollywood can forecast a film's takings pretty well these days (thanks to clever AI algorithms, no doubt) and it seems the studio is quietly confident it's going to overturn box office records. It may also overturn something else: the unwritten law that every article about the future of artificial intelligence has to be accompanied by a picture of Arnold Schwarzenegger, or the killer robot he played. The original Terminator movie was released in 1984, and 31 years is a great innings....

On killer robots

The Guardian's editorial of 14th April 2014 ("Weapons systems are becoming autonomous entities. Human beings must take responsibility") argued that killer robots should always remain under human control, because robots can never be morally responsible. They kindly published my reply, which said that this claim may not hold if and when we create machines whose cognitive abilities match or exceed those of humans in every respect. Surveys indicate that around 50% of AI researchers think that could happen before 2050. But long before then we will face other dilemmas. If wars can be fought by robots, would that not be...

On boiling frogs

If you drop a frog into a pan of boiling water it will jump out. Frogs aren't stupid. But if a frog is sitting in a pan which is gradually heated it will become soporific and fail to notice when it boils to death at 100 degrees. This story has been told many times, not least by the leading management thinker, Charles Handy, in his best-selling book The Age of Unreason. Unfortunately, the story isn't true. It was put about by 19th-century experimenters, but has been refuted several times since. Never mind: it's a good metaphor, and metaphors aren't supposed...

Interview on Singularity Weblog

[embed]https://www.youtube.com/watch?v=zISzqmtojD8#t=1036[/embed] This week I was interviewed by Nikola Danaylov, the creator of Singularity Weblog.  It was great fun, and quite an honour to follow in the footsteps of his 160-plus previous guests. We talked about hope and optimism as a useful bias, about the promise and peril of AGI, about whether automation will end work and force the introduction of universal basic income ... and of course about Pandora's Brain.

Science fiction gives us metaphors to think about our biggest problems

Science fiction, it has been said, tells you less about what will happen in the future than it tells you about the predominant concerns of the age when it was written. The 1940s and 50s are known as the golden age of science fiction: short story magazines ruled, and John Campbell, editor of Astounding Stories, demanded better standards of writing than the genre had seen before. Isaac Asimov, Arthur C Clarke, AE van Vogt, and Robert Heinlein all got started in this period. The Cold War was building up, but the West was emerging from the destruction and austerity of...

Singularity University Summit, Seville, March 2015

Hyatt Hotels has revenues of $4bn and a market value of $8.4bn. Airbnb has revenues of $250m, 13 staff, pretty much no assets, and a market value of $14bn. It will soon be the world’s largest hotel company. Uber was founded in 2009 and has a market cap of $40bn, despite – again – having pretty much no physical assets. It has taxi drivers up in arms all over the world. Magic Leap, an augmented reality company, raised $50m in February 2014 and then $550m in October. It persuaded the second set of investors to contribute by showing them a...

Science fiction is philosophy in fancy dress

Looking back, I think I have always understood that science fiction is philosophy in fancy dress.  My favourite science fiction stories are the ones that make you think – the ones that ask, “what would it be like if…”  That is what I tried to do in my novel, Pandora's Brain. I started reading the stories of Arthur C Clarke, Isaac Asimov, JG Ballard and the rest as a young boy, and that was also when I formed my first lasting ambition – to study philosophy at Oxford.  (I still don’t know where that ambition came from.  Perhaps it was something...

Pandora’s Brain is published!

Pandora's Brain is available today on Amazon sites around the world in both ebook and paperback formats. I'm celebrating by attending the Singularity University Summit in Seville.  The content of this conference has been inspiring and uplifting but also very grounded.  As you would expect, the word "exponential" has been used a great deal, but the presenters - mostly SU faculty - have focused on changes expected in the near term, and have provided solid evidence and examples to support their claims about the future they envisage. I've met some great SU people - including AI expert Neil Jacobstein, medical expert Daniel...

It’s that man again!

OK, I know some people have had enough of Mr Musk lately, but he does keep saying and doing interesting things. In a wide-ranging and intriguing 8-minute interview with Max Tegmark (a leading physicist and a founder of the Future of Life Institute), Musk lists the five technologies which will impact society the most.  He doesn't specify the timeframe. His list of five (not verbatim - it appears at 4 minutes in) is:
1. Making life multi-planetary
2. Efficient energy sources
3. Growing the footprint of the internet
4. Re-programming human genetics
5. Artificial Intelligence
A pretty good list, IMHO. What is very cool is that he...

Attitudes towards Artificial General Intelligence

Following the recent comments by Elon Musk and Stephen Hawking, more people are thinking and talking about the possibility of an AGI being created, and what it might mean. That's a good thing. The chart below is a sketch of how I suspect the opinions are forming within the various groups participating in the debate.  (The general public is not yet participating to any significant degree.) It's conjecture based on news reporting and personal discussions, and not intended to offend, so please don't sue me.  Otherwise, comments welcome.
CSER = Centre for the Study of Existential Risk (Cambridge University)
FHI = Future of Humanity Institute (Oxford...