The paradox of modern AI

There seems to be something of a paradox in modern AI. In academia and within the tech giants of the US and China, research is galloping ahead, but the deployment of modern AI in industry and government organisations is advancing at a more stately pace.

One of the people trying to change that is Steve Coates. Steve dropped out of college to become a chef, but later switched course again, taking a degree in computer science, and then spent 20 years developing business strategies with Accenture and the Boston Consulting Group (BCG). Seven years ago he co-founded an AI product company called Brainnwave. (The two “N”s in Brainnwave denote neural networks.)

Decision Intelligence

Steve, a former UK Entrepreneur of the Year, argues that what companies need to solve their problems is not just more and better data and clever AI practitioners, but Decision Intelligence. He explains what this means in the latest episode of The London Futurist Podcast.

At BCG, Steve was one of the thousands of young consultants labouring into the night to assemble complex data sets and derive valuable insights from them. He began to have an inkling that some of this work could be done more efficiently and more effectively by software. The first incarnation of Steve’s company was a data marketplace, providing corporate clients with innovative data sets like satellite tracking. But back in 2015, few companies had the infrastructure to process and exploit this kind of data. So Brainnwave started to use AI to convert the raw data into something more useful: decision intelligence.

In the early days of the company, the most valuable data sets were public ones, along with data from specialist providers such as satellite companies. The data generated and owned by client companies themselves was fragmented, produced in mutually incompatible formats, and often stored in obscure places. In very large industries like oil and gas, specialist information providers often understood better than their clients where the clients’ data was stored and what could be done with it.

Data lakes and data swamps

In the last few years, companies have been getting much smarter about what data they have and what it can be used for. Over the last decade, the concept of data lakes was born and then evolved. Originally, companies gathered together all the data they possibly could, but the data lakes became swamps: massive, messy, and unusable. More recently, companies have been more selective about the data they collect and store, and more effective in its exploitation.

One company that Brainnwave has helped to use its data effectively is Aggreko, a global provider of mobile power sources for use in remote and often hostile locations, often by mining companies. In searching for new prospects, Aggreko’s sales people look for five or so key indicators, including the distance of an operation from a power grid, the maturity of the operation, and the operator’s financial situation. That data often does exist, but it takes expertise and ingenuity to source it and combine it usefully. The result of Brainnwave’s work was a ranking algorithm that accounted for location, proximity to power grids, the stage and financial health of the assets, and other factors to generate a list of priority sales prospects around the world.
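
To make that concrete, here is a minimal sketch of the kind of weighted-scoring ranking described above. The indicator names, weights, and scoring logic are illustrative assumptions, not Brainnwave’s actual algorithm.

```python
# A sketch of a weighted-scoring ranking, assuming hypothetical indicators.
from dataclasses import dataclass


@dataclass
class Prospect:
    name: str
    km_to_grid: float        # distance of the operation from the power grid
    asset_maturity: float    # 0 (early exploration) .. 1 (mature operation)
    financial_health: float  # 0 (distressed) .. 1 (strong)


def score(prospect: Prospect) -> float:
    """Combine the indicators into a single priority score.

    The weights are illustrative: remote, maturing, financially healthy
    operations are assumed to be the most promising prospects.
    """
    remoteness = min(prospect.km_to_grid / 100.0, 1.0)  # cap the distance signal
    return (0.5 * remoteness
            + 0.3 * prospect.asset_maturity
            + 0.2 * prospect.financial_health)


def rank(prospects: list[Prospect]) -> list[Prospect]:
    """Return prospects ordered from highest to lowest priority."""
    return sorted(prospects, key=score, reverse=True)


if __name__ == "__main__":
    candidates = [
        Prospect("Mine A", km_to_grid=250, asset_maturity=0.8, financial_health=0.9),
        Prospect("Mine B", km_to_grid=15, asset_maturity=0.6, financial_health=0.7),
        Prospect("Mine C", km_to_grid=400, asset_maturity=0.3, financial_health=0.4),
    ]
    for p in rank(candidates):
        print(f"{p.name}: priority score {score(p):.2f}")
```

In a real deployment the weights would presumably be calibrated against historical sales outcomes rather than set by hand.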

Another example is gas flaring. For years, the World Bank used satellite data to produce annual reports showing the location of gas flares at night. Brainnwave was able to produce daily versions of that data and combine it with data about the ownership of pipelines, gas fields, and so on, which again yielded ranked lists of sales targets for oilfield supply companies.
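
As a rough illustration of that kind of data combination, the sketch below joins daily flare detections to field-ownership records and counts flaring activity per operator. The data structures and the “flaring days” metric are illustrative assumptions, not the actual pipeline.

```python
# A sketch of joining daily flare detections to ownership records,
# using made-up field identifiers and a simple "flaring days" count.
from collections import Counter

# Daily satellite detections: (date, gas_field_id)
flare_observations = [
    ("2023-01-01", "field_1"),
    ("2023-01-02", "field_1"),
    ("2023-01-01", "field_2"),
]

# Ownership records: gas_field_id -> operating company
field_owners = {"field_1": "Operator A", "field_2": "Operator B"}

# Attribute each detection to the owning operator and count flaring days.
flaring_days_by_operator: Counter[str] = Counter()
for _date, field_id in flare_observations:
    owner = field_owners.get(field_id, "unknown owner")
    flaring_days_by_operator[owner] += 1

# Operators with the most observed flaring become the highest-priority targets.
for operator, days in flaring_days_by_operator.most_common():
    print(f"{operator}: {days} flaring day(s) observed")
```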

Garbage in, garbage out

The growth of data science and AI has been held back to a degree by a natural scepticism about data quality, summarised in the expression “garbage in, garbage out”. In recent years, the ability to compile and use very large data sets has made this less of a concern. When data sets become large enough, input and other errors can be smoothed out. But at the same time, this renders the data more opaque and less easy to check. Scepticism about data quality morphs into a concern that the system is an unexplainable black box.

Data analysis becomes AI when a system learns as it operates. A shipping company client of Brainnwave’s had 20 years of good quality data about its operations. Brainnwave took half of that data and developed an AI model which could make predictions about future outcomes. The model was tested against the other half of the data and adjusted to optimise its performance.
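
The train-and-validate pattern described here can be sketched in a few lines. The synthetic data and the choice of model below are illustrative assumptions, not the actual Brainnwave system; the point is simply that the model is fitted on one half of the history and judged on the half it has never seen.

```python
# A sketch of the hold-out pattern: fit on half the history, test on the rest.
# The synthetic data and the linear model are stand-ins, not the real system.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)

# Twenty years of monthly observations: a simple trend plus noise stands in
# for real operational data (e.g. fuel use as a function of time or cargo).
months = np.arange(240).reshape(-1, 1)
outcome = 3.0 * months.ravel() + rng.normal(scale=20.0, size=240)

# First half of the history for training, second half held out for testing.
split = len(months) // 2
X_train, X_test = months[:split], months[split:]
y_train, y_test = outcome[:split], outcome[split:]

model = LinearRegression().fit(X_train, y_train)
predictions = model.predict(X_test)

# The error on the held-out half shows how well the model generalises;
# in practice the model would be adjusted and re-tested to optimise this.
print(f"Mean absolute error on held-out data: {mean_absolute_error(y_test, predictions):.1f}")
```

Once the error on the held-out data is acceptable, the model can be retrained on the full history before being put into operation.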

We humans are far less forgiving of mistakes made by machines than of mistakes made by other humans. This may help explain why remarkable achievements like DeepMind’s optimisation of the cooling processes in Google’s server farms have not been adopted by industries with similar large-scale process operations, such as power generation and oil refining.

Good data scientists are storytellers

A recent study by McKinsey found that some companies are using AI to obtain cost savings of 90% and revenue increases of 70%. With this type of opportunity, it is odd that AI systems are not yet more widely deployed. At some point in the not-too-distant future, deployment will be widespread, and everyone will have to adopt it. But for the time being, Steve has to spend a great deal of time explaining how his algorithms work before clients are ready to trust him to put a system into operation. This job of explaining is made more challenging when it is done by PhDs talking to extremely busy executives who are very bright but do not understand the maths. A great data scientist is someone who can follow the detailed workings of an algorithm and can also convert the important insights it generates into a compelling story for non-technical people. In other words, decision intelligence.