The longer-term concerns of AI safety and AI alignment
Concerns about artificial intelligence tend to fall into two buckets. The longer-term concern is that advanced AI may harm humans. In its extreme form, this includes the Skynet scenario from the Terminator movies, where a superintelligence decides it doesn’t like us and wipes us out. But an advanced AI doesn’t have to be malevolent, or even conscious, to do us great harm. It just has to have goals which conflict with ours. The paper-clip maximiser is the cartoon example: the AI is determined to make as many paper-clips as possible. It neither loves nor hates us, but it thinks it has better uses for the atoms we are made of. This longer-term concern is often referred to as AI safety, or AI alignment.
The shorter-term concerns of privacy, bias, and transparency
Shorter-term concerns about AI include privacy, bias, transparency, and explainability. One man who has thought a lot about these issues is Ray Eitel-Porter, Global Lead for Responsible AI at Accenture. In the latest episode of the London Futurist Podcast, he explains what conclusions he has come to.
Ray has worked in technology for most of his career, and has always been excited by its potential for improving lives. The potential dangers of advanced AI have become apparent to him more recently.
Accenture is helping clients to deploy AI in a wide range of industries and applications. Projects include optimising production or logistics processes and improving products and customer service. But Ray is quick to acknowledge that much of this is more advanced analytics than genuine AI, in the sense of systems which learn on the job.
Today’s AI paradox
This points to a paradox in AI today. Progress in AI research is galloping ahead, with transformer-based large language models and diffusion models making breakthroughs most weeks, while the deployment of AI in real-world environments is much slower. One reason for this is the understandable nervousness of executives about placing AI systems in charge of critical processes when those systems will make mistakes. (And they have to make mistakes in order to learn.)
Ray says there are other important reasons. One is that we humans are far less tolerant of machines making mistakes than of humans doing so. Car driving is a classic example: human drivers kill 1.3 million people every year without troubling the world’s newspapers, but if a self-driving car causes a single fatality, it will be headline news.
Another reason why deployment is slower is that large organisations get used to doing things in a particular way, and it takes a long time to change these habits. Thus, for instance, the pharmaceutical industry is proving slow to adopt AI drug discovery, despite its proven ability to dramatically reduce the enormous time, cost, and risk involved.
Companies have to be extremely careful about rolling out systems which may inherit bias from the humans who administered the process beforehand. Machines can scale in a way that humans cannot, and their bias can often be identified and proved more easily. Whereas a couple of years ago it was mostly data scientists who recognised this danger, Ray notes that today senior management teams tend to understand the problem too.
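Part of the reason machine bias is easier to prove is simply that automated decisions can be logged and compared at scale. Here is a minimal sketch of the kind of check a data science team might run; the function names, group labels, and data are illustrative assumptions, not any particular fairness toolkit’s API.

```python
from collections import defaultdict

def selection_rates(decisions):
    """Approval rate per group from logged (group, approved) decision pairs."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, was_approved in decisions:
        totals[group] += 1
        approved[group] += int(was_approved)
    return {g: approved[g] / totals[g] for g in totals}

def demographic_parity_gap(decisions):
    """Largest difference in approval rate between any two groups."""
    rates = selection_rates(decisions)
    return max(rates.values()) - min(rates.values())

# Synthetic decision log, for illustration only.
log = (
    [("group_a", True)] * 70 + [("group_a", False)] * 30
    + [("group_b", True)] * 50 + [("group_b", False)] * 50
)
print(selection_rates(log))         # {'group_a': 0.7, 'group_b': 0.5}
print(demographic_parity_gap(log))  # ~0.2 – a gap that warrants investigation
```

Running the same comparison on the human-administered process the system replaced would mean reconstructing comparable records case by case, which is exactly why bias in a deployed model is often easier to identify and prove.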
Responsible AI
Ray’s job title references “responsible AI”, which he sees as the implementation of a company’s values with regard to AI. Ideally this would spare companies the job of developing new ethical positions, since the relevant values would already exist. But no company has a comprehensive set of policies addressing every possible ethical issue that AI will raise: in all aspects of life, our ethical positions are generally implicit, fuzzy, and mutually contradictory. The answer, Ray suggests, is to design a governance process which surfaces potential problems quickly and efficiently, and which avoids having relatively junior people make sensitive decisions that could cause major problems later on.
Most companies now have MLOps guidelines – structured processes for developing machine learning systems – in which key questions must be asked at each stage: “Is this an appropriate use for AI?”, “Should we be using face recognition software in this context?”, and so on.
There also needs to be an escalation process, which specifies how questions are referred to higher levels. Many companies have ethics boards at the top of this process. Some are purely advisory, while others have executive authority. Some include external members, while others are all employees. One of the key features of a successful ethics board is diversity of gender, ethnicity, geography, and lived experience.
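As a rough sketch of how such stage-gate questions and escalation routes might be encoded alongside an MLOps process – the stages, questions, and roles below are hypothetical illustrations, not Accenture’s framework or any published standard:

```python
from dataclasses import dataclass, field

@dataclass
class GateQuestion:
    text: str          # the question reviewers must answer at this gate
    escalate_to: str   # who decides if the answer is unclear or not "yes"

@dataclass
class StageGate:
    stage: str                                    # e.g. "scoping", "deployment"
    questions: list = field(default_factory=list)

# Hypothetical gates; a real framework would be far more detailed.
gates = [
    StageGate("scoping", [
        GateQuestion("Is this an appropriate use for AI?", "ethics board"),
        GateQuestion("Should we use face recognition in this context?", "ethics board"),
    ]),
    StageGate("deployment", [
        GateQuestion("Is there a human override for high-impact decisions?", "responsible AI lead"),
    ]),
]

def review(stage_name: str, answers: dict) -> list:
    """Return escalation targets for any gate question not answered 'yes'."""
    stage = next(g for g in gates if g.stage == stage_name)
    return [q.escalate_to for q in stage.questions if answers.get(q.text) != "yes"]

# A missing or negative answer is routed upward rather than decided locally.
print(review("scoping", {"Is this an appropriate use for AI?": "yes"}))
# ['ethics board']  – the face-recognition question was left unanswered
```

The point of structuring it this way is the one Ray makes: sensitive questions get surfaced and routed to people with the authority to answer them, rather than being settled quietly by whoever happens to be building the system.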
AI ethics
Is there a problem with the terminology of “AI ethics”? It is odd that the AI industry is deemed to require an ancillary industry of ethicists when, say, the bridge-building and submarine-building industries are not. The word “ethics” brings some unhelpful baggage. We have been discussing ethics since the ancient Greeks, and while most people would agree that our societies have improved since then, we have not reached universal agreement on answers to the fundamental ethical questions they asked: “What is a good life?”, “When should we lie?”, “When should we kill?”, and so on.
Furthermore, if you frame a question as an ethical issue, it means that someone who disagrees with you is not just making a practical error: they are morally wrong, and may well be a bad person. Discussions about ethics can quickly get more heated than they need to be.
Ray agrees with this diagnosis, and it partly explains why he uses the term “responsible AI”. The term “AI safety” might be appropriate too, if it hadn’t already been co-opted for the longer-term concerns about AI which this article started with. The term “responsible” also has connotations of trust, which is good because it reminds us that AI can and should be a powerful force for good, as well as a source of potential problems.
Regulation
The need for responsible AI begets the need for regulation, and the biggest piece of AI regulation that is coming soon is the EU’s AI Act. It won’t be in force for two years or more, but it takes large companies a long time to reshape their processes to accommodate complex legislation, so they are engaging with the EU already, even though the Act’s provisions are far from finalised.
The Act is being developed with a risk-based approach: the more risk an application raises, the more stringent the controls will need to be, and the bigger the penalties for failing to take appropriate precautions. Companies will have to keep comprehensive records about the development, testing, monitoring, and governance of their AI systems.
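As a hedged sketch of what that record-keeping might look like in practice – the fields and example values below are hypothetical illustrations, since the Act’s final documentation requirements are still being negotiated:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AISystemRecord:
    """Illustrative audit record for a deployed AI system (hypothetical schema)."""
    name: str
    risk_tier: str               # e.g. "minimal", "limited", or "high" risk
    intended_purpose: str
    training_data_summary: str
    test_results: dict           # metric name -> value at sign-off
    monitoring_plan: str
    governance_owner: str        # who is accountable for the system
    last_reviewed: date

# Synthetic example entry, for illustration only.
record = AISystemRecord(
    name="loan-pre-screening-model",
    risk_tier="high",
    intended_purpose="Pre-screen loan applications for manual review",
    training_data_summary="Historical applications, personal identifiers removed",
    test_results={"auc": 0.81, "demographic_parity_gap": 0.03},
    monitoring_plan="Monthly drift and fairness checks, quarterly audit",
    governance_owner="Head of Credit Risk",
    last_reviewed=date(2023, 1, 15),
)
```

Keeping this kind of record as a living artefact, rather than assembling it after the fact, is also what makes it possible to demonstrate the genuine concern for responsible AI described below.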
Companies are also keen to demonstrate a genuine concern for responsible AI because that earns trust among customers, staff, and other stakeholders.
Some people argue that the EU’s approach is going to stifle innovation in Europe, and entrench the already significant lead which the USA and China have in AI research and development. Ray is alive to these concerns, but thinks that, like GDPR, the EU AI Act will probably become a de facto standard, and that by impacting businesses globally it will not tilt the playing field against European companies.
Whatever it is called, responsible AI should not be an add-on, but part of the core design. With a bridge, you don’t have engineers design an elegant structure and then have bridge ethicists come along to check whether it will fall down and kill people. An AI system which discriminates against minorities is not a system which works well but has an ethical malfunction; it is a system which doesn’t work properly.
Google and female CEOs
Whatever the terminology, trying to ensure that AI systems are effective and responsible forces you to make hard decisions. A Google search reveals that the proportion of female CEOs in the FTSE 100 is a measly 8%. But if you search Google for images of UK CEOs, women feature in 20% of the results. It appears that Google has made the decision to show us a less discriminatory world than the one we live in. Their goal is presumably to normalise the idea that CEOs should be diverse, but surely they should notify us when they make decisions like this? Or would that raise so much protest in the Daily Mail and elsewhere that they would have to stop? Perhaps, if forced to explain, Google would claim that the algorithm “knows” that many searchers will be looking for diverse images.
The challenges of making AI systems responsible and effective are only going to get larger and more complex. Ray is confident that the community of people dedicated to meeting those challenges will grow in the coming years, and indeed that they may start to specialise in particular areas of AI.