Grace Jones: slave to the algorithm

Powerful new technologies can produce great benefits, but they can also cause great harm. Artificial intelligence is no exception. People have numerous concerns about AI, including privacy, transparency, security, bias, inequality, isolation, oligopoly, and killer robots. One concern which perhaps gets less attention than it deserves is algocracy.

Decisions about the allocation of resources are being made all the time in societies, on scales both large and small. Because markets are highly efficient systems for allocating resources in economies characterised by scarcity, capitalism has proved highly effective at raising the living standards of the societies which have adopted it. To paraphrase Churchill on democracy, it is the worst economic system except for all the others.

Historically, markets have consisted of people. There may be lots of people on both sides of the transaction (flea markets are one example, eBay is another). Or there may be few buyers and many sellers (farmers selling to supermarket chains) or vice versa (supermarket chains selling to consumers). But typically, both buyers and sellers have been human. That is changing.

Machine-made decisions

Algorithms now take many decisions which were formerly the responsibility of humans. They initiate and execute many of the trades on stock and commodity exchanges. They manage resources within organisations providing utilities like electricity, gas and water. They govern important parts of the supply chains which put food on supermarket shelves. This trend will only accelerate.

As our machines get smarter, we will naturally delegate decisions to them which would seem surprising today. Imagine you walk into a bar and see two attractive people at the counter. Your eye is drawn to the blond, but your digital assistant (now located in your glasses rather than your phone) notices that and whispers to you, “Hang on a minute: I’ve profiled them both, and the redhead is a much better match for you. You share a lot of interests. Anyway, the blond is married.”

In his 2006 book “Virtual Migration”, the Indian-American academic A. Aneesh coined the term “algocracy”: rule by algorithm. The threat it poses has been explored in detail by the philosopher John Danaher, who sets the problem up as follows. Legitimate governance requires transparent decision-making processes which allow the people affected to participate. Algorithms are often not transparent, and their decision-making processes do not admit human participation. Therefore algorithmic decision-making should be resisted.i

Danaher thinks that algocracy poses a threat to democratic legitimacy, but he does not think it can, or should, be resisted. Embracing it will carry significant costs, he argues, and we need to decide whether we are comfortable with those costs.

What not to delegate?

Of course many of the decisions being delegated to algorithms are ones we would not want returned to human hands – partly because the machines make the decisions so much better, and partly because the intellectual activity involved is deathly boring. It is not particularly ennobling to be responsible for deciding whether to switch a city’s street lights on at 6.20 or 6.30 pm, but the decision could have a significant impact. The additional energy cost may or may not be offset by the improvement in road safety, and settling that trade-off could involve collating and analysing millions of data points. Much better work for a machine than a human, surely.
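The street-light trade-off above can be sketched as a simple calculation. A real system would work from millions of data points; everything here – the function, the lamp count, the prices, the accident figures – is hypothetical, purely to illustrate the kind of arithmetic being delegated:

```python
# Toy cost-benefit sketch for switching street lights on earlier.
# All figures are hypothetical, for illustration only.

def lighting_decision(extra_minutes, lamps, watts_per_lamp, price_per_kwh,
                      accidents_avoided_per_year, cost_per_accident):
    """Compare the annual energy cost of lighting up earlier with the
    estimated annual value of accidents avoided."""
    extra_kwh_per_year = lamps * watts_per_lamp * (extra_minutes / 60) * 365 / 1000
    energy_cost = extra_kwh_per_year * price_per_kwh
    safety_benefit = accidents_avoided_per_year * cost_per_accident
    return energy_cost, safety_benefit, safety_benefit > energy_cost

cost, benefit, worth_it = lighting_decision(
    extra_minutes=10,            # 6.20 pm rather than 6.30 pm
    lamps=50_000,                # a hypothetical mid-sized city
    watts_per_lamp=100,
    price_per_kwh=0.15,
    accidents_avoided_per_year=3,
    cost_per_accident=40_000,
)
print(f"energy cost £{cost:,.0f}/yr vs safety benefit £{benefit:,.0f}/yr"
      f" -> switch earlier: {worth_it}")
```

With these invented numbers the safety benefit outweighs the extra energy cost, but the point is the shape of the decision, not the answer: estimating each input well is exactly the data-heavy drudgery best left to a machine.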

Other applications make us much less sanguine. Take law enforcement: a company called Intrado provides an AI scoring system to the police in Fresno, California. When an emergency call names a suspect, or a house, the police can “score” the danger level of the person or the location and tailor their response accordingly.i Other forces use a “predictive policing” system called PredPol which forecasts the locations within a city where crime is most likely to occur in the next few hours.ii Optimists would say this is an excellent way to deploy scarce resources. Pessimists would reply that Big Brother has arrived.

AI is already helping to administer justice after the event. In 2016 the San Francisco Superior Court began using an AI system called PSA to help decide whether alleged offenders should be released before trial. It got the tool free from the Laura and John Arnold Foundation, a Texas-based charity focused on criminal justice reform. Academics studying this area have found it very hard to obtain information about how these systems work: they are often opaque by their nature, and they are also often subject to commercial confidentiality.iii

There are many decisions which machines could make better than humans, but which we might feel less comfortable delegating: the allocation of new housing stock, the best date for an important election, the cost ceiling for a powerful new drug, for instance. Arguments about which decisions should be made by machines, and which should be reserved for humans, are going to become increasingly common and increasingly vehement. Regardless of whether they make better decisions than we do, not everyone is going to be content (to paraphrase Grace Jones) to be a slave to the algorithm.

Information is power, and machines may intrude on our freedom without actually making decisions. In September 2017 a research team from Stanford University was reported to have developed an AI system which could do considerably more than recognise faces: it could tell whether their owners were straight or gay. The idea of a machine with “gaydar” is startling; it becomes shocking when you consider the uses to which it might be put – in countries where homosexuals are persecuted and even prosecuted, for instance.iv The Stanford professor who led the research later said that the technology would probably soon be able to predict with reasonable accuracy a person’s IQ, their political inclination, or their predisposition towards criminality.

Things are getting weird.
