The eighth in a series of un-forecasts* – little glimpses of what may lie ahead in the century of two singularities.

This article first appeared on the excellent blog run by Circus Street, a digital training provider for marketers.

In the old days, before artificial intelligence started to really work in the mid-2010s, the clients for reputation management services were rich and powerful: companies, government departments, environmental lobbying groups and other non-government organisations, and of course celebrities. The aims were simple: accentuate the good, minimise the bad. Sometimes the task was to squash a potentially damaging story that could grow into a scandal. Sometimes it was to promote a film, a book, or a policy initiative.

Practitioners needed privileged access to journalists in the mainstream media, to politicians and policy makers, and to the senior business people who shaped the critical buying decisions of large companies. They were formidable networkers with enviable contacts in the media and business elite. They usually had very blue-chip educational and early career backgrounds, and offered patronage in the form of juicy stories and unattributable briefings to compliant journalists.

Digital democratisation

The information revolution democratised reputation management along with everything else. It made the service available to a vastly wider range of people. If you were a serious candidate for a senior job in business, government, or the third sector, you needed to ensure that no skeletons came tumbling out of your closet at the wrong moment. Successful people needed to be seen as thought leaders and formidable networkers, and this did not happen by accident.

The aims of reputation management were the same as before, but just as the client base was now much wider, so too was the arena in which the service was provided. The mainstream media had lost its exclusive stranglehold on public attention and public opinion. Facebook and Twitter could often be more influential than a national newspaper. The blogosphere, YouTube, Pinterest, and Reddit were now crucial environments, along with many more, and the players were changing almost daily.

The practitioners were different too. No longer just Oxbridge-educated, Savile Row-tailored types, they included T-shirt-clad young men and women whose main skill was being up to date with the latest pecking order between online platforms. People with no deep understanding of public policy, but a knack for predicting which memes would go viral on YouTube. Technically adept people who knew how to disseminate an idea economically across hundreds of different digital platforms. Most of all, they included people who knew how to wrangle AI bots.

Reputation bots

Bots scoured the web for good news and bad. They reviewed vast hinterlands of information, looking for subtle seeds of potential scandal sown by jealous rivals. Their remit was the entire internet, an impossibly broad arena for un-augmented humans to cover. Every mention of a client’s name, industry sector, or professional area of interest was tracked and assessed. Reputations were quantified. Indices were established where the reputations of brands and personalities could be tracked – and even traded.
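The monitoring loop described above can be sketched as a toy program: scan documents for mentions of a client, score each mention with a crude sentiment heuristic, and aggregate the scores into a single index. This is purely an illustrative sketch under invented assumptions; the word lists, names, and scoring scheme are not from the article and bear no resemblance to a production reputation system.

```python
# Toy "reputation bot": find sentences mentioning a client and score them
# with a minimal word-list sentiment heuristic. All names and word lists
# here are illustrative assumptions, not a real service or dataset.

POSITIVE = {"praised", "award", "innovative", "trusted"}
NEGATIVE = {"scandal", "lawsuit", "fraud", "criticised"}

def score_mention(sentence: str) -> int:
    """Crude sentiment score: positive words minus negative words."""
    words = {w.strip(".,").lower() for w in sentence.split()}
    return len(words & POSITIVE) - len(words & NEGATIVE)

def reputation_index(client: str, documents: list[str]) -> int:
    """Sum sentiment scores over every sentence that mentions the client."""
    total = 0
    for doc in documents:
        for sentence in doc.split("."):
            if client.lower() in sentence.lower():
                total += score_mention(sentence)
    return total

docs = [
    "Acme was praised for its innovative products.",
    "A lawsuit alleges fraud at Acme. Rivals criticised the firm.",
]
print(reputation_index("Acme", docs))
```

A real system would of course replace the word lists with a trained sentiment model and track the index over time, which is what makes the "tracked and even traded" indices in the scenario plausible.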

All this meant lots of work for less traditionally qualified people. Clients who weren’t rich couldn’t afford the established consultants’ exorbitant fees, and they didn’t need them anyway. Less mainstream practitioners deploying clever bots could achieve impressive results for far less money. As the number of actual and potential clients for reputation management services grew exponentially, so did the number of practitioners. The same phenomenon was observed in many areas of professional services, and became known as the “iceberg effect”: the old, restricted client base turned out to be just the visible tip of a vast, previously inaccessible demand.

But pretty soon, the bots started to learn from the judgement of practitioners and clients, and needed less and less input from humans to weave their magic. And as the bots became more adept, their services became more sophisticated. They practised offence as well as defence: placing stories about their clients’ competitors, and duelling with the bots those rivals employed, twisting each other’s messages into racist, sexist, or otherwise offensive versions. These were tactics that many of their operators were happy to run with and help refine.

Algocracy

Of course, as the bots became increasingly autonomous, the number of real humans doing the job started to shrink again. Clients started to in-source the service. Personal AIs – descendants of Siri and Alexa, evolved by Moore’s Law – offered the service. Users began relying on these AIs to the point where the machines had free access to censor their owners’ emails and other communications. People realised that the AIs’ judgement was better than their own, and surrendered willingly to this oversight. Social commentators railed against the phenomenon, clamouring that humans were diminishing themselves, and warning of the rise of a so-called “algocracy”.

Their warnings were ignored. AI works: how could any sane person choose to make stupid decisions when their AI could make smart ones instead?

* This un-forecast is not a prediction. Predictions are almost always wrong, so we can be pretty confident that the future will not turn out exactly like this. It is intended to make the abstract notion of technological unemployment more real, and to contribute to scenario planning. Failing to plan is planning to fail: if you have a plan, you may not achieve it, but if you have no plan, you most certainly won’t. In a complex environment, scenario development is a valuable part of the planning process. Thinking through how we would respond to a sufficient number of carefully thought-out scenarios could well help us to react more quickly when we see the beginnings of what we believe to be a dangerous trend.
