
This is how democracy dies

If machines make all our decisions, who’s in charge?

The machines aren't out to get you. Credit: ARUN SANKAR/AFP via Getty Images



July 14, 2021   6 mins

When the apocalypse comes, most of us will barely notice it’s happening. Most technology-driven dystopias are far too interesting to be realistic: the end of the world will be a grinding, bureaucratic affair, its overriding spirit one of weary confusion — about how things work and who’s to blame when things go wrong.

Forget for a moment the flashier signals of technological progress: AI-powered personal assistants, Boston Dynamics’ back-flipping robots or blockchain cheerleaders. The two most important trends in the field of technology are quiet and relentless: increasing volumes of data and the declining cost of computing power. In the long run they mean machines will, despite frequent hiccups, keep improving. They already outperform humans in a small but growing number of narrow tasks, but it’s unlikely we’ll see general artificial intelligence any time soon — much less the AI-goes-rogue scenario. Still, machines will gradually take over more and more decision-making in important areas of life, including those with ethical or political dimensions. Already there are signs of AI drifting into bail conditions, warfare strategy, welfare payments and employment decisions.

The problem isn’t whether machine decisions are better or worse — that’s often a question of values anyway — but whether it’ll get to the point where no one will be able to understand how decisions are made at all. Today’s algorithms already deal with millions of inputs, insane computing power and calculations far beyond human understanding. Frankenstein-like, most creators no longer really understand their own algorithms. Stuff goes in and stuff comes out, but the middle part is becoming a mysterious tangle of signals and weightings. Consider the example of AlphaGo, the AI system that astonished the world by thrashing the world’s best Go player, before astonishing it a second time by thrashing itself. Aeronautic engineers know precisely why their planes stay in the air; AlphaGo’s inner workings were and are a mystery to everyone. And by 2050, AlphaGo will be fondly remembered as a child-like simpleton.


There will be seminars, lessons, bootcamps, and online training courses about how to work with The Algorithm. Don’t worry yourself overly, human! Singletons: Learn the best combination of words to secure your dream date! Join our “beat the algo” seminar where you will learn how to ensure your CV outwits the HR filtering systems. Use our VPN to trick websites into thinking you’re from a poorer neighbourhood to secure a better price! A few months back a handful of bootcamps opened, where parents pay $2,000 for experts to teach their kids how to succeed on YouTube. Some scoffed, but I suspect similar courses will soon be the norm. These will be the warning signs of a confused and frightened society.

Imagine a 21-year-old happily bouncing through life in the 2050s. His entire life will have been datafied and correlated. His sleep patterns from birth captured by some helpful SmartSleep app; his Baby Shark video consumption aged 2 safely registered on a server somewhere. All those tiny markers will help guide his future one day: his love life determined by sophisticated personality-matching software, while his smart fridge lectures him about meat consumption (insurance premiums may be impacted, you know!); his employment prospects determined by a CV-checking system 100 times more accurate than today’s. His cryptocurrency portfolio automatically updating every half-nanosecond based on pre-determined preferences. His political choices and opinions subtly shaped by what pops up on his screen, controlled by AI-editors using preference algorithms that have been running for 50 years.

It sounds bad, but not apocalyptically bad, right? But imagine, now, that our 21-year-old is so impudent as to question or object to what these brilliantly clever systems are offering him. There would probably be no obvious number to call with a complaint. He might try to sue the designer of the CV-checking software for the subtle discrimination he suffered — but the judges will throw the case out because the software designer has been dead for 30 years and they still don’t really understand what an algorithm is anyway.

The problem with such a machine-dependent world, then, is not what you might think. AI theorists spend a lot of time worrying about something called “value alignment”. It is a hypothetical future problem where a hyper-powerful AI takes instructions literally, with disastrous results. The most famous example is the “paperclip maximiser” where an unsuspecting factory owner asks an AI to make as many paperclips as possible — and it ends up turning the entire universe into paperclips. But I doubt you’ll need to worry about paperclips: you’ll be too busy on the phone to machine-like bureaucrats who can’t help with your application, because the machine has made a decision and the person who okayed it is off sick and the person who built the tech now works in Beijing and…

Confusing machines will annihilate accountability, which is one reason powerful people will like them. A couple of years ago UK health secretary Jeremy Hunt told the House of Commons that “a computer algorithm failure” meant 450,000 patients in England missed breast cancer screenings, and that as many as 270 women might have had their lives shortened as a result. Who was responsible for this murderous and despicable “computer algorithm failure”? The tech guy who wrote the software, in good faith, years ago? The person who commissioned it? The people feeding the data in? Unsurprisingly, a subsequent inquiry into all this found that “no one person” was to blame. Nothing has been done in response, and nothing will be. More recently, Boris Johnson blamed a “mutant algorithm” for the A-level fiasco — how convenient! Expect algorithms to become every politician’s non-apology apology by the 2030s.

Around this time, the first casualties from driverless car accidents will start arriving in A&E. The subsequent enquiries will conclude that “no one person” is responsible for the deaths, either. It will instead be the fault of “unforeseen system incompatibilities” and “data corruptions” that make no sense, and offer no comfort, to anyone.

Presumably all this will be accompanied by a mild identity crisis. Some of us will pray to these God-like systems in the hope their mysterious inner workings are good to us. (An Uber driver was recently overheard muttering that “The Algorithm has been good to me today”.) The less sanguine will presumably try to smash them to pieces. That will be destined to fail because, unlike the Spinning Jenny, software can’t be destroyed with a bat or an arsonist’s match. It’s somewhere you can’t reach.

What will our leaders do about it? When people aren’t held to account, they tend to behave worse — especially if someone or something tells them it’s OK. In his infamous experiment on the nature of authority, Stanley Milgram asked people to administer what they believed were electric shocks to other participants, which they generally did if a man in a white doctor’s jacket told them it was OK. He called this “agentic shift” — the process by which humans shift responsibility to abstract processes and systems, and in the process lose their own sense of right and wrong. People are worryingly good at following orders without question. Adolf Eichmann, the chief bureaucratic mastermind behind the Holocaust, is history’s most infamous rule-follower, but there were thousands like him inside the Nazi machine, telling themselves that they were only following orders, and so they were not to blame.

The Adolf Eichmanns of the future will be hip, jean-wearing technologists and bureaucrats who confidently assure everyone that they need to follow the complicated data models and respect the analytics. Outsourcing morality to a machine, writes Virginia Eubanks in her book Automating Inequality, gives the nation:

“the ethical distance it needs to make inhuman choices: who gets food and who starves, who has housing and who remains homeless, and which families are broken up by the state.”

Some form of ‘ethical distance’ is probably necessary for fair and objective government, but if it goes too far, the result is decision-makers who see little relationship between their decisions and the effect on people’s lives. Smart machines will likely make things worse because rather than just following rules and making sure your little jigsaw piece fits, bureaucrats will have a machine to rely on, an intelligence apparently smarter and wiser than they’ll ever be. The ultimate form of deniability.

If, one day in the future, a world-ending cyberwar breaks out — the most likely form the bureaucalypse might take — it won’t be caused by SkyNet going rogue. It will be initiated by a group of well-dressed and well-meaning civil servants who lack the courage or conviction to disagree with the machine modelling and AI strategists that told them overall well-being would be improved by 13.2 percentage points, and that the risk of retaliation was minimal. Having spent the previous decades relying on machine advice for everything from music choices to cancer diagnoses, disagreeing with the supercomputers will seem impossible, maybe even immoral.

Obviously, we humans are too thin-skinned to give up on the idea that we’re the ones in charge, so we’ll still have the plebiscites, the MPs, the Select Committees and the opinion pages. But the whole point and purpose of democracy — to hold powerful people to account, to ensure well-informed citizens are ultimately in charge — would be reduced to a charade. Real power and authority will become centralised in a tiny group of techno-geniuses and black boxes that no-one understands.

If anything, as the range of problems politicians can actually solve shrinks, the fabricated outrage and manufactured disagreements will grow. Around the same time machines get to decide the most efficient tax rate, politicians will be literally throwing themselves onto pyres over survey question options or toilet signs. While, in the real world, algorithms will sort us by intelligence, ambition and attractiveness, politics will become at best an empty ritual, at worst a form of entertainment, like a WWE wrestling match. And the scariest thought of all is this: a world run by machines and rubber-stamped by humans who’ve forgotten how to think — all divorced from a democracy that has been reduced to pure content — might not worry people at all. In fact, plenty of us will probably quite enjoy it.


Jamie Bartlett is the author of The People Vs Tech (2018) and The Dark Net (2015).

66 Comments
ralph bell
2 years ago

A brilliant and foreboding article.
The wish to live for ever suddenly is becoming less desirable…
The current arguments with the tech giants about biased searches, political influencing and hateful comments already demonstrate the lack of accountability and passing the buck to the algorithms.

Prashant Kotak
2 years ago
Reply to  ralph bell

And my desire to live forever is growing exponentially. Like Della Lu in ‘Marooned In Realtime’, I want to look into the eye of the singularity.

Galeti Tavas
2 years ago
Reply to  Prashant Kotak

Prash, carnate eternity is the ultimate hell. I think our three score and ten is a good thing, and I will be ready when it is time. I have been about in the world a great deal, and I have seen so much suffering and misery, cruelty, and utter despair, that life has worn me out. I just can never really forget how cold existence is to suffering – us cossetted Westerners have it very good – but even amongst us I have seen too much to be unaffected, and in nature and the third world – I carry a huge burden from what I have seen of life there…. The longer I live the heavier the burden of what I have seen and understood lies on me, and I will not regret my passing – as I have not regretted my chance to see the world and existence, but it is tiring, and at some point I will just be worn out by it all.

Prashant Kotak
2 years ago
Reply to  Galeti Tavas

Yeah, I would happily take my chances with eternity if that were to come my way – although I suspect that will come a couple of decades too late for me. As to all the angst Sanford, well that’s just a biological response, and I don’t want to be a prisoner of something as random as our biology. I’m much more cold blooded than that, as you probably can tell from my posts.

Galeti Tavas
2 years ago
Reply to  Prashant Kotak

Yea, I got that from your post on the desirability of learning coding. And don’t trivialize my angst, it is real, and real because life is harsh, and the only thing I have ever found which justifies this endless circle of cruelty is the capacity for love, and also humans’ ability to feel aesthetics, wonder, compassion, empathy, charity, and to appreciate the ultimate and existence. Without our consciousness life is merely a circle of endless circling misery. I get where Buddhism and Hinduism come from – I do not agree with them, I find the Religions of the Book superior, but I get the concept of Juggernaut totally, I have seen it….

Prashant Kotak
2 years ago
Reply to  Galeti Tavas

My apologies, no offence meant. I just have a very “Life of Brian” type sense of humour because I’m a product of my generation. Joshing is all it is.

chris sullivan
2 years ago
Reply to  Galeti Tavas

Amen – and the really sad thing is that my son at 29 years old feels the same way – at least we had a few ‘somewhat free and naive’ years before we got to this point. I feel for the young, many of whom have missed out on a lot. I was 55 before I got a smartphone!

Galeti Tavas
2 years ago
Reply to  ralph bell

An extraordinary Piers Morgan article in the Daily Mail yesterday argued that everyone on the internet should have to prove their ID and use their real name, with aliases banned totally on all social media!

He uses some people getting their feelings hurt by racist comments to justify this. The thing is, everything you say online is stored – coupled with person searches that give your address and phone number, every person will have to STFU, or parrot the correct lines.

His article (but Piers is a horrible person in every way) got huge numbers of up votes! People agreed by a huge margin. That is the ultimate dystopia, the tracking from birth to death – coupled with the ability to be judged by every person and group out there. At least Government tracking and monitoring is somewhat disinterested by sheer volume. But to make one vulnerable to attack by any idiot/crazy if you speak what you think – that is true dystopia.

Hubert Knobscratch
2 years ago

AI is coming and there are millions of people in this country about to get a very rude awakening as to just how “key” they are to society functioning.
A good mate places executives in AI start-ups in the south east. He’s doing very well out of it. One of the venture capitalist funding groups has told him the funds available for a good AI start-up could be considered infinite.
My cousin is high up in international contract law – 6 figure salary. She has been advised to retire in the next 2 years, she’s 58 – otherwise she will have to take a meagre redundancy like the rest of her team. She has been told her role is being taken by AI.
The 4 lines of employment going first are:
Law – 85%
Finance – 75%
Medicine – 60%
Education – 55%
The world is going to be controlled by those who input the data, and most importantly those who sort it.
The meek will indeed inherit the earth.

Prashant Kotak
2 years ago

You surely mean “The geek will inherit the earth”.
And in fact that was what was said on that famous occasion of the Sermon on the Mount. Matthew misheard and misreported it in 5:5, because he didn’t have his hearing aid in. Impressively prescient of JC to have forecast Gates, Brin, Ellison, Musk, et al two millennia on. But then, you wouldn’t expect any less from the Son of God.

Galeti Tavas
2 years ago
Reply to  Prashant Kotak

Love it….

Brack Carmony
2 years ago

I think you’ve been listening to too many tech companies’ marketing departments, and not spending enough time talking to the people who have to actually make it.

Prashant Kotak
2 years ago

This is a very readable article, but challengeable on so, so many counts.
My starter for 10 (more challenges to follow):

“…The problem isn’t whether machine decisions are better or worse — that’s often a question of values anyway…”

Well, no. For example, take a simple case: medical diagnosis. Over time, the superiority of one over the other will be incontrovertible. There is nothing ‘values’ about this. And while some people will always choose human made decisions, most won’t, because it will be demonstrable that machine decisions are not just superior but far superior – and people don’t want to die for the sake of a belief system. That people won’t remotely understand what the algorithms or neural nets are doing is neither here nor there – they don’t understand how medicines work or how (human medical) staff make decisions. Face it, most people have understood less and less about most things that affect their lives since we left agrarian societies behind.

Mark Goodge
2 years ago
Reply to  Prashant Kotak

I think you’re missing the subtle, but important, distinction between “always” and “often”, there. Some decisions will have a clear, objective, “best” answer, but many will not.
In any case, even many medical decisions are values-based. Should we abort a child who is likely to be born severely disabled? At what point does a person’s quality of life matter more than their length of life? If someone is unhappy with their body, is it better to change the body with cosmetic surgery, or try to address the reasons why they are unhappy? Should people who engage in knowingly self-harming behaviour (such as smoking) be excluded from free healthcare for the problems that causes? And how do we allocate finite resources, when one person’s life-saving experimental treatment could pay a year’s salaries for a ward full of nurses?
AI might help us answer those questions. It may also help us to realise that there are no answers. But it will never answer them for us.

Alan Thorpe
2 years ago

Humans and computers can both be wrong. The article makes an important point but does not develop it. Engineers use models and testing but fundamentally they have to produce something that works and is reliable. Any mistakes are made in testing and we mostly don’t know about them. Compare that with modelling that attempts to forecast the future, such as climate modelling and the covid modelling. There is only one place for such information, which is the dustbin. None of it has ever been correct and it never will be.
The issue is why politicians accept rubbish and act on it. Plato had the answer: democracy results in the most incompetent being elected to the most important jobs. There is empirical evidence for both the climate and the pandemic to show that the models are not accurate, and not even close to being accurate.
It really does not matter where the information comes from for decision making when we have fools in charge. The answer is to stop putting trust in democracy, which is a failure. We need to severely limit the power of governments and take responsibility for ourselves. I don’t remember who said this: when government power is so limited that it does not matter who we vote for, we will have the correct size of government. The problem is that we now have a majority of the world population who are so dependent on government support they will not survive without it. Democracy has killed off freedom. That is the real problem.

Rasmus Fogh
2 years ago
Reply to  Alan Thorpe

So you think we should stop trying to forecast the future? No more market or sales forecasts to decide how many goods to produce or stock. No trying to estimate resource needs in hospitals for the next winter or the next epidemic. No economic forecasts to set economic policy. Don’t try to forecast the number of pensioners when you look at building retirement homes.
Really?

As for ‘Democracy has killed freedom’, who do you think will wield the power, keep order, and set policies when there is no government? If you want to try, I would recommend Northern Afghanistan, for now, no government in control, no effective police, and lots of guns. Send back a report once you have lived there in freedom for a year or so.

Galeti Tavas
2 years ago
Reply to  Rasmus Fogh

I would add that the Pashtun people of Afghanistan do not live under the Taliban as a conquered people. The Taliban are the ultra-traditional Pashtun Afghans, and as they see it they are taking their country back. It will be dreadful though, and a very great many will regret it very much. The unfortunate Hazaras, and Shia, are in for it though.
Like Iraq, we won the war and then made such a mess of the peace and reconstruction that we ended up doing great harm. Our political leaders are fools, criminally foolish.

Rasmus Fogh
2 years ago
Reply to  Galeti Tavas

You are right. Ironically, a better forecast of what would happen under occupation – or more listening to those who did make such forecasts – might have avoided or shortened our part of the war.

Andrew Fisher
2 years ago
Reply to  Alan Thorpe

A complete overstatement about models. Modern society would simply not be able to function without them. The weather forecast is based on models, right? What do we do instead, guess? No, we develop better models, stay aware of their limitations, and continue to be open minded and to calibrate them.

But perhaps that is a small quibble given your desire to abolish democracy and even government! Good luck with that one, absolutely zilch chance of it happening. First of all, free market based societies (if that is your ideal, as on its record it ought to be) cannot function without the state. What would actually happen were government to be abolished is that rather brutal warlords would take over. And this is what actually HAS happened, by the way, as in post-Qing dynasty China, not to mention modern Syria and scores of other examples. Organisations not too bothered about welfare or anything much else beyond obtaining power. And who are the plebs to say that the people in charge, who may be fools but have the guns, should not be in power? What would they do about it? These basic mafia-like structures might eventually evolve into some sort of more responsive and less overtly brutal government over time. Which I suppose is pretty much post agricultural revolution history.

Prashant Kotak
2 years ago

“…Real power and authority will become centralised in a tiny group of techno-geniuses and black boxes that no-one understands…”

Well, yes. And you have yourself provided the solution, if you want to retain some degree of control (well at least for a couple of decades), and a stake in the tech driven world. Learn to code. Seriously. Now. Everyone from toddlers to OAPs. And it needn’t cost you anything.

Because in truth, everything you need to do that is completely open to anyone who can get past the personal inertia we are all guilty of. For anyone wanting to learn coding or any aspect of IT, there is an absolutely vast quantity of free training material out there, free pdfs, websites by the bucketload, absolutely dazzling free tech, free IDEs, compilers, databases and a million experts willing to help you (eg stackoverflow). All it requires is a laptop (~£300), a good connection (~£100) and the will (~priceless). The Great Library at Alexandria was a shepherd’s hut in comparison to the sophistication of what pretty much everyone who wants it has access to, totally for free.

And as a message of booting up off of self-help, to several entire generations, someone somewhere needs to put this out there: shut down your FB account, stop sending tiktoks to your friends, sod the cat videos on youtube. Forget watching those old episodes of Friends for the hundredth time, instead become friends with code – you will get yourself a much better paid job (if you want) which you enjoy and even the covfefe will taste better. Download that Python IDE (free) and begin. If you want a thousand teachers, each of which will blow the socks off your average lecturer at Uni, where you can watch anything you don’t understand a hundred times over until you do, then get Pluralsight (~£30 pm) or one of a dozen others similar.
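For the sceptical: those first steps really are recipe-sized. A minimal sketch of the kind of thing a first tutorial walks through (nothing here is specific to any particular course or IDE):

```python
# A first program: a loop, a running total, a decision.
# These building blocks are where almost every beginner course starts.
total = 0
for n in range(1, 11):
    total += n
    print(f"running total after {n}: {total}")

if total == 55:
    print("1 + 2 + ... + 10 = 55")
```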

Rasmus Fogh
2 years ago
Reply to  Prashant Kotak

I make my living by coding – and I can tell you that this solves nothing.

The best example I know is the CV-filtering AI (at Google?) that gave good recommendations in line with historical data. When analysed, it was found that it explicitly penalised CVs that included words like ‘women’s soccer’ – because historically the more successful and promoted employees had been male, not female. This is obviously not what you want – not because it is discriminatory, but because it does not give the best employees. In fact, even if it was true that women were, on average, less productive employees (which is unlikely to be the whole story), the algorithm would still be unfairly penalising the best women, while not fully accounting for whatever factors made a lot of women less attractive candidates. Unfortunately the only way you can challenge it is to ‘check the workings’ of the algorithm. And since the algorithm is totally opaque and based on enormous quantities of data, that amounts to a major research project that would be a lot of hard work – even for those privileged few who were granted detailed access to the data behind the most likely proprietary algorithms.

There is actually a correct way to solve this kind of problem. In this particular case sex had been historically correlated with skill (and/or promotion), and there are proven techniques from epidemiology to separate confounding factors (like wealth) from the factors you are interested in (like diet or smoking). But, again, it is a major research project every time, and the people who own the data have little motivation to do it. More likely they will tweak their algorithms until the output meets the desired criteria (in this case explicitly upvote CVs from women and other politically favoured groups, thus favouring some less good women at the expense of better men) and hide behind the complex and seemingly objective nature of the algorithms every time anyone tries to challenge them.
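To make that failure mode concrete, here is a minimal sketch with synthetic data and an invented proxy feature (not any real system): sex is never an input to the model, yet a token that merely correlates with it picks up a penalty, because the historical labels were biased.

```python
# Sketch of proxy-feature bias. Synthetic data, invented feature names.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
is_female = rng.random(n) < 0.5
skill = rng.normal(0.0, 1.0, n)

# The proxy: appears mostly on women's CVs, says nothing about skill.
womens_soccer = (is_female & (rng.random(n) < 0.6)).astype(float)

# Historical "promoted" labels carry a bias term against women.
promoted = (skill - 0.8 * is_female + rng.normal(0.0, 0.5, n)) > 0

X = np.column_stack([skill, womens_soccer])
model = LogisticRegression().fit(X, promoted)
print(model.coef_)  # second weight comes out negative: the proxy is punished
```

The epidemiology-style fix mentioned above amounts to measuring the confounder and adjusting for it explicitly, which presupposes you know what it is and have the data.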

pavel
2 years ago
Reply to  Rasmus Fogh

“More likely they will tweak their algorithms until the output meets the desired criteria”… which implies the algorithms will make – ceteris paribus – worse decisions.

Rasmus Fogh
2 years ago
Reply to  pavel

Indeed. I rewrote my post to make that point clearer before I saw yours.

John Waldsax
2 years ago
Reply to  Rasmus Fogh

Thanks Rasmus for this helpful observation. I have installed tens of very large systems for global corporations. At the specification and installation level there are maybe a few technical specialists who really do or could understand and explain every module of code and every data dictionary. The chance of more than a very few users even knowing which questions to ask is very low. Most features specified will have their behaviour and embedded assumptions unexamined for years. (See the shocking average age of banking systems.)
Also, the initially user-set policies and system constants are individually fairly straightforward, but when, as in a typical ERM system for a corporation, there are 4,000 of them, most set with at best a few minutes’ thought by a low level manager, the chances of the operators, let alone the users, understanding the behaviours are again low. Computers (I am no AI specialist) may indeed be mighty clever and their data management accurate and inexorable, but the human inputs are normally modestly skilled, with knowledge focussed in limited areas. They are also lazy, and if the outputs they get appear OK they will not or cannot raise a fault report. The large systems already defy actual policy audit by their sheer scale; we don’t need to wait for AI.

Galeti Tavas
2 years ago
Reply to  Rasmus Fogh

The great fear is maybe finding that women’s soccer and such gender/social key words may actually be relevant in a great many jobs and HR decisions – as I believe, know actually, that different genders have roles which overlap, but have sides of the bell curve they tend towards. The same is true of cultures, and every other subdividing of humanity.
AI HR may finally bring home the case that not all are just the same, that in fact gradients along types are real.

Rasmus Fogh
2 years ago
Reply to  Galeti Tavas

Good point. But even if they did, being a woman would not *in itself* make anyone worse – it would just be that some negative (or positive) traits were more common in women and less common in men. It would still be unfair – and stupid – to mark an individual down for their sex.

A Spetzari
2 years ago
Reply to  Prashant Kotak

Yes have to agree with Rasmus here. This will solve little. Learning C++ or Python won’t help you much when it comes to getting insurance or knowing what Google Maps is doing with your location data.
These systems are so big and complex, systems built upon systems and systems. As the article points to regarding accountability, no one person fully understands the whole piece even on moderately mundane functions in the apps, systems and software we use daily. This is why they are ultimately unaccountable.

Prashant Kotak
2 years ago
Reply to  A Spetzari

What happened to Rasmus’s post?? I was going to respond to that but it seems to have gone. Complexity is certainly an issue, but one thing I know: those who understand code have a far better chance of not only grasping what the systems do but also of not becoming victims of them. Understanding of what algorithms are and what they do, and how they give rise to non-algorithmic systems like *neural nets is a precursor to understanding how the algorithmic ecosystem affects us at the macro level – I mean the level of aggregated effects.
*(Standard feedforward neural nets are not for example Turing Complete, although other types are)

A Spetzari
2 years ago
Reply to  Prashant Kotak

No, fair points – and from that perspective I do agree to an extent. Increasing your understanding of these systems at any level will only help and inform you better as an individual.
But in a wider sense it is – forgive my expression – pi$$ing in the wind somewhat. If an insurance company uses a certain engine, there’s little I can do as a consumer. Heck, I won’t at my end have any idea how their system works, so I cannot even make an informed choice, even if I could see the underlying system.
Same goes for big blanket apps such as shopping websites, booking systems, dating apps. Understanding the high level concepts underpinning them doesn’t really inform your choice that much as a consumer, as you cannot see what you’re buying into.
Much like someone who knows nothing, you can either use their product or not. But it’s hard to make that choice from an informed standpoint with regard to their use of AI.
And as we have seen with online in general – there comes a point where you are distinctly disadvantaged if you choose not to use it.
(not sure where Rasmus went either!)
(not sure where Rasmus went either!)

Rasmus Fogh
2 years ago
Reply to  A Spetzari

My post is there now. I think there was an ‘under moderation’ flag at some point?

It is always useful to understand better how these systems work. But the key points, as more or less said in the article, are that they are opaque (not even the people who made them can generally tell why they take a specific decision in a specific case), they depend on matching to huge amounts of data, they are generally proprietary and secret, and they are clearly manipulable by those who control them. That opens a great space for the owners to avoid responsibility, even as they can manipulate the outcomes.

Being able to program does not really do much about these disadvantages.

Saul D
2 years ago
Reply to  Rasmus Fogh

Ironic. A discussion about AI taking control and a post gets flagged for moderation, removed and then returns with no-one, not even the programmers here, knowing why…

Prashant Kotak
2 years ago
Reply to  Saul D

And that ladies and gentlemen, as any coder could tell you, is the problem of black-box systems, where you don’t know what the algorithms are doing.
An illustration: a while ago I acquired a Chromecast-clone type dongle (Chinese of course) to allow me to project my laptop or mobile screen onto the television. It works by priming the dongle to connect to the home wireless, and then plugging the dongle into the TV’s HDMI. On my laptop, I connect to the hotspot wifi the dongle generates instead of my home wireless. This bridges me back to my home wifi but also allows my laptop to show its screen on the TV.
The point is: suppose I access my bank account while connected via the dongle. If this software can snap my screen, how do I know it’s not sending back a few dozen encrypted jpegs of my screen and the keystrokes every time I access known bank urls?
I think this problem can only be solved by an increasing requirement to make the software Open Source, and packaged software will then come about that can verify the source as safe, build it locally, and inject it back into the firmware of the device you have purchased. A well packaged, supercharged version of Docker could do it. It goes without saying that the firmware in the devices would need to be made Open Source too. Without something like this, a major global hack that sits dormant for months, and then strikes globally, robbing billions is an ever increasing likelihood. And that of course is the least of it.
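For the narrow ‘verify the source as safe’ step, the plumbing at least is simple. A sketch, assuming the vendor publishes a SHA-256 digest of the firmware image over some trusted channel (the filename and digest here are hypothetical):

```python
# Sketch: check a downloaded firmware image against a published SHA-256
# digest before flashing it. Filename and digest are hypothetical.
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

PUBLISHED_DIGEST = "..."  # taken from the vendor's signed release notes

image = Path("dongle-firmware.bin")
if sha256_of(image) != PUBLISHED_DIGEST:
    raise SystemExit("checksum mismatch - do not flash this image")
print("image matches the published digest")
```

Of course a checksum only moves the trust to whoever built the image, which is why the reproducible open source build matters more than the hash itself.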

Norman Powers
2 years ago
Reply to  Prashant Kotak

Open source doesn’t really help, there aren’t enough people willing to do the audit work. Look at OpenSSL – open source, an endless source of fatal bugs which had been there for years. And keeping up with something bigger, like a whole OS? Impossible. Android is open source, I bet there’s not a single person alive who actually has read it all and also keeps up with reading new versions.

Prashant Kotak
2 years ago
Reply to  Norman Powers

Although no (or very few) single person can keep tabs on the full code of an OS, the varieties of Linux for example are maintained by (a few thousand) kernel engineers across the globe. But yes, there aren’t enough devs in the world to verify the explosion of proliferating software across the world even if it was all open sourced. I nevertheless believe we should pressure for all commercially used software to be open sourced, so at least it can be checked after the fact if problems and hacks arise – I believe this would be a huge help.
When I said earlier “…this problem can only be solved…” there is in fact another way, that won’t (yet) be taken, which is: asserting Program Correctness – verifying the correctness of a piece of software against a specification. This naturally doesn’t apply to adaptive systems like neural nets, but it can apply to all standard algorithms. But it would superimpose a cost on developing systems which is for now unfeasible – most commercial coding in effect relies on being written in an ad-hoc, bespoke, short-cutty way that is not amenable to being pushed through Program Correctness verifiers.
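As a rough illustration of ‘correctness against a specification’, here is the weakest, runtime flavour of it; a real verifier would prove these conditions hold for all inputs statically, rather than checking them on each call:

```python
# Sketch: executable pre- and postconditions. Formal verification would
# discharge these obligations with a prover instead of runtime asserts.
def binary_search(xs: list[int], target: int) -> int:
    # Precondition (part of the spec): the input must be sorted.
    assert all(xs[i] <= xs[i + 1] for i in range(len(xs) - 1))
    lo, hi = 0, len(xs)
    while lo < hi:
        mid = (lo + hi) // 2
        if xs[mid] < target:
            lo = mid + 1
        else:
            hi = mid
    result = lo if lo < len(xs) and xs[lo] == target else -1
    # Postcondition: a valid index of target, or -1 only when absent.
    assert (result == -1 and target not in xs) or xs[result] == target
    return result
```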

A Spetzari
2 years ago
Reply to  Norman Powers

This. The systems are too complex and the interrelations too niche for any one person to fully “get” what’s going on all the time, and that’s not even in systems with complex AI involved.

Galeti Tavas
2 years ago
Reply to  Rasmus Fogh

“I think there was an ‘under moderation’ flag at some point? It is always useful to understand better how these systems work. But the key points, as more or less said in the article, are that they are opaque”

They are! I get moderation to the point it seems a dart board is what the algorithm is using, that and my history I guess.

Galeti Tavas
2 years ago
Reply to  Prashant Kotak

“Learn to code. Seriously.”

Prash, if I had to learn to code I would take my old revolver and a tumbler of whisky into the sitting room and just end my misery.

Prashant Kotak
2 years ago
Reply to  Galeti Tavas

Why? Anyone who can follow a process and a bunch of instructions laid out as a series of steps can learn to code. It’s no different to writing, then executing, a recipe from a cookbook. The rest is just technique. Anyone can learn to code.

Galeti Tavas
2 years ago
Reply to  Prashant Kotak

Prash, I do not believe that. “Anyone can learn languages and ballet” is not true either. My brain has odd wiring, dyslexia and authority-defiant disorder, and who knows what weird, undiagnosed, other issues. I cannot learn languages well at all, and once I stop using them they melt away, while other family members have musical ability (I have none) and gifts for languages.
Coding to me would be an agony – back in my school days we had computer classes with a giant punch-card system, and I did a short stint on learning that – and it was hellish, not my thing at all.

Prashant Kotak
2 years ago
Reply to  Galeti Tavas

That’s fair enough

Michael Sweeney
2 years ago

When I land at LaGuardia Airport, I want a former Navy or Air Force Pilot who has been “battle tested” in extreme conditions. I will pass on the Silicon Valley Programmer modeling the 100 year weather scenario or the latest bird crossing. See Sully Sullenberger, they made a movie about him. The movie was kind of boring actually, and that is what I want!!
There is no “artificial intelligence” when it is coded by a human being. It is just a matter of cost where you place the human(s) in your model.

Michael Coleman
2 years ago

This interesting piece was not at all what I was expecting. The title and subtitle could just as well have been applied to the expanding use of ranked choice voting, as exhibited by the opaque and doubt-inducing mess that was the recent NYC mayoral primary. As pointed out in another recent UnHerd article, the winner of such a vote lacks some legitimacy due to the long wait for the machine to pick the winners using a hard-to-verify process.

Alan T
2 years ago

This is why advertising for banking, for instance, tells you nothing about their banking service. Banking is systems based, and there is little to distinguish one bank from another, or if there is, it’s not something they’ll want to shout about. So instead they boast about their “values” and support for fashionable causes. I recently received my bank’s Ethics in Action email, which had very little to do with banking apart from an item about their “100% electric van…providing support to customers in communities” that now, of course, no longer have branches.

Dennis Boylon
2 years ago

The problem with the Gates, Bezos, Schwab, WEF view of the world is that it is based in power lust and greed. Their solutions do not work and have been failures. You can see this in Brexit, Trump, tensions in the EU, and the peoples’ hatred of the self proclaimed “elite”. We don’t believe in their solutions and we don’t believe in their AI. AI does not work to better humanity. It is clearly a failure in that regard. With all these computers and AI programs is life getting better? LOL. The models and health AI have humanity scared into their homes and covering their faces so they can’t breathe properly.

The only question is whether the self proclaimed elite can succeed in their true goal: to return to a feudal existence where resources are tightly controlled and only allowed for the powerful and wealthy. These people hate and despise humanity and work to undermine humanity at every turn. Their goal is to gain complete control and access to resources. To limit human flourishing and human growth. It is an anti-human agenda.

What do humans need to flourish? Clean water, clean air, natural food, energy, good sanitation infrastructure, easy travel and mobility. There is a war being waged to limit all this to the powerful and wealthy. A smaller population might be able to exist with advanced robots and computing. All of humanity can not. There will never be enough robots and computers to serve everyone and this is what they do not tell you. I would ask people to listen to Patrick Moore discuss his pro-human philosophy. He recently has done interviews on Triggernometry and with Tom Woods. It is the opposite of the brain washing you have been receiving for decades.

At the end of the day humans do not need AI or computers. These will not make the world better. They will be used to limit humanity and keep humanity caged. This is the Great Reset. This is Build Back Better. It should be obvious at this point.

In bringing up the door to door knocks Biden is planning for Americans, I would like to point out what true opposition to tyranny looks like. This is from an anonymous poster on a different blog: “In the question of who will go banging on doors in the vaxx-nagging campaign, I think it safe to say that it will DEFINITELY be federal goons (at least in those areas most likely to resist, which includes my neck of the woods) because NO ONE, not even a desperate, talentless, otherwise-unemployable Amoricon useless eater, is foolish enough to put their life at risk by wandering into “enemy territory” to try to sell salt to a snail. My neighbors and I are already planning for when those marching morons decide to trespass on our properties, and we are absolutely serious about resisting with ANY means at our disposal. People already avoid us for good reason (we’re rural, making us both inconvenient and hostile to city folk) and we would almost pity the fools that would go out of their way to cause trouble out here. The only question most of us are asking is what new firearms we want to add to our arsenals. I would prefer easily portable, lighter caliber firepower, which would work fine against “volunteers.” However, if the feds decide to send in the heavy artillery, that will be a different matter altogether. Maybe an actual neighborhood “militia” that stands watch for approaching invaders needs to be considered. I really think that it’s going to come to that sooner rather than later.”
This is the correct attitude to what ails us. I left the city a long time ago. I lost a lot of friends in covidcon because I would not follow it. I made some new friends and became closer to other friends who also rejected this totalitarianism. We are growing food. Securing access to water. Where I live there is plenty of wood to use as energy. We are installing solar. We are arming ourselves. We are fishing, foraging, and hunting. I am in the Pacific Northwest where there is plenty of open space and access to resources. This is what you need. You must fight for it and you must believe in humanity. If we let these evil people convince us that humanity is terrible and a curse to the earth we will not survive. If we let our humanity thrive, build, expand, grow we will succeed. They are few and use their wealth and power to limit access and stop human flourishing but there are too many of us if we rebel. We will deny their edicts. We will take what they unjustly claim as their own and free human civilization from their evil. I’ve heard the usual naysayers talk about how it is impossible to fight back. Look at the weapons they have. To that I say look at Afghanistan. Look at Iraq. Look at Syria. Look at Yemen. The US military is terrible at winning wars against a lightly armed highly motivated population. I think most of the military is against the billionaire crowd too after being used as pawns in losing causes for decades.

Brack Carmony
2 years ago

I find the problem is the technology is a bit of a red herring. We talk about the dystopia of someone getting fired from their job because a machine was following their activity on a social media site, and decided they were too distasteful to be allowed to keep their job. But would you really feel better if they just had 5 people trolling through your social media to determine if you were worth keeping? Shouldn’t the real complaint be that they want what you say on social media to be enough to strip you of your job?
The horror of someone’s political opinions being manipulated based on what articles they read. As if newspapers and the media haven’t been doing this for years. Someone overly fretting over their food intake to achieve some fictitious perfect state. As if vitamin manufacturers don’t already play on these fears, and people need to learn to actually listen to their own body and respond accordingly.
Automation may make it easier to apply at scale, but that’s the wrong place to draw the line. We need to fight against handing over our personal decision making to others regardless of whether tech is involved.
As far as the problem of self driving cars, we have to recognize we let teenagers drive cars regularly. So we need to be careful about not letting perfection be the enemy of the good. There is no system that is going to be perfect, but if self driving cars reach the point they are an order of magnitude safer per mile driven, can we really complain?
I guess at the end of the day, just because the robots are made of flesh, it doesn’t make the dystopia less dystopian.

Deborah B
2 years ago

Excellent article. In my future world view, AI will come to the conclusion that humans are actually no longer relevant or sustainable. Why would vast tracts of land be required to grow food when it could be better used to generate power or to house server farms, for example?
We may well end up being the slaves of this giant technological world that we are so busy building.

Alan Hawkes
2 years ago

Should not the obvious final step be to let an algorithm determine the results of elections? We might end up with a lot of councillors, and some MPs, named Abstain, taking their seats.

Norman Powers
2 years ago
Reply to  Alan Hawkes

I feel like that has already happened, and the article (though good) has somehow missed that this dystopia is right now.
Consider our current situation. Our lives are ruled by the pronouncements of academic modellers presenting predictions as to what COVID will do next. These predictions come out of machines; they’re the results of programs, usually written in R but, in Ferguson’s case, written in C (with disastrous results, as the programmers on this board can probably imagine).
Although the machines theoretically aren’t generating decisions, in practice they are, because their predictions are invariably so extreme that they appear to leave only one possible path open, and the people who run those machines conveniently lay out what that is. Faced with this “mathematical” decision making, our politicians have – as a class – voted themselves out of existence. Across all political parties they now simply vote to not vote. Ministers can’t simply vote to abolish themselves but have done the next best thing, reducing themselves to mere automatons before the pronouncements of SAGE. Accountability has indeed vanished: their predictions and thus decisions are never correct, but no learning is possible because we don’t even get as far as a meaningless investigation; the possibility that their algorithms have failed is simply never acknowledged.
This pattern was also seen with Brexit, is seen with climate change, and will certainly be seen with many other issues in future.
So elections may still exist, but they’re about as useful as elections in the USSR were because the candidates on offer almost all believe with complete conviction that the pronouncements of “mathematicians” with models can’t be argued with. And there is no accountability because the government funds so many they can all just point at each other, shrug and say “eh, science changes, talk to the hand”.
To get a grip on this will require politicians who aren’t deeply afraid of maths, and who are brave enough to cut the hydra off at its head (the funding). I don’t hold out much hope.

Rasmus Fogh
2 years ago
Reply to  Norman Powers

Give it a rest, willya? So a lot of scientists came up with a result you did not like. And your solution is to stop funding science?

It is perfectly possible for politicians to analyse what they get, complete with uncertainties, and take a responsible decision. The Danish premier overruled her own health bureaucracy (to have a more severe lockdown) and took public responsibility for doing it. Boris Johnson prefers to decide whatever will give good headlines tomorrow and pretend it is someone else’s responsibility. We all know that. If you do not like it there is a simple solution: vote for someone else. And if you really want someone who will disregard the doommongers and keep society open, vote for someone like Bolsonaro.

Finally, Ferguson’s programs have been checked, tested, and found to do exactly what it says on the tin. The code looks kind of horrible (as happens when scientists have to rewire a ten-year-old program in a desperate hurry) – but the programs work, and the underlying assumptions were put forward for anyone to look at. You just do not like what they told you.

Mangle Tangle
2 years ago

Uuum. Yes to the idea that algorithms might influence lots of things, but that hardly means they necessarily replace something currently wonderful, fair and transparent. Like HR departments, for example. As for the rest of this big picture stuff (“If, one day in the future, a world-ending cyberwar breaks out…”), well, that’s pure science fiction. Fun, but not seriously going to happen because machines decide so.

GA Woolley
2 years ago

The problem with articles such as this is that they project some all-powerful influence onto western society as it is today, not as it might evolve as that influence evolves and they interact. Why would jobs, CVs, taxation, politics, dating etc in 2050 be anything like what they are today if the pace of technological change is as projected? An influence as simple as Covid has dramatically changed social and professional patterns and conventions in 18 months; what will AI do in 30 years?

Prashant Kotak
2 years ago
Reply to  GA Woolley

Exactly. We can project based on current trajectories, and the prediction may even be right, but nothing is set in stone. Everything is in flux. People react to conditions, as will the machines we create. The assumption that the current situation is some kind of plateau is false.

Dennis Boylon
2 years ago
Reply to  GA Woolley

Their AI modeled a fake virus as a world ending threat. Look at the damage it has brought. At this point in history we would be better off shutting down the internet, throwing out our cell phones, and going back to pen and paper. This “progress” has been a disaster

Prashant Kotak
2 years ago
Reply to  Dennis Boylon

Agreed, but why just back to pen and paper? Why not back to the cave? I can call you Gwarr. And Gwarr, when you call me, call me Ugg. But in fact why stop there? All the way back to the primordial swamp I say, and no more problems with anything complicated, like computers or tax returns, because all we will all be able to do then is say ‘glug’!

steve.g.fuller
2 years ago

This is communism of the 21st Century. Exactly how things were managed back then. And those with the right contacts will rise to the top…

Prashant Kotak
2 years ago

Something to ponder for all those who are attempting to draw distinctions between decisions made by humans and human systems, and decisions made by algorithms and algorithmic systems (which may not themselves be algorithmic):

“You insist that there is something a machine cannot do. If you tell me precisely what it is a machine cannot do, then I can always make a machine which will do just that”
– John von Neumann

Dennis Boylon
2 years ago
Reply to  Prashant Kotak

Ok… have the machine make peace and brotherly love break out between all members of the conservative Christians and the woke BLM/antifa crowd. Go… I’ll give you a year. Give me status updates every month.

Prashant Kotak
2 years ago
Reply to  Dennis Boylon

Yeah, if you can show me that a human agency can deliver the outcome of ‘peace and brotherly love’ then we can make a machine which will do exactly what the human agency did, by following the same steps as the human agency took. And (you might quite like this) unlike the huge salaries paid to those functionaries and politicians in, for example, the EU, the algorithms will ask for none of that – just a little energy.
And that of course is the point: the EU for example is very keen on a ‘rules based order’ and ‘rules based processes’, but as any coder will tell you, ‘rules based’ anything can *always* be turned into algorithms.
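A trivial sketch of that last point, with an invented rule (no resemblance to any actual regulation): once a rule is explicit enough to be followed, it is already a program.

```python
# Sketch: an explicit "rules based" process is already an algorithm.
# The rule and thresholds below are invented for illustration only.
def grant_approved(budget_eur: float, member_states: int,
                   audit_passed: bool) -> bool:
    if not audit_passed:
        return False
    if member_states >= 3 and budget_eur <= 500_000:
        return True
    return budget_eur <= 100_000

print(grant_approved(250_000, 4, True))  # True
print(grant_approved(250_000, 2, True))  # False
```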

Matt B
2 years ago

Has the ring of a Houellebecq novel – or JG Ballard, Ray Bradbury and the ‘Ted’ talk.

Prashant Kotak
2 years ago

One thing that this piece has missed discussing: what is likely to happen from here on in is that decision making will be voluntarily surrendered by human agencies to algorithmic ones, some of it in the name of ‘fairness’. Human decisions will come to be seen as ever more error prone in the face of ever greater accuracy and consistency from algorithmic systems, compared to the (of necessity) somewhat random nature of individual human decisions. And ever-rising systems complexity will in effect prevent machine decisions from being challenged, because notwithstanding that there won’t be clear-cut explanations of individual decisions, it will nevertheless be possible to show that in aggregate machine decisions are more accurate and lead to better outcomes. Obviously we are not at that point yet but I believe this is no more than a decade or so away, perhaps sooner.

Norman Powers
2 years ago
Reply to  Prashant Kotak

A problem here is the assumption that algorithms/AI will always yield superior results to human decision making, at least given enough time.
As a programmer I say – I wish! That would be nice. Very often algorithms are worse at decision making than humans. We use them anyway because they’re fast, cheap and predictable which has value too. This is especially true for AI where “solved” is normally defined as equal to human level performance on a given task, not better.
But there’s another, more subtle problem: this article and many of the comments are conflating AI with all algorithms. Most of the algorithms making really terrible decisions right now aren’t AI based at all, they’re just encodings of the authors’ own beliefs about the world. In theory AI is very powerful because it can learn things that are true from data. In practice, it’s sometimes too good at this, and there’s a huge amount of tension within the field now because AIs keep learning things that are true but politically inconvenient.
Rasmus Fogh’s example above is good, about some Google AI learning that women don’t get promoted as much. There are many others. One Google AI when I worked there was told to read the internet to learn things, and it learned stuff like “George W Bush is a former US President and war criminal”. It wasn’t intentionally politically biased but it decided what’s true by looking to see if anyone is disagreeing with a statement, and some terms like “war criminal” are used exclusively by the left, so the internet had filled up with statements that “GWB is a war criminal” and conservatives hadn’t even bothered to refute them because to a conservative mind the whole concept of being a war criminal is a suspicious one to begin with (e.g. it implies the existence of and superiority of international law, which is a contentious matter).
So there’s a whole subfield of AI devoted to effectively brainwashing their own creations into believing things that are not true but which the authors wish were true. It’s not very easy due to the aforementioned understandability issues.
As a consequence, all algorithms, AI or not, will ultimately end up as expressions of their creators’ beliefs. We can obtain accountability and control by simply learning how to spot the methods by which algorithm creators are encoding their own agendas into the results, and holding the creators accountable for that.
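The non-AI case is the easiest to picture. A sketch, with every weight invented: a ‘decision algorithm’ that is nothing but its author’s opinions with numbers attached, so auditing it means asking the author to justify each line.

```python
# Sketch: hand-written scoring, no learning involved. Every weight is an
# invented authorial belief, which is exactly the point being made above.
def tenant_risk_score(years_employed: float, has_pet: bool,
                      postcode_band: int) -> float:
    score = 50.0
    score -= 4.0 * years_employed      # author believes tenure matters
    score += 10.0 if has_pet else 0.0  # author dislikes pets
    score += 5.0 * postcode_band       # author's hunch about postcodes
    return score

print(tenant_risk_score(3.0, True, 2))  # 58.0: a worldview, scored
```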

Rasmus Fogh
2 years ago
Reply to  Norman Powers

Interesting example. Thanks.

Brack Carmony
2 years ago
Reply to  Norman Powers

If Y2K taught us anything, it’s that the real problems programmers bring up will be massively misunderstood by the journalists writing about them.

David Yetter
2 years ago

So, the time for a Butlerian jihad comes long before we try to actually make machines which are “in the likeness of a human mind”.

pdrodolf
2 years ago

Already seeing the referenced “lack of accountability” in big tech when someone is shut down/deplatformed for posting something big tech doesn’t like. The excuse is that it was a glitch or an algorithm error.