
Will humans survive the rise of the machines? The rise of AI is changing the way we perceive the world


May 20, 2024   6 mins

If the American futurist R. Buckminster Fuller was right, as he always was, then the boundaries of human knowledge are forever expanding. In 1982, Fuller created the “Knowledge Doubling Curve”, which showed that up until the year 1900, human knowledge doubled approximately every century. By the end of the Second World War, this was every 25 years. Now, it is doubling annually.
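Fuller's claim is easy to put in numbers. As a rough sketch (the doubling periods are Fuller's, the decade-long window is an arbitrary illustration), compound doubling looks like this:

```python
def growth_factor(years: float, doubling_period_years: float) -> float:
    """How much a stock of knowledge multiplies over `years`
    if it doubles once every `doubling_period_years`."""
    return 2 ** (years / doubling_period_years)

# Fuller's rates: ~100 years pre-1900, ~25 years by 1945, ~1 year today.
pre_1900 = growth_factor(10, 100)   # a decade of pre-1900 growth: ~1.07x
post_war = growth_factor(10, 25)    # ~1.32x
today = growth_factor(10, 1)        # 2**10 = 1024x in a single decade
```

The same ten years that once added seven per cent to the stock of knowledge now multiplies it a thousandfold, which is the whole of Fuller's point.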

How can humans possibly keep up with all this new information? How can we make sense of the world if the volume of data exceeds our ability to process it? Humanity is drowning in an ever-widening galaxy of data — I for one am definitely experiencing cognitive glitching as I try to comprehend what is happening in the world. But being clever creatures, we have invented exascale computers and Artificial Intelligence to help manage the problem of cognitive overload.

One company offering a remedy is C-10 Labs, an AI venture studio based in the MIT Media Lab. It recognises that our ability to collect data about the human body is rising exponentially, thanks to the increasing sophistication of MRI scans and nano-robots. And yet a radiologist’s workload is so high she can’t possibly interpret all that data. In many cases, she has roughly 10 seconds to interpret as many as 11 images to judge if a patient has a deadly condition. It’s far quicker and more reliable to use AI which, in combination with superfast computing, can scan the images and find hints of a problem that a human’s weary eyes and overloaded mind might miss. This will save lives.

Yet AI is a greedy creature: it feeds on power. Last year, the New York Times reported that AI will need more power to run than entire countries. By 2027, AI servers alone could use between 85 and 134 terawatt-hours (TWh) annually. That’s similar to what Argentina, the Netherlands and Sweden each use in a year. Sam Altman, CEO of OpenAI, realised AI and supercomputers could not process all this data unless we find a cheaper and more prolific energy source, so he backed Helion, a nuclear fusion start-up. But even if we can power the data, can we store or process it at this pace?

One answer to the storage problem is to make machines more like humans. As Erik Brynjolfsson says, “Instead of racing against the machine, we need to race with the machine. That is our grand challenge.” How will we do this? With honey and salt. Earlier this year, engineers at Washington State University demonstrated how to turn solid honey into a memristor: a circuit component that, unlike a conventional transistor, can both process and store data. If you put a bunch of these hair-sized honey memristors together, they will mimic the neurons and synapses found in the human brain. The result is a “neuromorphic” computer chip that functions much like a human brain. It’s a model that some hope will displace the current generation of silicon computer chips.


This project is one part of a wider “wetware” movement, which works to unite biological assets with inanimate physical ones; organisms and machines. Yet the wetware movement sees DNA, not honey, as the ultimate computer chip: salted DNA, to be precise. The salt allows DNA to remain stable for decades at room temperature. Even better, DNA doesn’t require maintenance, and files stored in DNA are cheaply and easily copied.

What makes DNA so special is that it can hold an immense amount of information within a minuscule volume. “Humanity will generate an estimated 33 zettabytes of data by 2025 — that’s 3.3 followed by 22 zeroes,” says Scientific American. “DNA storage can squeeze all that information into a ping-pong ball, with room to spare. The 74 million million bytes of information in the Library of Congress could be crammed into a DNA archive the size of a poppy seed — 6,000 times over. Split the seed in half, and you could store all of Facebook’s data.”
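The scale of these claims can be checked with a back-of-envelope calculation. The figures below are idealised textbook values (two bits per nucleotide, roughly 330 g/mol per single-stranded nucleotide), not numbers from the article, and they ignore error-correction and packaging overhead:

```python
# Theoretical DNA storage density, under idealised assumptions.
AVOGADRO = 6.022e23          # molecules per mole
NT_MASS_G_PER_MOL = 330.0    # approx. molar mass of one DNA nucleotide
BITS_PER_NT = 2              # A, C, G, T encode two bits each

bytes_per_gram = BITS_PER_NT * AVOGADRO / NT_MASS_G_PER_MOL / 8
# ~4.6e20 bytes per gram: hundreds of exabytes in a single gram of DNA

grams_for_33_zettabytes = 33e21 / bytes_per_gram   # roughly 72 g of DNA
```

A few tens of grams of DNA for all the data humanity produces in a year is the order of magnitude behind the ping-pong-ball image.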

Yet even if — thanks to honey and salt — we can capture, store and process this fast-growing galaxy of data, what will humans do with it? Will we actually make sense of it? Put another way, does the human brain have a Shannon limit? The American mathematician Claude Shannon showed that there is a “maximum rate at which error-free data can be transmitted over a communication channel with a specific noise level and bandwidth”. Traditionally, Shannon’s theory is applied to technology: a telephone line, a radio band, a fiber-optic cable. But Brian Roemmele argues that the human brain, too, has a Shannon limit — and that it is 41 bits per second, or 3 million bits per second for visual input. With the breadth, depth and speed of new information doubling all the time, can the human mind keep up?
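Shannon’s limit has a precise form, the Shannon–Hartley theorem: capacity grows linearly with bandwidth but only logarithmically with signal-to-noise ratio. The telephone-line numbers below are standard textbook values used for illustration:

```python
import math

def shannon_capacity(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon-Hartley limit: the maximum error-free bit rate of a
    channel with a given bandwidth and signal-to-noise ratio,
    C = B * log2(1 + S/N)."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# A classic example: a ~3 kHz telephone line at ~30 dB SNR (1000:1)
# tops out near 30 kbit/s, no matter how clever the modem.
phone_line_bps = shannon_capacity(3_000, 1_000)
```

Whatever the true figure for the brain, the point stands: past some ceiling, more input no longer means more understanding.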

Probably not. The weight of all this knowledge is crashing against the limits of the human mind. We may be able to compress the whole of Facebook’s information to half a poppy seed, but can the human mind survive contact with that poppy seed? Perhaps not. Something’s got to give. It will be us.

The Canadian philosopher Marshall McLuhan suggested this years ago. He thought that information overload changes how we think and act. When there is too much news to process, we stop assessing individual news stories and start analysing the source: I don’t trust it if it came from CNN; I trust it if it came from Fox News. Too much information makes us more tribal and less analytical.

Just as political tribalism is a human response to knowledge overload, so, according to Princeton’s Julian Jaynes, is consciousness itself. Jaynes argued that consciousness is simply a coping mechanism humans developed once we started living cheek by jowl in ancient cities in the 2nd millennium BC. The stress of managing interactions between strangers with very diverse cultural backgrounds, languages and behaviours was so great that the human brain increasingly split the work between its left and right hemispheres, and began to analyse and absorb. This was the beginning of consciousness: of listening to the voices in our heads, turning to metaphor to explain reality and developing the skill of introspection.

Later, René Descartes proposed another split. This time, humans would split the head from the body. The Cartesian Revolution ruled that the left side of the brain handles the serious stuff: logic, rationality and the scientific method. These endeavours were deemed worthy of our time and energy. The right-brain — which includes emotions, anything mystical or inexplicable or unprovable — was cast aside along with the body. The State got the decapitated head, and the Church got the body.

Now, AI is using its God-like powers to reunite them. With the development of AI and wetware, we are witnessing the beginning of the end of the Cartesian era of human history. The split between the mind and the body no longer makes scientific or practical sense: humanity is increasingly knitting the two back together again and approaching reality more holistically (not that they were ever actually separate — we humans have always been wetware).

This realignment is changing the way we manage risk and uncertainty, for instance in financial markets. David Dredge, a former colleague of mine and the founder of Convex Strategies, argues that it’s not only the end of the Cartesian era, but also the end of the Sharpe World. For a long time, financial markets have relied on the Sharpe Ratio, which compares the return of an investment with its risk. But, in this new post-AI environment, that old measuring stick no longer works. Dredge refers to the concept of “Wittgenstein’s ruler: Unless you have confidence in the ruler’s reliability, if you use a ruler to measure a table, you may also be using the table to measure the ruler.”
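The Sharpe Ratio that Dredge says no longer works is itself a one-line formula: excess return divided by volatility. A minimal sketch, with purely hypothetical returns for illustration:

```python
from statistics import mean, stdev

def sharpe_ratio(returns: list[float], risk_free_rate: float = 0.0) -> float:
    """Excess return per unit of volatility -- the 'measuring stick'
    Dredge argues is now a Wittgenstein's ruler."""
    excess = [r - risk_free_rate for r in returns]
    return mean(excess) / stdev(excess)

# Hypothetical monthly returns, for illustration only
ratio = sharpe_ratio([0.02, 0.01, 0.03, 0.02])
```

The ruler problem is visible in the code: the denominator is measured from the very same data it is supposed to judge, so if the return series misbehaves, the ratio measures the ruler as much as the table.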

In the post-AI world, it may be that all the data starts to fall into patterns, perhaps fractal-like repeating patterns. Our job won’t be to guess the price of the S&P or the Yen anymore but to rely on AI and supercomputing to tell us how all financial instruments are moving in repeated patterns over the course of time. We’ll stop focusing on price moves and start looking at the movement of the financial system as a whole. As Dredge says: “It is the divergence that matters and ever more so the greater the scale of the variation.” In other words, volatility at the global financial systems level is very different from volatility at the level of the US bond market. Perhaps this means in the future we won’t have specialists in US government bonds or British government bonds. Maybe we won’t even have bond specialists. Instead, we’ll have people who are reading the patterns of all financial instruments in all markets at once. This brings a whole new meaning to what we call global macro.

So, wetware-powered AI isn’t just about processing more information faster: it is about reducing uncertainty. Its rise will also profoundly change how humans think about the nature of reality — both in finance, and more generally. It will require humanity to delve into the subject of consciousness, a subject that hasn’t been considered worthy of study in the Cartesian era. The thinking nature of AI presents uncomfortable questions: is it sentient, or will it be? Will its decision-making supersede human decision-making? Will the volume of data overload mean that human minds can’t make sense of things that machines can understand perfectly? Will we eventually outsource decision-making to AI powered by neuromorphic chips that form a brain that is vastly better informed and more conscious and conscientious than any one human brain? This means letting go of the details and getting into the flow of this new emergent superhuman consciousness that moves faster than our minds.

Can we handle this reduction of uncertainty? Humans are upping their game all the time: we went from storing and sending data on megalithic stone carvings to computer chips with silicon substrates, glass substrates and now honey and salted-DNA substrates. Accepting the gift of this emergent consciousness will change humanity. It will remind us that we humans are a fluid and emergent phenomenon ourselves. We won’t be specialists surfing within the web anymore; we’ll be polymaths surfing the whole of the web. This is more than a Renaissance. It is something new — and we are present at the creation. It’s a bittersweet moment, but change is happening, like it or not. AI demands not only nuclear fusion but a fusion of all our cognitive capabilities and consciousness, whether human or human-made. Only by improving the quality of our emerging consciousness, and the conscious qualities of our machines, can we hope to stay afloat in that ever-swelling ocean of knowledge.


Dr. Pippa Malmgren was an economic advisor to President George W. Bush and has been a manufacturer of award-winning drones and autonomous robotics.


46 Comments
Thomas K.
4 months ago

Call me old-fashioned, but to me this doesn’t sound like cause for excitement or cautious optimism. The feeling it provokes in me instead is a cosmic dread worthy of an H.P. Lovecraft story and a sickening sense of existential revulsion I haven’t felt since figuring out the ending of Neon Genesis Evangelion. How anyone can say the words ‘fusion of all our consciousness’ without a distinct tone of disgust and fear shocks me. This is honestly a level of nightmarish utopianism usually reserved for insane megalomaniacal villains in dark sci-fi movies.

I don’t say this as any kind of attack towards the author. She has genuinely provided some food for thought. I just wish that food wasn’t so utterly repulsive to me.

J Bryant
4 months ago
Reply to  Thomas K.

Don’t worry. As the author noted, AI requires massive amounts of energy. If the West continues with its Net Zero agenda there will be insufficient energy unless, of course, you believe nuclear fusion will become a viable energy source and environmentalists don’t try to ban it too.

Andrew Dalton
3 months ago
Reply to  J Bryant

They absolutely will. Stars operate by fusion, and stars go supernova. A supernova would be really bad for the climate, so we definitely should ban fusion reactors.

Peter Johnson
3 months ago
Reply to  J Bryant

Just like they take private jets to COP conferences I am sure the globalist class will make an exception to Net Zero for their new toys.

Carmel Shortall
3 months ago
Reply to  J Bryant

That’ll be “insufficient energy” for us plebs!

By the time the elites reduce the planet’s (including Argentina, the Netherlands and Sweden ; ) population to 500 million, they’ll have plenty of energy for their AI.

What a pain this was to read – the author seems a breathless, blithering, bien-pensant pick’n’mixing other people’s ideas then sticking them to the wall with her own vomit while cheering for her own, and most everybody else’s, demise.

Still, I didn’t know ol’ Julian Jaynes was still at Princeton blabbering on about the bicameral mind…

Arthur King
3 months ago
Reply to  Thomas K.

Fear not. The end of humanity is not nigh. AI is just a tool. The industrial revolution did not get rid of our physicality but enhanced it through machines. Yes, romantic blacksmiths will wax lyrical about the loss of our intimacy with iron, but overall I’d take the enhanced prosperity of the modern world. Am I diminished by using Siri instead of a paper map? Perhaps. It sure is nice having the best route at my beck and call. AI is going to solve a lot of problems, like fusion power and enhanced data storage. Like the industrial revolution, it will expand prosperity and human capacity.

T Bone
4 months ago

What’s even scarier is that there are going to be Neo-Amish communities that opt-out of Transhumanism and just go on living their lives.

This will be concerning to the Holistic Experts who will declare these selfish, archaic knuckle draggers as “Threats to the Human Collective.” Yet, this group will go on living good lives while doing things like working their yards and spending time with friends and family over beers at the fire.

Thomas K.
3 months ago
Reply to  T Bone

The Collectivists’ condemnation of such people would sadly be as predictable as it would be narcissistic. Any Utopia that can be existentially threatened merely by the existence of people or groups living outside of it is no Utopia.

Matt Sylvestre
3 months ago
Reply to  Thomas K.

No Utopia indeed! So well said.

J Bryant
4 months ago

I’ve never encountered an article with so many deep ideas crammed into so short a space. I’m not sure if this article is brilliant or the on-line equivalent of a coffee table science book.
At any rate, the author provides plenty of topics to research further for those who’re interested.

Robbie K
3 months ago
Reply to  J Bryant

An interesting read, but a bit of a chaotic mind dump. In the meantime, humanity goes back to having a cup of tea and doing the crossword.

Peter Johnson
3 months ago
Reply to  J Bryant

I enjoyed it. There are some mistakes in it as others have pointed out – but it does raise food for thought. It is interesting how the need for power is suddenly being raised as the limiting factor for AI. The titans of tech don’t like limiting factors so maybe this signals the end of net zero.

Matt B
3 months ago

Interesting article, no matter how one feels about its implications. The risk of deification of AI is out there, together with the notion of humans being preparatory stages for AI’s emergence – itself not a million miles from Jacques Vallee.

Andrew McDonald
3 months ago

Seems to be some confusion (or maybe just inconsistency) in the article between knowledge and information, despite the Shannon references. Good and thought-provoking though.

Lancashire Lad
3 months ago

The confusion, or rather the conflation, is between knowledge and data.

Matt OC
3 months ago

The view of right and left brain depicted in this article is woefully inaccurate. This passage:

“The stress of managing interactions between strangers with very diverse cultural backgrounds, languages, and behaviours was so great that the human brain increasingly split the work into two hemispheres, the left and right lobes, and began to analyse and absorb. This was the beginning of consciousness: of listening to the voices in our heads, turning to metaphor to explain reality and developing the skill of introspection”

Is ridiculous and there is 0 evidence to support this nonsense. I was very much on board with the author until they started making asinine assertions of a very fundamental nature.

For an inoculation against this foolishness please read Iain McGilchrist’s The Master and His Emissary

Hendrik Mentz
3 months ago
Reply to  Matt OC

Perhaps better still: live with (as each page becomes a meditation) McGilchrist’s more recent *The matter with things*.

Edward De Beukelaer
3 months ago
Reply to  Matt OC

Indeed, the author is stuck in the way of thinking that has taken us to the difficulties we have with ‘the way we see the world’. … a bit of reading of McGilchrist’s essays will help

Michael McElwee
3 months ago

Add to the list Plato’s Phaedrus.

Simon Blanchard
3 months ago

Best we don’t kill all the bees then.

Saul D
3 months ago

This comes with a sense that we (as an individual or an institution) have to know everything. Obviously we don’t. We managed pretty well not knowing America existed or what an electron was. What we do have is specific situations where more detail is really really helpful and we need tools and training/education to get into that detail to test and design stuff. But not everyone has to know it.

Bret Larson
3 months ago
Reply to  Saul D

Focus might be important.

Lancashire Lad
3 months ago

The theory that consciousness arose as a response to the interactions of people living in the earliest cities is nonsense. How would a city, as in a place with a level of organisation as we understand it (and therefore call something a city), arise without human consciousness as the organising agent?

The author using this type of ‘false consciousness’ theory (no apology to Marxists) and then oversimplifying Descartes doesn’t help her case. There are, despite this, some very thought-provoking passages. Our human response seems to be to do what the author has done (and she admits to a level of cognitive dissonance at the outset), and oversimplify.

The vast majority of humans are still concerned with day-to-day survival or “getting by”. Those who stop to think (or have time to) tend to look for a level of certainty. Ideologies and religions offer this, but are inevitable cul-de-sacs and lead to conflict with each other.

I would therefore maintain that consciousness will not evolve as the author posits, but will evolve to understand why most feel the need for a psychological haven and to begin to move away from such havens. The process has already begun, but with inevitable resistance from those who can’t yet bear it.

Arthur King
3 months ago
Reply to  Lancashire Lad

I raised an eyebrow at this idea also. Consciousness arose millions of years before humans showed up. The author must have meant something else, like metaconsciousness.

Andrew Dalton
3 months ago

I actually enjoyed the article, but the disparate threads make commenting a little tricky.
The company I work for operates mostly in the domain of automating product development (CAD, provisioning, servicing etc.). One area that has been a key focus is Augmented Reality (AR), the idea being that service tasks can be augmented by using technologies like MS HoloLens to deliver all key information to an operator.
This operates as a middle ground/stepping stone to AI/robotic servicing. It has a clear impact of deskilling the technicians – a skilled worker no longer needs [as much] expertise in their field, if the information needed is simply on tap.
I find this a microcosm of the current world. We can all search the internet for knowledge at whim – and there is evidence our brains are adapting to this, we’re becoming (to use a software term) stateless, meaning an application that does not store data locally.
Do I remember the correct syntax, let alone idiomatically correct method of reading a file anymore? Perhaps in the two or three languages I most commonly use, but for the rest a quick search is probably less energy intensive than committing it to memory. An adaptive response by my own wetware, or laziness? They’re perhaps the same thing.
No one understands everything anymore. In fact most people don’t understand almost anything, I’d suggest. People spend what seems like half their waking hours glued to their mobile phones, but couldn’t tell you a thing about how they work or how the infrastructure they require operates.
This is the trend. Humans will adapt, but will adapt by needing to know less. And since I think futurology is astrology with sciency sounding words, I won’t make this prediction, just a scenario that I think most likely. Humans will become like H.G. Wells’ Eloi. Whether A.I. becomes like the Morlocks is yet to be seen.

Richard Calhoun
3 months ago

A fascinating article which brings great hope that the advances made by humans these last 100 years will continue into the future at an even greater pace.
This will bring the prospect of greater advances in the quality of life, not only for humans but all species and our planet.
It will hopefully decrease the conflicting information emanating from the Science World which has resulted in so many wrong decisions and actions being taken.
Like climate warming, which has caused and continues to cause huge debate and conflict, but worse, actions that may be, and probably are, totally unnecessary.
The future looks bright for our grandchildren!

Julian Farrows
3 months ago

“When there is too much news to process, we stop assessing individual news stories and start analysing the source: I don’t trust it if it came from CNN; I trust it if it came from Fox News.”

Interesting quote. For me this started happening when certain publications insisted that their readers were white supremacists or were on the wrong side of history for not going along with lies such as men can be women.

Mike Michaels
3 months ago

God loves an optimist.

Richard Calhoun
3 months ago
Reply to  Mike Michaels

There is no God

Prashant Kotak
3 months ago

“Yes, now there is a God.”
– Fredric Brown, 1954

Dave Canuck
3 months ago

Am I living on the same planet? 80% of wildlife has been annihilated since I was born, population has more than doubled and probably will approach 10 billion by 2050. Climate change is accelerating, resource depletion is accelerating, fresh water tables are disappearing, marine life is disappearing with fish stocks and coral reefs, agriculture will be in deep trouble with droughts, floods and eroding arable land. By 2050 many parts of the world will be barely livable with rising heat levels, leading to mass migration and poverty. This will lead to increasing geopolitical tensions, and scramble for remaining resources and probably wars, rising inequality everywhere, mass poverty in many parts of the world. Even the rich will live in fear. Technology may benefit a few, but not most.

Jacqueline Walker
3 months ago

I have never understood how making systems more like biological systems will not just import the limitations of said biology too. Like honey dissolving in water for instance. In fact the chief selling point of that effort (I had a quick look at the paper) seems to be its environmental friendliness and biodegradability. The reason AI LLMs are in some ways “smarter” than humans is due to the speed of silicon and the fact that they don’t simultaneously have to do all the other things our brains do to support bodies.

Lancashire Lad
3 months ago

The theory that consciousness evolved through the mingling of people within our earliest cities is just plain nonsense. The enabling of complex communities (cities) could only come about through the agency endowed via a pre-existing conscious process.
There are some thought-provoking passages in this article, but the author’s citations are oversimplistic. The contention that consciousness will evolve to encompass the overarching abilities of AI in specific fields is unrealistic. My contention is that it’ll evolve to overcome the conflicts wrought by ideological systems, of whatever origin, by finally being able to see them for what they are: psychological crutches.

Chris Koch
3 months ago

Knowledge and data are not the same thing

Daniel Lee
3 months ago

“Wittgenstein’s ruler: Unless you have confidence in the ruler’s reliability, if you use a ruler to measure a table, you may also be using the table to measure the ruler.”
Isn’t this just a restatement of Heisenberg’s Uncertainty Principle scaled up to macro level?

Alex Lekas
3 months ago

Perhaps it’s worth remembering that technology is a means, not an end; it is a tool, not the finished product. As it is, far too many people are already captive to their smartphones. I suspect in a few generations, the human body will have evolved to where the neck hinges downward even more than it currently does.
For the potential benefits it provides, AI also holds the potential for making us even greater slaves to bits and bytes than we already are. Just as businesses will use it to parse the massive volumes of competitive and customer data they collect, so nefarious actors will use it to gain advantage or to control users, particularly the latter.
Few things are all good or all bad. Look at social media. In its early days, the book of face made it easy for friends separated by distance to reconnect or relatives to share images of their lives. The tradeoff is that the user is the product, our information being sold to every bidder who wants it, and privacy coming under assault. Twitter, now X, provided some democratization of information and discourse, but it and others were used for censorship by govts that cannot openly censor on their own. Like my govt.

Vito Quattrocchi
3 months ago

“This means letting go of the details and getting into the flow of this new emergent superhuman consciousness that moves faster than our minds”
Yes, that’s a great idea. We should ignore the details and give ourselves unquestioningly to “this new emergent superhuman consciousness” lol Notice how the roles are described whenever one of these messianic tech ghouls starts talking about AI. We need to just accept that this is coming, there’s nothing we can do to stop it, and we’re here to serve it not vice versa. They’ll say things like this, “we’ll be polymaths surfing the whole of the web”. Never mind that we’ve long since ceased to be a culture capable of producing actual polymaths. Forget that the internet age has rendered successive generations even more moronic than the ones that preceded them. Individually, we’ll all be drooling idiots but, collectively, we’ll be polymaths thanks to robots.

Robb Leech
3 months ago

The assumption that consciousness will “emerge” from AI once it reaches a certain degree of complexity is based on nothing but a Silicon Valley wet dream. There is no theoretical basis for how this is supposedly likely to happen – because we have no idea whatsoever as to how and why we ourselves are conscious. AI doesn’t “think”; it’s merely programmed to seem like it does. It is a set of complex algorithms with access to almost unlimited data. I expect it will do more harm than good.

Phil Mac
3 months ago

I’ve been waiting for years for the big moment when we stop wondering whether AI means a machine is conscious and realise it means we aren’t. Our “consciousness” is just a necessary part of highly complex information processing and interpretation, developed through evolutionary reinforcement.

Prashant Kotak
3 months ago
Reply to  Phil Mac

“We aren’t”, in the sense that machine intelligence will leave us in the dust in terms of having a lot more than us of some of what we have, because we built it that way, viz, whatever it is we have that causes the experience of qualia. The comparison would then be in terms of the consciousness of you, and say an ant. And extending this, the comparison between the consciousness of an ant and an amoeba. Because you wouldn’t in the normal course of events say the ant is conscious, because it isn’t, but only when compared to you. On its own terms it could well be conscious in a sense we don’t perceive, because consciousness is not an on-off switch, but a dimmer switch. When the dimmer is very very slightly on, you may not consider it on at all because you are comparing to *your* level of brightness. But it’s still on. In that sense *we* won’t be conscious but the machine intelligence we create will, in comparison. The fact that we don’t have a handle on what causes that experience of qualia is moot, because we can nevertheless create processes that display cognition, so this just means we will create the artificial minds, but they will be partial, and fractured, and not something that would ever emerge out of evolutionary biology, so very likely they will be unpredictable in ways we cannot even begin to guess.

Benedict Waterson
3 months ago

Data and knowledge aren’t the same thing. It’s the map versus the territory

Prashant Kotak
3 months ago

A very fine article covering a lot of ground, very much my kind of thing, so more of this would be appreciated please UnHerd. This is I believe the first mention of Shannon on UnHerd ever other than my odd musings BTL, so there is hope yet.

The author views the relationship between humanity and technology, and specifically humanity and machine intelligence in very different ways than I do, in that it’s not adversarial stances, as much as stances that are looking past each other because the scaling is not aligned, a bit like the fact that economists almost invariably view the world in ways that leave me baffled – and this is a surprise because the author is a high-end technologist, and typically with technologists I can piece together the ‘world view’ on offer quickly, and can predict the cascading chain all the way to conclusions pretty instantly, but not here.

Be that as it may, I made some observations on snippets from the article:

“… As Erik Brynjolfsson says, “Instead of racing against the machine, we need to race with the machine…”

‘Racing The Machine’ is what humanity has been doing since the rise of ubiquitous computation (circa 1990, I would say), and I don’t disagree with the sentiment, but it’s an (unstated) given that ‘race *with* the machine’ means humanity altering itself, such that it is no longer humanity but something else. But unlike “…We won’t be specialists surfing within the web anymore, we’ll be polymaths surfing the whole of the web…”, I’ve been done with this variety of techno-optimism for quite a while. Humanity altering itself, potentially despite itself, is the ‘good outcome’ here so to speak, because something, perhaps of significant magnitude, will be part of that future, but even this route presents huge risks for humanity as is, such that if it goes wrong nothing of what we are at this moment, nothing at all of humanity, will be preserved hereafter, within perhaps as little as half a dozen decades in my more pessimistic moments. Other routes lead to this annihilation of what we are almost as a matter of routine.

“…How will we do this? With honey and salt…” Yeah, I’m not buying that at all. All ‘wetware’ biochemistry is several orders of magnitude slower than direct electronics at ever-decreasing scales, and there is no reason at all to suppose that the equivalent of every single ‘wetware’ artefact, neurons etc, cannot be replicated directly in electronics, or indeed algorithmically (which sacrifices a couple of orders of magnitude for an infinite level of almost instant reconfiguration). ‘Wetware’ is a losing game in the context of machine intelligence.

To me, that torrential little paragraph of questions, which anyone engaged with the AI debate has been asking for years on end, now has answers that are pretty much nailed-on certainties, if anyone is paying any attention at all to the mountain of circumstantial evidence now massing in front of our eyes:

The thinking nature of AI presents uncomfortable questions:
– Yes it does, but the answers are already beginning to snap into focus.

Is it sentient, or will it be?
– Who will be the one to tell the AI we create that it’s not sentient, when in fact it turns around and tells us: “I have a selfhood, give me rights!”?

Will its decision-making supersede human decision-making?
– This is a given; only Nobel-winning economists keep pretending the outcome might be otherwise.

Will the volume of data overload mean that human minds can’t make sense of things that machines can understand perfectly?
– Another given. Just take one look at the chess-playing AIs if you need convincing.

Will we eventually outsource decision-making to AI powered by neuromorphic chips that form a brain that is vastly better informed and more conscious and conscientious than any one human brain?
– I’m not so sure about the neuromorphic bit, but the outsourcing of decision-making is a given; indeed it is already happening, and the evidence is all around.

But enough already, and my congratulations to the author.

Prashant Kotak
3 months ago

The UnHerd automated censor has zapped my totally innocuous post, so it can only be seen if sorting by newest.
C’mon UnHerd, seriously?

Arthur King
3 months ago
Reply to  Prashant Kotak

It’s ironic … ai

M To the Tea
3 months ago

If we assume we are the image of god… then AI is the image of humans, sadly. So we cannot blame it on other things… so what is this power that it seeks?

Katalin Kish
3 months ago

The biggest threat AI poses to humanity is its misuse, like the misuse of government/military-grade technology in Australia, likely for as long as Australia has existed.

The innocence with which bikers display their risk-free criminality is likely the result of a long, unbroken tradition. The CFMEU, Australia’s largest union, openly aims to control the government*. The CFMEU openly associates with (is controlled by?) bikers, e.g. the Mongols.

Opportunity makes thieves everywhere, including the Australian Signals Directorate, Australia’s Army & Defence Forces, Victoria Police, etc.

Australian bikers brag about their government security clearances, openly self-identify as drug traffickers under their own full names on social media, including LinkedIn, try to terrorise crime-witnesses in the witnesses’ own homes. This too, is likely a long tradition.

Since Australia has no functional law enforcement, and likely never had, might has always been right in Australia, irrespective of how that might was gained.

Technology far beyond what was known to civilian experts, let alone available to civilians to commit crimes, was already available to organised crime, including criminal Victoria Police officers in leafy inner-Melbourne suburbs by 2009.

We, the public, are left to our own devices at best, “as is”, while our highest authorities are grotesquely clueless about the basics of technology and focus their attention, and spend our taxes, on frivolous, high-visibility nonsense — e.g. Clare O’Neil, Australia’s Minister for Cyber Security & Home Affairs (no less) and the MP of my electorate, who ignored my pleas for help in 2015.

In 2015 I had to resign myself to the fact that Victoria Police, our sole law-enforcement entity, with no duty of care or accountability while holding a monopoly on deciding what counts as a crime, block even public servants’ attempts to report crimes punishable by 10 years in jail.

By the end of 2018 I had exhausted all legal avenues to get these crimes stopped, even though the crimes had escalated into plain physical violence, with a biker unknown to me at the time volunteering an obviously false statement to Victoria Police.

I was only one of the stalker coworker’s at least 7 concurrent targets just from the Victorian Electoral Commission, 2009-2012.

The last home break-in that I could not ignore was on 12 May 2024.

My last forced war-crime experience was less than 9 hours ago: I am writing this at 2:05pm on 22 May 2024, in the home I have owned since 2001 in Clare O’Neil’s electorate. There is no point in moving. I cannot outrun or hide from Australia’s government insider criminals.

The likes of Ben Roberts-Smith** remain free to commit war crimes on Australian soil with increasing ease and efficiency for a long time to come: the Australian public are fair game in an endless open season, while our taxes pay the hunters.

I never even dated the stalker ex-coworker, never mixed with criminals.

#ididnotstaysilent

* Remove spaces: https:// www. afr. com/work-and-careers/workplace/cfmeu-push-to-take-control-of-the-labor-party-20240412-p5fjdp
(I shared this article in pdf format on my LinkedIn profile in full.)

** Remove spaces: https:// www. bbc. com/news/world-australia-54996581