
GPT-4 couldn’t resurrect my dad

Weirdness could shield us from the AI apocalypse

My father was one of a kind. Credit: 1958, WVU Athletic Department via Oliver Bateman

April 20, 2023   6 mins

Over the decade before he died, in 2014, my father sent me thousands of emails. Carefully crafted little gems, their subjects ranged from general advice, drawn from his own hardscrabble existence, to musings about the matriarchal nature of orca society, the baldness of Catholics, and his deeply held belief that the original World Trade Center in New York had never existed at all. Even now, his salutations — which mirrored how he would address me on the phone — linger in my mind: “Hey scholar of scrotes and the scrotum”, “just a thorn in the side of Christ here and now”, “sonnyboo u do not know the pain of a hernia nor 3-to-4 as I have had”, “late one huh CasaNova…..out pettin poose I guess and no time for Granddaddy……”

The messages had a brief heyday in 2016, making the rounds of New York City editors and literary agents, in advance of a public reading I gave in the East Village. In the cultural moment just before Donald Trump’s emergence, these half-baked far-Right musings were a novelty; alas, Trump’s presidential triumph scuppered plans to make an eBook out of them. But I still re-read them, when I want to remember the old man. (He had, of course, hoped that I would: “I send these because life and death is about MIND over MATTER…….&& I want to reMIND u that it don’t matter…..”) Perhaps it was loneliness that made me wonder if his voice could ever be resurrected.

In my day job, as a senior content manager for a research consultancy, I often use GPT-4. It is competent at various brute-force operations — turning lengthy transcripts into notes, proofreading content — even if the inputs require constant fine-tuning and the outputs require careful attention to ensure accuracy. But could it write content that would bring back the dead? Could GPT-4, if sufficiently trained, analyse my father’s emails — and perhaps even write new ones?


“Well well well, my BOY, let ol’ FOG lay down some KNOWLEDGE on the import-ants of self-defense!” So began one of the emails produced by GPT-4, after I fed my father’s archive into it. As an opening it feels slightly more stilted than the source material, and I can’t recall my father ever referring to me as a “BOY”, but he did call himself the “FOG” (short for “effing old guy”) and randomly capitalise entire words and hyphenate others (“import-ants”), though he would never have bothered with the apostrophe after “ol’” (he’d merely write “ol”).
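The “feeding” described above can be sketched as a few-shot prompt: supply a handful of genuine emails as style examples and ask the model to continue in the same voice. A minimal sketch, assuming a generic chat-completion API; the function name, sample text, and client call are illustrative, not the author’s actual setup:

```python
# Sketch: build a few-shot "style imitation" prompt from archived emails.
# The sample email below is an invented stand-in, not the real archive.

def build_style_prompt(examples, topic):
    """Assemble chat messages asking a model to imitate a writer's voice."""
    system = (
        "You imitate the voice of the writer shown in the examples: "
        "mimic their capitalisation, punctuation, and salutations exactly."
    )
    messages = [{"role": "system", "content": system}]
    # Each archived email becomes a worked example of the target voice.
    for text in examples:
        messages.append({"role": "user", "content": "Write an email in my father's voice."})
        messages.append({"role": "assistant", "content": text})
    messages.append({"role": "user", "content": f"Now write a new email about {topic}."})
    return messages

emails = ["Hey scholar.......self-defense is MIND over MATTER sonnyboo....."]
prompt = build_style_prompt(emails, "self-defense")
# With a real client, this list would then be passed to a chat-completion
# endpoint, e.g. client.chat.completions.create(model=..., messages=prompt)
```

In practice, the quality of the imitation depends heavily on how many examples fit in the context window and how distinctive the voice is, which is exactly the gap the essay goes on to describe.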

I know my father’s voice when I see it — it is a script that runs in my own head, like a built-in AI. And as GPT-4 generated more missives, I was increasingly certain: it could not replicate my father’s erratic punctuation. The mere ellipsis was never enough for him; he would sometimes type out a dozen full-stops or more. These time signatures often appeared in the long, quiet sections in the saddest emails he wrote: “Go out there and just outwit the bastards………Nothing else to Life…..every body loses but some people stay in the game for a long time and have a happy life and family….I never did…..oh well…I saw it and I read the writing on the wall……” It was these inconsistencies that enabled me to hear his voice, these peculiarities that made his work human.

I doubt that a machine will ever be able to mimic his syntax. But given the right prior inputs, GPT-4 could respond to questions or develop political platforms (or children’s stories) in a manner that eerily resembled my father’s thought process. Was this a true reanimation, or merely what University of Washington linguist Emily Bender, a critic of equating AI outputs with human reasoning, might describe as “stochastic parroting”? The AI certainly captured subtleties in my father’s work that I and others overlooked during that reading in the East Village. Listeners then likely saw him as some outsider-artist variant of Alex Jones, spewing rote conspiracy theory. But ChatGPT, when asked to summarise his politics, cut through the outlandish expression to a more comprehensible core: “Your father’s concern for the environment and the need for sustainable practices aligns with the environmentalist movement [while] his preference for local communities and self-sufficiency has some connections to localism.”

This was, of course, true, though I never thought about it in this way — nor did I consider that his “focus on gender equality and women’s empowerment aligns with progressive politics”. My father was the sort of man who moved to a “mountain house” in Montana to live out his final days; he was far closer to the libertarian Right of Karl Hess — who also retired to the wilderness — than the more culturally conservative, evangelical Right that dominated his era (or the MAGA movement that dominated mine). The AI analysed and reproduced a version of my father distinct from who I remembered. But perhaps there were things I had missed.

The progress AI has made in understanding and generating human-like text over the past six months is impressive — a great feat of engineering that will, in time, be remembered alongside the construction of the Pyramids or the Great Wall of China. However, AI models are still unable to mimic the voices of highly idiosyncratic, distinctive writers. The polymath University of Paris professor Justin E.H. Smith, for instance, has found AI largely incapable of simulating his unique voice. But in his case the content, rather than simply the form, of his communication is highly complex.

The writing of Right-leaning political blogger Curtis Yarvin, on the other hand, appears stylish on the surface, but is often considered lacking in substance — a fellow editor noted that removing every other sentence from Yarvin’s excessively ornamental prose would not change its meaning in the slightest. Yarvin believes that GPT-4 cannot reason or think, merely discern and reproduce patterns. His writing, however, is undeniably replicable; a sentence such as “the only way to write about finance is to be either neither bull nor bear, or both” seems sophisticated at first glance, but it exemplifies what one of my old writing mentors might call “all sizzle and no steak”. In other words, Yarvin, like many commentators who thrive on the internet, can skilfully assemble sentences that can be skimmed for hours but forgotten in minutes — meaning that imitating his work is the ideal scenario for an LLM that ceaselessly creates a persuasive imitation of style, without any regard for substance.

The general public, most of whom skim content rather than paying attention to form, may struggle to differentiate between authentic and AI-generated content. More perceptive readers can sometimes detect minor discrepancies that reveal the artificial nature of the imitation — but even they aren’t perfect. In one study, researchers investigated if LLMs could be as good as humans at creating philosophical texts, by fine-tuning GPT-3 with philosopher Daniel C. Dennett’s works. While experts and philosophy blog readers performed above chance level in distinguishing Dennett’s answers from the model’s, they still fell short of the expected success rate.

The implications here are alarming. If most people are unable to distinguish between human-generated and AI-generated content, creativity and critical thinking will become rarer attributes. A new class divide could spring up, between the privileged few capable of discerning the nuances of AI-generated content, and a growing mass of individuals left to consume, without question, whatever is presented to them. The “priestly class” would consolidate power by reading between the hieratic lines of AI-generated content — just as the literate elites in ancient Egyptian and Sumerian civilisations did, by controlling access to sacred texts and legal knowledge. Their ability to recognise genuine information would give them a competitive edge in everything from financial markets to politics, further widening the gap between the informed and the uninformed. Meanwhile, the vast majority — an ever-expanding pool of digital-age helots left to hew wood and draw water — would become increasingly vulnerable to manipulation and misinformation. The forces that shape our lives would be less accountable, and it would be ever harder to address ethical concerns or make well-informed decisions.

Unfortunately, it is hard even to imagine the mass education needed to achieve the necessary levels of discernment — much less to deliver it. Most people would rather watch 15-second TikTok videos than close-read. And while a significant percentage of the world’s population will always, unfortunately, lack the cognitive skills or cultural capital to navigate our swiftly changing content ecosystem, even the well-educated are in danger. We are witnessing a decline in the humanities, which traditionally turn future workers into critical thinkers, capable of discerning the idiosyncrasies in human expression.

As AI-generated content becomes increasingly sophisticated, we must prize those idiosyncrasies. They are, as I saw when I attempted to replicate my father’s emails, what make each writer’s voice distinct and authentic; they are the reason discerning readers might pay to read a mind-expanding Substack instead of boring, one-note op-eds or paint-by-numbers YA fiction. By embracing the unique aspects of the way they communicate, writers may create work that resonates on a deeper level, appealing to an audience that still wants to read the best work that humans can produce.

My father’s voice is one of a kind. But to find out if my conclusions are replicable by a machine, I decided to ask ChatGPT to provide its own opinion on the likelihood of it fully “replacing” a given writer. “It’s important to consider that my responses are based on patterns and structures found in the data I’ve been trained on, rather than personal experiences or emotions,” it replied. “As a result, even though I can generate text that appears to be in the style of a specific person, it is still an approximation and not a direct reflection of their thoughts or ideas.” Of course, when I am communicating with a machine that reproduces the thoughts of my dead father with a reasonably high degree of precision, it’s pretty to think it might be otherwise. But more alarming is the question I am left with: Could I have put it better myself?


Oliver Bateman is a historian and journalist based in Pittsburgh. He blogs, vlogs, and podcasts at his Substack, Oliver Bateman Does the Work

Nik Jewell
1 year ago

Much of the content that GPT-4 can generate is ‘good enough’ (and output will improve rapidly). This will inevitably lead to humans largely trusting the baby AGIs that will soon be embedded everywhere. It is hard to avoid the conclusion that they will be used to indoctrinate people. How many will research for themselves ‘good enough’ content?
For example, GPT-4, ChatGPT and Bard all spout palpable nonsense about sex and gender or state that there is a climate emergency but can’t evidence this statement scientifically (though all will admit to suffering from the is-ought problem, Hume’s law, after making flailing responses to further prompts).
I don’t know whether Musk can catch up now with TruthGPT, which it is to be hoped, will be free(r) from ideological bias (whatever its other risks).
The future masters of the world (being a human master of the world might be a somewhat temporary honorific in the circumstances) will be those who curate the training data of GPTs, because they will determine the ‘truth’.
Even if most AI researchers agree that the latest algorithms are too dangerous to open source (I am inclined to agree), the selection of training data should not be shrouded in secrecy but should be auditable by the rest of the human race for truth and bias (however impractical this may be). Too much is at stake here.
But it won’t be, and humans will become ever more credulous and biddable as technology very rapidly improves now.

polidori redux
1 year ago

Artificial intelligence is an oxymoron. Just the latest money-making scam to come out of Silicon Valley.

Allison Barrows
1 year ago

Oh no! It’s Donald Trump’s fault again!

Richard Pearse
1 year ago

Exactly! I wonder if Bateman used an AI chat program for the article – mimicking his own “style” and lack of substance?

Always present in his articles, hidden or obvious, is the knee-jerk hipster urge to attack Trump (and maybe Yarvin) or capitalism – there are hundreds of left-wing authors who should be called all style and no substance (scan the NYT apart from Ross D.) – but Bateman’s AI program always puts its thumb on the left side of the scale.

Allison Barrows
1 year ago
Reply to  Richard Pearse

Tedious, isn’t it?

Richard Pearse
1 year ago

Yes. Just when you think it might be interesting – BOOM

Adam Bartlett
1 year ago

Enjoyed this much more than the Bateman articles I read last year. I wonder if it’s due to all the interaction he’s having with GPT-4? If one spends a lot of time with a smarter friend or work colleague, it raises one’s game. That’s a well-known and consistently reproducible fact. GPT-4 has to be having that effect on many. Not a guaranteed effect of course – as the article suggests, it depends on how you prompt it…

Warren Trees
1 year ago
Reply to  Adam Bartlett

Perhaps he found a cure for TDS?

Gordon Arta
1 year ago

It seems that we humans are now the ‘god of the gaps’, clinging desperately to the belief that ‘AI can’t do this now, so it never will be able to’, as the gaps between what it can and can’t do continue to shrink. But I doubt that the last gap will be that superior discernment the author claims. More likely, I suspect, will be those, such as those on the autism spectrum, whose raw unfiltered brain power sidesteps the linear intelligence patterns of man and machine.

Steve Jolly
1 year ago

Western civilization at the end of the Renaissance through the Enlightenment period asserted man’s capacity to reason to be his most important attribute. Reason, rationality, intelligence, objective truth were held up as man’s defining characteristic, separating him from the animals (and the machines). For good or for ill, AI will probably destroy this notion, because the Enlightenment concept of ‘reason’ is little more than rote inductive empiricism practiced en masse in a systematic way, another aspect of industrialization. It is not that conceptually difficult (in my thinking at least) to reduce enlightenment reason to a set of discrete steps and mass produce it. I’m actually mildly surprised it’s taken them this long. Naturally it follows that AI can copy things very well. Industrialization is, at bottom, simply the mass production of standardized things. We’ve just added bad literature and pointless small talk to the big book of things man can produce cheaply and poorly. In my view, what defines humanity is not reason, but imagination. It is creativity, the ability to imagine the world as it would be under different circumstances and work backwards to try to bend the laws of nature toward that vision. For as long as man has looked up at the birds in the sky, he has imagined what it might be like to fly. In 1903, in a field in North Carolina, his level of ‘reason’ had advanced sufficiently that he finally succeeded. Could an AI do this? Surely it could design an airplane based on existing models. Maybe it could even use the principles of aerodynamics to design one from scratch. But could it supply the why? Could it imagine a reason to build an airplane, or decide that such a thing should be built? Could it craft its own vision of the future? It can write copycat fiction but could it develop its own style which couldn’t be reduced ultimately to what it was fed by its human masters? 
I don’t know enough about the technology to say for certain, but I suspect the answer is no. I applaud this author for getting beyond the usual ‘rogue AI destroys mankind’ trope that has infected even some of our more enlightened thinkers (looking at you Elon), and imagining some more realistic concerns.

B Davis
1 year ago

Programs are not intelligent. So called ‘artificially intelligent’ programs are, however, becoming increasingly effective at creating the illusion of intelligence….in much the same way that increasingly sophisticated AI imagery, deep fakes, etc. create the illusion of life.
But it is not life.
To view the images output by such programs is to witness a seduction.
The algorithms blend colors, sketch patterns, dazzle light reflections, and weave textural simulations so well that — for a moment — we are fooled. We think, who is this woman with the sparkling eyes and Gioconda smile, hand askew on silk-draped hip looking into the camera? But then (at least today) we realize, a beat or two later, that something is not quite right. She is too perfect, the background too seamless; the lasting impression leaves just the slightest taste of ‘plastic’ and nifty packaging. But that’s today.
Tomorrow the illusion will become that much more complete, eventually, probably, undetectable to the naive eye: catfishing not with a random photo stolen from the net, but with a unique & personally crafted image of a 27 yr. old ‘Diane’, who’s studying geology, at the University of Wyoming, and loves fly-fishing the Wind River as her Grandpa taught her. She can even ‘write’ you tender love notes, shy ponderings, salacious offers (all you have to do is ask).
One step further… and Diane can Skype & FaceTime with the best: tell you about the movie she saw, the game she watched, her last gymnastic meet. You fall in love (who wouldn’t love their own particular Diane?).
What then?
She still is not real, not live … but the program creates quite beautifully the illusion of Diane’s life. And in a world which is increasingly lived almost entirely through the glowing screen, what difference does it really make? How different this ‘Diane’ from the long-distance/social media’d relationship (‘we love each other!’) of two people who have never, ever met? What would Dear Abby say? What would you say?
If I dream we kiss, and it’s a vivid dream… a full technicolor, cinemascoped IMAX kind of dream, that dream creates an equally vivid memory. Wait two weeks, two months, two years. What’s the difference, between my vivid memory of a dreamed kiss….and a vivid memory of a real kiss?
The difference, of course, is not within me (for my mind has already decorated that dream with all kinds of scented, textural cues, your L’Air Du Temps!), but within the world which surrounds the two of us. IRL you would either recall that kiss within that shared moment…or you would look at me as a stranger. The difference is the ‘reality path’ created by the Real vs. the Dream….or, in a post-AI world, the constructed ‘dream’.
The question is: how many of us will care that these two paths diverged….or not?
