William S. Burroughs loved a bit of literary jiggery-pokery. (Credit: Jean-Louis Atlan/Sygma via Getty)

January 26, 2024   4 mins

You have to salute the brass balls of the Japanese literary novelist Rie Kudan. She was accepting one of Japan’s most prestigious literary awards, the Akutagawa Prize. The judges were hosing her new novel, The Tokyo Tower of Sympathy, down with treacle, one member of the committee announcing: “The work is flawless and it’s difficult to find any faults.” Then, right at that moment, live in the room, and all casual-like, she announced that a chunk of the book had been written by ChatGPT. “I made active use of generative AI like ChatGPT in writing this book,” she said. “I would say about 5% of the book quoted verbatim the sentences generated by AI.”

At once, a Japanese prize barely heard of in the Anglosphere was making headlines around the world. Has the worst nightmare of the gatekeepers of high literary culture come true? Do the judges of the prize have faces dripping with egg, like a Master of Wine who picked the Blue Nun in a blind tasting? And is Rie Kudan, for her part, cheating? Well, no… and no.

To the latter question, we could counter that she was using AI-generated language precisely to make a point about AI-generated language: that is, that she wanted (as she puts it) to test the way “soft and fuzzy words” can obscure our ethical clarity. We could counter, further, that ChatGPT didn’t decide which 5% of the novel was going to be by ChatGPT, nor which bits of ChatGPT’s language were going in there.

And to the former question — whether the judges were just wrong to think the novel good — we can say: surely we’re past all that. If they thought The Tokyo Tower of Sympathy was flawless, then by their own lights — and if they’re decent critics we can expect readers to agree — it was. Didn’t we all learn at GCSE that the reader’s interpretation of the poem is much more important than trying to figure out what the author meant by it? Didn’t Roland Barthes pronounce The Death of the Author as long ago as 1967?

The idea of the single, inspired author originating a text of near-sacred originality is itself a hangover from the Romantics. It’s a two-century blip. Before then, fiction-writers often did their damnedest to pretend they were copying from someone else, even when they were making it up. Chaucer was forever talking about “myn auctor”, and a text that came adapted from a precedent was seen as more trustworthy and high-status than one that didn’t. Milton reworked the Bible, Shakespeare reworked Holinshed, and so on and so forth.

In more recent times, experimental and modernist authors have been using randomness, or the home-made equivalent of algorithms, to generate their texts for 100 years or more. In 1920, the Dadaist eminence Tristan Tzara announced that poetry could be written by taking a newspaper article of the length you wanted your poem to be, cutting it into its constituent words with a pair of scissors, shaking them about in a bag and then transcribing them in the random order in which they emerged. There’s a funny bit about it in Tom Stoppard’s play Travesties.

That was only the starting gun for all manner of literary jiggery-pokery. William S. Burroughs and his collaborator Brion Gysin picked up Tzara’s baton in the Sixties, experimenting with “cut-ups” (much like Dada poetry) and “fold-ins” (where you would fold two pages of an existing book together so the edges met, and read across the fold to make a new text). The fantasy writer Jeff Noon’s 2001 book, Cobralingus, presented a set of algorithmic instructions for transforming a text through what Noon called “filter gates”, something analogous to a DJ remixing a record. The children’s writer Andy Stanton recently published Benny the Blue Whale: A Descent into Story, Language and the Madness of ChatGPT, a serious/silly account of his experiments getting ChatGPT to write a novel about a blue whale with a micropenis.

Arbitrary literary constraints or outright randomness, then — which is to say, things outside the author’s control which help determine the final text — have a very honourable place in literary history. You could even see the sonnet form or the villanelle as a species of algorithm. The mid-century Oulipians sought out baroque formal constraints (most famously, Georges Perec managing to write a novel without the letter E) by way of liberating their creativity rather than stifling it. Italo Calvino wrote a novel — The Castle of Crossed Destinies (1973) — around a tarot pack. B.S. Johnson’s The Unfortunates (1969) presented the reader with loose pages in a box and invited you to read the novel in any order you liked.

None of which is to say that Ms Kudan is necessarily an avant-gardist, or needs to be. Only that it’s a very narrow and regressive view of literature to see handing over control of some of your text to chance or to an algorithm as “cheating”. It’s what you do with the result that counts, and what Ms Kudan did was apparently, well, flawless.

There are, no question, literary-ethical problems with ChatGPT. If the algorithm has been “trained”, as some hefty lawsuits are currently complaining it has been, on vast screeds of copyright text without permission or compensation for the authors, that’s a violation deserving of redress. You could even make the case, perhaps, that 5% of Ms Kudan’s prize money should rightfully be distributed to every Japanese-language author on whose copyright work the algorithm was trained. But that question is a business and intellectual property issue: to one side of the purely literary question of its part in the creation of The Tokyo Tower of Sympathy.

Indeed, if you were of a literary-theoretical cast of mind you could point out what Tzara and his successors were gesturing to in an oblique way anyway: intertextuality. Every text, in the end, is made of other texts. Every word in a novel or poem is a borrowing: it depends for its meaning on the vast constellation of other contexts in which it has appeared, and through which its reader will understand it. Every author, in his or her individual way, is a meat-brained ChatGPT, “trained” on a lifetime’s reading of classics and copyright works.

That’s where the case of Ms Kudan really does probe an anxiety in the culture. What if the writer is (in the phrase Martin Amis used of V.S. Pritchett) a mirror, not a lamp? The Romantic model of the artist is underwritten, after all, by a reassuring idea about humanity: that we are creators not creations, that what makes us distinctive isn’t simply a neurological compost of our inputs but some ineffable inner essence that can only be captured in the act of expression.

Is it possible, then, that we so fiercely police the distinction between what Large Language Models can do and human creativity because we’re… touchy about it? That we’re worried it may be a temporary distinction of degree rather than a fundamental difference of category; which is to say, no distinction at all?

Sam Leith is literary editor of The Spectator and the author of Write To The Point: How To Be Clear, Correct and Persuasive on the Page