On not using AI as a writing tool

An absolutist case for never using AI to assist in the writing process

The question of whether we should use AI writing tools is ultimately the question of why we write at all.

Good writing is hard. It takes time to consider what to say, how to say it, and how to capture it in words. Many of us don’t have the time or inclination to do it well.

So it is no wonder that the emergence of the likes of ChatGPT, Copilot and Jasper has been perceived as the solution to a problem: the time-consuming business of researching, drafting and editing.

Overblown commentary following the launch of ChatGPT that AI heralds the death of writing may have subsided, but a consensus seems to be emerging that it can and should be used to abbreviate the writing process, generating content to be refined by a human editor.

That doesn’t sound unreasonable, perhaps. Let AI do the hard work by researching the material and churning out the first draft, and leave the interesting part, embellishment, to us.

But I want to be unreasonable about this. I don’t think we should use AI for any of the major steps in the writing process. Perhaps we might use it as a sophisticated search tool to unearth possible source material. That, however, is as far as it should ever go.

For writing is the means by which we think, by which we make sense of the world, make sense of ourselves. By outsourcing writing to a machine, we outsource thought itself. To see that, consider the difference between artificial and human intelligence.

Artificial and human intelligence – the fundamental difference

The large language models (LLMs) that power systems like ChatGPT are remarkable technological achievements. LLM engines trawl vast oceans of digital text to recognise patterns in language: ChatGPT was ‘trained’ on hundreds of billions of words. Text is broken into units, known as ‘tokens’, and converted into arrays of numbers that machines can process, which allows LLMs to estimate the probability of one word following another in a sequence. By grasping how words are commonly used, LLMs can make sense of the questions we put to them and formulate seemingly intelligent responses.
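The next-word prediction described above can be illustrated in grossly simplified form. The sketch below is a toy bigram model that predicts the next token from raw frequency counts; real LLMs use neural networks over learned token embeddings rather than counts, and the tiny corpus and function names here are invented purely for illustration.

```python
from collections import Counter, defaultdict

def train_bigram_model(text):
    """Count how often each token follows each other token."""
    tokens = text.lower().split()  # crude whitespace 'tokenisation'
    counts = defaultdict(Counter)
    for current, following in zip(tokens, tokens[1:]):
        counts[current][following] += 1
    return counts

def next_token_probs(counts, token):
    """Convert raw counts into a probability distribution over next tokens."""
    followers = counts[token]
    total = sum(followers.values())
    return {t: c / total for t, c in followers.items()}

corpus = "the cat sat on the mat and the cat slept"
model = train_bigram_model(corpus)
print(next_token_probs(model, "the"))  # 'cat' is twice as likely as 'mat'
```

Even at this miniature scale the mechanism only reproduces patterns already present in its training text; nothing in it knows what ‘cat’ means.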

This accounts for the uncanny sense we have when interacting with AI that we are conversing with another person. The impression is reinforced by the interface through which the technology is presented.

When inviting a question, ChatGPT refers to itself in the first person. The cursor blinks when we enter our question, giving the impression of cogitation. When responding it types one character at a time, just like we do. It appears to engage in conversation, refining its answers in response to our follow-up questions.

LLMs can even be tailored to emulate our own writing style. One Silicon Valley start-up, Writer, develops bespoke apps trained on the words of individuals, brands or organisations. Big companies like Vanguard, Accenture and Dropbox are already using custom-built versions of Writer to draft corporate communications such as blog posts, news releases, CEO messages and product descriptions.

All very impressive. And yet behind the sales pitch and the pleasing interface, the absolute difference between the way we compose text and the way AI composes it remains.

At no point do even the cleverest AI systems do any actual thinking. Quite simply, AI lacks the force that drives language: sentience, human subjectivity, our inner life.

The words we use are compelled by our life as embodied beings, our senses, desires, fears, hopes, dreams. Machines, networks of circuits and transistors, can only access the surface of language, only imitate its use.

Artificial intelligence can only ever be a stylised version of human intelligence. Computers can emulate aspects of human thought. They parse, deduce, calculate, and follow routines supremely well – far faster than we do. But they quite literally do not know what they are doing.

This absence of consciousness, this profound lack, goes to the heart of AI’s limitations. AI is necessarily backward looking. Trained on existing data, on what has already been said, it cannot look to the future and form purposes like we do. And it lacks judgement.

AI has a tendency to hallucinate, to see patterns in data that are not there. It cannot detect bias, but only reflect the prejudices of the data it has been trained upon. And it is blissfully unaware of copyright issues, simply stealing from the material on which it feeds without discretion.

ChatGPT as poet

AI’s limitations are most starkly exposed when it tries to move from rule-bound to creative uses of language. Consider, for example, its attempts to write poetry.

Certainly, AI can emulate the technical aspects of poetry. It can identify form, producing sonnets, quatrains and tercets, and recognise rhyme, metre, alliteration, assonance and chime. It can even identify common metaphors.

But the poems it generates are just collections of words arranged in a certain order, organised into certain forms. They are not inspired by vision, what writers call poetic truth.

Indeed, whereas AI looks for the expected word, poets look for the unexpected. As W.B. Yeats, quoted by the war poet Sidney Keyes, put it, the poet searches for ‘the intellectually surprising word which is also the correct word.’ That’s because poets do not seek to explain the world, but capture something of its strangeness, a strangeness only we can experience.

Whatever intellect machines might have, it is fundamentally different from that of humans. Computers, plants, animals and people have their own logics, their own ways of being in the world, which do not intersect. As the philosopher Ludwig Wittgenstein said, ‘If a lion could talk, we could not understand him.’

Our capacity and desire to use language to explore the world indicates the most serious consequence of farming out writing to AI. Doing so severs the intimate link between thought and language. When we write, we test the limits of our understanding, open up meaning through metaphor, develop our own voice, our own way of seeing.

That applies to every kind of writing. The poet attempts to capture something of the complexity of experience, the journalist to record events, the columnist to comment on public life, the diarist to reflect on the events of the day, the business writer to think through a company or organisation’s purpose.

AI abbreviates or simply cancels the process – research, drafting, editing – through which meaning emerges. By entering a question and clicking a button we hand over all of that to the machine. We submit to the algorithm and accept whatever it produces.

Writing requires work, but that is the point: it is the work of being fully human. Writing has a moral dimension. As the writer Tom Albrighton puts it in AI Can’t Write But You Can, a sharp polemic with which I am very much in sympathy: ‘To give up the labour of writing is to give up the labour of living.’

A battle worth fighting

Unfortunately it seems that ever more use will be made of AI writing tools. The promise they hold for increased productivity is too enticing. One striking new survey finds that 92% of UK undergraduates now use AI to some extent, up from 66% last year.

Those of us who want to defend the art of writing are going to have to make the case for it, repeatedly. We have to hope that the limitations of AI writing apps will become evident, and that they will take their place in an ecosystem of digital tools that complement real writing, like spell checkers and online dictionaries. I can see that ChatGPT has its uses as a sophisticated search tool for dredging up raw content.

But if we allow it to do more than that the quality of the written word will degrade, tending, at best, towards a bland pastiche of what has been written before. And we will weaken our capacity to construct an argument, to respond imaginatively to the world, to find our own voice, to think for ourselves.

The image above is based on a detail from The Passion of Creation by Leonid Pasternak.

This post is also available on Substack.