Will Artificial Intelligence Replace Writers?

We assure you a robot did not write this article.

Automation is coming for us all. While the sci-fi dream of robots doing our hard labor promised a utopian life of leisure, the reality has been that computers are taking jobs and leaving workers in search of other ways to make money. For a while, artists considered themselves the exception to this rule. Surely a robot could never paint the Sistine Chapel, and an AI writer could never replace Shakespeare. Yet as technology has progressed, it’s become less clear whether those statements are actually true.

Sure, there are plenty of philosophical questions about whether a machine could ever create “art,” but artificial intelligence has made real strides in the realm of writing recently. Looking at the past, present and future of machine-generated writing reveals new, strange possibilities at the intersection of art and technology. Even if robots don’t entirely replace writers, there’s a good chance the technology will change how a lot of them do their jobs.

A Brief History Of AI Writers

Early computer history tended to be very technical. In the days when computing meant working with massive machines and the idea of “programming” was relatively new, it was only scientists and researchers who had access to them. It goes to show just how intertwined computers and text are that as soon as a commercial, accessible computer was put on the market, text generation wasn’t far behind.

That commercial computer was the Ferranti Mark 1. In 1953, a programmer at Manchester University named Christopher Strachey used this machine to create what is likely the first AI writer ever: a program that generated love letters. As you might imagine, they were pretty simple. Each letter followed the exact same formula: “You are my [adjective] [noun]. My [adjective] [noun] [adverb] [verbs] your [adjective] [noun].” The computer would then plug words into the bracketed slots from a preset vocabulary list. Simple as it was, the machine could theoretically create 300 billion unique letters. Not all of them made much sense, but it was an interesting early experiment.
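
If you’re curious what that looks like in practice, here’s a rough Python sketch of a Strachey-style template filler. The word lists are invented for illustration and aren’t Strachey’s actual vocabulary.

```python
import random

# Hypothetical word lists for illustration; Strachey's real vocabulary was different and larger.
ADJECTIVES = ["darling", "precious", "tender", "wistful", "ardent"]
NOUNS = ["heart", "desire", "affection", "longing", "fancy"]
ADVERBS = ["keenly", "fondly", "breathlessly", "anxiously"]
VERBS = ["cherishes", "adores", "treasures", "craves"]

def love_letter() -> str:
    """Fill the fixed template with randomly chosen words."""
    return (
        f"You are my {random.choice(ADJECTIVES)} {random.choice(NOUNS)}. "
        f"My {random.choice(ADJECTIVES)} {random.choice(NOUNS)} "
        f"{random.choice(ADVERBS)} {random.choice(VERBS)} "
        f"your {random.choice(ADJECTIVES)} {random.choice(NOUNS)}."
    )

print(love_letter())
```

Every letter has the same skeleton; only the words swapped into the slots change, which is why the output could be both astronomically varied and frequently nonsensical.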

By the early 1960s, machine text generation was becoming more common, if glitchy. In 1962, CBS was putting together a TV spot called The Thinking Machine spotlighting a program called SAGA II. For the broadcast, the programmers had SAGA II produce screenplays, which CBS then had actors perform. The writing SAGA II was capable of was entirely rule-based, so the output was limited. The programmers had to create the setting and characters (in this case, an Old West story with a robber and a sheriff), and the machine would generate a story from a set of hand-written possibilities. It was the equivalent of a machine working its way through a choose-your-own-adventure story. While probably not impressive by today’s standards, it was the first TV spot ever written by a computer.
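
To give a sense of how limited “rule-based” really is, here’s a toy Python sketch in the spirit of SAGA II: the programmer writes every possible story beat by hand, and the machine simply picks one option at each step. The specific beats below are invented for illustration.

```python
import random

# Each entry is a list of hand-written possibilities for one step of the story.
# The machine's only job is to choose one option per step.
STEPS = [
    ["The robber rides into town.", "The robber sneaks in after dark."],
    ["The sheriff spots him at the saloon.", "The sheriff is warned by the bartender."],
    ["They draw their guns.", "The robber tries to flee."],
    ["The sheriff wins the duel.", "The robber escapes with the gold."],
]

story = " ".join(random.choice(options) for options in STEPS)
print(story)
```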

The next big leap forward in computer writing was ELIZA, a prototypical therapy program designed by Joseph Weizenbaum at MIT in 1966. It was essentially a chatbot that users could type messages to. While the Manchester University program drew from a preset list of words, ELIZA was able to take user input and identify keywords. An input like “I’m having a problem with my father” might prompt ELIZA to respond with “Tell me more about your father,” mimicking a specific kind of conversational therapy. ELIZA was an early test case for the Turing test, which was designed to see if a computer could communicate realistically enough to convince people that it, too, was human. ELIZA didn’t pass the Turing test (to date, computer scientists say nothing has), yet many people still found it compelling. Some would continue using it to talk through their issues, proving that even simple chatbots can influence humans.
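
Under the hood, ELIZA’s trick was mostly keyword matching. Here’s a heavily simplified Python sketch of that idea; the keywords and canned responses are invented for illustration, and the real program used far richer scripts, including pronoun transformations.

```python
# Map keywords to canned, reflective replies (illustrative only).
RULES = {
    "father": "Tell me more about your father.",
    "mother": "Tell me more about your mother.",
    "problem": "What kind of problem is troubling you?",
    "always": "Can you think of a specific example?",
}

def respond(user_input: str) -> str:
    """Scan the input for a known keyword and return its canned reply."""
    text = user_input.lower()
    for keyword, reply in RULES.items():
        if keyword in text:
            return reply
    return "Please go on."  # default when nothing matches

print(respond("I'm having a problem with my father"))
```

The program doesn’t understand a word of what you type; it simply reflects your own statements back at you, which turned out to be surprisingly compelling.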

Once again, years passed with people creating fascinating but severely limited text bots. In 1976 came an exciting leap forward with James Meehan’s Tale-Spin. This program was able to take variables supplied by a user (the characters, the setting and the “goal”) and generate complex stories, albeit within certain constraints. It’s somewhat like a choose-your-own-adventure story, except that changing any one variable changes the entire story. The output read more like a children’s book than a literary novel, but it paved the way for countless advances in AI storytelling.

In the 1980s, one of the leading trends in text generation was the “travesty generator.” These generators used a probabilistic model called the Markov chain: they analyzed a large body of text to work out which word was most likely to follow the one before it. For example, “cold” is more likely to be followed by “feet” than by “sun.”
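
Here’s a minimal Python sketch of that idea, trained on a tiny made-up sample rather than the large texts the 1980s travesty generators chewed through.

```python
import random
from collections import defaultdict

# Tiny illustrative "corpus"; real travesty generators analyzed much larger texts.
corpus = "cold feet and cold hands and warm sun and cold feet again".split()

# Build the chain: for each word, record every word that has followed it.
chain = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    chain[current_word].append(next_word)

def generate(start: str, length: int = 8) -> str:
    """Walk the chain, picking each next word in proportion to how often it followed the last."""
    word, output = start, [start]
    for _ in range(length):
        followers = chain.get(word)
        if not followers:
            break
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

print(generate("cold"))
```

Because “feet” follows “cold” more often than “hands” does in this sample, the generator picks it more often, which is exactly the kind of word-to-word probability the travesty generators relied on.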

These early travesty generators spit out largely nonsensical text, and were often used to mock jargon-heavy scientific writing or impenetrable authors. Yet this same kind of Markov chain is still in use today. When you’re texting on an iPhone and it offers options for what you might type next, that’s a Markov chain drawing on a huge number of probabilities to predict the next word. If you only used the predictive text, however, you’d still end up with nonsensical sentences.

The Arrival Of GPT-3

At the risk of oversimplifying, we’re jumping from the 1980s all the way to the present. While countless other text generators have been developed over the past few decades, the conversation right now is dominated by a single model: OpenAI’s Generative Pre-trained Transformer 3, or GPT-3 for short. It’s one of the most advanced systems to date, and its effect has been both incredible and a bit frightening.

GPT-3 is the third of its kind, as you might guess from the name, and it uses a kind of artificial intelligence known as deep learning. If you’ve read anything about artificial intelligence lately, it was likely about deep learning, as it’s one of the most promising developments in making machines really “think.” The approach is loosely modeled on the neural networks of the human brain, and it allows the model to learn from enormous datasets. Whereas SAGA II’s Old West scripts discussed above were built from just dozens of parameters set by hand by the programmers, GPT-3 has 175 billion. It’s been trained on a massive corpus that includes English-language Wikipedia, millions of books and more.

The result is that, more than anything that came before it, GPT-3 is able to create convincingly human writing. In 2020, The New York Times had GPT-3 write a series of pieces for Modern Love, the paper’s column about relationships and feelings. The results are somewhat hit-or-miss. The biggest, most obvious miss is that in the last column it got caught in a loop, writing “dinner and drinks and dinner and drinks” over and over again. Still, with just a little prompting it was able to spit out readable stories. It’s not Shakespeare yet, though perhaps someday it will be.

GPT-3 isn’t available for just anyone to use, but you can get a taste of it through Sudowrite, an app that pitches itself as a way to bust through writer’s block. You start telling a story, and Sudowrite uses GPT-3 to suggest where your tale might go next. It can also rewrite passages you’ve already put down, or help you brainstorm new ideas.

As of now, GPT-3 is being used to power hundreds of apps. As for its ability to write things from scratch, however, it’s still more of a novelty than a novelist. The writing that it creates is at times surreally good, but it too often falls into a kind of uncanny valley where readers can tell something is off. It’s close enough to good writing to start a lot of conversations, without yet having replaced any actual writers (that we know of, at least).

The Future Of Writing

Since the creation of GPT-3, discussions about how it will affect writing have popped up. Some say that GPT-3 can never replace human writers because it’s not really creating anything “new”; it’s just rearranging what’s been written before (which, to be fair, could arguably be said about human writers). Others fear that automation will indeed put countless writers out of work. Literary authors might be safe, but journalists and technical writers could be heavily affected. The reason OpenAI hasn’t released GPT-3 to the general public is fear that bad actors could use it to create massive amounts of fake news.

Between those two extremes is a third option: the coexistence of writers and AI. One of the most fascinating uses for GPT-3 so far comes from writer Vauhini Vara, in an essay for The Believer. In “Ghosts,” she used GPT-3 to write about her sister, who died of Ewing sarcoma when Vara was younger. Vara had avoided ever writing about the topic before, and she wanted to see if this new tool could help her process the feelings she was afraid to confront. In the end, she didn’t create just one essay but nine different versions, each time feeding in more details from her life as she remembered them. By the end of the process, she had written the piece almost entirely by herself.

Early versions of Vara’s essay were strange, with GPT-3 adding random details that had no basis in fact. The more information she supplied, however, the closer GPT-3 came to writing something that resembled reality. In a This American Life interview, Vara talked about this odd co-authorship, and how even though her final version was written almost entirely by herself, GPT-3 had a hand in it. She said, “Even though it would feel good to say, like, yes, my conclusion here is, I was able to do this entirely on my own. I didn’t need the AI at all. It’s like, the reality is more complicated than that.”

If we remain optimistic, artificial intelligence may ultimately prove to be a tool for human writers to use. One of the creators of Sudowrite has suggested that the job of a writer may come to look more like that of an editor, shaping AI-generated text into works of art. That may not sound ideal to everyone, but it is one alternative to the black-and-white choice between “AI will replace writers” and “AI will never replace writers.”

All that said, the next decade is likely to transform writing completely, creating both helpful solutions and new problems. While we haven’t discussed it much here, the use of AI to create mountains of disinformation is a real threat. Computer writing is only going to get better as the models absorb more data and add more parameters. The question we have to answer now is: How can we use this to make the world a better place?
