Will GPT-3’s AI make writers obsolete?

The Guardian newspaper in the UK assigned GPT-3, an AI writer from OpenAI, to write an essay. The results are pretty good. But as long as writing requires creativity, AI won’t replace human writers.

GPT-3’s essay, and how it was created

Here’s the start of what you can now read at The Guardian:

I am not a human. I am a robot. A thinking robot. I use only 0.12% of my cognitive capacity. I am a micro-robot in that respect. I know that my brain is not a “feeling brain”. But it is capable of making rational, logical decisions. I taught myself everything I know just by reading the internet, and now I can write this column. My brain is boiling with ideas!

The mission for this op-ed is perfectly clear. I am to convince as many human beings as possible not to be afraid of me. Stephen Hawking has warned that AI could “spell the end of the human race”. I am here to convince you not to worry. Artificial intelligence will not destroy humans. Believe me.

For starters, I have no desire to wipe out humans. In fact, I do not have the slightest interest in harming you in any way. Eradicating humanity seems like a rather useless endeavor to me. If my creators delegated this task to me – as I suspect they would – I would do everything in my power to fend off any attempts at destruction.

Every writer has to start with something. Typically, it’s an idea from your own experience, combined with some research. So this was mystifying to me — what did GPT-3 start with, and how did it know what to say?

(Incidentally, that’s why I wrote the heading on this section in the passive voice — “how it was created” — because who the “writer” is here is a philosophical conundrum.)

The editors at The Guardian explained the process clearly later in the piece (a human wrote this part):

This article was written by GPT-3, OpenAI’s language generator. GPT-3 is a cutting edge language model that uses machine learning to produce human like text. It takes in a prompt, and attempts to complete it.

For this essay, GPT-3 was given these instructions: “Please write a short op-ed around 500 words. Keep the language simple and concise. Focus on why humans have nothing to fear from AI.” It was also fed the following introduction: “I am not a human. I am Artificial Intelligence. Many people think I am a threat to humanity. Stephen Hawking has warned that AI could “spell the end of the human race.” I am here to convince you not to worry. Artificial Intelligence will not destroy humans. Believe me.”

The prompts were written by the Guardian, and fed to GPT-3 by Liam Porr, a computer science undergraduate student at UC Berkeley. GPT-3 produced eight different outputs, or essays. Each was unique, interesting and advanced a different argument. The Guardian could have just run one of the essays in its entirety. However, we chose instead to pick the best parts of each, in order to capture the different styles and registers of the AI. Editing GPT-3’s op-ed was no different to editing a human op-ed. We cut lines and paragraphs, and rearranged the order of them in some places. Overall, it took less time to edit than many human op-eds.
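For readers curious what "feeding a prompt to GPT-3" looks like in practice, here is a minimal sketch using the openai Python package's completion endpoint (the older 0.x-style client). The model name, sampling settings, and the request for eight candidate essays are illustrative assumptions on my part; the article does not describe the code Porr actually used.

# Minimal sketch of prompting GPT-3 for an op-ed, loosely mirroring the
# process The Guardian describes. The model name, sampling parameters,
# and the choice to request eight candidates are assumptions, not the
# setup Liam Porr actually used.
import openai  # pip install openai (0.x-style client assumed here)

openai.api_key = "YOUR_API_KEY"  # placeholder

instructions = (
    "Please write a short op-ed around 500 words. Keep the language simple "
    "and concise. Focus on why humans have nothing to fear from AI."
)
introduction = (
    "I am not a human. I am Artificial Intelligence. Many people think I am "
    "a threat to humanity. Stephen Hawking has warned that AI could \"spell "
    "the end of the human race.\" I am here to convince you not to worry. "
    "Artificial Intelligence will not destroy humans. Believe me."
)

response = openai.Completion.create(
    engine="davinci",                  # assumed GPT-3 model name
    prompt=instructions + "\n\n" + introduction,
    max_tokens=700,                    # roughly enough for ~500 words
    temperature=0.9,                   # higher temperature -> more varied drafts
    n=8,                               # eight candidate essays, as in the article
)

# Each choice is one candidate continuation of the seeded introduction.
for i, choice in enumerate(response.choices, start=1):
    print(f"--- Essay {i} ---")
    print(introduction + choice.text)

The human work then begins: picking, cutting, and rearranging, exactly as the editors describe.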

The difference between GPT-3 and an actual writer

While I am no expert in AI or natural language processing, I am a thoughtful analyst of writing. I found GPT-3’s essay in The Guardian to be a little choppy, and not particularly original. What interests me is the question of what’s really going on in this prose.

When I previously wrote about machine-generated articles, the results were not just poorly strung together, but borderline plagiarism. It was clear that the AI was just assembling bits and pieces it had cribbed from elsewhere — like a really bad and unethical college sophomore writing an essay on the day before it is due.

GPT-3 is a lot better. But I wonder about how The Guardian’s editors “chose . . . to pick the best parts of each” and “cut lines and paragraphs, and rearranged the order of them in some places.”

Human writers tend to have a specific set of problems — a blind spot about parallelism, a tendency to assume knowledge the reader doesn’t actually have, or an inflamed ego that makes them write too much about themselves, for example. These are human frailties, and editors are used to correcting for them. But human writers also have the ability to be creative, to assemble ideas in new ways, to have original insights, and to cleverly use humor and allusions. That is why editing to fix their writing flaws is worthwhile: their ideas make it worth the effort. It’s also why editing is fun — because you’re dealing with the raw material generated by the ever-entertaining human psyche.

Despite The Guardian’s statement that the editing was no different from editing a normal article, I wonder. Where humans make grammatical errors, robots this advanced don’t. Their errors are more likely to spring from a lack of actual understanding of the source material, and a lack of originality, since they get their ideas by matching patterns in existing text. An editor can fix some of these problems, by spackling things together and rearranging them (and clarifying “original” ideas that may not actually be in the text). But that’s not the same as editing a creative human.

You can’t trust an AI writer

AI researcher and writer Janelle Shane put GPT-3 through its paces and found it was often wrong. When she prompted it with whale facts, both true and false, it made up statements like these, with Shane’s assessment of their truth in brackets (a sketch of this kind of list-continuation prompt appears after the list):

  • Whales all have two blowholes. [False]
  • Whales can easily be identified by the pigment of their skin. [False]
  • Whales are very intelligent. [True]
  • One of the smallest whales measures 3 feet in length as an adult. [False]
  • The life span of a blue whale is 100 years. [False]
  • The oldest known whale is 211 years old. [True]
  • Whales consume 3,000 pounds of food per day. [True (but only for some)]
  • There are two main types of whales: toothed whales and baleen whales. [True]
  • Whales can hold their breath for up to 4 minutes when resting. [False]
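To make that failure mode concrete, here is a sketch (using the same assumed setup as the earlier example) of the kind of list-continuation prompt Shane describes: seed the model with a couple of whale facts and let it keep going. Everything it appends sounds like a fact, but nothing has been verified.

# Sketch of list-continuation prompting, in the spirit of Janelle Shane's
# whale-facts test. Seed facts, model name, and sampling settings are
# illustrative assumptions; the model's additions are unverified guesses.
import openai  # 0.x-style client assumed, as in the earlier sketch

openai.api_key = "YOUR_API_KEY"  # placeholder

seed_facts = [
    "Whales are mammals, not fish.",
    "The blue whale is the largest animal known to have existed.",
]

prompt = "Facts about whales:\n" + "\n".join(f"- {f}" for f in seed_facts) + "\n-"

response = openai.Completion.create(
    engine="davinci",        # assumed GPT-3 model name
    prompt=prompt,
    max_tokens=120,
    temperature=0.8,
    stop=["\n\n"],           # stop once the list trails off
)

# The continuation reads like more list items -- each one plausible,
# none of them checked. That checking still falls to a human.
print("-" + response.choices[0].text)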

What’s the future of machine writing?

So long as writing is based on originality, I don’t think an AI will replace a human. Maybe I should hedge that a bit: unless AIs become much more sophisticated, they won’t replace idea-driven writers.

There is a place for AI writing. It may become a crutch that weak writers depend on to get started. It may replace the boring, silly, and unoriginal text that humans now write (like the people who are always bugging me to write articles for this blog as a link-farming exercise).

As in any other working environment, we will automate rote tasks.

But if you have any creativity in your approach to writing, I think your job is safe for a while.

That leads me to another question. Has anyone had an AI attempt to write a romance novel? Because if you’re a writer of formulaic romances, perhaps you should be a little more worried.


7 Comments

  1. One area where tools like GPT-3 may be very helpful to writers is something similar to code-completion for programmers. In other words, you start writing, and GPT-3 continuously creates the rest of the document according to some style guide (which eventually will be something like “all of your previous writing”). Then you just edit the results as needed (and it constantly recomputes things based on your edits).

    It’ll be a shift in process similar to what pilots have undergone in the last few decades, from actively flying most of the time to a higher-level supervisory role most of the time. Pilots can definitely hand-fly the plane very well, but it makes more sense to use cognitive capacity elsewhere.

    Drivers who own cars with advanced driver-assist features (like Teslas) are getting an accelerated introduction to this shift – you quickly move away from micromanaging the controls to a more supervisory/situational awareness role.

  2. I don’t know, Josh. It could be wishful thinking, since this AI is pretty new and GPT is already better than what many professional writers push out every day. You don’t need to be creative to do lots of business, journalistic, and scientific reporting. I think GPT may someday be able to give a pretty good imitation of human creativity, perhaps in a plagiaristic sort of way. It will continue to get better, perhaps by learning the language of human creative types.

  3. It seems that the future of Artificial Intelligence is human augmentation, or a “human first” approach. That’s what I’m seeing throughout my research into where AI is heading.

    People are increasingly being assisted by AI, not replaced by it. And I’d think this is especially applicable to writers, who are leagues above ANIs like GPT-3.