
AI tools write with an “accent.” Here’s how to detect it

AI tools like Claude, ChatGPT, and Perplexity generate text intended to mimic human writing. They’re pretty good writers. But they write with telltale signs that they’re not human; you might say they write with an “accent.”

Any native speaker can tell when another speaker’s first language doesn’t match the one they’re speaking. If you’re speaking English, I can make a pretty good guess from your accent whether your first language was French, Spanish, German, Dutch, Mandarin, or Hindi. (And of course, anyone listening to my atrocious French is never going to mistake me for a Frenchman.)

Similarly, I’ve reviewed and edited millions of words of prose. I’ve also reviewed thousands of words of AI-generated text. As a result, I can nearly always tell the work of a decent human writer from that of an AI. There are still some elements of writing like a human that AIs can’t mimic — that’s their “accent.”

Telltale signs that you’re reading AI-generated writing

These are the signs that you’re reading the work of an AI:

  • There’s no drama. Good writing sets the reader up, takes them on a journey, and lands them at a destination. It is persuasive because it is both surprising and, ultimately, satisfying. Even nonfiction writing, like reports, is best when it pulls the reader along. AI writing never seems to do this. This is the biggest reason that human writers aren’t going away: all decent writers know how to do this, and no AI tools do.
  • The sentences and paragraphs are tonally even. Most AI writing uses paragraphs of similar length. The sentences are typically of similar length, too. Human writers will often insert a single short sentence or paragraph for dramatic emphasis — for example, after describing a common misconception, “This is a dangerous delusion.” I’ve never seen an AI write that way. (For one rough way to put a number on this evenness, see the sketch after this list.)
  • There are obvious mistakes in meaning. As investigative reporter Maggie Harrison Dupré discovered in her analysis of AI-generated product reviews, AI tools make mistakes no human would make. For example, she describes a review of weight-lifting belts, a gym accessory, that suddenly starts talking about fashionable belts by companies like Gucci. It’s a philosophical question whether large language models (LLMs) actually understand meaning, but even if they do, they make errors due to limits of that understanding.
  • Ideas are repeated. LLMs tend to get stuck on an idea and then explain it repetitively. For example, when I asked ChatGPT to explain why brevity in writing was important, it told me, in successive paragraphs, that “Brevity in writing is essential because it helps to maintain the reader’s attention,” “Concise writing ensures that the reader remains engaged with the content,” “When writers eliminate superfluous words and focus on the core message, it becomes easier for readers to understand,” and “Brevity respects the reader’s time.” This is basically one idea repeated four times.
  • There are no grammatical errors. All writers make grammatical errors. In some cases this is on purpose (see David Sedaris’ Me Talk Pretty One Day). But AI writing always follows the grammar rules slavishly.
  • There’s no humor. AI is really bad at making jokes. Satire is lost on it. It has no wit.
  • The word choices are not creative. All human writers have words they’re in love with. One writer I edited was enamored with “leverage.” Another loved “loquacious.” And good writers often choose words as much for the sound as for the meaning: they want their passages to sound interesting in your mind’s ear. As a result, when you read a human writer, you often think, “Ah, interesting choice of word for that.” AI tends to stick to a common vocabulary that’s not creative and sounds even and uninteresting.
  • There are internal inconsistencies. Oddly, given their perfect grammar, LLMs will often contradict themselves in a single passage or conversation. They’ll say that a peregrine falcon isn’t a mammal, then assert that it is. All writers make mistakes. But humans rarely state contradictory facts in the same piece of writing, because they’re writing from their internal understanding of truth, not to mimic human speech and knowledge patterns.
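
If you want to put a rough number on the evenness described above, here’s a minimal sketch in Python. It’s my own illustration of the idea, not a method from the post or from any detection tool: it measures how much sentence and paragraph lengths vary in a text, on the assumption that unusually uniform lengths are one weak signal among many, not proof of anything.

    import re
    import statistics

    def length_variation(text):
        # Paragraphs are blank-line separated; sentences come from a rough split on end punctuation.
        paragraphs = [p for p in text.split("\n\n") if p.strip()]
        sentences = [s for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

        def cv(word_counts):
            # Coefficient of variation: stdev / mean. Lower means more uniform lengths.
            if len(word_counts) < 2:
                return 0.0
            mean = statistics.mean(word_counts)
            return statistics.stdev(word_counts) / mean if mean else 0.0

        return {
            "sentence_cv": cv([len(s.split()) for s in sentences]),
            "paragraph_cv": cv([len(p.split()) for p in paragraphs]),
        }

    # Compare a suspect text's scores against writing you know a person produced;
    # the claim above predicts the suspect text will tend to score lower.

Treat the output as a conversation starter, not a verdict; plenty of careful human writers produce evenly paced prose, too.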

Taken together, these are indicators of a robot “accent” — the telltale signs that you’re not reading something written by a human. To be sure, none of this is foolproof. AI writers are improving rapidly. And some human writers make these same mistakes. (I’m reminded of the Yosemite National Park ranger who, describing the challenge of designing critter-proof trash receptacles, explained that “There is considerable overlap between the intelligence of the smartest bears and the dumbest tourists.”)

(Ask yourself, would an AI ever think of including a Yosemite reference in an article about writing? Or generate a self-referential comment like this one?)

At this point, AI-generated writing still lacks important human qualities. And I don’t think that’s ultimately going to change.

AI tools will continue to be useful

None of this is intended to dissuade you from using AI tools to improve your research and your writing. Despite their well-documented problems with “hallucinations,” AI tools can ease a lot of the drudge work. I use them frequently to summarize dense content, identify sources worth pursuing, and solve writing problems.

Just don’t try to pass off their work as your own finished product. We’d all rather read work with wit and drama in it — and humans remain better at that than any machine.


5 Comments

  1. I agree with all that you wrote, and yet, not only do AI tools fail at accurately differentiating between AI and human writing, at least one study shows human teachers don’t do that well either (“In summary, the finding that teachers cannot differentiate between student-written texts and AI-generated texts underscores the need for a thoughtful and ethical integration of AI in education”). https://blog.ohheybrian.com/2024/04/research-can-teachers-identify-ai-writing/

    1. This is at least in part because student writing has many of the same flaws as AI writing – especially when we train the students to produce cookie-cutter content like 5-paragraph essays.

    2. That’s also because today’s teachers are products of a declining education system, and they haven’t had enough training or experience, either. They can’t pass on to students what they don’t know. A friend of mine, a retired 5th grade English teacher in her early 40s, writes middle-grade and young-adult fiction. In her writing, she constantly confuses present and past tense, confuses homophones with homographs with homonyms, and so forth. I don’t blame her – I blame the education system for failing her.

  2. This is a valuable post, for it clearly articulates the differences to watch out for.

    It’s similar but not quite the same as the defining-obscenity issue of 60 years ago:

    [Associate Justice, US Supreme Court] Justice Potter Stewart famously opined in Jacobellis v. Ohio (1964): “I shall not today attempt further to define [obscenity] … and perhaps I could never succeed in intelligibly doing so. But I know it when I see it …” (Thank you, FreedomForum.org)

    When issues like this arise, it is often hard to identify specifics until we’ve all had some experience with the situation.

  3. I have shamelessly stolen your Yosemite park ranger story about the overlap in intelligence of the smartest bears and the dumbest tourists. Recounting that today to my patent attorney, he shot me a link to the patent he helped his clients obtain for the GrubCan. Attached is a link to their website complete with video of bears testing the GrubCan.
    https://tuffystuffy.com/