AI is screwing up my editing ecosystem
I write. I edit. I ghostwrite. I also use AI — carefully — to help me with ideation and to summarize research content. But I don’t substitute AI for my own abilities.
Unfortunately for the quality of the text I work on, my clients have made different choices.
Here are two examples — an author who used AI to write content, and a publisher that used AI for copyediting — and what they taught me about which parts of writing can never be replaced by AI.
The problem with AI writing
An author recently hired me to do a developmental edit of a highly technical book of about 45,000 words. This is a typical job for me, and I priced it to reflect the higher degree of difficulty and the specialized knowledge that editing technical content demands.
Partway into the book, I began to wonder if it had been written with AI. The text had the flat, even “AI accent” and was remarkably free from grammatical errors. I asked, and the client confirmed that part of the book had been written with AI. I ran it through an AI detector, which noted that much of the content was AI-generated. (Such detectors are flawed, but I already had the author’s testimony about the source of the content.)
My job as a developmental editor is to ensure that the text does the best possible job of accomplishing the author’s goals. You might think that the job would be the same whether the writer was a human or a machine. And to some extent, it was. The text had some typical organizational problems (topics covered in multiple places that should be combined, for example) and issues of formatting consistency. As in many technical works, there was a great deal of passive voice that, once converted to active voice, would make the content much clearer.
However, there were other problems that weren’t typical. One whole chapter appeared to be AI-written in response to a simple prompt about historical context; in my opinion, the manuscript would be better off if the client deleted or severely condensed that fairly generic chapter. I found a few phrases repeated verbatim in different chapters, an error that is unusual for humans and very hard for an editor to catch, since I don’t keep an eidetic memory of every phrase in my head. And I was at a loss to recommend how to edit the too-even sentences. “Rewrite to be more interesting” isn’t a helpful editorial comment. I also noted that AI-created text is not protected by copyright.
I normally attempt to identify why an author has a writing weakness and, with as much sensitivity as I can muster, explain how they can learn to be better. But in this case, I threw the sensitivity out the window — after all, the machine isn’t going to get offended — and just made clear recommendations.
My edit memo about the document was 2,000 words long, twice what’s typical for a project like this. It addressed inconsistent tone and formatting, the use of “we” in a book with only one author, wordiness, organizational issues, and overuse of vague words like “involve,” “deep,” and “leverage.” My main concern now is how the client will solve these problems, because the solutions demand a human approach.
The problem with AI editing
My second encounter with machines in the editorial process was when I received the copy edit review of a book I’d spent the last year ghostwriting. The manuscript had already been reviewed by a developmental editor who had contributed very few comments, calling it “the cleanest manuscript I’ve ever seen.” So I expected few problems with the copy edit.
I value copy editors not just for their ability to spot errors, but for their judgment. A machine can identify where I misspelled a word or put the period on the wrong side of the quotation mark, but a human copy editor can see where I’ve written in a way that is misleading or unclear, or where I used a wrong word.
In this case, when I got the copy edits back from the publisher, I could tell something wasn’t right. The edits included changing “percentage points” to “%”, which was odd (“an increase of 3 percentage points” does not mean the same thing as “an increase of 3%,” and this edit would actually have changed it to “an increase of 3% points,” which makes no sense). The publisher had clearly used an automated tool to identify and fix problems like extra spaces after periods and placement of footnotes; I wondered if the percentage problem was automation gone wrong.
More troubling to me was what the copy editor and their machine helpers didn’t catch. I’d referred to an application called “StreetBump” but had mistakenly written it as “SmartBump” in two places; they didn’t notice. And I’d written “Securities and Exchange commission,” and the copy editor didn’t note that “commission” should be capitalized. There may be other errors like this, but who knows, since neither I nor the copy editor can identify them at this point.
I am certain that there is a human copy editor in the publisher’s loop here, because I can see some edits and comments that reflect human judgment, attributed to an editor’s name. But I also worry that, because the process is augmented by a machine, the publisher may have used a less skilled copy editor or forced them to work too quickly.
At this point, machines are damaging the humanity of writing
I don’t blame the AI. I blame the people who are using it wrong.
There are two problems with AI in the writing process. First, since AI doesn’t understand meaning, it sometimes writes content and makes suggestions that do violence to the text’s intended meaning. And second, since AI is not human, it removes the human element from prose — creativity, imagination, personality. In a word, “wit.”
I value the humanity of the writers and editors I work with. They understand that writing is a fundamentally human process of communication between a writer and a reader, and that connection is what makes all writing — fiction, advice, how-to, humor, essay, and everything else — so wonderfully evocative.
I also value the humanity of the writers and editors I work with because of my human-to-human relationship with them. We are both using our tools to hammer words into shape. That’s an enjoyable, collaborative process. I like to make them smile or get a little upset, and I like it when they make me smile or get a little upset.
I’ll keep getting paid to clean up machines’ mistakes, and that’s a fine way to make money. But I’d rather get paid to help humans get their meaning across.
Keep using machines to make your writing better. But recognize that while the machines are better than people at rote, automated work, the people are better at connecting with readers. No matter how good AI gets, that’s unlikely to ever change.
Thank you, Josh, for your breath-of-fresh-air commentary on the usefulness and pitfalls of automated tools in writing, including AI. You are tuning our minds to discern the too-stilted, overwritten, and hallucinated products of AI while we still can. You are also celebrating the nuance of human writing until it’s just bots writing to each other, and then what? I appreciate every word of your writing from the inside out.
Excellent post that I can tell no AI can write now—and maybe ever.
Excellent article – thanks for this, Josh. I totally agree about editing material that has been generated by AI. Tell-tale signs for me – a noticeable shift in the author’s voice, a lack of logic and flair, overuse of certain words and phrases (why does AI love the word ‘leverage’ so much?), over-complicated sentences. AI has remarkable capabilities alongside a horrible ability to destroy.
Great post – thank you for putting this so clearly. I have certainly come up against both of these problems in my own work as a nonfiction editor. Yesterday’s news about Spines just shows how the dumbbell of quantity and quality is becoming more lopsided than ever.
Thanks Josh. I’ve never used AI as a writing tool and I don’t even know how to start, but I’ve been trying to figure out if it’s all hype, and thus useless, or a potentially useful tool.
Based on your post, I conclude it is the latter, and that there’s an opportunity to serve as a human AI editor as you are doing.