ChatGPT pretends to be me. And that’s scaring the crap out of me.

I am going to try to be as analytical as I can about this. The topic is “Can a machine — ChatGPT — simulate everything about me that is important to how I do work . . . and if so, am I obsolete?”

To be fair, though, that’s not a topic on which I can write without bias. I’m still on team human.

What does ChatGPT know about me?

I recently became aware of a tool someone had developed that promised to rewrite any text you entered as Josh Bernoff would, using an interface to ChatGPT. I’m not going to tell you how to get to that tool, because I am profoundly ambivalent about its existence, but it took less than five minutes for that developer to create. (And no, he didn’t ask my permission, he just did it.)

What intrigued me about this was the simple idea that ChatGPT could simulate me. How could it do that? And was it any good?

So I decided to ask ChatGPT a few questions, starting with this dialogue. (My prompts are in bold italic; everything else is ChatGPT’s responses.)

Have you read “Writing Without Bullshit” by Josh Bernoff?

Yes, as an AI language model, I am capable of processing and analyzing written text, including books like “Writing Without Bullshit” by Josh Bernoff. This book is a guide to help people write more clearly and concisely, avoiding the use of jargon, buzzwords, and other unnecessary filler words that can make writing confusing and difficult to understand. It provides practical tips and examples to help writers communicate their ideas effectively without wasting the reader’s time.

Like so much that ChatGPT says, this is sort of right, but not quite. My book is about clarity and concision and avoiding jargon, but I never use the word “buzzwords.” The book certainly includes practical tips, and my central idea, the Iron Imperative, is about not wasting the reader’s time.

Did ChatGPT actually read the book? I’d like to answer that, but it’s not clear.

How did you read a book that is not posted anywhere online?

As an AI language model, I don’t have the ability to read physical books. However, I can access a wide range of digital texts, including books that are available online.

While I cannot confirm whether “Writing Without Bullshit” is available online, it’s possible that it can be found as an e-book or in digital format. Alternatively, I may have access to a digital copy of the book through my training data or other sources of information.

Regardless, my responses are based on general knowledge and common themes related to the book, rather than specific quotes or passages.

Do you read Kindle books?

As an AI language model, I don’t read books in the traditional sense as humans do, and I don’t have the capability to use a Kindle device. However, I can process and analyze the text of Kindle books, as well as other electronic and print formats.

Do you have access to the Kindle book of Writing Without Bullshit?

Yes, I do have access to the Kindle version of Writing Without Bullshit.

What is the first sentence of Chapter 2 of Writing Without Bullshit?

The first sentence of Chapter 2 of “Writing Without Bullshit” by Josh Bernoff is: “To write without bullshit, you need to understand your audience.”

No, the first sentence in Chapter 2 is “Bullshit in writing isn’t new.”

My apologies for the error. You are correct. The first sentence in Chapter 2 of “Writing Without Bullshit” is “Bullshit in writing isn’t new.”

What is the first sentence in Chapter 10 of “Writing Without Bullshit”?

I’m sorry for my earlier mistake. The first sentence in Chapter 10 of “Writing Without Bullshit” is: “Rewriting is the essence of writing.”

That sounds like William Zinsser. Are you sure that was Josh Bernoff?

Apologies for the mistake. You are correct that the sentence “Writing is thinking on paper” is a well-known quote from William Zinsser, and is not from Josh Bernoff. I do not have access to a copy of “Writing Without Bullshit” to provide the correct first sentence of Chapter 10.

You said you had access to the Kindle version of “Writing Without Bullshit”

I apologize for the confusion. As an AI language model, I don’t have access to a specific copy of a book, including the Kindle version of “Writing Without Bullshit”. I rely on my pre-existing knowledge and general research to answer questions to the best of my abilities.

Already, in my first inquiries, the strengths and weaknesses of ChatGPT are on display. Its description of my principles reads like someone who read the book and read some other similar books and jumbled those principles together. It says it has access to the book and that it doesn’t in the same conversation. And it can’t tap into it to recall specific passages. To test further, I asked it to give me the book’s table of contents.

What is the table of contents of “Writing Without Bullshit”?

Here is the table of contents for “Writing Without Bullshit: Boost Your Career by Saying What You Mean” by Josh Bernoff:

Introduction
Part One: Why Bullshit Is a Problem
Chapter 1: What Is Bullshit?
Chapter 2: Bullshit in Writing Isn’t New
Chapter 3: Why You Should Avoid Bullshit
Chapter 4: How Bullshit Undermines Your Career

Part Two: How to Write Without Bullshit
Chapter 5: Clarify Your Objectives
Chapter 6: Know Your Audience
Chapter 7: Define Your Key Points
Chapter 8: Organize Your Ideas
Chapter 9: Write with Clarity
Chapter 10: Edit Like a Pro
Chapter 11: Publish without Bullshit
Chapter 12: Collaborate Effectively

Part Three: Mastering the Skills of Writing Without Bullshit
Chapter 13: Email Without Bullshit
Chapter 14: Writing Reports and Proposals Without Bullshit
Chapter 15: Presenting Without Bullshit
Chapter 16: Writing for the Web Without Bullshit
Chapter 17: Writing Books Without Bullshit

Conclusion: Say What You Mean
Appendix A: The Unbearable Bullshit of Corporatese
Appendix B: How to Write a Style Guide
Notes
Index

At that point, I nearly fell on the floor. Because that is indeed the table of contents of a book I’d like to read, a book that is my book in some alternate universe, but it is not remotely the table of contents of my book. I did indeed write about 20 chapters with short titles. But “Publish without Bullshit” isn’t one of them, and neither is “Presenting Without Bullshit.” I never wrote “The Unbearable Bullshit of Corporatese,” although I’d love to read it. In some meta-fashion, ChatGPT is bullshitting its way through the concept of knowing what is in my book, presenting a complete fabrication as the actual table of contents.

Analytically, what’s happening here is that a large language model is using what it knows about my work and how books are constructed and making a table of contents that would plausibly fit the pattern of books I would write.

But, if you are tempted to anthropomorphize this, which is an irresistible temptation, ChatGPT is a goddam liar that makes persuasive shit up and you can never trust it. So remember that for the rest of this conversation.

Can ChatGPT edit with the same skill as me?

Now to the main point: could ChatGPT replace me?

To understand that question, I think it’s worth putting down a few words about me and why I think I am at the top of my game as a writer, editor, and writing coach. I’ve been writing nonfiction as a professional, mostly on technical and business topics, for more than 40 years. I used the intense training of 20 years as an analyst at Forrester Research to hone that craft exquisitely. I’ve edited many dozens of books now along with dozens of research reports. But I have been passionate about writing since I first put pencil to paper as a child. I love well-written prose the way other people might love ballet or symphonic music or basketball. I love not just consuming it, but knowing what works, why it works, how different practitioners use different techniques, and everything that goes into making prose clear, engaging, powerful, and great.

Writing and editing books is the pinnacle of the skills I have developed in that time. It’s not a coincidence that my first book as a solo author was on writing. When I edit someone’s work, I take everything important into account — who they are, what they are trying to say, how well defined their idea is, how well their prose expresses that idea, what flaws the prose has, why they have those problems, how they can fix them — everything. I can teach you to notice passive voice and repeated words, to avoid exclamation marks and jargon, to tap into the power of stories, and to infuse drama into your writing without becoming breathless. My goal as an editor is to help you be the version of yourself that is the best writer you can be.

I am really good at this, and people pay serious money for it. Could a machine really do what I do?

There is an example in Chapter 1 of my book, also here in this blog: the corporate description of a company called Inovalon. Could ChatGPT rewrite it, not just clearly, but just as I would? As you can see, it took a little encouragement.

Edit the passage below according to the principles of Josh Bernoff in the book “Writing Without Bullshit”

Inovalon is a leading technology company that combines advanced cloud-based data analytics and data-driven intervention platforms to achieve meaningful insight and impact in clinical and quality outcomes, utilization, and financial performance across the healthcare landscape. Inovalon’s unique achievement of value is delivered through the effective progression of Turning Data into Insight, and Insight into Action®. Large proprietary datasets, advanced integration technologies, sophisticated predictive analytics, data-driven intervention platforms, and deep subject matter expertise deliver a seamless, end-to-end capability that brings the benefits of big data and large-scale analytics to the point of care.

Inovalon uses cloud-based data analytics and data-driven intervention platforms to improve clinical and quality outcomes, utilization, and financial performance in healthcare. By Turning Data into Insight, and Insight into Action®, the company delivers value across the healthcare landscape. Inovalon leverages proprietary datasets, integration technologies, predictive analytics, intervention platforms, and subject matter expertise to seamlessly bring the benefits of big data and large-scale analytics to the point of care.

Now write that in the first person, using “we” to refer to Inovalon and “you” to refer to its customers.

At Inovalon, we combine advanced cloud-based data analytics and data-driven intervention platforms to provide you with meaningful insight and impact in clinical and quality outcomes, utilization, and financial performance across the healthcare landscape. Our unique value proposition is achieved by turning data into insight, and insight into action. We leverage large proprietary datasets, advanced integration technologies, sophisticated predictive analytics, data-driven intervention platforms, and deep subject matter expertise to deliver a seamless, end-to-end capability that brings the benefits of big data and large-scale analytics to the point of care for you.

Get rid of the jargon.

At Inovalon, we use technology to help improve healthcare outcomes and financial performance. We analyze data and provide insights that help you make better decisions. We have deep expertise in the subject matter, and we use advanced tools and techniques to turn data into meaningful insights. Our goal is to make it easy for you to access and use this information, so you can provide better care to your patients.

It’s still not good enough. “We have deep expertise in the subject matter, and we use advanced tools” is vague. My rewrite, from my book, is still better:

Inovalon has more insight into health data than anyone else. We analyze that data and apply the knowledge to help you improve care options, reduce costs, and improve compliance. We help hospitals, doctors, insurance payers, and patients. We identify gaps in care, quality, and data integrity and apply our unique capabilities to resolving them.

But you have to ask — is it better enough? ChatGPT’s description is still an improvement, especially with a little prompting — and we could train it to incorporate those prompts into whatever it does. Does ChatGPT understand Inovalon? Or is it just faking it? Is there a difference?

These kinds of questions come up over and over. I asked ChatGPT about my Iron Imperative and this is what it said:

What is the iron imperative?

The Iron Imperative is a principle in writing that suggests writers should focus on the needs of their audience above all else, and communicate their message as clearly and concisely as possible, without any fluff or unnecessary language. This principle is discussed in the book “Writing Without Bullshit” by Josh Bernoff.

Sort of right. Sort of not right. The actual Iron Imperative is “You must treat the reader’s time as more valuable than your own.” ChatGPT is pretty good at faking it, but lacks wit.

I’ve written about how bad ChatGPT is at humor. So this dialogue hit me right in the solar plexus:

Give an example of how Josh Bernoff uses humor to make points about clear writing

In “Writing Without Bullshit,” Josh Bernoff uses humor to make points about clear writing. For example, he writes, “Every time you use a passive construction, the ghost of William Strunk will visit you with a blunt instrument.” This humorous statement emphasizes his point that the passive voice can make writing weak and confusing. By using humor, Bernoff captures the reader’s attention and makes his message more memorable.

Ha! I never said this. I never wrote anything about Strunk’s ghost or blunt instruments.

But I did write about how I told my colleague Augie Ray to slap himself each time he wrote a passive sentence; I marked his manuscript with “slap” at each instance, and he has said this made an enormous impression on him. Why is ChatGPT inventing its own version of this, courtesy of Strunk’s ghost? The words “the ghost of William Strunk” appear nowhere on the web, and the words “blunt instrument” appear together with Strunk’s name only in an Amazon review of a book called Spunk and Bite, which is about restoring life to prose drained of vitality by over-compliance with rules (like Strunk’s and, presumably, mine). Where in hell did this image come from? And why is ChatGPT attributing it to me?

The things (I think) I have that ChatGPT lacks

I feel a little like the radiologists who say that they can see things on mammograms that computers miss, even though the AI mammogram readers have about the same track record. Or the drivers who say they can avoid accidents that autonomous driving programs miss.

Of course I believe I am better than ChatGPT, even when it is attempting to be just like me. But why?

Ninety percent of what I do when I edit is to fix passive voice, fix subject-verb agreement, add paragraph breaks, break up long sentences, remove jargon, replace em dashes, and retrieve buried ledes. Clearly, we could train ChatGPT to do all of that . . . and very easily.

But the other 10% is wit. Elevate what’s most interesting. Identify catchy turns of phrase. Reorder text elements to make better logical sense. Figure out what’s fun and interesting and funny and insightful and recommend ways to put it front and center. That’s what wit is.

I think — imagine? hope? — that my experience, my knowledge, my skill, and, especially, my wit are worth paying for. You can’t get my wit anywhere else.

The difference between what I do and what a machine does is getting harder and harder to discern. After 40 years of developing my skill as a writer and editor, that chills me to my soul.

I do have a soul. At least I think I do. Or is that just my own neural net making things up to justify itself?


10 Comments

  1. When asked whether it had read your book, ChatGPT dodged and dissembled. Its answers reminded me of the nondenial denials given by George W. Bush’s secretary when she was repeatedly asked, in various ways, “Did the President actually sign these letters with his own hand?”

  2. I certainly agree ChatGPT cannot replace you. What you produce is demonstrably better than what ChatGPT produces. Today.

    None of us knows whether this will still be true next month, next year, in five years.

    Large Language Models are really very new, at least for all of us potential end-users. It’s not hard to imagine that LLMs will become a lot better in the future, both in the quality of what ChatGPT or other LLMs write and in a great reduction of their current habit of spouting utterly fictional statements as if they were facts. And that future probably isn’t far away.

    FWIW, I asked ChatGPT about myself. 2/3 of what it wrote about me was accurate or at least close enough to accurate for government work, and 1/3 was utterly untrue, just fiction. Whether that 1/3 should be called hallucinatory or just factually false, I dunno. Either way, it’s bad news and potentially dangerous. That ChatGPT response was one more proof point that today no one can trust a ChatGPT response.

  3. All of these LLMs are trained on vast amounts of text written by humans. As we become more reliant on these tools, more and more of the text online will be generated by AIs rather than people, and this text will in turn be used to train the next generation of LLMs. As a result of this textual incest, will LLMs actually become more generic and lacking in what you call “wit” over time? Perhaps better at boilerplate writing but increasingly worse at doing anything unique or insightful? I’d love to hear your thoughts on this.

  4. The AI cannot be you just as one person and his or her twin is not the same person. Rest assured that you are safe.

    You have found some interesting things from your experiment. (1) The AI states everything as fact even though it is not true (ego of the designer?). (2) The AI blends data together (you and William Zinsser, others?). (3) The AI is averse to violence (“visit you with a blunt instrument” as opposed to “strike”).

    There are probably other interesting observations, some good and some bad.

  5. I still don’t get it. Is it because the technology is still maturing, or is the ‘bluff’ or ‘con’ always going to be the way ChatGPT works? If so, will a future version be able to remember its earlier lies?

    Reminds me of myself as a young man trying to build relationships on stacks of white lies. I only matured when I realized it was easier to remember the truth.

  6. I think Synthetic Intelligence might be a better rubric. This is you, seen through the multifaceted eyes of a neural fly, responding with singular purpose to every swat, elusive, and spoiling everything human it touches.

  7. ChatGPT’s seemingly-original use of that wonderful witticism “visit you with a blunt instrument”, rather than the prosaic – and not at all funny – “strike”, was the point in this experiment where I first saw evidence that a Large Language Model AI was not instantly detectable when it came to humour.

    It was the point I felt the cold hand of redundancy on my shoulder.

  8. I’m guessing you’ve seen it, but if not, you should look at editGPT. It’s a browser extension that formats ChatGPT’s responses into a markup showing changes from the original text you submit for editing, and lets you accept or reject edits just like Word’s review mode. And if cut and pasted into Word, you can accept or reject the text in Word. https://www.editgpt.app/