Charlene Li and Katia Walsh demonstrate the right way to build a book with AI help

Excerpt of Appendix C of Winning with AI

Charlene Li and Katia Walsh’s Winning with AI: The 90-Day Blueprint for Success, just published this week, is a fascinating case study in how authors can use AI to create better books. While Charlene and Katia are more sophisticated with AI than the average author, they didn’t use AI to write prose. They used it to save time and propel their thinking.

Luckily for us, they documented exactly what they did and published it as Appendix C in their book. This ought to be best practice for authors. If your book isn’t about AI, you don’t need to go into nearly as much detail as they did, but people will want to know just how AI contributed to what they’re reading.

What worked and what didn’t for writing Winning with AI

Charlene and Katia shared this at the start of Appendix C:

Throughout this book’s development, we used AI extensively—not as a ghostwriter, but as a notetaker, research assistant, and brainstorming partner we asked to challenge us. Indeed, we consider AI to be our third collaborator.

They describe how they used Claude to structure content, ChatGPT to create a custom GPT with four years of their work, Perplexity to do research, and Gemini to search a vast collection of interview transcripts.

Charlene and Katia list these areas where AI was helpful:

  • Transcription, note-taking, and summarization. This included key takeaways and actionable steps from working sessions.
  • Research. While AI replaced what a researcher would have done, the authors fact-checked everything.
  • Initial structure and outlines. AI helped turn research into suggested content organization.
  • Finding what they’d forgotten. AI identified useful content that fell on the floor from earlier drafts.
  • Illustration ideas. AI created initial sketches based on chapter text.
  • Brainstorming. They used AI to challenge their thinking.

But it’s perhaps more instructive to learn what a couple of true AI experts and excellent writers could not get AI to do well. These are tasks that authors still need to do themselves. Below I excerpt the portion of the appendix in which they shared what AI failed at, with my commentary in brackets.

  • Coming up with new ideas. Because AI is trained on existing knowledge, it cannot produce new ideas. Even when we prompted it to think hypothetically and about the future, it came up with content limited to today’s reality. In our experience, AI clearly failed to generate unique ideas on its own. [You can’t outsource ideas.]
  • Writing. While AI could generate very rough initial outlines, it failed to “write” in a way we could ever use, despite working with multiple style samples, guides, and prompts from us. In fact, our revelation is that it took more time to rework AI-generated text than to write from scratch. AI “writing” is notoriously bland, full of repetitive patterns (“It’s not this—it’s that”) and brimming with corporate jargon—like “alignment,” “leverage,” “utilize,” and more. [I know that Charlene at one point thought she could train AI to write in her voice. But she and her coauthor had to backtrack on that. And the “it’s not this, it’s that,” formulation has become a tiresome and recognizable trope of AI writing.]
  • Editing. We experimented with AI to see if it could edit content. It could not. We found that AI consistently defaulted to its poor writing style (see above). One frustrating example is that AI replaced every instance of the simple word “use” with “utilize,” a basic no-no in writing. We relied on our very human editors to do this work. [This is heartening for those of us who make our living as skilled editors.]
  • Creating illustrations. While AI could quickly create crude charts, it often suggested cliché icons that, at times, had nothing to do with the content or misrepresented its meaning. Our human illustrator created all the illustrations in this book. [There’s no way around it: AI graphics are trite.]
  • Fact-checking. As Barbara Cominelli said (see chapter 11), “AI has no truth function.” We exercised constant critical thinking and vigilance to verify AI-offered references and list only validated facts. [AI hallucinations are real, and they’re insidious. If you use AI for research, you must budget time to verify anything it comes up with.]
  • Finessing final work. All the little tweaks and iterations that pull a book together, from adding an anecdote to including a recent study to writing detailed endnotes, are ours too. They represent our experience, thinking, and yes, our dreams of seeing the book in the hands of our readers. AI lacks the very human traits of character and last-minute aha moments to inject our personalities into the book. [Learn from this. Regardless of how you write a book, getting the last editing pass right is a fundamental author responsibility. It’s what turns a compilation of facts and ideas into your book.]

A note for publishers

I’m so glad that Charlene and Katia wrote this up so clearly. There’s a lesson here for traditional publishers. Your best authors are going to use AI. But they’re not going to use it to write things.

Asking authors to avoid the use of AI in their work is like asking them to avoid spell-checking and web search: the technology is there, and it is essential to getting the work done quickly and right. But when there’s evidence that a lot of the book was drafted with AI, that’s when the problems arise. Such work is not only subject to copyright limitations, but will also be boring and tedious to read.

Charlene and Katia’s hybrid publisher Amplify deserves credit here for trusting these authors to create responsibly with powerful AI tools, and to reveal clearly what they did. The list in Appendix C of Winning with AI is a pretty good guide for how authors can use AI responsibly.

The bottom line for authors

Charlene and Katia write, “our collaboration worked—not because AI is a good writer (it’s not), but because it helped us become better thinkers.”

AI tools can speed up a lot of the drudge work. But if you want your book to represent you — your ideas, your insights, your experience, and your wit — you’re going to have to do a lot of that work yourself.


2 Comments

  1. It’s interesting that the authors could not get AI to stop using jargon. I built a Claude and Notion skill that rewrites others’ text. It kills the passive voice, run-on sentences, cleft sentences, and the like. I would never dream of using it to write for myself or a client, but it absolutely avoids utilize, leverage, and other terrible words.
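A skill like the one this commenter describes could be approximated with a simple post-processing checker. The sketch below is a hypothetical illustration, not the commenter’s actual Claude/Notion skill; the jargon list and the contrast-pattern regex are my own assumptions, keyed to the tells the authors mention (“utilize,” “leverage,” and the “it’s not this—it’s that” formulation).

```python
import re

# Illustrative jargon list (an assumption, not the commenter's real word list),
# mapping each AI-favored word to a plainer alternative.
JARGON = {
    "utilize": "use",
    "leverage": "use",
    "alignment": "agreement",
}

# Flags the "it's not X—it's Y" contrast pattern the authors call out.
CONTRAST = re.compile(r"\bnot\b[^.;]{1,60}?[—-]\s*it'?s\b", re.IGNORECASE)

def flag_ai_tells(text: str) -> list[str]:
    """Return warnings for common tells of AI-generated prose."""
    warnings = []
    for word, plain in JARGON.items():
        if re.search(rf"\b{word}\b", text, re.IGNORECASE):
            warnings.append(f'jargon: "{word}" (consider "{plain}")')
    if CONTRAST.search(text):
        warnings.append('pattern: "it\'s not this—it\'s that" contrast')
    return warnings
```

A checker like this only flags problems for a human to fix; as the authors found, actually rewriting in a real voice still has to be done by a person.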

  2. I’m skating happily through the first three of Charlene and Katia’s six “helpful” to-dos, and looking forward to employing the next three. I’m now in the early stages of researching a biography, using Otter.ai ($16.99/mo) for dozens of interviews with the subject and key people in his network.

    Most sources are South Asians, with ESL-related accents. Transcripts have to be cleaned, line by line, to correct errors in specific words, phrases, and proper nouns. Otter’s audio capture is especially valuable for getting things right. All this initial fact-checking of course is tedious. Yet it deepens my immersion in material, and will save time when writing. I expect it also will elevate brainstorming outcomes soon with AI.

    (This is my ninth book, first AI-assisted.)