AP issues guidance on AI content. Basically: “Don’t publish bullshit.”

The Associated Press issued a set of guidelines for journalists regarding the use of AI. They should be a model for all news organizations as well as anyone else who publishes nonfiction content.

The danger here is that AI text generation tools like ChatGPT are, basically, bullshitters. They make up plausible text that has no provable connection to the truth. So a writer who posts or publishes AI content without vetting it may be spreading lies.

Here are the relevant portions of the AP guidelines:

  • AP has a licensing agreement with OpenAI, the maker of ChatGPT, and while AP staff may experiment with ChatGPT with caution, they do not use it to create publishable content.
  • Any output from a generative AI tool should be treated as unvetted source material. AP staff must apply their editorial judgment and AP’s sourcing standards when considering any information for publication.
  • In accordance with our standards, we do not alter any elements of our photos, video or audio. Therefore, we do not allow the use of generative AI to add or subtract any elements.
  • We will refrain from transmitting any AI-generated images that are suspected or proven to be false depictions of reality. However, if an AI-generated illustration or work of art is the subject of a news story, it may be used as long as it is clearly labeled as such in the caption.
  • We urge staff to not put confidential or sensitive information into AI tools.
  • We also encourage journalists to exercise due caution and diligence to ensure material coming into AP from other sources is also free of AI-generated content.
  • Generative AI makes it even easier for people to intentionally spread mis- and disinformation through altered words, photos, video or audio, including content that may have no signs of alteration, appearing realistic and authentic. To avoid using such content inadvertently, journalists should exercise the same caution and skepticism they would normally, including trying to identify the source of the original content, doing a reverse image search to help verify an image’s origin, and checking for reports with similar content from trusted media.

Human journalists and their outlets are essential in the age of AI

Any text you read could be made-up bullshit. Those who want to lie — or don’t care whether what they write is true — now have a tool that makes it easy to create plausible text. This is lethally dangerous to the idea of truth.

As a reader, you have only one defense: consider the source. If the source is a trusted individual, journalist, or media outlet, then there’s a high probability that what you are reading is true. If it is not, then who knows?

As a writer, if you publish AI content without vetting it for accuracy, you are responsible for its errors. Your readers will point them out. And by publishing unverified bullshit, you will rapidly ruin your reputation.

Check facts. Don’t publish unedited AI-generated content. Both the idea of truth and your credibility are on the line, and once those are destroyed, you cannot get them back.



  1. Students should heed your advice as well. They may not be publishing their papers and assignments per se, but principled professors should view AI BS in the same manner. At least, I hope so.

    1. I’ve heard the current problem is that people are rejecting human-written documents as AI-generated, since both can be terrible.

  2. I wonder how AI BS compares to PR BS, which currently provides the vast majority of “news” published in print and online.