The rules for using AI to do research

When I’ve shared that I often use AI for online research, I’ve come under attack from those who feel any use of AI is wrong, and that everything AI coughs up is suspect. But lots of people use AI for online research, despite its shortcomings.
Is there a “right” way to use AI for research? I’m not talking about prompting tips. I mean something more basic: how can you research safely with AI, if at all?
Why AI is popular for research
AI is not a good tool for primary research. Primary research is research intended to generate new facts, such as surveys, scientific experiments, data analysis, and interviews. Large language models like ChatGPT operate on existing content, rather than generating new facts.
But secondary research is still essential. This is what we used to call Web research (or if you’re old enough, library research): finding, aggregating, and citing primary content that other people have already created. If you’ve ever searched for a statistic online, you’ve done secondary research. (It’s also, sadly, what people mean when they tell you not to believe the conventional wisdom and to “do your own research,” which boils down to finding stuff that reinforces your existing point of view.)
Google and its direct search competitors were incredibly blunt instruments for secondary research. Google searches (at least until the AI summaries started to appear) matched only the exact keywords you typed. Worse, SEO practitioners gamed the algorithm, so the top search results were not the most useful pages but the ones most skilled at playing the game.
Secondary researchers needed something better. This is where AI tools like Perplexity took off. Like Google, Perplexity will generate a series of links with its answers. It takes off from what you write, cognitively mapping it to other similar meanings, not just the literal words you use. And you can engage it in a dialogue, telling it to generate answers that are successively more useful to you.
ChatGPT and its competitors can do the same thing, but they are more focused on generating answers than links unless you tell them to behave more like a search tool. Google search now includes Gemini AI, which does pretty much the same thing.
Statistics on AI use for research
In my recent survey of 1,500 writers, 61% of writing professionals used AI at least sometimes, and 73% of them used it as a replacement for Web search. Rolling that up, 45% of writing professionals used AI as a replacement for Web search.
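Rolling up is just multiplication: if 61% of writers use AI and 73% of those use it in place of search, then 0.61 × 0.73 ≈ 0.45, or 45% of all writers. The last column of the table below is the same product computed for each subgroup.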
Here’s how that looks for some subgroups of writers:
| Type of writer | % who use AI at least sometimes | % of AI users who use it to replace Web search | % of all writers who use AI to replace Web search |
| --- | --- | --- | --- |
| Nonfiction authors | 58% | 82% | 48% |
| Journalists | 44% | 69% | 30% |
| Book ghostwriters | 68% | 77% | 52% |
| Content marketing writers | 73% | 68% | 50% |
There are many ways to measure the impact of AI on Web search, but there’s been a clear decline in click-throughs as people get the answers they’re seeking from AI overviews.
The right way to do research with AI tools
AI tools may increase efficiency, but they also increase risk. They generate confident answers that are sometimes wrong (“hallucinations”). You can no more depend on what you read in an AI summary than you can trust text in Wikipedia, or, for that matter, random links that come up in a regular Web search.
But that doesn’t make AI tools useless. In fact, one doctoral student I talked to said that AI tools had saved her years of work in reviewing existing research content in her field. It’s no coincidence that the heaviest users of AI in my survey had $47,000 higher incomes than the nonusers. Busy, highly paid writers need to be efficient.
It’s best to think of secondary research as consisting of five steps, each of which has been changed by AI:
- Find sources.
- Vet sources.
- Understand what sources say.
- Analyze and synthesize facts and content.
- Cite facts and content.
Here’s how AI changes those steps:
- Find sources. AI tools like Perplexity make this much more efficient. But effective research means using AI to find sources, not taking AI's summary of what "the web" as a whole says on faith.
- Vet sources. AI hallucinates sources (I've seen it generate links to New York Times articles that weren't real). Where there are no definitive sources, it's quite likely to pull information from random Reddit posts or biased sites full of false information. This was always a problem: Web searches had many of the same drawbacks. But in AI summaries, it's very hard to tell the difference between a number from a legitimate, dependable source and one pulled from a random site, or one that's completely made up. This makes it even more imperative that researchers verify the links and content of what they're finding; even a quick check like the sketch after this list can catch dead links. Don't just quote AI; it could very well be wrong.
- Understand what sources say. AI is excellent at summarizing content. But here again, it can make mistakes and show you what you want to see. So it's effective to use content summaries to get the gist of long papers, for example, but before citing what the AI has shown you, you have to check that it's an accurate description of what the paper says. (Papers have abstracts, which function as far more effective, presumably human-generated, summaries.)
- Analyze and synthesize facts and content. This is the core of what researchers do. It’s a fundamentally human activity, combining insight and judgment with existing content. While you can engage an AI in dialogue to examine different approaches to synthesizing content, it’s a fatal mistake to outsource this intellectual activity to a machine.
- Cite facts and content. This is where you link to what you found. AI can help format the citations, but which ones you include will reflect the quality of your thinking as a researcher.
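On the vetting step, nothing replaces reading the source, but even a quick script can flag links that don't exist. Here's a minimal Python sketch along those lines; the URLs are placeholders, not real citations, and a page that loads only proves the link is real, not that it says what the AI claims.

```python
# Minimal sketch: check that links cited in an AI summary actually resolve.
# This catches fabricated URLs, not misquoted content; you still have to
# read each source yourself. The URLs below are placeholders.
import requests

def check_links(urls, timeout=10):
    """Return (url, status) pairs; status is an HTTP code or an error name."""
    results = []
    for url in urls:
        try:
            # HEAD keeps the request cheap; some servers reject it,
            # so fall back to GET before declaring a link dead.
            resp = requests.head(url, allow_redirects=True, timeout=timeout)
            if resp.status_code >= 400:
                resp = requests.get(url, allow_redirects=True, timeout=timeout)
            results.append((url, resp.status_code))
        except requests.RequestException as exc:
            results.append((url, f"unreachable: {type(exc).__name__}"))
    return results

if __name__ == "__main__":
    cited = [
        "https://www.nytimes.com/",           # placeholder for a real citation
        "https://example.com/made-up-study",  # the kind of link AI invents
    ]
    for url, status in check_links(cited):
        print(status, url)
```

Anything that comes back as a 404 or unreachable goes straight into the "hallucinated" pile; everything else still needs a human read.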
It sure would be easier to just let AI do the work. But that’s how crappy content gets recycled.
AI can make you more efficient. But until you internalize that you are responsible for the accuracy of every piece of research you cite, you run the risk of spreading erroneous material.
AI is a fine research tool, just as Google was before this. Just remember that your human intelligence is what makes the difference in the value of the result.