In Tech America, AI Fact-Check You

This happens to scientists fairly often: You remember some finding you came across a few weeks ago. You don’t remember where you read it, but if you want to include it in a paper you’re writing, you’ll have to cite the source. It’s frustrating.

LLMs like ChatGPT are helpful in this situation. You ask, “Find me that paper about endpoint X in a clinical trial of Y,” and within a few seconds you have the answer. Except the other day, ChatGPT-5 couldn’t find the publication I was after and kept pointing out others that were similar but definitely not the one I remembered. I asked it to check the preprint literature. Nothing. I asked it to include different keywords in its search. Again, nothing. I restated what I remembered in several different ways. Still nothing. ChatGPT was adamant that the publication I was after didn’t exist. I went back to old-school literature search and spent half an hour on it. Slowly, it dawned on me that ChatGPT wasn’t the problem. It was me: I had hallucinated the paper.

This is a complete role reversal. It used to be ChatGPT that did the hallucinating. And just as ChatGPT used to apologize when it got caught making a mistake, I now felt compelled to apologize to ChatGPT. I stopped myself from actually doing so, hoping to preserve the last remnants of my dignity. There's an "in Soviet Russia" joke in there somewhere.
