LLMs are great at analyzing information and summarizing it in technical language, but they’re bad at writing.
Humans reading a text are much better than could reasonably be expected at intuiting the writer’s personality. Philip Rosedale, talking with Jim Rutt, notes that people who have communicated over text chat for a long time without seeing each other have no problem recognizing each other when they finally meet in person. There’s something in the way we write that makes us recognizable in person.
LLMs write, but there is no personality behind the writing. Adam Mastroianni has written a post about this ineffable quality and its absence from AI writing. His prose is excellent, and he’s confident enough that it will stay better than the AIs’ to threaten to drown himself if that ever changes. Here is how he describes AI’s blindness to meaning:
The computer doesn’t know any of this. It can’t know any of this. It can only read the cookbook; it can’t taste the meal.
Writing with soul isn’t the only thing AIs still can’t do. They’re still not great at visual thinking, for example. To make them more useful, improving the abilities they already excel at, such as reasoning, matters less than giving them the abilities they still lack, such as learning between training runs.
Here’s another observation by Mastroianni, justifying his AI skepticism:
When my PhD advisor was in grad school, he literally had to call people on the phone and ask them if they’d like to take part in a psychology study. If he could get 30 participants in a semester, he was cookin’. Participant pool management software like Sona made this process go twice as fast, and then Amazon Mechanical Turk made it go 1000x as fast. Meanwhile, Google Scholar turned a half-day spent in the library into a two-second search, and stats software like SPSS and R made data analysis go lickety-split.