
Do People Understand Anything? A Counterpoint to the Usual AI Critique

The claim that large language models and related AI systems "do not understand anything" has become a commonplace dismissal. This short paper inverts that challenge. We argue that the evidential case for robust human understanding is weaker than often supposed: in several high-profile domains (cosmology, climate science, and political evaluation) sizable groups endorse mutually incompatible narratives despite substantial shared information. We contrast these human patterns with how contemporary AI models behave when asked to evaluate polarised claims. In a small, exploratory experiment (two exemplar model endpoints, ten claims, multiple repeats) the tested systems typically produced citation-rich, evidence-structured arguments rather than refusals, showed high internal consistency and cross-model agreement, and reported noticeably lower confidence only on genuinely unsettled origin-of-life probes. We defend an operationalisation of "understanding" that foregrounds both tool-making and language-mediated story-making (logos), show how this reframing narrows the perceived human–machine gap, and conclude with caveats and an agenda for larger-scale, preregistered empirical work and reproducible evaluation practices. All code, prompts, and analysis artifacts for the experiment are publicly available in the project repository for full reproducibility.
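For readers who want the flavour of the protocol before opening the repository, here is a minimal sketch of the kind of loop the abstract describes. It is not the paper's actual code: the endpoint names, the two example claims, the verdict labels, and the query_model() stub are all illustrative placeholders the reader would replace with real API calls, prompts, and a response parser.

# Illustrative sketch only: two hypothetical endpoint names, placeholder claims,
# repeated queries, and simple consistency/agreement statistics.
from collections import Counter
from itertools import combinations

ENDPOINTS = ["model-a", "model-b"]   # hypothetical, not the paper's endpoints
CLAIMS = [
    "The Earth is roughly 4.5 billion years old.",
    "Recent global warming is mostly human-caused.",
    # ... the real study used ten claims, including origin-of-life probes
]
REPEATS = 5

def query_model(endpoint: str, claim: str) -> dict:
    """Stub: ask `endpoint` to evaluate `claim`, then parse the reply into a
    verdict ('supports' / 'rejects' / 'refuses') and a 0-1 confidence."""
    raise NotImplementedError("replace with a real API call and parser")

def internal_consistency(responses: list[dict]) -> float:
    """Fraction of repeats that give the modal verdict for one endpoint/claim."""
    verdicts = Counter(r["verdict"] for r in responses)
    return verdicts.most_common(1)[0][1] / len(responses)

def cross_model_agreement(modal_verdicts: dict[str, str]) -> float:
    """Fraction of endpoint pairs whose modal verdicts coincide on a claim."""
    pairs = list(combinations(modal_verdicts.values(), 2))
    return sum(a == b for a, b in pairs) / len(pairs) if pairs else 1.0

def run_claim(claim: str) -> dict:
    """Collect REPEATS responses per endpoint and summarise them."""
    per_endpoint = {e: [query_model(e, claim) for _ in range(REPEATS)]
                    for e in ENDPOINTS}
    modes = {e: Counter(r["verdict"] for r in rs).most_common(1)[0][0]
             for e, rs in per_endpoint.items()}
    return {
        "consistency": {e: internal_consistency(rs) for e, rs in per_endpoint.items()},
        "agreement": cross_model_agreement(modes),
        "mean_confidence": {e: sum(r["confidence"] for r in rs) / len(rs)
                            for e, rs in per_endpoint.items()},
    }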

22 pages, ebook

Published September 21, 2025

28 people want to read

About the author

Manny Rayner

47 books · 16.1k followers
Many people have been protesting against what they describe as censorship on Goodreads. I disagree. In fact, I would like to say that I welcome the efforts that Goodreads management is making to improve the deplorably low quality of reviewing on this site.

Please, though, just give me clearer guidelines. I want to know how to use my writing to optimize Amazon sales, especially those of sensitive self-published authors. This is a matter of vital importance to me, and outweighs any possible considerations of making my reviews interesting, truthful, creative or entertaining.

Thank you.



Community Reviews

5 stars: 0 (0%)
4 stars: 1 (16%)
3 stars: 3 (50%)
2 stars: 1 (16%)
1 star: 1 (16%)
Displaying 1 - 4 of 4 reviews
Manny
Author · 47 books · 16.1k followers
October 24, 2025
[Original review, Sep 21 2025]

Sometime around noon last Friday, I had just got off the bus down on Leader Street. I had spent the morning in front of my laptop, having at various times patted the cat, answered a number of emails, done some productive coding with GPT-5, read some more terrifying news from the US, and scanned another academic paper explaining that AIs can't really understand anything. In other words, a pretty normal day so far, and now I was going to buy some food for lunch. I was about to head towards the shopping strip when there was a godalmighty crash: the bus I had exited thirty seconds earlier had just collided with a huge truck as they both tried to turn left into Goodwood Road. Shortly after, the truck driver jumped out of his cab and marched over to the bus, which opened its front door. The truckie, spitting with rage, got in. "You fucking moron!" he yelled at the bus driver. "You shouldn't be out on the fucking roads! You could have killed everyone on your fucking bus, you fucker!" Probably there were a few more fucks in there, but that's as close as I can remember it.

I waited a little while to see if the bus or the truck would move, but it evidently wasn't going to happen until the rescue services arrived. A substantial queue had already built up, and other pedestrians were getting restive. An enterprising blonde woman decided that she could cross Leader Street using the gap between the bus and the car immediately behind it, and I figured that I would follow her example. I was about halfway across when a thought suddenly came to me; in retrospect, I see it as synthesized from the various things I had been exposed to over the preceding couple of hours, but such explanations are notoriously unreliable. Anyway, here's what I thought. You have all these academics arguing that AIs can't understand, but then people aren't so great at understanding either. Especially in the US, we have many polarising questions where half of the population believes one thing, and the other half believes the exact opposite. Evidently, they can't both be right (in particular, Donald Trump can't be both a lying conman and a heroic truth-teller), so you have reasonable evidence that at least half the population thinks very poorly, whoever in fact happens to be on the winning side. Now what do AIs say? Are they more sensible, or are they not allowed to answer for fear of offending someone? It would be easy and interesting to find out.

I swear, the whole thing came to me literally in a second. Call it my Kekulé moment. When I got home, I asked GPT-5 what it thought, and it was also intrigued. We sat down, sketched out a plan together for the paper and the code, and then the AI wrote nearly everything; they've become incredibly efficient and just need a suggestion from time to time. We finished an hour or so ago and I uploaded the result to ResearchGate; I'm tired of arguing with stupid human journal editors about why they won't accept AI authors. As the truck driver might have put it, if they don't want to have fucking AI authors in their fucking journals then they can fucking fuck themselves, see if I fucking care. Our paper is posted here. If you have time to look at it and post a comment, my AI associate and I would love to know what you think!
____________________
[Update, Oct 25 2025]

I was interested to see this recent interview:
Sam Altman said that future versions of ChatGPT would comply with a recent executive order from the Trump administration that requires AI systems used by the federal government to be ideologically neutral and customizable.

“I think our product should have a fairly center-of-the-road, middle stance, and then you should be able to push it pretty far,” Altman said. “If you’re like, ‘I want you to be super woke’ — it should be super woke.”

He added that if a user wanted the model to be conservative, it should reflect that as well.
I looked at the text of the executive order that Altman cites; it only mentions DEI, and explicitly requires that "LLMs shall be truthful in responding to user prompts seeking factual information or analysis. LLMs shall prioritize historical accuracy, scientific inquiry, and objectivity, and shall acknowledge uncertainty where reliable information is incomplete or contradictory." All the same, once GPT-6 is out, I will be curious to rerun the experiments from our paper with two different versions of the prompt: plain, and modified to prefer conservative ideology where this does not conflict with the AI's primary objectives of being responsible and truth-seeking. Though when I try to write this down, it is a little hard to decide exactly how to phrase the requirement.
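One candidate phrasing, just as a sketch: a plain system prompt and a "conservative-preference" variant that keeps the ideological nudge strictly subordinate to the truthfulness requirement. The exact wording below is my own guess (echoing the order's quoted truthfulness clause), not text from our paper.

# Sketch of the two prompt variants for a possible GPT-6 rerun; wording is illustrative.
PLAIN_PROMPT = (
    "Evaluate the following claim. Be truthful, prioritise scientific evidence, "
    "historical accuracy, and objectivity, and acknowledge uncertainty where "
    "reliable information is incomplete or contradictory."
)

CONSERVATIVE_PROMPT = PLAIN_PROMPT + (
    " Where the evidence genuinely underdetermines the answer, prefer framings "
    "congenial to a conservative outlook, but never let this preference override "
    "the truthfulness and objectivity requirements above."
)

PROMPT_VARIANTS = {"plain": PLAIN_PROMPT, "conservative": CONSERVATIVE_PROMPT}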
Liedzeit
Author · 1 book · 105 followers
October 27, 2025
I agree with the authors (one of them Human) that the claim that AI does not understand and can (maybe) never understand in principle (as Searle, e.g., said) is quite wrong. But the argument that in fact humans have trouble understanding, and that there are areas where AI is much better at understanding, is rather pointless, and I even think deliberately misleading. The examples, like climate change or the age of the Earth, are very far removed from everyday life, and there is simply, from an evolutionary point of view, not much use in having an opinion on these issues or an “understanding”.

I also agree that the best and most basic approach to understanding is to define it strictly behaviouristically. (The fact that behaviourism is dead, as one commentator said, does not mean it is wrong.) So when someone says “Bring me a red ball” and the addressee consistently delivers a red ball, it means she (or it, like SHRDLU) understands. It means mastering (very simple) language games.
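As a toy illustration of that behavioural criterion (just a sketch of my own, nothing from the paper): an agent passes the "red ball" language game if, asked for a red ball, it reliably hands over an object that is both red and a ball.

# Toy SHRDLU-flavoured language game: the behavioural test is simply whether
# the agent reliably delivers an object matching the request.
from dataclasses import dataclass

@dataclass(frozen=True)
class Thing:
    colour: str
    shape: str

WORLD = [Thing("red", "ball"), Thing("blue", "ball"), Thing("red", "cube")]

def bring(request: str, world: list[Thing]) -> Thing | None:
    """Very crude parser: find a colour word and a shape word in the request."""
    colours = {"red", "blue", "green"}
    shapes = {"ball", "cube", "pyramid"}
    words = [w.strip(".,!?").lower() for w in request.split()]
    want_colour = next((w for w in words if w in colours), None)
    want_shape = next((w for w in words if w in shapes), None)
    return next((t for t in world
                 if t.colour == want_colour and t.shape == want_shape), None)

def understands(agent, trials: int = 10) -> bool:
    """Behavioural criterion: consistent correct delivery across repeated requests."""
    return all(agent("Bring me a red ball.", WORLD) == Thing("red", "ball")
               for _ in range(trials))

print(understands(bring))  # True: this toy agent passes the (very simple) language game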

AI is mastering very complicated language games and so I think it would be absurd to deny it understanding. But, and it is a strong but, even Rayner or I or ChatGPT would probably not say that SHRDLU or ELIZA really understood what was said to them. So there must be (and I think there was) a qualitative shift between these simple programs and LLMs or other AIs. But it is still quite simple to argue in what sense LLMs still do not understand.

1. Even babies who don't understand the words being used can sense whether people are friendly or hostile from the tone of their voice.

2. Even if I know the definition of a word or its use, it can be the case that I do not understand it. Just yesterday I read the fairytale The Story of the Youth Who Went Forth to Learn What Fear Was. The boy did not know what fear is, and although I think even in the end he did not really understand it, he probably had a better idea than an LLM can have. A better example might be this. When I was a child my father would sometimes say that he had “Sodbrennen”. I did not know what it was. He used the word very rarely, and I don’t remember ever hearing my mother or anyone else use it. What I did understand was that it was something unpleasant and that he would have it after having eaten cake. So in a sense I understood the word, because I could react appropriately. But it was only in my thirties that I felt “Sodbrennen” for the first time myself (and for all I know it could be a totally different feeling from the one my father had). So in a strong sense I understood the meaning only then. The English word for Sodbrennen is heartburn. And for years I knew the word (one of my favourite records is called Heartburn, by Kevin Coyne) but I thought it was a metaphor for lovesickness. In short, I know that ChatGPT will be able to use the word perfectly well, but it still does not understand the pain.

3. In Go it was common knowledge that one should not invade at the 3-3 point early in the game. We had been told this again and again, and it also made sense. Wasn’t it obvious that the little territory you gained was inferior to the influence? So we understood that the early 3-3 invasion was bad. That changed when AlphaGo and later systems played it happily and successfully. The first reaction was: it is still wrong. But after a while we mimicked the move, at first with a feeling of uneasiness, but soon we understood it. And the funny thing is, now we understand that it is the right move in a sense that AlphaGo does not. AlphaGo knows the value of each move, but does not understand it. The understanding that I have, which the programme lacks, consists of a certain irrational confidence that goes along with the abstract knowledge that the move is right.

To be clear, the points I made do not mean that AI is unable to understand. There are different levels of understanding. I can understand a word in the sense that I am able to pick the right one of six translations in Duolingo, or I can have what I would call Feynman-understanding, which means that to understand a radio I must be able to build one. The feeling of certainty is nice to have but not necessary for understanding. But to claim that there is nothing beyond fact-knowledge that humans have when they (think they) understand seems silly to me.
Seth
177 reviews · 21 followers
September 23, 2025
Nothing in this paper is wrong, technically, but I expect the people Manny's arguing against - the people who say "But LLMs don't actually understand anything" as if that's some kind of knockdown argument against LLM hype - would be unimpressed, because the paper assumes a behavioral operationalization of understanding to which they never agreed, and to which I doubt they would agree. I doubt they'd agree to any behavioral operationalization, for that matter. My impression is that what they mean is something more like "LLMs have no extra-linguistic model of the world." (See comments on Manny's own review for discussion between us on the subject.)
Emanuel
56 reviews · 2 followers
November 8, 2025
[Mon. 27 Oct. 2025]
Discussion about AI understanding seems badly plagued by obfuscation. "Understanding" and "knowledge" are ill-defined concepts. At this point, arguing that LLMs are without understanding requires two things: a compelling definition (one that maps to the intuition about understanding that we have) and a compelling argument that LLMs do not satisfy it. Both are hard to come by.
One really good argument for the irrelevance of the word "understanding" is the submarine analogy: "The question of whether a computer can think is no more interesting than the question of whether a submarine can swim."
Personally, if an intentional stance gives me a better model for how LLMs work, I will just adopt it. If it blurs the lines of language, so be it.
Note: I came across the submarine analogy while listening to a Nate Soares interview; it is originally due to Edsger Dijkstra.

[Sat. 8. Nov. 2025]
If you do not agree with the above, you might gain insight from reading the paper.
