[OpenAI CEO Sam] Altman brags about GPT-4.5’s improved “emotional intelligence,” which he says makes users feel like they’re “talking to a thoughtful person.” Dario Amodei, the CEO of the AI company Anthropic, argued last year that the next generation of artificial intelligence will be “smarter than a Nobel Prize winner.” Demis Hassabis, the CEO of Google’s DeepMind, said the goal is to create “models that are able to understand the world around us.” These statements betray a conceptual error: Large language models do not, cannot, and will not “understand” anything at all. They are not emotionally intelligent or smart in any meaningful or recognizably human sense of the word. LLMs are impressive probability gadgets that have been fed nearly the entire internet, and produce writing not by thinking but by making statistically informed guesses about which lexical item is likely to follow another.
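The “statistically informed guesses” mechanism can be seen in miniature with a toy bigram model. This is a deliberately crude sketch of my own, not how any production LLM is built (those use neural networks over subword tokens), but the core move is the same: count what tends to follow what, then generate by sampling from those counts. No understanding anywhere in the loop.

```python
import random
from collections import defaultdict, Counter

# Toy next-word predictor: tally which word follows which in a tiny
# corpus, then generate text by sampling from those tallies.
corpus = ("the cat sat on the mat the cat ate the fish "
          "the dog sat on the rug").split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start: str, length: int, seed: int = 0) -> str:
    """Emit `length` words by repeatedly guessing the likeliest-ish next one."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        options = follows[out[-1]]
        if not options:          # dead end: nothing ever followed this word
            break
        words, counts = zip(*options.items())
        out.append(rng.choices(words, weights=counts)[0])
    return " ".join(out)

print(generate("the", 8))
```

The output is locally fluent and globally meaningless, which is exactly the point: fluency falls out of frequency statistics alone.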
Primary source: https://www.msn.com/en-us/technology/artificial-intelligence/artificial-intelligence-is-not-intelligent/ar-AA1GcZBz
Secondary source: https://bookshop.org/a/12476/9780063418561
Go to one of these “reasoning” AIs. Ask it to explain its reasoning. (It will!) Then ask it to explain its reasoning again. (It will!) Ask it yet again. (It will gladly do it thrice!)
Then put the three pieces of “reasoning” side by side and count the contradictions. There’s a very good chance the explanations are not merely different from each other but mutually incompatible.
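The experiment is easy to mechanize. Here is a sketch of the harness; `ask_model` is a hypothetical stand-in for whatever chat API or UI you use (the canned answers inside it are invented by me purely to make the sketch runnable, and they mimic the kind of mutually incompatible output you can expect).

```python
from itertools import combinations

def ask_model(question: str) -> str:
    # HYPOTHETICAL stub: replace with a real API call to your chosen model.
    # Canned answers simulate three contradictory "explanations".
    ask_model.calls = getattr(ask_model, "calls", 0) + 1
    canned = [
        "Because A implies B, and B implies C.",
        "Mainly because of C; A is irrelevant here.",
        "A and C are unrelated, but B settles it.",
    ]
    return canned[(ask_model.calls - 1) % len(canned)]

question = "Explain the reasoning behind your last answer."
explanations = [ask_model(question) for _ in range(3)]

# Pairwise comparison: with a real model you'd read these by hand,
# looking for contradictions rather than exact string matches.
for i, j in combinations(range(3), 2):
    same = explanations[i] == explanations[j]
    print(f"run {i + 1} vs run {j + 1}: {'same' if same else 'different'}")
```

With a real model the comparison step is the manual part: exact string matches are rare either way, so you read the three chains and check whether they can all be true at once.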
“Reasoning” LLMs just hallucinate more. Specifically, they are trained to produce cause-and-effect chains linking the question to the conclusion, using the same standard LLM hallucination machinery as everything else, and if you read those chains in detail you’ll spot some seriously broken links (because LLMs of any kind can’t think!).
So they take the usual Internet-argument approach: decide on the conclusion first, then manufacture reasons why it must be so.
If you don’t believe me, why not ask one? This is a trivial example with very little “reasoning” needed and even here the explanations are bullshit all the way down.
Note, especially, the final statement it made:
Now, I’m hopeless with technology. Yet even I can see that these “reasoning” models fix none of the main flaws of LLMbeciles. If you ask one how it does maths, it will likewise admit that the LLM “decides” whether maths is what’s needed and, if so, hands the problem off to a maths engine. But if the LLM “decides” it can do the sum on its own, it will, and you’ll still get garbage maths out of the machine.
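The failure mode above can be sketched in a few lines. This is my own toy illustration, not any vendor’s actual routing code: the “router” is just the model’s own judgment call, so a correct calculator downstream doesn’t help when the model decides not to use it.

```python
def math_engine(expr: str) -> float:
    # A real calculator (here: Python's own arithmetic on a bare expression).
    return float(eval(expr, {"__builtins__": {}}))

def model_answer(expr: str) -> float:
    # The model "doing it on its own": a fluent-looking but bogus guess.
    # (Invented heuristic for illustration: sum the digits it sees.)
    return float(sum(int(ch) for ch in expr if ch.isdigit()))

def routed(expr: str, thinks_it_needs_math: bool) -> float:
    # The routing decision is itself just the model's unreliable judgment.
    return math_engine(expr) if thinks_it_needs_math else model_answer(expr)

print(routed("137 * 24", True))   # engine path: correct
print(routed("137 * 24", False))  # model-alone path: garbage
```

The whole pipeline is only as reliable as the routing decision, and the routing decision is made by the component that can’t reason.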