[OpenAI CEO Sam] Altman brags about ChatGPT-4.5’s improved “emotional intelligence,” which he says makes users feel like they’re “talking to a thoughtful person.” Dario Amodei, the CEO of the AI company Anthropic, argued last year that the next generation of artificial intelligence will be “smarter than a Nobel Prize winner.” Demis Hassabis, the CEO of Google’s DeepMind, said the goal is to create “models that are able to understand the world around us.” These statements betray a conceptual error: Large language models do not, cannot, and will not “understand” anything at all. They are not emotionally intelligent or smart in any meaningful or recognizably human sense of the word. LLMs are impressive probability gadgets that have been fed nearly the entire internet, and produce writing not by thinking but by making statistically informed guesses about which lexical item is likely to follow another.
Primary source: https://www.msn.com/en-us/technology/artificial-intelligence/artificial-intelligence-is-not-intelligent/ar-AA1GcZBz
Secondary source: https://bookshop.org/a/12476/9780063418561
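For readers who want the mechanism made concrete, here is a minimal sketch of the kind of next-token prediction the article is describing: a toy bigram model that counts which word follows which and samples from those counts. The corpus, function names, and word-level tokens are illustrative assumptions, not anything from the article; production LLMs use neural networks over subword tokens, but the generation loop (predict a distribution over the next token, sample it, append it, repeat) has the same shape.

```python
import random
from collections import Counter, defaultdict

# Toy illustration of next-token prediction: a bigram model built from raw counts.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(prev: str) -> str:
    """Sample the next word in proportion to how often it followed `prev`."""
    counts = following[prev]
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

def generate(start: str, length: int = 8) -> str:
    """Repeatedly predict-and-sample, exactly the loop described above."""
    words = [start]
    for _ in range(length):
        words.append(next_word(words[-1]))
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat on the mat and the cat"
```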
Likewise, reducing humanity to “probability gadgets” would be just as much of a conceptual error.
So what do you think we run on? Magic and souls?
It’s called understanding science and biology. When you drill down, there’s nothing down there that’s not physical.
If that’s the case, there’s no reason it couldn’t theoretically be modelled and simulated.
This would be like having all the technical workings of nuclear bombs published and, rather than focusing on their resultant harms and misuses, sticking your head in the sand and saying ‘nuh uh, no way an atom can make a big explosion, don’t you know how small atoms are?’
I think that if the human mind were a simple “probability gadget”, then we’d have discovered and implemented the algorithm of consciousness in human-level AI 30 years ago.
And you’re basing that on all the LLMs that existed 30 years ago?
I’m basing that on the amount of compute power available then.
The article posits that LLMs are just fancy probability machines, which is what I was responding to. I’m positing that human intelligence, while more advanced than current LLMs, is still just a probability machine, presumably a more advanced one than an LLM.
So why would you think that human-level AI would have existed 30 years ago if LLMs couldn’t?
The problem with your line of reasoning is that “probability machines” are Turing-complete, and could therefore be used to emulate any computable process. The statement is literally equivalent to “the mind is a computer”, which is itself a thought-terminating cliché that ignores the actual complexities involved.
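To make the narrower point here concrete: a “probability machine” whose transition probabilities are all 0 or 1 is just a deterministic machine, so calling something a probability machine does not by itself rule much in or out. The parity automaton below and all of its names are my own illustrative assumptions, not anything described in the thread; it is a sketch of the subsumption claim, not a proof of Turing-completeness.

```python
import random

# Deterministic transition table for a parity checker: (state, input bit) -> next state.
delta = {
    ("even", 0): "even",
    ("even", 1): "odd",
    ("odd", 0): "odd",
    ("odd", 1): "even",
}

# The same table re-expressed as a "probability machine": each (state, input)
# maps to a distribution over next states, with all of the mass on one outcome.
p_delta = {key: {nxt: 1.0} for key, nxt in delta.items()}

def run_probabilistic(bits):
    """Sample each transition; with degenerate probabilities the run is always the deterministic one."""
    state = "even"
    for b in bits:
        dist = p_delta[(state, b)]
        states, probs = zip(*dist.items())
        state = random.choices(states, weights=probs)[0]
    return state

print(run_probabilistic([1, 0, 1, 1]))  # "odd" (three 1s in the input)
```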
Nobody’s arguing that simulated or emulated consciousness isn’t possible, just that if it were as simple as you’re making it out to be then we’d have figured it out decades ago.
But I’m not. I have literally stated in every comment that human intelligence is more advanced than LLMs, but that both are just statistical machines.
There’s literally no reason to think that would have been possible decades ago based on this line of reasoning.
Again, literally all machines can be expressed in the form of statistics.
You might as well be saying that both LLMs and human intelligence exist, because that’s all that can be concluded from the equivalence you’re trying to draw.
You should read up on modern philosophy. P-zombies and stuff like that. Very interesting.