[OpenAI CEO Sam] Altman brags about ChatGPT-4.5’s improved “emotional intelligence,” which he says makes users feel like they’re “talking to a thoughtful person.” Dario Amodei, the CEO of the AI company Anthropic, argued last year that the next generation of artificial intelligence will be “smarter than a Nobel Prize winner.” Demis Hassabis, the CEO of Google’s DeepMind, said the goal is to create “models that are able to understand the world around us.” These statements betray a conceptual error: Large language models do not, cannot, and will not “understand” anything at all. They are not emotionally intelligent or smart in any meaningful or recognizably human sense of the word. LLMs are impressive probability gadgets that have been fed nearly the entire internet, and produce writing not by thinking but by making statistically informed guesses about which lexical item is likely to follow another.
Primary source: https://www.msn.com/en-us/technology/artificial-intelligence/artificial-intelligence-is-not-intelligent/ar-AA1GcZBz
Secondary source: https://bookshop.org/a/12476/9780063418561
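For anyone who hasn't seen it spelled out: the "statistically informed guesses about which lexical item is likely to follow another" can be sketched with a toy bigram counter. This is purely illustrative (a real LLM is a neural network with billions of parameters trained on vastly more text, not a lookup table), but the next-token principle is the same:

```python
from collections import Counter, defaultdict

# Tiny stand-in corpus; the article's point is that LLMs do this over
# "nearly the entire internet" rather than one sentence.
corpus = "a cat and a dog and a cat sat".split()

# Bigram table: for each token, count which tokens follow it.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_token(prev):
    """Return the statistically most likely token to follow `prev`."""
    return follows[prev].most_common(1)[0][0]

print(next_token("a"))  # "cat" — seen twice after "a", vs. "dog" once
```

No understanding anywhere in there, just frequency; scale it up by many orders of magnitude and smooth the statistics with a neural net, and you get fluent text the same way.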
I mean, nowadays the can of worms has long since been opened, and there’s the whole spiel about how definitions change over time with use, so… sure, I guess?
“AI” became synonymous with computing in general, and “AGI” moved the goalposts in an attempt to un-muddy the waters? Give it time; I’m sure marketing will fuck that one up too, and a couple of other randoms on the internet will be having this same conversation, but between AGI and whatever the new flavor is.