[OpenAI CEO Sam] Altman brags about ChatGPT-4.5’s improved “emotional intelligence,” which he says makes users feel like they’re “talking to a thoughtful person.” Dario Amodei, the CEO of the AI company Anthropic, argued last year that the next generation of artificial intelligence will be “smarter than a Nobel Prize winner.” Demis Hassabis, the CEO of Google’s DeepMind, said the goal is to create “models that are able to understand the world around us.” These statements betray a conceptual error: Large language models do not, cannot, and will not “understand” anything at all. They are not emotionally intelligent or smart in any meaningful or recognizably human sense of the word. LLMs are impressive probability gadgets that have been fed nearly the entire internet, and produce writing not by thinking but by making statistically informed guesses about which lexical item is likely to follow another.

OP: https://slashdot.org/story/25/06/09/062257/ai-is-not-intelligent-the-atlantic-criticizes-scam-underlying-the-ai-industry

Primary source: https://www.msn.com/en-us/technology/artificial-intelligence/artificial-intelligence-is-not-intelligent/ar-AA1GcZBz

Secondary source: https://bookshop.org/a/12476/9780063418561
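
That last sentence describes next-token prediction. Here is a minimal sketch of what such a "statistically informed guess" looks like, with a toy, hand-written distribution standing in for a real model's output (the words and probabilities below are invented purely for illustration):

```python
import random

# Toy next-token distribution for the context "the cat sat on the".
# A real LLM computes a distribution over tens of thousands of tokens
# with a neural network; here the numbers are simply made up.
next_token_probs = {
    "mat": 0.62,
    "floor": 0.21,
    "roof": 0.09,
    "keyboard": 0.08,
}

def sample_next_token(probs):
    """Pick one token, weighted by its estimated probability."""
    tokens = list(probs)
    weights = list(probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

context = "the cat sat on the"
print(context, sample_next_token(next_token_probs))
```

Generation is just this guess repeated: the chosen token is appended to the context and the next one is sampled, one token at a time.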

      • queermunist she/her@lemmy.ml · 3 days ago

        Okay, but LLMs don’t have thoughts that can be terminated, so that’s just another way they aren’t intelligent. Saying “clearly” for them would just be a way to continue the pattern; they wouldn’t use it the way I did, to express how self-evident and insultingly obvious it is.

        AI isn’t impossible, but LLMs are not intelligent and you need to stop dehumanizing yourself to argue for their intelligence.

          • queermunist she/her@lemmy.ml · 3 days ago

            The article is about LLMs specifically? And it’s arguing that intelligence can’t exist without subjectivity, the qualia of experiential data. These LLM text generators are being assigned intelligence they do not have because we have a tendency to assume there is a mind behind the text.

            This is not about AI being conceptually impossible because they’re “made of math”. I’m not even sure where you got that? Where did that quote come from? It’s not in the link, or the Atlantic article.

              • queermunist she/her@lemmy.ml · 3 days ago

                LLMs are impressive probability gadgets that have been fed nearly the entire internet, and produce writing not by thinking but by making statistically informed guesses about which lexical item is likely to follow another.

                This line?

                Because that sure isn’t the process of human thought! We have reasoning, logical deductions, experiential qualia, subjectivity. Intelligence is so much more than just making statistically informed guesses; we can actually prove things and uncover truths.

                You’re dehumanizing yourself by comparing yourself to a chatbot. Stop that.

                  • hendrik@palaver.p3x.de · 3 days ago

                    I feel you’re wasting your time here. Some people seem to be under the impression it’s the year 1990 or 1950 and we’re talking about Markov-chain chatbots. The stochastic parrot argument would certainly apply there. But we’re talking about something else here.

                    And it’s also a fairly common misconception that AI somehow has to be intelligent in the same way a human is. And by using the same methods. But it really doesn’t work that way. That’s why we put the word “Artificial” in front of “Intelligence”.

                    But this take gets repeated over and over again, and I don’t really know why we need to argue about how maths and statistics are a part of our world, how language and perception work, and who is dehumanizing themselves… The scientific approach is to define intelligence, come up with some means of measuring it, and then measure it… And that’s what we’ve done. We can set the perception part of language aside. We can measure how well “intelligent” entities memorize and recall facts, combine them, and transfer and apply knowledge… That’s not really a secret… I mean, obviously it seems to be misunderstood or hyped or whatever by lots of people. But we also (in theory) know some of the facts about AI, what it can and cannot do, and how that relates to the vague concept of intelligence.
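
                    As a rough illustration of what “define it, then measure it” looks like in practice, here is a minimal benchmark-style sketch; ask_model() and the answer key are both invented stand-ins here, not a real evaluation harness:

                    ```python
                    # Hypothetical recall benchmark: score answers against a fixed key.
                    def ask_model(question):
                        """Stand-in for whatever system is being evaluated."""
                        canned = {"What is the capital of France?": "Paris"}
                        return canned.get(question, "I don't know")

                    answer_key = {
                        "What is the capital of France?": "Paris",
                        "Who wrote 'Frankenstein'?": "Mary Shelley",
                    }

                    correct = sum(
                        ask_model(q).strip().lower() == a.lower()
                        for q, a in answer_key.items()
                    )
                    print(f"recall score: {correct}/{len(answer_key)}")
                    ```

                    Whether a score like that deserves the word “intelligence” is exactly what this thread is arguing about; the sketch only shows that the measuring itself is mechanically straightforward.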

                  • ZDL@lazysoci.al · 2 days ago

                    Go to one of these “reasoning” AIs. Ask it to explain its reasoning. (It will!) Then ask it to explain its reasoning again. (It will!) Ask it yet again. (It will gladly do it thrice!)

                    Then put the “reasoning” side by side and count the contradictions. There’s a very good chance that the three explanations are not only different from each other, they’re very likely also mutually incompatible.
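
                    If you want to run that experiment yourself, here is a minimal sketch, assuming the OpenAI Python SDK purely as an example client and a placeholder model name; any chat-style model or API would do:

                    ```python
                    from openai import OpenAI  # assumption: OpenAI SDK as an example client

                    client = OpenAI()          # reads OPENAI_API_KEY from the environment
                    MODEL = "gpt-4o-mini"      # placeholder model name

                    def ask(messages):
                        reply = client.chat.completions.create(model=MODEL, messages=messages)
                        content = reply.choices[0].message.content
                        messages.append({"role": "assistant", "content": content})
                        return content

                    messages = [{"role": "user", "content": "Which is larger, 9.11 or 9.9?"}]
                    print("answer:", ask(messages))

                    # Ask it to explain its reasoning three times, then compare the
                    # explanations side by side and count the contradictions.
                    for i in range(3):
                        messages.append({"role": "user", "content": "Explain your reasoning."})
                        print(f"--- explanation {i + 1} ---")
                        print(ask(messages))
                    ```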

                    “Reasoning” LLMs just do more hallucination: specifically, they are trained to produce cause/effect logic chains, using standard LLM hallucination practice to link the question to the conclusion. If you read those chains in detail you’ll see some seriously broken links (because LLMs of any kind can’t think!).

                    So they do the usual Internet argument approach: decide what the conclusion is, then make excuses for why it must be so.

                    If you don’t believe me, why not ask one? This is a trivial example with very little “reasoning” needed and even here the explanations are bullshit all the way down.

                    Note, especially, the final statement it made:

                    Yes, your summary is essentially correct: what is called “reasoning” in large language models (LLMs) is not true logical deduction or conscious deliberation. Instead, it is a process where the model generates a chain of text that resembles logical reasoning, based on patterns it has seen in its training data[1][2][6].

                    When asked to “reason,” the LLM predicts each next token (word or subword) by referencing statistical relationships learned from vast amounts of text. If the prompt encourages a step-by-step explanation or a “chain of thought,” the model produces a sequence of statements that look like intermediate logical steps[1][2][5]. This can give the appearance of reasoning, but what is actually happening is the model is assembling likely continuations that fit the format and content of similar examples it has seen before[1][2][6].

                    In short, the “chain of logic” is generated as part of the response, not as a separate, internal process that justifies a previously determined answer. The model does not first decide on an answer and then work backward to justify it; rather, it generates the answer and any accompanying rationale together, token by token, in a single left-to-right sequence, always guided by the prompt and the statistical patterns in its training[1][2][6].

                    “Ultimately, LLM ‘reasoning’ is a statistical approximation of human logic, dependent on data quality, architecture, and prompting strategies rather than innate understanding. … Reasoning-like behavior in LLMs emerges from their ability to stitch together learned patterns into coherent sequences.” [1]

                    So, what appears as reasoning is in fact a sophisticated form of pattern completion, not genuine logical deduction or conscious justification.

                    [1] https://milvus.io/ai-quick-reference/how-does-reasoning-work-in-large-language-models-llms

                    [2] https://www.digitalocean.com/community/tutorials/understanding-reasoning-in-llms

                    [3] https://sebastianraschka.com/blog/2025/understanding-reasoning-llms.html

                    [4] https://en.wikipedia.org/wiki/Reasoning_language_model

                    [5] https://arxiv.org/html/2407.11511v1

                    [6] https://www.anthropic.com/research/tracing-thoughts-language-model

                    [7] https://magazine.sebastianraschka.com/p/state-of-llm-reasoning-and-inference-scaling

                    [8] https://cameronrwolfe.substack.com/p/demystifying-reasoning-models
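
                    A minimal sketch of the single left-to-right pass the quoted explanation describes, assuming the Hugging Face transformers library and GPT-2 as a tiny stand-in model (a “reasoning” LLM does the same thing at far larger scale):

                    ```python
                    from transformers import AutoModelForCausalLM, AutoTokenizer

                    # GPT-2 is only a stand-in; the point is the decoding loop, not quality.
                    tokenizer = AutoTokenizer.from_pretrained("gpt2")
                    model = AutoModelForCausalLM.from_pretrained("gpt2")

                    prompt = ("Q: Alice has 3 apples and eats 1. How many are left?\n"
                              "Let's think step by step.\nA:")
                    inputs = tokenizer(prompt, return_tensors="pt")

                    # One pass: the "chain of thought" and the final answer come out as a
                    # single stream of most-likely next tokens, not as separate processes.
                    output = model.generate(**inputs, max_new_tokens=60, do_sample=False)
                    print(tokenizer.decode(output[0], skip_special_tokens=True))
                    ```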

                    Now, I’m about as non-technical as they come. Yet even I can figure out that these “reasoning” models share all the main flaws of LLMbeciles. If you ask one how it does maths, it will also admit that the LLM “decides” whether maths is what’s needed and, if so, hands the problem off to a maths engine. But if the LLM “decides” it can do it on its own, it will. So you’ll still get garbage maths out of the machine.
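
                    A minimal sketch of that “decide whether to hand off to a maths engine” step; the routing rule here is entirely hypothetical and stands in for whatever opaque decision the model actually makes:

                    ```python
                    import re

                    def maths_engine(expression):
                        """The hand-off target: exact arithmetic, no guessing."""
                        return str(eval(expression, {"__builtins__": {}}))  # illustration only

                    def llm_guess(prompt):
                        """Stand-in for the LLM answering 'on its own' by token prediction."""
                        return "The answer is probably 56,188."  # plausible-looking, not computed

                    def answer(prompt):
                        # Hypothetical router: hand off only if the prompt *looks* like bare arithmetic.
                        if re.fullmatch(r"[\d+\-*/(). ]+", prompt):
                            return maths_engine(prompt)
                        return llm_guess(prompt)

                    print(answer("123 * 456"))           # routed: 56088
                    print(answer("What is 123 * 456?"))  # not routed: garbage maths
                    ```

                    If the routing decision is itself made by the probabilistic text generator, nothing guarantees the hand-off happens when it should, which is the failure mode described above.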

          • knightly the Sneptaur@pawb.social · 3 days ago

            I think that if the human mind were a simple “probability gadget” then we’d have discovered and implemented the algorithm of consciousness in human-level AI 30 years ago.

                  • knightly the Sneptaur@pawb.social · 3 days ago

                    The problem with your line of reasoning is that “probability machines” are Turing-complete, and could therefore be used to emulate any computable process. The statement is literally equivalent to “the mind is a computer”, which is itself a thought-terminating cliché that ignores the actual complexities involved.

                    Nobody’s arguing that simulated or emulated consciousness isn’t possible, just that if it were as simple as you’re making it out to be then we’d have figured it out decades ago.
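
                    One way to see the Turing-completeness point: a “probability machine” whose distributions put all their mass on a single outcome just is a deterministic machine. A minimal sketch, using an arbitrary two-state parity automaton as the example:

                    ```python
                    import random

                    # Transition "probabilities" for a parity automaton. Every distribution
                    # is degenerate (all mass on one successor state), so the probabilistic
                    # machinery reduces to ordinary deterministic computation.
                    transitions = {
                        ("even", "0"): {"even": 1.0, "odd": 0.0},
                        ("even", "1"): {"odd": 1.0, "even": 0.0},
                        ("odd", "0"): {"odd": 1.0, "even": 0.0},
                        ("odd", "1"): {"even": 1.0, "odd": 0.0},
                    }

                    def step(state, symbol):
                        dist = transitions[(state, symbol)]
                        states, weights = zip(*dist.items())
                        return random.choices(states, weights=weights, k=1)[0]

                    state = "even"
                    for bit in "10110":   # three 1s, so the parity ends up odd
                        state = step(state, bit)
                    print(state)          # always "odd", despite sampling at every step
                    ```

                    Which is the point: calling something a “probability machine” doesn’t, by itself, settle what it can or cannot compute.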

          • vala@lemmy.world · 3 days ago

            You should read up on modern philosophy. P-zombies and stuff like that. Very interesting.