[OpenAI CEO Sam] Altman brags about ChatGPT-4.5’s improved “emotional intelligence,” which he says makes users feel like they’re “talking to a thoughtful person.” Dario Amodei, the CEO of the AI company Anthropic, argued last year that the next generation of artificial intelligence will be “smarter than a Nobel Prize winner.” Demis Hassabis, the CEO of Google’s DeepMind, said the goal is to create “models that are able to understand the world around us.” These statements betray a conceptual error: Large language models do not, cannot, and will not “understand” anything at all. They are not emotionally intelligent or smart in any meaningful or recognizably human sense of the word. LLMs are impressive probability gadgets that have been fed nearly the entire internet, and produce writing not by thinking but by making statistically informed guesses about which lexical item is likely to follow another.
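
For concreteness, here is a minimal, purely illustrative sketch of that kind of "statistically informed guessing," using a toy bigram model in Python (real LLMs use transformer networks over subword tokens, but the generation loop works on the same principle):

```python
import random
from collections import Counter, defaultdict

# Toy training data standing in for "nearly the entire internet".
corpus = "the cat sat on the mat and the cat ate".split()

# Count how often each word follows each other word (a bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev):
    """Sample the next word in proportion to how often it followed `prev`."""
    counts = follows[prev]
    return random.choices(list(counts), weights=list(counts.values()), k=1)[0]

# Generate text: no thinking, just guesses about which word is likely to follow.
word, output = "the", ["the"]
for _ in range(6):
    if not follows[word]:
        break  # dead end: this word never had a successor in the corpus
    word = next_word(word)
    output.append(word)
print(" ".join(output))
```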

OP: https://slashdot.org/story/25/06/09/062257/ai-is-not-intelligent-the-atlantic-criticizes-scam-underlying-the-ai-industry

Primary source: https://www.msn.com/en-us/technology/artificial-intelligence/artificial-intelligence-is-not-intelligent/ar-AA1GcZBz

Secondary source: https://bookshop.org/a/12476/9780063418561

  • masterspace@lemmy.ca · 1 day ago

    This line of reasoning is dumb. Humans are impressive probability gadgets that have been fed huge amounts of training data.

    Current LLMs obviously are shit at reasoning and may be the wrong structural pattern for building intelligence (or could just be one building block of it), but there’s no reason to think that simulations of neurons couldn’t build an intelligence one day just because they’re based on math and circuits.
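
    To make "simulations of neurons … based on math and circuits" concrete, here is a minimal sketch of a single artificial neuron (illustrative only; biological neurons are far more complicated, and one such unit is obviously nowhere near an intelligence):

    ```python
    import math

    def neuron(inputs, weights, bias):
        """One artificial neuron: weighted sum plus bias, squashed to (0, 1)."""
        activation = sum(x * w for x, w in zip(inputs, weights)) + bias
        return 1.0 / (1.0 + math.exp(-activation))

    # With hand-picked weights this unit computes a soft AND gate:
    # it only "fires" (output near 1) when both inputs are on.
    for a in (0.0, 1.0):
        for b in (0.0, 1.0):
            print(a, b, round(neuron([a, b], [10.0, 10.0], -15.0), 3))
    ```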

      • masterspace@lemmy.ca · 1 day ago

        Saying ‘clearly’ in this context is a thought-terminating expression, not reasoning.

        • queermunist she/her@lemmy.ml · 1 day ago

          Okay, but LLMs don’t have thoughts that can be terminated, so that’s just another way they aren’t intelligent. Saying “clearly” for them would just be a way to continue the pattern; they wouldn’t use it the way I did, to express how self-evident and insultingly obvious it is.

          AI isn’t impossible, but LLMs are not intelligent and you need to stop dehumanizing yourself to argue for their intelligence.

          • masterspace@lemmy.ca · 1 day ago

            Okay, but LLMs don’t have thoughts that can be terminated, so that’s just another way they aren’t intelligent. Saying “clearly” for them would just be a way to continue the pattern; they wouldn’t use it the way I did, to express how self-evident and insultingly obvious it is.

            So? As you said, nothing says that they couldn’t eventually be part of an intelligence, but the reasoning presented in the article is basically just ‘they’re made of math, so they could never be intelligent.’

            AI isn’t impossible, but LLMs are not intelligent and you need to stop dehumanizing yourself to argue for their intelligence.

            You need to stop limiting yourself to thinking of all intelligence worthy of consideration as having to be exactly the same as humans. That’s literally one of the core lessons of Star Trek and basically every single BBC documentary. Are LLMs intelligent? No. Could we make synthetic intelligence worthy of consideration? All evidence points to eventually yes.

            • queermunist she/her@lemmy.ml · 1 day ago

              The article is about LLMs specifically? And it’s arguing that intelligence can’t exist without subjectivity, the qualia of experiential data. These LLM text generators are being assigned intelligence they do not have because we have a tendency to assume there is a mind behind the text.

              This is not about AI being conceptually impossible because they’re “made of math”. I’m not even sure where you got that. Where did that quote come from? It’s not in the link or the Atlantic article.

              • masterspace@lemmy.ca · 1 day ago

                It’s the last line quoted in the post. They talk a lot of fancy talk up front, but their entire reasoning for LLMs not being capable of thought boils down to the claim that they’re statistical probability machines.

                So is the process of human thought.

                • queermunist she/her@lemmy.ml · 1 day ago

                  LLMs are impressive probability gadgets that have been fed nearly the entire internet, and produce writing not by thinking but by making statistically informed guesses about which lexical item is likely to follow another.

                  This line?

                  Because that sure isn’t the process of human thought! We have reasoning, logical deductions, experiential qualia, subjectivity. Intelligence is so much more than just making statistically informed guesses, we can actually prove things and uncover truths.

                  You’re dehumanizing yourself by comparing yourself to a chatbot. Stop that.

                    • masterspace@lemmy.ca · 1 day ago

                    Yes, and newer models aren’t just raw LLMs; they’re specifically designed to reason and deduce, and they’re starting to chain LLMs together with other types of models (a rough sketch of that chaining idea follows below).

                    It’s not dehumanizing to recognize that alien intelligence could exist, and it’s not dehumanizing to think that we are capable of building synthetic intelligence.
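
                    A rough sketch of that chaining idea; `fake_llm` is a hypothetical stand-in, not any vendor’s API, and the point is only the architecture of pairing a statistical generator with a deterministic checker:

                    ```python
                    def fake_llm(prompt):
                        """Hypothetical stand-in for an LLM call."""
                        return "2 + 2 = 5"  # fluent but wrong, as LLM output can be

                    def check_arithmetic(claim):
                        """Deterministically verify a claim of the form 'a + b = c'."""
                        left, right = claim.split("=")
                        a, _, b = left.split()
                        return int(a) + int(b) == int(right)

                    draft = fake_llm("What is 2 + 2?")
                    if check_arithmetic(draft):
                        print("verified:", draft)
                    else:
                        print("rejected by checker:", draft)
                    ```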

          • masterspace@lemmy.ca · 1 day ago

            So what do you think we run on? Magic and souls?

            It’s called understanding science and biology. When you drill down, there’s nothing down there that isn’t physical.

            If that’s the case, there’s no reason it couldn’t theoretically be modelled and simulated.

            This would be like all the technical workings of nuclear bombs being published, and rather than focusing on their resulting harms and misuses, you stick your head in the sand and say ‘nuh uh, no way an atom can make a big explosion, don’t you know how small atoms are?’

            • knightly the Sneptaur@pawb.social · 1 day ago

              I think that if the human mind were a simple “probability gadget”, then we’d have discovered and implemented the algorithm of consciousness in human-level AI 30 years ago.

                    • masterspace@lemmy.ca · 1 day ago

                    The article posits that LLMs are just fancy probability machines, which is what I was responding to. I’m positing that human intelligence, while more advanced than current LLMs, is still just a probability machine, presumably a more advanced one than an LLM.

                    So why would you expect human-level intelligence to have been implemented 30 years ago, when even LLMs couldn’t be built back then?

            • vala@lemmy.world · 1 day ago

              You should read up on modern philosophy. P-zombies and stuff like that. Very interesting.