[OpenAI CEO Sam] Altman brags about ChatGPT-4.5’s improved “emotional intelligence,” which he says makes users feel like they’re “talking to a thoughtful person.” Dario Amodei, the CEO of the AI company Anthropic, argued last year that the next generation of artificial intelligence will be “smarter than a Nobel Prize winner.” Demis Hassabis, the CEO of Google’s DeepMind, said the goal is to create “models that are able to understand the world around us.” These statements betray a conceptual error: Large language models do not, cannot, and will not “understand” anything at all. They are not emotionally intelligent or smart in any meaningful or recognizably human sense of the word. LLMs are impressive probability gadgets that have been fed nearly the entire internet, and produce writing not by thinking but by making statistically informed guesses about which lexical item is likely to follow another.

OP: https://slashdot.org/story/25/06/09/062257/ai-is-not-intelligent-the-atlantic-criticizes-scam-underlying-the-ai-industry

Primary source: https://www.msn.com/en-us/technology/artificial-intelligence/artificial-intelligence-is-not-intelligent/ar-AA1GcZBz

Secondary source: https://bookshop.org/a/12476/9780063418561
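
The excerpt's description of LLMs as "probability gadgets" that make "statistically informed guesses about which lexical item is likely to follow another" is describing next-token sampling. Below is a minimal toy sketch of that loop; the vocabulary, the fake scoring function, and every name here are illustrative stand-ins, not any real model's API.

```python
import math
import random

# Toy vocabulary; a real LLM has tens of thousands of tokens.
vocabulary = ["the", "cat", "sat", "on", "mat", "."]

def next_token_scores(context: list[str]) -> list[float]:
    # Stand-in for a trained network: real models derive these scores
    # from billions of learned parameters. Here we fake them with a
    # hash so the example stays self-contained.
    return [(hash((tuple(context), tok)) % 100) / 25.0 for tok in vocabulary]

def sample_next(context: list[str], temperature: float = 1.0) -> str:
    scores = next_token_scores(context)
    # Softmax turns raw scores into a probability distribution.
    exps = [math.exp(s / temperature) for s in scores]
    probs = [e / sum(exps) for e in exps]
    # The "statistically informed guess": draw one token by probability.
    return random.choices(vocabulary, weights=probs, k=1)[0]

context = ["the", "cat"]
for _ in range(4):
    context.append(sample_next(context))
print(" ".join(context))
```

Scale aside, that loop is the whole generation procedure the excerpt is pointing at: score, normalize, sample, append.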

    • masterspace@lemmy.ca · 1 day ago

      So what do you think we run on? Magic and souls?

      It’s called understanding science and biology. When you drill down, there’s nothing down there that isn’t physical.

      If that’s the case, there’s no reason it couldn’t theoretically be modelled and simulated.

      This would be like having all the technical workings of nuclear bombs published and, rather than focusing on their resulting harms and misuses, sticking your head in the sand and saying ‘nuh uh, no way an atom can make a big explosion, don’t you know how small atoms are?’

      • knightly the Sneptaur@pawb.social · 1 day ago

        I think that if the human mind were a simple “probability gadget” then we’d have discovered and implemented the algorithm of consciousness in human-level AI 30 years ago.

            • masterspace@lemmy.ca · 1 day ago

              The article posits that LLMs are just fancy probability machines, which is what I was responding to. I’m positing that human intelligence, while more advanced than current LLMs, is still just a probability machine: a more advanced one than an LLM, but a probability machine all the same.

              So why would you think that human-level intelligence would have existed 30 years ago if LLMs couldn’t have?

              • knightly the Sneptaur@pawb.social · 1 day ago

                The problem with your line of reasoning is that “probability machines” are Turing-complete, and could therefore be used to emulate any computable process. The statement is literally equivalent to “the mind is a computer”, which is itself a thought-terminating cliché that ignores the actual complexities involved.

                Nobody’s arguing that simulated or emulated consciousness isn’t possible, just that if it were as simple as you’re making it out to be then we’d have figured it out decades ago.
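
                A minimal sketch of the reduction behind this comment's point: a "probability machine" whose transition weights are all 0 or 1 is just an ordinary deterministic machine, so the label by itself rules nothing in or out. The parity-checker example and all names below are illustrative choices, and the sketch illustrates the reduction rather than proving Turing-completeness.

                ```python
                import random

                # Transition table of a "probability machine": for each
                # (state, symbol) pair, a distribution over next states.
                # With degenerate probability-1 weights it reduces to a
                # plain deterministic automaton tracking the parity of
                # 1-bits in its input.
                transitions = {
                    ("even", "0"): {"even": 1.0},
                    ("even", "1"): {"odd": 1.0},
                    ("odd", "0"): {"odd": 1.0},
                    ("odd", "1"): {"even": 1.0},
                }

                def step(state: str, symbol: str) -> str:
                    dist = transitions[(state, symbol)]
                    states = list(dist.keys())
                    weights = list(dist.values())
                    # Sampling from a probability-1 distribution is a
                    # deterministic lookup in disguise.
                    return random.choices(states, weights=weights, k=1)[0]

                def has_even_parity(bits: str) -> bool:
                    state = "even"
                    for b in bits:
                        state = step(state, b)
                    return state == "even"

                print(has_even_parity("1101"))  # False: three 1-bits
                ```

                The same trick applies to any deterministic transition function, which is why "it's a statistical machine" describes computers in general rather than picking out any particular kind of mind.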

                  • masterspace@lemmy.ca · 1 day ago

                  > Nobody’s arguing that simulated or emulated consciousness isn’t possible, just that if it were as simple as you’re making it out to be then we’d have figured it out decades ago.

                  But I’m not. I have literally stated in every comment that human intelligence is more advanced than LLMs, but that both are just statistical machines.

                  There’s literally no reason to think that would have been possible decades ago based on this line of reasoning.

                    • knightly the Sneptaur@pawb.social · 1 day ago

                    Again, literally all machines can be expressed in the form of statistics.

                    You might as well just be saying that both LLMs and human intelligence exist, because that’s all that can be concluded from the equivalence you’re trying to draw.

      • vala@lemmy.world · 1 day ago

        You should read up on modern philosophy. P-zombies and stuff like that. Very interesting.