[OpenAI CEO Sam] Altman brags about ChatGPT-4.5’s improved “emotional intelligence,” which he says makes users feel like they’re “talking to a thoughtful person.” Dario Amodei, the CEO of the AI company Anthropic, argued last year that the next generation of artificial intelligence will be “smarter than a Nobel Prize winner.” Demis Hassabis, the CEO of Google’s DeepMind, said the goal is to create “models that are able to understand the world around us.” These statements betray a conceptual error: Large language models do not, cannot, and will not “understand” anything at all. They are not emotionally intelligent or smart in any meaningful or recognizably human sense of the word. LLMs are impressive probability gadgets that have been fed nearly the entire internet, and produce writing not by thinking but by making statistically informed guesses about which lexical item is likely to follow another.

OP: https://slashdot.org/story/25/06/09/062257/ai-is-not-intelligent-the-atlantic-criticizes-scam-underlying-the-ai-industry

Primary source: https://www.msn.com/en-us/technology/artificial-intelligence/artificial-intelligence-is-not-intelligent/ar-AA1GcZBz

Secondary source: https://bookshop.org/a/12476/9780063418561
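
For readers who want to see what “statistically informed guesses about which lexical item is likely to follow another” means mechanically, here is a minimal sketch using a toy bigram counter. This is an illustration only: real LLMs learn billions of neural-network weights over subword tokens rather than counting whole words, and the corpus here is made up.

```python
import random
from collections import Counter, defaultdict

# Toy corpus; real models train on vast amounts of text.
corpus = "the cat sat on the mat and the cat slept".split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev: str) -> str:
    """Sample a continuation, weighted by observed frequency."""
    candidates = follows[prev]
    words = list(candidates)
    weights = [candidates[w] for w in words]
    return random.choices(words, weights=weights)[0]

# "the" is followed by "cat" twice and "mat" once, so "cat" is the
# statistically likelier guess; no understanding is involved.
print(next_word("the"))
```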

  • some_guy@lemmy.sdf.org · 14 hours ago

    It’s about time we call the hype machine what it is. Ed Zitron has been calling this out for more than a year in his newsletter and on his podcast. These charlatans pretend we’re on the edge of thinking machines. Bullshit. They are statistical word generators. Can they be made useful beyond that? It appears so [1], but those other uses haven’t seen mass adoption so far. Curing cancer certainly doesn’t appear to be near.

    1. https://www.macstories.net/stories/sky-for-mac-preview/
  • Psaldorn@lemmy.world · 13 hours ago

    If you ever tried to use AI for code, you’d know how dumb it is.

    Generally it’s OK: it’ll get some stuff done, but if it thinks something works a certain way, you can’t convince it otherwise. It hallucinates documentation, admits it made it up, then carries on telling you to use the made-up parts of the code.

    Infuriating.

    Like I said though, it’s generally pretty good at helping you learn a new language if you have knowledge to start with.

    People learning from scratch are cooked; it makes crazy decisions sometimes that compound over time and leave you with trash.

    • 🇰 🌀 🇱 🇦 🇳 🇦 🇰 🇮 @pawb.social · 11 hours ago

      If you ever tried to use AI for code, you’d know how dumb it is.

      If you ever tried using it for anything you’re pretty familiar with, you’d know how dumb it is.

      That’s the only reason I think people still believe AI is great: they don’t know shit, so they think the AI is giving them good info when it’s not.

    • Daniel Quinn@lemmy.ca · 13 hours ago

      I’ve actually tried to use these things to learn both Go and Rust (I’ve been writing Python for 17 years) and the experience was terrible. In both cases, it would generate code that referenced packages that didn’t exist, used outdated patterns, and didn’t even compile. It was wholly useless as a learning tool.

      In the end what worked was what always works: I got a book and started on page 1. It was hard, but I started actually learning after a few hours.

      • Psaldorn@lemmy.world · 12 hours ago

        I used Gemini for Go and was pleasantly surprised. It might be important to note that I don’t ask it to generate a whole thing, but more like “in Go, how do I <do small thing>”, and I sort of build up from there myself.

        ChatGPT and DeepSeek were a lot more failure-prone.

        As an aside, I found Gemini very good at debugging Blender issues, where the UI is complex and unforgiving and problems are super hard to search for (different versions, similarly named things, etc.).

        But as soon as you hit something it won’t accept has changed, it’s basically useless. Often, though, it got me to a point where I could find forum posts about “where did functionality x move to?”

        Just like VR, I think the bubble will burst and it will remain a niche technology that can be fine-tuned for certain professions or situations.

        People getting excited about ways for AI to control their PCs are probably going to be in for a bad time…

  • BlameTheAntifa@lemmy.world · 14 hours ago

    The “Artificial” part isn’t clue enough?

    But I get it. The executives constantly hype up these madlib machines as things they are not. Emotional intelligence? It has neither emotion nor intelligence. “Artificial Intelligence” literally means it has the appearance of intelligence, but not actual intelligence.

    I used to be excited at the prospect of this technology, but at the time I naively expected people to be able to create and run their own. Instead, we got this proprietary, capital-chasing, kleptocratic corporate dystopia.

    • Sterile_Technique@lemmy.world · 10 hours ago

      The “Artificial” part isn’t clue enough?

      Imo, no. The face-value connotation of “Artificial Intelligence” is intelligence that’s artificial: actual intelligence, but not biological. That’s a lot different from “it kinda looks like intelligence so long as you don’t look too hard at what’s beneath the hood”.

      Thus far, examples of that only exist in sci-fi. That’s part of why people are opposed to the bullshit generators marketed as “AI”: calling it “AI” in the first place is dishonest. And that goes way back. Videogame NPCs, Microsoft’s “Clippy”, etc. have all been incorrectly branded “AI” in marketing or casual conversation for decades, but those weren’t stuffed into every product the way the current iteration is, watering down the quality of what’s on the market, so outside of a mild pedantic annoyance, no one really gave a shit.

      Nowadays the stakes are higher since it’s having an actual negative impact on people’s lives.

      If we ever come up with true AI, actual intelligence that’s artificial, it’s going to be a game changer for humanity, for better or worse.

  • masterspace@lemmy.ca
    link
    fedilink
    English
    arrow-up
    8
    arrow-down
    17
    ·
    edit-2
    15 hours ago

    This line of reasoning is dumb. Humans are impressive probability gadgets that have been fed huge amounts of training data.

    Current LLMs obviously are shit at reasoning and may be the wrong structural pattern for building intelligence (or could just be one building block of it), but there’s no reason to think that simulations of neurons couldn’t build an intelligence one day just because they’re based on math and circuits.
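
    For what it’s worth, the “simulations of neurons” in question are mathematically very simple. A minimal sketch of one artificial neuron follows; the weights and inputs are made up for illustration, and biological neurons are vastly more complicated.

    ```python
    import math

    def neuron(inputs: list[float], weights: list[float], bias: float) -> float:
        """One artificial neuron: a weighted sum of inputs, squashed by a sigmoid."""
        activation = sum(x * w for x, w in zip(inputs, weights)) + bias
        return 1.0 / (1.0 + math.exp(-activation))  # output in (0, 1)

    # Illustrative values only; in a trained network, weights are learned.
    print(neuron([0.5, -1.0], weights=[0.8, 0.2], bias=0.1))
    ```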

      • masterspace@lemmy.ca · 14 hours ago

        Saying ‘clearly’ in this context is a thought-terminating expression, not reasoning.

        • queermunist she/her@lemmy.ml · 14 hours ago

          Okay, but LLMs don’t have thoughts that can be terminated, so that’s just another way they aren’t intelligent. Saying “clearly” for them would just be a way to continue the pattern; they wouldn’t use it the way I did, to express how self-evident and insultingly obvious it is.

          AI isn’t impossible, but LLMs are not intelligent and you need to stop dehumanizing yourself to argue for their intelligence.

          • masterspace@lemmy.ca · 14 hours ago

            Okay, but LLMs don’t have thoughts that can be terminated, so that’s just another way they aren’t intelligent. Saying “clearly” for them would just be a way to continue the pattern; they wouldn’t use it the way I did, to express how self-evident and insultingly obvious it is.

            So? As you said, nothing says that they couldn’t eventually be part of an intelligence, but the reasoning presented in the article is basically just “they’re made of math, so they could never be intelligent”.

            AI isn’t impossible, but LLMs are not intelligent and you need to stop dehumanizing yourself to argue for their intelligence.

            You need to stop limiting yourself to the idea that all intelligence worthy of consideration has to be exactly like human intelligence. That’s literally one of the core lessons of Star Trek and basically every single BBC documentary. Are LLMs intelligent? No. Could we make synthetic intelligence worthy of consideration? All evidence points to eventually yes.

            • queermunist she/her@lemmy.ml · 14 hours ago

              The article is about LLMs specifically? And it’s arguing that intelligence can’t exist without subjectivity, the qualia of experiential data. These LLM text generators are being assigned intelligence they do not have because we have a tendency to assume there is a mind behind the text.

              This is not about AI being conceptually impossible because it’s “made of math”. I’m not even sure where you got that. Where did that quote come from? It’s not in the link or the Atlantic article.

              • masterspace@lemmy.ca · 14 hours ago

                It’s the last line quoted in the post. They talk a lot of fancy talk up front, but their entire reasoning for LLMs not being capable of thought boils down to the claim that they’re statistical probability machines.

                So is the process of human thought.

                • queermunist she/her@lemmy.ml · 14 hours ago

                  LLMs are impressive probability gadgets that have been fed nearly the entire internet, and produce writing not by thinking but by making statistically informed guesses about which lexical item is likely to follow another.

                  This line?

                  Because that sure isn’t the process of human thought! We have reasoning, logical deduction, experiential qualia, subjectivity. Intelligence is so much more than just making statistically informed guesses; we can actually prove things and uncover truths.

                  You’re dehumanizing yourself by comparing yourself to a chatbot. Stop that.

          • masterspace@lemmy.ca · 14 hours ago

            So what do you think we run on? Magic and souls?

            It’s called understanding science and biology. When you drill down, there’s nothing down there that isn’t physical.

            If that’s the case, there’s no reason it couldn’t theoretically be modelled and simulated.

            This would be like all the technical workings of nuclear bombs being published and, rather than focusing on their resultant harms and misuses, you stuck your head in the sand and said “nuh uh, no way an atom can make a big explosion; don’t you know how small atoms are?”

    • Catoblepas@piefed.blahaj.zone · 15 hours ago

      If using AI to make “art” makes you an artist (in reality, you’re having a machine regurgitate other people’s art; the models available right now literally couldn’t exist without the art they were trained on without permission, and you don’t get to skip that part just because it annoys you), then so does paying someone to make art and telling them how to make it. And walking into a restaurant and ordering something also makes you a chef. Paying someone to build your house? You better believe that makes you an architect, carpenter, plumber, and electrician.

    • ckmnstr@lemmy.world · 15 hours ago

      Comparing Gen AI to a paintbrush is the same as comparing a quad to a unicycle. Sure, you’re not falling over, but is it really the same feat?

    • bcovertigo@lemmy.world · 15 hours ago

      He’s right, and this is why his comment is the artwork of the person replying to him. It’s no different from a keyboard. It’s a really advanced, very complicated keyboard.

      But I know the same people who argue lemmings aren’t intelligent also don’t want to recognize generated comments as the property of the user who generated them. It’s “shitposting” and thus should be subject to scorn and ridicule, and has somehow stolen from all commenters everywhere who have ever lived or ever will live.