[OpenAI CEO Sam] Altman brags about ChatGPT-4.5’s improved “emotional intelligence,” which he says makes users feel like they’re “talking to a thoughtful person.” Dario Amodei, the CEO of the AI company Anthropic, argued last year that the next generation of artificial intelligence will be “smarter than a Nobel Prize winner.” Demis Hassabis, the CEO of Google’s DeepMind, said the goal is to create “models that are able to understand the world around us.” These statements betray a conceptual error: Large language models do not, cannot, and will not “understand” anything at all. They are not emotionally intelligent or smart in any meaningful or recognizably human sense of the word. LLMs are impressive probability gadgets that have been fed nearly the entire internet, and produce writing not by thinking but by making statistically informed guesses about which lexical item is likely to follow another.
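
The "statistically informed guesses" the article describes can be illustrated with a toy bigram model: count which word follows which in a small corpus, then sample the next word from those counts. This is a minimal sketch with an invented corpus (real LLMs use neural networks over subword tokens, not bigram tables, but the "predict the next item" framing is the same):

```python
import random
from collections import Counter, defaultdict

# Toy corpus standing in for the training data.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Bigram table: for each word, count which words followed it.
next_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_counts[prev][nxt] += 1

def next_token(prev):
    """Sample the next word, weighted by how often it followed `prev`."""
    counts = next_counts[prev]
    if not counts:  # word never seen with a successor: dead end
        return None
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# Generate text with no understanding involved, only frequencies.
word, output = "the", ["the"]
for _ in range(6):
    word = next_token(word)
    if word is None:
        break
    output.append(word)
print(" ".join(output))
```

The output is locally plausible-looking word sequences produced purely from co-occurrence statistics, which is the article's point in miniature.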

OP: https://slashdot.org/story/25/06/09/062257/ai-is-not-intelligent-the-atlantic-criticizes-scam-underlying-the-ai-industry

Primary source: https://www.msn.com/en-us/technology/artificial-intelligence/artificial-intelligence-is-not-intelligent/ar-AA1GcZBz

Secondary source: https://bookshop.org/a/12476/9780063418561

  • BlameTheAntifa@lemmy.world · 1 day ago

    The “Artificial” part isn’t clue enough?

    But I get it. The executives constantly hype up these madlib machines as things they are not. Emotional intelligence? It has neither emotion nor intelligence. “Artificial Intelligence” literally means it has the appearance of intelligence, but not actual intelligence.

    I used to be excited at the prospect of this technology, but at the time I naively expected people to be able to create and run their own. Instead, we got this proprietary, capital-chasing, kleptocratic corporate dystopia.

    • Sterile_Technique@lemmy.world · 1 day ago

      The “Artificial” part isn’t clue enough?

      Imo, no. The face-value connotation of “Artificial Intelligence” is intelligence that’s artificial: actual intelligence, just not biological. That’s a lot different from “it kinda looks like intelligence so long as you don’t look too hard at what’s beneath the hood”.

      Thus far, examples of that only exist in sci-fi. That’s part of why people are opposed to the bullshit generators marketed as “AI”: calling it “AI” in the first place is dishonest. And that goes way back. Video game NPCs, Microsoft’s ‘Clippy’, etc. have all been incorrectly branded “AI” in marketing or casual conversation for decades, but those weren’t stuffed into every product the way the current iteration is, watering down the quality of what’s on the market, so outside of a mild pedantic annoyance, no one really gave a shit.

      Nowadays the stakes are higher since it’s having an actual negative impact on people’s lives.

      If we ever come up with true AI - actual intelligence that’s artificial - it’s going to be a game changer for humanity, for better or worse.

      • Tamo240@programming.dev · 18 hours ago

        This is actually not true. You are referring to Artificial General Intelligence (AGI), an artificially intelligent system that is able to function in any context.

        Artificial Intelligence as a field of computer science goes back to the 1950s, and it covers systems that appear intelligent, not systems that actually exhibit thinking. The entire purpose of the Turing test is to appear intelligent, with no requirement that the system actually is.

        Rule based systems and statistical models are examples of AI in the scientific sense, but the public perception of what AI should mean is warped by portrayals in science fiction of what it could mean.
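
        A rule-based system in that sense can be as small as a few ELIZA-style rewrite rules. Here is a hypothetical sketch (the patterns and canned replies are invented for illustration, not Weizenbaum's actual script) of a program that "appears intelligent" purely by pattern matching:

```python
import re

# Hypothetical ELIZA-style rules (invented here, not the original script):
# each regex pattern maps to a canned reply template.
rules = [
    (re.compile(r"\bi am (.+)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"\bi feel (.+)", re.I), "How long have you felt {0}?"),
    (re.compile(r"\bbecause (.+)", re.I), "Is that the real reason?"),
]

def respond(utterance):
    """Reply by pattern matching alone; there is no model of meaning."""
    for pattern, template in rules:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return "Tell me more."

print(respond("I am worried about AI hype"))
# → Why do you say you are worried about AI hype?
```

        By the field's historical definition this qualifies as AI, even though it is obviously just string substitution.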

        • wewbull@feddit.uk · 15 hours ago

          The Turing test was a thought experiment claiming that if something seemed intelligent, then it was intelligent. We have thoroughly disproved that by now. IMHO, if we teach it at all, it should only be as an example of an incomplete definition.

        • Sterile_Technique@lemmy.world · 16 hours ago

          I mean, nowadays the can of worms is long-since opened, and there’s the whole spiel about how definitions change over time with use, so… sure I guess?

          AI became synonymous with computing in general, and “AGI” moved the goal posts in an attempt to un-muddy the waters? Give it time, I’m sure marketing will fuck that one up too and a couple other randoms on the internet will be having this same conversation but between AGI and whatever the new flavor is.