[OpenAI CEO Sam] Altman brags about ChatGPT-4.5’s improved “emotional intelligence,” which he says makes users feel like they’re “talking to a thoughtful person.” Dario Amodei, the CEO of the AI company Anthropic, argued last year that the next generation of artificial intelligence will be “smarter than a Nobel Prize winner.” Demis Hassabis, the CEO of Google’s DeepMind, said the goal is to create “models that are able to understand the world around us.” These statements betray a conceptual error: Large language models do not, cannot, and will not “understand” anything at all. They are not emotionally intelligent or smart in any meaningful or recognizably human sense of the word. LLMs are impressive probability gadgets that have been fed nearly the entire internet, and produce writing not by thinking but by making statistically informed guesses about which lexical item is likely to follow another.
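
The mechanism the excerpt is describing is next-token prediction. As a rough, minimal sketch (a toy bigram word counter over a made-up corpus, not how production LLMs are actually built), you can "write" text purely by sampling whichever word most often followed the previous one in the training data:

```python
import random
from collections import Counter, defaultdict

# Toy illustration of next-token prediction: count which word follows which
# in a tiny corpus, then generate text by sampling the statistically likely
# continuation. (Hypothetical example corpus; real LLMs use neural networks
# over subword tokens, but the objective -- guess the next token -- is the same.)
corpus = "the model predicts the next word and the next word follows the last".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev):
    counts = follows.get(prev)
    if not counts:
        return None  # no observed continuation for this word
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]  # sample by frequency

word = "the"
output = [word]
for _ in range(8):
    word = next_word(word)
    if word is None:
        break
    output.append(word)
print(" ".join(output))
```

An LLM replaces these word-pair counts with a neural network trained on a vastly larger corpus of subword tokens, but the training objective is the same kind of statistical guess about what comes next.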

OP: https://slashdot.org/story/25/06/09/062257/ai-is-not-intelligent-the-atlantic-criticizes-scam-underlying-the-ai-industry

Primary source: https://www.msn.com/en-us/technology/artificial-intelligence/artificial-intelligence-is-not-intelligent/ar-AA1GcZBz

Secondary source: https://bookshop.org/a/12476/9780063418561

  • masterspace@lemmy.ca
    3 days ago

    > Nobody’s arguing that simulated or emulated consciousness isn’t possible, just that if it were as simple as you’re making it out to be then we’d have figured it out decades ago.

    But I’m not. I have literally stated in every comment that human intelligence is more advanced than LLMs, but that both are just statistical machines.

    There’s literally no reason to think that would have been possible decades ago based on this line of reasoning.

    • knightly the Sneptaur@pawb.social
      3 days ago

      Again, literally all machines can be expressed in the form of statistics.

      You might as well be saying that both LLMs and human intelligence exist, because that’s all that can be concluded from the equivalence you are trying to draw.