[OpenAI CEO Sam] Altman brags about ChatGPT-4.5’s improved “emotional intelligence,” which he says makes users feel like they’re “talking to a thoughtful person.” Dario Amodei, the CEO of the AI company Anthropic, argued last year that the next generation of artificial intelligence will be “smarter than a Nobel Prize winner.” Demis Hassabis, the CEO of Google’s DeepMind, said the goal is to create “models that are able to understand the world around us.” These statements betray a conceptual error: Large language models do not, cannot, and will not “understand” anything at all. They are not emotionally intelligent or smart in any meaningful or recognizably human sense of the word. LLMs are impressive probability gadgets that have been fed nearly the entire internet, and produce writing not by thinking but by making statistically informed guesses about which lexical item is likely to follow another.

OP: https://slashdot.org/story/25/06/09/062257/ai-is-not-intelligent-the-atlantic-criticizes-scam-underlying-the-ai-industry

Primary source: https://www.msn.com/en-us/technology/artificial-intelligence/artificial-intelligence-is-not-intelligent/ar-AA1GcZBz

Secondary source: https://bookshop.org/a/12476/9780063418561

  • Daniel Quinn@lemmy.ca
    1 day ago

    I’ve actually tried to use these things to learn both Go and Rust (been writing Python for 17 years) and the experience was terrible. In both cases, it would generate code that referenced packages that didn’t exist, used patterns that aren’t used anymore, and wrote code that didn’t even compile. It was wholly useless as a learning tool.

    In the end what worked was what always works: I got a book and started on page 1. It was hard, but I started actually learning after a few hours.

    • Psaldorn@lemmy.world
      1 day ago

      I used Gemini for Go and was pleasantly surprised. It might be important to note that I don’t ask it to generate a whole thing; it’s more like “in Go, how do I <do small thing>?”, and I sort of build up from there myself.

      ChatGPT and DeepSeek were a lot more failure-prone.
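      To illustrate the approach (my example, not the commenter’s — the placeholder `<do small thing>` is theirs), a “small thing” question like “in Go, how do I count the words in a string?” gets you a snippet you can verify and build on, rather than a whole generated program:

      ```go
      package main

      import (
      	"fmt"
      	"strings"
      )

      // wordCount splits a string on whitespace and counts the pieces.
      func wordCount(s string) int {
      	return len(strings.Fields(s))
      }

      func main() {
      	fmt.Println(wordCount("build it up from small pieces")) // prints 6
      }
      ```

      Small snippets like this are easy to compile and check immediately, which sidesteps the hallucinated-package problem that plagues whole-program generation.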

      As an aside, I found Gemini very good at debugging Blender issues, where the UI is very complex and unforgiving and problems are super hard to search for (different versions, similarly named things, etc.).

      But as soon as you hit something it won’t accept has changed, it’s basically useless. Still, that often got me to a point where I could find forum posts about “where did functionality x move to.”

      Just like VR, I think the bubble will burst and it will remain a niche technology that can be fine-tuned for certain professions or situations.

      People getting excited for ways for ai to control their PCs are probably going to be in for a bad time…