Wondering if modern LLMs like GPT-4, Claude Sonnet, and Llama 3 are closer to human intelligence or to next-word predictors. Also not sure whether this graph is the right way to visualize it.

    • Randomgal@lemmy.ca · 18 hours ago

      I think you point out the main issue here. What is "intelligence" as defined by this axis? IQ? Which famously doesn't actually measure intelligence, but rather predicts future academic performance?

    • Todd Bonzalez@lemm.ee · 17 hours ago

      Human intelligence created language. We taught it to ourselves. That's a higher order of intelligence than a next-word predictor.

      • Sl00k@programming.dev · 12 hours ago

        I can't seem to find it now, but there was a research paper floating around about two GPT models designing a language to use between each other for token efficiency while still relaying all the information, which is pretty wild.

        Not sure if it was peer reviewed though.

      • sunbeam60@lemmy.one · 17 hours ago

        That's like treating the "which came first, the chicken or the egg?" question as a serious one.