Yes! This is a brilliant explanation of why language use is not the same as intelligence, and why LLMs like ChatGPT are not intelligent. At all.

  • Utsob Roy@lemmy.world · 1 year ago

    Yes. LLMs generate texts. They don’t use language. Using a language requires an understanding of the subject one is going to express. LLMs don’t understand.

    • Spzi@lemm.ee · 1 year ago

      I guess you’re right, but I find this a very interesting point nevertheless.

      How can we tell? How can we tell that we use and understand language? How would that be different from an arbitrarily sophisticated text generator?

      For the sake of the comparison, we should talk about the presumed intelligence of other people, not our (“my”) own.

      • Utsob Roy@lemmy.world · 1 year ago

        In the case of current LLMs, we can tell. These LLMs are not black boxes to us. It is hard to follow the threads of their decisions not because those decisions are very intricate thoughts, but because they are just a hodgepodge of statistics and randomness.

        We probably can’t compare the outputs, but we can compare the learning. Imagine a human who had consumed all the literature, ethics, history, and other texts that these LLMs have: no amount of trick questions would trick them into believing in racial cleansing or any such disconcerting idea. LLMs read so much, and learned so little.

    • kaffiene@lemmy.world · 1 year ago

      This gets to the core of the issue. LLMs are a model of the statistical relationships between words in texts, in a very large number of dimensions. The intelligence they appear to exhibit is that which existed in their source material in the first place. They don’t have a model of the world itself. Consider how Midjourney can produce photorealistic images of people, yet very often gets hands wrong. How is that? Because when you train on images, you get a statistical representation of what hands look like, without the world model that lets you know hands have only 5 fingers and how they’re arranged. AIs like this are very clever copiers. They are not intelligent.
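      The “statistical relationship between words” idea can be made concrete with a toy bigram model. This is a deliberately minimal sketch, not how production LLMs work (those use neural networks over far larger contexts and learned embeddings), but it shows text being generated purely from co-occurrence counts, with no model of the things the words refer to:

      ```python
      import random
      from collections import defaultdict

      # Tiny stand-in corpus; a real model trains on vastly more text.
      corpus = "the cat sat on the mat the dog sat on the rug".split()

      # Count, for each word, how often each other word directly follows it.
      counts = defaultdict(lambda: defaultdict(int))
      for prev, nxt in zip(corpus, corpus[1:]):
          counts[prev][nxt] += 1

      def next_word(word):
          """Sample a successor in proportion to how often it followed `word`."""
          followers = counts[word]
          words = list(followers)
          weights = [followers[w] for w in words]
          return random.choices(words, weights=weights)[0]

      # Generate "plausible" text from statistics alone -- the model has no
      # concept of cats, mats, or sitting, only which tokens tend to co-occur.
      out = ["the"]
      for _ in range(5):
          out.append(next_word(out[-1]))
      print(" ".join(out))
      ```

      The output looks grammatical precisely because the grammar was already present in the source material, which is the commenter’s point about apparent intelligence being inherited from the training data.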