• I Cast Fist@programming.dev · 4 points · 8 months ago

      I’m sorry, I cannot answer that as I was not trained enough to differentiate between all the possible weights used to weigh a sheep during my dreams.

  • SpaceCowboy@lemmy.ca · 33 points · 9 months ago

    I’ve always said that Turing’s Imitation Game is a flawed way to determine if an AI is actually intelligent. The flaw is the assumption that humans are intelligent.

    Humans are capable of intelligence, but most of the time we’re just responding to stimulus in predictable ways.

    • Malgas@beehaw.org · 7 points · 9 months ago

      There’s a running joke in the field that AI is the set of things that computers cannot yet do well.

      We used to think that you had to be intelligent to be a chess grandmaster. Now we know that you only have to be freakishly good at chess.

      Now we’re having a similar realization about conversation.

      • frezik@midwest.social · 2 points · 9 months ago

        Didn’t really need an AI for chess to know that. A look at how crazy some grandmasters are will show you that. Bobby Fischer is the most obvious one, but there are quite a few where you wish they would stop talking about things that aren’t chess.

  • Rhaedas@fedia.io · 31 points · 9 months ago

    Look at this another way. We succeeded too well and instead of making a superior AI we made a synthetic human with all our flaws.

    Realistically, LLMs are just complex models based on our own past creations. So why wouldn’t they be a mirror of their creator, good and bad?

    • Baizey · 1 point · 9 months ago

      They can certainly be prompted towards it.

  • skye@lemmy.blahaj.zone · 9 points · 9 months ago

    what if the whole universe is just the algorithm and data used to feed an LLM? we’re all just chat gpt

    (i don’t know how LLMs work)

    • Deceptichum@sh.itjust.works · 7 points · 9 months ago

      We basically are. We’re biological pattern-recognising machines, where inputs influence everything.

      The only difference is somehow our electricity has decided it’s got free will.

      • skye@lemmy.blahaj.zone · 5 points · 9 months ago (edited)

        well that decides it, gods are real and we’re their chat gpt, all our creations are just responses to their prompts lmao

        it’s wild though, i’ve heard that we don’t really have free will, but i’m personally mixed on it since i haven’t really looked into it or thought about it much. it intuitively makes sense to me, though, that we wouldn’t really have free will. i mean, we’re just big walking colonies of micro-organisms, right? what is me, what is them? – idk where i’m going with this

  • fnmain@programming.dev · 2 points · 8 months ago

    Can’t LLMs take an insane number of tokens as context now? (I think we’re up to 1M)

    Anywho, he just like me fr