• nickwitha_k (he/him)@lemmy.sdf.org
        4 hours ago

        There’s a vocal group of people who seem to think that LLMs can achieve consciousness, despite the fact that it’s impossible given how LLMs fundamentally work. They’ve largely been duped by advanced LLMs’ ability to sound convincing (as well as by a certain conman executive officer). These people often also seem to believe that by dedicating more and more resources to running these models, they will achieve actual general intelligence, and that an AGI can save the world, absolving them of the responsibility to attempt to fix anything themselves.

        That’s my point. AGI isn’t going to save us and LLMs (by themselves), regardless of how much energy is pumped into them, will not ever achieve actual intelligence.

        • Mossy Feathers (They/Them)@pawb.social
          3 hours ago

          But an AGI isn’t an LLM. That’s what’s confusing me about your statement. If anything, I feel like I already covered that, so I’m not sure why you’re telling me this. There’s no reason you can’t recreate the human brain on silicon, and eventually someone’s gonna do it. Maybe it’s one of our current companies, maybe it’s a future company. Who knows. Either way, a true AGI would turn everything upside down and inside out.