Longtermism poses a real threat to humanity

https://www.newstatesman.com/ideas/2023/08/longtermism-threat-humanity

“AI researchers such as Timnit Gebru affirm that longtermism is everywhere in Silicon Valley. The current race to create advanced AI by companies like OpenAI and DeepMind is driven in part by the longtermist ideology. Longtermists believe that if we create a “friendly” AI, it will solve all our problems and usher in a utopia, but if the AI is “misaligned”, it will destroy humanity…”

@technology

    • jarfil@beehaw.org

      The “rush” to produce AI is a problem with Capitalism, not longtermism. There is a rush to create the first generative AI not because it will benefit society, but because it will make buttloads of money.

      This. I came here to say exactly this.

    • frog 🐸@beehaw.org

      I’m inclined to agree. Even without having read MacAskill’s book (though I’m interested to read it now), the way we’re approaching AI right now does seem more like a short-term gold rush, wrapped up in friendly-sounding “this will lead to utopia in the future” to justify “we’re going to make a lot of money right now”.

  • ReallyActuallyFrankenstein@lemmynsfw.com

    This is my first exposure to “longtermism” and it sounds like an interesting debate. But I’m actually having a hard time because this is one of the dumbest neologisms I’ve ever heard.

    Sorry for the similarly dumb rant, but: not only is it extremely vague given its very specific technical meaning (the author of the article has to clarify in the first paragraph that it doesn’t mean traditional “long-term thinking.” Off to a great start!), but it is about as artful a piece of wordplay as painting boobies on a cave wall. It sounds to my ear like a five-year-old describing the art of fencing as “fightystickism.”

    • Kwakigra@beehaw.org

      It reminds me of casting a golden calf and then worshipping it as a god. There aren’t even the seeds of a solution to any social problem in LLMs. It’s the classic issue of someone being knowledgeable about one thing and assuming they are knowledgeable about all things.

      • Erk@cdda.social

        There are some seeds there. LLMs show that automation will eventually destroy all jobs, and that at any time even fields we thought were unassailable for decades could suddenly find themselves at risk.

        That plants a lot of seeds. Just not the ones the average longtermist wants planted.

        • Kwakigra@beehaw.org

          I think we agree. LLMs, and automation in general, have massive labor-saving potential, but because of the way our economic systems are structured, this could actually lead away from a utopia rather than toward one, as the “longtermists” (weird to type that out) suggest. The seeds, as you say, are very different from ones that grow toward the end of conflict and want.

          • Erk@cdda.social

            Oh we totally agree, I was just agreeing with you in a slightly tongue in cheek manner.

    • anachronist@midwest.social

      It sounds good on its face, but it really goes off the rails the moment you listen to what any “longtermer” actually believes. Basically, it’s a libertarian asshole ideology that lets them discount any human misery they may cause because they’re thinking about theoretical future people who don’t exist.

    • Mars@beehaw.org

      It’s an amazing ideology to have, because if you can invent a plausible future benefit, you can commit any real evil and still feel like the good guy!

      • Stole billions in wages from your workers? It’s alright, the funds will be used for you to gain influence and guide humanity to a better future!

      • Released a bioweapon in some Global South country? It’s all right! Overpopulation would have been a danger 5 or 6 generations from now!

      • Destroyed democracy? It doesn’t matter. Fascism today ensures democracy in the year 4000, trust me bro!

  • HalJor@beehaw.org

    And here I thought short-termism was bad, where companies focus only on quarterly financial returns.

  • Deestan@beehaw.org

    Longtermism is a cardboard halo. A thin excuse to act in complete self-interest while pretending it is good for humanity.

    The further into the future we try to think, the more different factors and uncertainty dominate. This leaves you room to put in any argument you feel like, to make any prediction you feel like. So you pick something vaguely romantic or appealing to some relatively popular opinion, and hey you’re golden.

    I am approached by a beggar. What do I, the longtermist, do?

    I feel like being kind today. My longtermist argument is that every bit of happiness and relief today carries compound interest into the future: by giving this person some money today, they are content and don’t have to resort to thievery, which in turn lets another person have a safe day and the mental energy to do a lot of good tomorrow. The goodness grows bigger with every step, over time. I give them $100. It’s pretty obvious, really.

    They smell and I don’t want to deal with that right now. My longtermist argument is that helping out beggars actually just perpetuates a problem in society. If people can’t function in society without random help, it’s a ticking time bomb of a humanitarian disaster. Giving them money just postpones the moment when the crisis becomes too big to ignore, and allows it to grow further. No, this is a problem that society needs to handle right now, and by giving money to this person I’m just helping the problem stay hidden. I ignore them and walk on by. It’s pretty obvious, really.

    My wife left me and I want other people to hurt like I do. My longtermist argument is that, unfortunately, these people are rejects of society and I can’t fix that. But we can prevent them from harassing productive citizens who work hard to create a better future. If fewer beggars make commuters sad and that gives a 1% improvement in productivity, that’s a huge compound improvement over a few hundred years. So I kick them in the leg, yell at them, and call the police saying they tried to assault me. It’s a bit cold-hearted, but it’s obviously good long term.

    • laylawashere44@lemmy.blahaj.zone

      They fr read Foundation and missed the fact that Hari Seldon’s predictions fell apart very early on, and he had the benefit of magical foresight.

      • Sheltac@lemmy.ml

        These people read the CliffsNotes and take everything at face value. Reasoning about the material doesn’t get you clicks.

  • 👁️👄👁️@lemm.ee

    Timnit is just roleplaying science fiction. LLMs are so far away from that it’s not even conceivable right now. We haven’t even figured out accuracy checking in LLMs.

  • astraeus@programming.dev

    This article makes me think of two great, classic anime series: Ghost in the Shell and Serial Experiments Lain.

  • Kajo [he/him] 🌈@beehaw.org

    I’ve never heard of this. How is it linked to transhumanism? Is it a re-branding? A fork? An attempt to give transhumanism a moral stance? Or are they two rival theories for thinking about the future?

    (I’m not a transhumanist)