• NounsAndWords@lemmy.world

    So a Board member wrote a paper about focusing on safety above profit in AI development. Sam Altman did not take kindly to this concept and started pushing to fire her (to which end he may or may not have lied to other Board members to split them up). Sam gets fired for trying to fire someone for putting safety over profit. Everything exploded and now profit is firmly at the head of the table.

    I like nothing about this version of events either.

    • GregorGizeh@lemmy.zip

      Wasn’t that evident from the very first few days, when we learned the board stood for the non-profit, safety-first parent org while the ousted CEO stood for reckless monetization?

      Now he’s back, the safety concerns have been silenced, money can be made, and people can get fucked. A good day for capitalists.

      • Jeena@jemmy.jeena.net

        That’s why I was so confused that all the workers stood behind the CEO and threatened to go to Microsoft.

        • ours@lemmy.world

          My guess is that they want the company to grow fast so that their salaries and stock options grow as well.

        • dustyData@lemmy.world

          That’s what a personality cult gets you. The amount of idiots willing to die for another man’s ego is why we have some of the shittiest things in society. “Daddy told me so” is a powerful force when the people who believe it cannot see that their vision has absolutely no rational support. Jobs, Musk, Gates, Trump: they all thrive by telling people that their irrational beliefs are true, and that if they follow them they will make their dreams realities. The talk and narrative around Altman has always struck me as similar to Musk’s cult of personality in the late 2010s.

          • APassenger@lemmy.world

            Stock options help. If they make enough off of OpenAI, they won’t need to find a job after this.

            • dustyData@lemmy.world

              This is tech; they have no protections. I bet there’s some clause with a time lock saying they can only sell the stock after 10 years, and that they forfeit it if they leave OpenAI before then for any reason. Within 5 years they’ll get hit by some mass layoffs and lose everything. This has happened so many times before, with so many companies, that it’s laughable. Stock options in tech are a fairy tale.

          • FrostyTrichs@lemmy.world

            “The amount of idiots willing to die for another man’s ego”

            U.S. Military has entered the chat

          • TimeSquirrel@kbin.social

            I’m not sure Gates ever had a “personality cult”. In the 90s, during his heyday, he was pretty much reviled even by Windows users. He built his empire by swallowing everyone else around him who was doing anything even a little bit innovative. He wasn’t really the “visionary artist/engineer” type like those others. Just a random rich nerd who won the technology monopoly game.

            • raspberriesareyummy@lemmy.world

              Like @Zak, I would like to point out that, as much as I despised Bill Gates back then, he was actually competent. And despite me never liking Microsoft, they have a legitimate business model built on selling products, not user data (unlike all the social media companies and Google). So of all the evil dipshits out there, Microsoft and Apple are the lesser ones. (I have been a Linux user since 2004 or so.)

            • Zak@lemmy.world

              Early accounts are that Bill Gates was absolutely a talented coder, at least in the 1970s. Of course, that wasn’t what made him rich; a series of business decisions that were some combination of lucky and prescient were.

        • trafalgar225@lemmy.world

          The company gave the employees a large amount of equity. That was the work of Sam Altman. The employees are voting with their wallets by sticking up for him.

        • NounsAndWords@lemmy.world

          That was some classic business pressure tactics, the sort of thing a massive multinational corporation would have a lot of experience in. The sort of thing a massive multinational corporation suddenly blindsided by this, with a lot of financial interest in the situation, would be interested in doing…while at the same time mitigating risk by trying to pull those same employees into the parent company if things don’t go their way.

          Edit: Now that I think about it, they also managed to get the vast majority of employees to ‘join together’ on the issue, making it (psychologically) easier for them to ‘join together’ in choosing where to jump ship to. Maybe I’m just paranoid, but it’s just a really clever move on Microsoft’s part.

    • SkyeStarfall@lemmy.blahaj.zone

      I feel like this isn’t surprising, knowing about all the other stuff Altman has done. Seems like yet another loss for the greater good in the name of profit.

    • ipkpjersi@lemmy.ml

      So basically it’s exactly what I expected and I’m not surprised in the slightest. Amazing how that works.

      It’s not too surprising considering they don’t even have basic security features in 2023, like two-factor authentication. Absolutely pitiful.
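
      For context on the feature being asked for: verification-code 2FA (TOTP) is a small, standard mechanism. Here is a minimal sketch in Python using the pyotp library; the account name and issuer string are illustrative, not anything OpenAI actually runs.

      ```python
      # Minimal TOTP two-factor sketch using pyotp (illustrative only;
      # a real deployment also needs rate limiting, secure secret storage,
      # and replay protection).
      import pyotp

      # Generated once per user at enrollment and stored server-side.
      user_secret = pyotp.random_base32()

      # Shown to the user as a QR code so an authenticator app can enroll.
      enroll_uri = pyotp.TOTP(user_secret).provisioning_uri(
          name="alice@example.com", issuer_name="ExampleApp"
      )

      def verify_second_factor(secret: str, submitted_code: str) -> bool:
          """Accept the 6-digit code if it matches the current 30s window."""
          return pyotp.TOTP(secret).verify(submitted_code, valid_window=1)
      ```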

  • seiryth@lemmy.world

    The thing that shits me about this is that Google appear to the public to be late to the party, but the reality is they DID put safety before profit when it came to AI. The sheer amount of research and papers they put out on AI should have proven to people that they know what they’re doing.

    And then OpenAI threw caution to the wind, essentially making Google and others panic and knee-jerk because there’s real money to be made, and now everyone seems to be throwing caution to the wind and pushing it into the mainstream before society is ready.

    All in the name of shareholders.

    • blazeknave@lemmy.world

      10k%! A friend works in brand marketing at Google. They’d been using it internally for months before market pressure forced them to start onboarding public end users. I’ve been in the earliest of the external betas (because I’ve given a lot of product feedback over the years?) and from the beginning the user experiences have been the most locked down of all the consumer LLMs.

    • Toes♀@ani.social

      I think it’s not enough. Disable all the safeguards and let people decide if the output is what they want; I hate being treated like a child trying to buy an M-rated game.

      • xor@lemmy.blahaj.zone

        But this isn’t an M-rated game; it’s a transformative new technology with potentially horrifying consequences if misused.

        • PsychedSy@sh.itjust.works

          By answering questions? We are general intelligences that can answer questions. Oh shit oh fuck what am I doing talking.

              • kyle@lemm.ee

                I’m sure the military is so excited about AI because of its ability to “respond”.

                • DriftinGrifter@lemmy.blahaj.zone

                  I’m not sure if you’re aware, but that’s literally what makes AI so useful: it just responds to external inputs and doesn’t have to be programmed value for value, because it gets trained on datasets. ChatGPT isn’t gonna hurt a fly; the reason it’s “M-rated” is that the idiots who made it didn’t filter the input content while web scraping. It’s literally too stupid to function as a weapon, except for misinformation, which it outputs regardless of its age rating. OpenAI is just a bunch of cucks who switched to a closed-source system and can’t actually make any good company decisions.

                  TL;DR: Fuck OpenAI

                • PsychedSy@sh.itjust.works

                  If you can get ChatGPT to drive your murder drone, I’d be very impressed. Tesla can’t figure it out in 2D.

              • Socsa@sh.itjust.works

                FWIW I work in the field and agree with this. LLMs in their current state are not so dangerous that they can’t be released to the public. Generative image and video models are a much bigger threat, but those largely came from open source.

                If we really want to pearl-clutch, it’s NVIDIA that is really propping open this Pandora’s box by putting the capability in irresponsible hands.

          • xor@lemmy.blahaj.zone

            Okay, so let’s do a thought experiment and take off all the safeguards.

            Oops, you made:

            • a bomb design generator
            • an involuntary pornography generator
            • a CP generator

            Saying “don’t misuse it” isn’t enough to stop people from misusing it.

            And that’s just with ChatGPT. AI isn’t just a question-and-answer machine; I suggest you read about “the paperclip maximiser” as a very good example of how misalignment of a general-purpose AI can go horribly wrong (see the toy sketch below).
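
            For anyone who hasn’t seen the thought experiment, here is a deliberately silly toy sketch in Python of what “misalignment” means. The resource names are made up and this is an illustration, not a simulation of any real system: the objective counts only paperclips, so everything not in the objective is fair game.

            ```python
            # Toy illustration of the "paperclip maximiser": the objective
            # counts only paperclips, so every other resource, however
            # important, is just raw material. Resource names are made up.

            def misaligned_actions(state):
                """Greedy policy: convert any remaining non-paperclip resource."""
                return [r for r, amount in state.items()
                        if r != "paperclips" and amount > 0]

            world = {"paperclips": 0, "iron": 5, "food": 3, "hospitals": 2}

            while misaligned_actions(world):
                for resource in misaligned_actions(world):
                    world[resource] -= 1      # side effects invisible to the objective
                    world["paperclips"] += 1  # number-goes-up is all that counts

            print(world)  # {'paperclips': 10, 'iron': 0, 'food': 0, 'hospitals': 0}
            ```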

            • El Barto@lemmy.world

              I was going to say that a determined individual would find this information regardless. But the difference here is that having it so easily accessible would increase the risk of someone doing something reaaaally stupid by a factor of 100. Yikes.

                • El Barto@lemmy.world

                  For you or many others, sure, it won’t be complicated. But the world is vast, and the environment you are in is very specific to you. Many other kids may have phones, sure, but they are not in the same environment as you or me.

                  Some non-sciency kid will have a hard time doing what their edgy mind wants them to do, unless an AI guides them mini-step by mini-step.

            • PsychedSy@sh.itjust.works

              I mean, half of that is DeviantArt, and you can look up how to make explosives on YouTube chem channels or in books. It’s not hard to rig up a custom detonator if you can get the energetics.

            • Socsa@sh.itjust.works

              ChatGPT was very far from the first publicly available generative AI. It didn’t even do images at first.

              Also, there are plenty of YouTube channels that show you how to make all sorts of extremely dangerous explosives already.

              • xor@lemmy.blahaj.zone

                But the concern isn’t which generative AI came first. Their “idea” was that AIs of all types, including generalised ones, should just be released as-is, with no further safeguards.

                That doesn’t consider that OpenAI doesn’t only develop text-generation AIs. Generalised AI can do horrifying things, even just through accidental misconfiguration (see the paperclip maximiser example).

                But even a text model like ChatGPT can be coerced into generating non-text data with the right prompting.

                Even in that example, one can’t just dig up those sorts of videos without, at minimum, leaving a trail. But an unrestricted pretrained model can be distributed and run locally, and used without trace to generate any content whatsoever that it’s capable of generating (see the sketch below).

                And with a generalised AI, the only constraint on the prompt “kill everybody except me” becomes available compute.
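
                To make the “runs locally, leaves no trace” point concrete, here is a minimal sketch using the Hugging Face transformers library; the model name is just an example of a small, openly distributed model. Once the weights are on disk, generation is an ordinary local function call: no server, no logging, and whatever filters the local user chooses to apply (i.e., none).

                ```python
                # Sketch: local text generation with openly distributed weights.
                # After the first download, this runs entirely offline.
                from transformers import pipeline

                generator = pipeline("text-generation", model="gpt2")  # cached local weights
                result = generator("Once the model is local,", max_new_tokens=20)
                print(result[0]["generated_text"])
                ```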

      • hansl@lemmy.world

        And while you’re at it, remove the safeties on guns. And seatbelts. And might as well get rid of those pesky boom gates. I can hear the trains just fine; I don’t like being treated like a child. /s

        • konalt@lemmy.world

          Guns and car crashes may break my bones, but words will never hurt me

          • hansl@lemmy.world

            That makes a great jingle, but it’s been proven that you are more a product of the words around you than your comment wants to admit.

  • AutoTL;DR@lemmings.world [bot]

    This is the best summary I could come up with:


    Toner, who serves as director of strategy and foundational research grants at Georgetown University’s Center for Security and Emerging Technology, allegedly drew Altman’s negative attention by co-writing a paper on different ways AI companies can “signal” their commitment to safety through “costly” words and actions.

    In the paper, Toner contrasts OpenAI’s public launch of ChatGPT last year with Anthropic’s “deliberate deci[sion] not to productize its technology in order to avoid stoking the flames of AI hype.”

    She also wrote that, “by delaying the release of [Anthropic chatbot] Claude until another company put out a similarly capable product, Anthropic was showing its willingness to avoid exactly the kind of frantic corner-cutting that the release of ChatGPT appeared to spur.”

    At the same time, Duhigg’s piece also gives some credence to the idea that the OpenAI board felt it needed to be able to hold Altman “accountable” in order to fulfill its mission to “make sure AI benefits all of humanity,” as one unnamed source put it.

    “It’s hard to say if the board members were more terrified of sentient computers or of Altman going rogue,” Duhigg writes.

    The piece also offers a behind-the-scenes view into Microsoft’s three-pronged response to the OpenAI drama and the ways the Redmond-based tech giant reportedly found the board’s moves “mind-bogglingly stupid.”


    The original article contains 414 words, the summary contains 215 words. Saved 48%. I’m a bot and I’m open source!