Adobe’s employees typically share their users’ opinion of the company, having already expressed internal concern that AI could kill their customers’ jobs. That continued this week in internal discussions, where exasperated employees implored leadership not to let Adobe become the “evil” company its customers think it is.

This past week, Adobe became the subject of a public relations firestorm after it pushed an update to its terms of service that many users saw at best as overly aggressive and at worst as a rights grab. Adobe quickly clarified it isn’t spying on users and even promised to go back and adjust its terms of service in response.

For many, though, this was not enough, and online discourse surrounding Adobe remains mostly negative. According to internal Slack discussions seen by Business Insider, Adobe’s employees once again appear to be siding with users, actively complaining about the company’s poor communication and its inability to learn from past mistakes.

  • Static_Rocket@lemmy.world

    I was working at a company at one point that got a contract to build something I viewed equivalent to malware. Immediately I brought it up to several higher-ups that this was not something I was willing to do. One of them brought up the argument “If we don’t do it someone else will.”

    This mentality scares the shit out of me, but it explains a lot of horrible things in the industry.

    Believing in that mentality is worse than the reality of the situation. At least if you say no there’s a chance it doesn’t happen or it gets passed to someone worse than you. If you say yes then not only are you complicit, you are actively enforcing that gloomy mentality for other engineers. Just say no.

    • CosmoNova@lemmy.world

      It’s exactly this dangerous mindset that’s driving us into some AI service hellhole. Too many super talented developers have told themselves exactly that instead of standing up for their principles, or even allowing themselves to have principles in the first place.

      Only recently have they started leaving companies like OpenAI and taking a stance, because they’re actually seeing what their creation is used for and with how little care for human life it’s been handled.

      Of course, many critics knew this was headed toward military contracts and complete enshittification. It was plain to see that OpenAI’s founders aren’t the good guys, but “someone else would do it anyway” kept the underlings happy. This deterministic fallacy is also why anyone still works for Meta or Google. It’s a really lazy excuse.

      • Alphane Moon@lemmy.world

        Only recently have they started leaving companies like OpenAI and taking a stance, because they’re actually seeing what their creation is used for and with how little care for human life it’s been handled.

        Is this true, though? From my understanding, Altman was able to overrule the board largely because the employees (especially the ones who had been with the company for more than a year or two) were worried about their stock options.

        I wouldn’t be surprised if the vast majority of the OpenAI team are ghouls just like Altman, fundamentally lacking humanity: incapable of honesty, unable to tell right from wrong, devoid of empathy.

        Don’t get me wrong, I don’t mean this in the Hollywood sense, like the evil antagonists in, say, Star Wars; I’m sure they come off as “normal” in casual conversation. I’m referring to going deeper and asking subtle questions about ethics and self-enrichment in an off-the-record setting. They will always come up with some excuse to justify their greed as being “for the betterment of humanity” or some other comical word salad.

        • CosmoNova@lemmy.world

          You know, I’m worried you might be exactly on point with this assumption. I still give some of them the benefit of the doubt, because humans can “reason” themselves into pretty dangerous things by appealing to authority and the like. That doesn’t make all of them evil, but it sure as hell makes them way too gullible for the field they’re working in.

    • thejml@lemm.ee

      One thing I like to tell people with that attitude is: whatever someone does, there’s always someone else who will see it as an example and challenge to do it “better”. Do you want to be the company that started that chain? Are you prepared to compete in that race?

      If you build something borderline malware, someone will take your lead and make “better” malware. If you are not prepared to respond in kind, then why even go there? If you’re not ready to be known as the top-of-the-line malware creator, why start the product line?

    • gravitas_deficiency@sh.itjust.works

      Unfortunately, one of the darker aspects of hyper-growth-focused tech and engineering is the often highly mercenary, transactional nature of many people in the field. There’s a reason Facebook pays engineers $250–400k or more. Sure, the work can be difficult, but most of the time it’s not that difficult. They’re paying people that much to ignore their morals, shut the fuck up, take the paycheck, and do work that is helping to destroy society.

      It’s immensely distressing to me as a software engineer. I am fully aware that my morality is limiting my earning potential, and that makes me kind of furious; not so much at myself, but at the fact that our economic system is set up in such a way that this is not only possible but optimal (in terms of earning a nice paycheck and being able to retire somewhat early).