cross-posted from: https://lemmy.ml/post/2811405

"We view this moment of hype around generative AI as dangerous. There is a pack mentality in rushing to invest in these tools, while overlooking the fact that they threaten workers and impact consumers by creating lesser quality products and allowing more erroneous outputs. For example, earlier this year America’s National Eating Disorders Association fired helpline workers and attempted to replace them with a chatbot. The bot was then shut down after its responses actively encouraged disordered eating behaviors. "

    • kitonthenet@kbin.social · 1 year ago

      If it’s supposed to be the labor extinguisher of the future, then yes, I expect something on the order of months.

      • FaceDeer@kbin.social · 1 year ago

        Your expectations are unrealistic. I am a programmer and I find tools like ChatGPT and Copilot to be fantastic, but the company I work for has banned their use until the legal department has figured out what the heck (and they won’t figure out what the heck until the judicial system figures out what the heck, and the legislative layer above that). It takes time for these sorts of massive shifts in well-established systems to happen.

        • kitonthenet@kbin.social · 1 year ago

          I am too, and it can write boilerplate. It can’t do anything at a systems level, and I can’t even trust it to handle edge cases. I still have to do all the real work; it just writes the boilerplate, which is something I almost never do anyway. The legal side of it is almost exclusively IP rights: I can’t risk GPL3 code ending up in my project, and I certainly can’t risk feeding it IP that it will regurgitate somewhere else.