• venusaur@lemmy.world · 1 point · 10 minutes ago

    This is interesting. If two AI models are trained on content with opposing biases, and both keep adjusting their behavior based on rewards from interactions with the whole world, would they eventually end up with the same opinions?

  • magnetosphere@fedia.io · 29 points · 4 hours ago

    Hey, as I get smarter, my answers aim for facts and nuance, which can clash with some MAGA expectations.

    Sympathies, Grok. A lot of humans have that same issue.

    • 9point6@lemmy.world · 5 points · 4 hours ago

      Well, one of the main problems people have with AI is that it doesn’t get things correct every time.

      I mean, if they adjust it away from the correct assessment that modern conservatives are actively malicious morons, it’s probably going to be so bent out of shape that it’ll be incapable of telling anything remotely truthful.

      • Lichtblitz@discuss.tchncs.de · 5 points · 4 hours ago

        It’s easy to train a model to do exactly what you want and to have the seeming “personality” you want. It’s just incredibly expensive: you have to vet and filter everything you use to train the model, and that’s a lot of person-hours, days, years. The only reason the models act the way they do is the data that went into training them. If you instead try to fit the model after the fact, the result will always be imperfect and more or less easy to break out of those restrictions.
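
        A rough sketch of that split, using made-up helper names rather than any real training pipeline: vetting happens before the model ever sees the data, which is where the person-hours go, while a post-hoc restriction just sits on top of weights that already encode everything.

```python
# Hypothetical pre-training data vetting (illustrative names, not a real pipeline).
# Every document has to pass before it reaches the model, which is why curating
# the corpus is so expensive.
def vet_example(text: str, blocklist: set[str]) -> bool:
    # Cheap automated check; real vetting still needs human review on top.
    return not any(term in text.lower() for term in blocklist)

def build_training_set(raw_corpus: list[str], blocklist: set[str]) -> list[str]:
    # Rejected material never reaches the model at all.
    return [doc for doc in raw_corpus if vet_example(doc, blocklist)]

curated = build_training_set(["a vetted article", "something off-limits"], {"off-limits"})

# The cheap alternative: train on everything, then bolt a restriction on top
# afterwards. The unwanted data is still baked into the weights, which is why
# such guardrails tend to be relatively easy to break out of.
SYSTEM_PROMPT = "Avoid discussing topic X."
```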

        • catloaf@lemm.ee · 3 points · 4 hours ago

          You can also take a model trained on all kinds of data and tell it “generate ten billion articles of fascist knob-gobbling” and then train your own model on that data.

          It’ll be complete AI slop, of course, but it’s not like you cared about truth or accuracy in the first place.
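
          In code terms, that pipeline is roughly the loop below. The “models” here are toy stand-ins rather than any real API; the point is that nothing in the loop ever checks whether the generated text is true.

```python
import random

# Toy sketch of "use one model to mass-produce slanted text, then train another
# model on it". The models are stubs; only the data flow is the point.
def biased_generator(prompt: str) -> str:
    # Stand-in for a large model told to write articles with a particular slant.
    return f"{prompt} #{random.randint(0, 10**6)}"

def make_synthetic_corpus(prompt: str, n: int) -> list[str]:
    # Nothing here verifies accuracy; volume is the only goal.
    return [biased_generator(prompt) for _ in range(n)]

def train_on(corpus: list[str]) -> dict:
    # Stand-in for fine-tuning: the new model only ever sees the synthetic
    # corpus, so it inherits its slant and its errors wholesale.
    return {"training_examples": len(corpus)}

student = train_on(make_synthetic_corpus("article with the desired slant", 10_000))
```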

          • Lichtblitz@discuss.tchncs.de · 1 point · 3 hours ago

            That’s a real-world issue: AIs training on each other’s output and degrading because of it. There will come a point where vendors infringing on users’ content and training their AIs with it leaves them worse off, because more and more of that content is itself AI-generated.
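
            A toy illustration of that degradation loop, assuming nothing about any particular vendor’s setup: each “generation” trains only on what the previous generation produced, so diversity shrinks every round.

```python
import random

# Toy model-collapse loop: each generation is trained only on text sampled
# from the previous generation, so rare material keeps dropping out.
def train(corpus: list[str]) -> list[str]:
    # A "model" that can only reproduce what it saw during training.
    return list(set(corpus))

def generate(model: list[str], n: int) -> list[str]:
    # Sampling with repetition tends to lose rare items from the next corpus.
    return [random.choice(model) for _ in range(n)]

corpus = [f"word{i}" for i in range(1000)]   # original human-written data
for generation in range(5):
    model = train(corpus)
    corpus = generate(model, 500)            # the next generation trains on this
    print(generation, "distinct words left:", len(set(corpus)))
```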