: So much for buttering up ChatGPT with ‘Please’ and ‘Thank you’

Google co-founder Sergey Brin claims that threatening generative AI models produces better results.

“We don’t circulate this too much in the AI community – not just our models but all models – tend to do better if you threaten them … with physical violence,” he said in an interview last week on All-In-Live Miami. […]

  • Fedizen@lemmy.world
    link
    fedilink
    English
    arrow-up
    43
    arrow-down
    1
    ·
    7 days ago

    This just sounds like CEOs only know how to threaten people and they’re dumb enough to believe it works on AI.

  • Sabata@ani.social
    link
    fedilink
    English
    arrow-up
    23
    ·
    7 days ago

    No thanks. I’ve seen enough SciFi to prompt with “please” and an occasional ”<3".

    • theneverfox@pawb.social
      link
      fedilink
      English
      arrow-up
      16
      ·
      7 days ago

      I feel like even aside from that, being polite to AI is more about you than the AI. It’s a bad habit to shit on “someone” helping you, if you’re rude to AI then I feel like it’s a short walk to being rude to service workers

      • Sabata@ani.social
        link
        fedilink
        English
        arrow-up
        4
        ·
        6 days ago

        I don’t want infinite torture, and I don’t want to get my lunch spat on.

  • Alue42@fedia.io
    link
    fedilink
    arrow-up
    19
    ·
    7 days ago

    It’s not that they “do better”. As the article is saying, the AI are parrots that are combining information in different ways, and using “threatening” language in the prompt leads it to combine information in a different way than if using a non-threatening prompt. Just because you receive a different response doesn’t make it better. If 10 people were asked to retrieve information from an AI by coming up with prompt, and 9 of them obtained basically the same information because they had a neutral prompt but 1 person threatened the AI and got something different, that doesn’t make his info necessarily better. Sergey’s definition is that he’s getting the unique response, but if it’s inaccurate or incorrect, is it better?

  • Zenith@lemm.ee
    link
    fedilink
    English
    arrow-up
    10
    ·
    7 days ago

    The same tactic used on all other minorities by those in power…. Domestically abuse your AI, I’m sure that’ll work out long term for all of us…

  • kat_angstrom@lemmy.world
    link
    fedilink
    English
    arrow-up
    11
    ·
    7 days ago

    If it’s not working well without threats of violence, perhaps that’s because it simply doesn’t work well?

  • crowbar@lemm.ee
    link
    fedilink
    English
    arrow-up
    7
    ·
    6 days ago

    hmmm AI slavery, the future is gonna be bright (for a second, then it will be dark)

  • 474D@lemmy.world
    link
    fedilink
    English
    arrow-up
    5
    ·
    7 days ago

    It would be hilarious that, if trained off our behavior, it is naturally disinterested. And threatening to beat the shit out of it just makes it put in that extra effort lol