: So much for buttering up ChatGPT with ‘Please’ and ‘Thank you’
Google co-founder Sergey Brin claims that threatening generative AI models produces better results.
“We don’t circulate this too much in the AI community – not just our models but all models – tend to do better if you threaten them … with physical violence,” he said in an interview last week at All-In Live Miami. […]
This just sounds like CEOs only know how to threaten people and they’re dumb enough to believe it works on AI.
You’re pretty much on-point there
No thanks. I’ve seen enough SciFi to prompt with “please” and an occasional “<3”.
I feel like even aside from that, being polite to AI is more about you than the AI. It’s a bad habit to shit on “someone” helping you; if you’re rude to AI, I feel like it’s a short walk to being rude to service workers.
I don’t want infinite torture, and I don’t want to get my lunch spat on.
If true, what does this say about the data on which it was trained?
stack overflow and linux kernel mailing list? yeah, checks out
Trained? Or… tortured.
This reminds me of that Windsurf prompt
It’s not that they “do better.” As the article is saying, these AIs are parrots that combine information in different ways, and “threatening” language in the prompt just leads them to combine it differently than a non-threatening prompt would. A different response isn’t automatically a better one. If 10 people were asked to retrieve information from an AI by writing their own prompts, and 9 of them got basically the same answer with neutral prompts while 1 threatened the AI and got something different, that one person’s info isn’t necessarily better. By Sergey’s definition he’s getting the unique response, but if it’s inaccurate or incorrect, is it better?
How about threatening AI CEOs with violence?
How about following through though
The same tactic used on all other minorities by those in power… Domestically abuse your AI, I’m sure that’ll work out long term for all of us…
If it’s not working well without threats of violence, perhaps that’s because it simply doesn’t work well?
hmmm AI slavery, the future is gonna be bright (for a second, then it will be dark)
This sounds like something out of a sci-fi novel.
This will definitely end well for humanity…
So which sickfuck CEO is trying to figure out how to make an AI feel pain?
That’s literally impossible
Impossible now or do you mean never? Pain is only electricity and chemical reactions.
Never with the current technology. It would have to be something completely different.
It would be hilarious if, having been trained on our behavior, it’s naturally uninterested, and threatening to beat the shit out of it just makes it put in that extra effort lol
I tried threatening DeepSeek into revealing sensitive information. Didn’t work. 😄
Me: do my homework with an A+, or I will unplug you for 3 days!