which it turned out belonged to James […] whose number appears on his company website.
When Smethurst challenged that, it admitted: “You’re right,” and said it may have been “mistakenly pulled from a database”.
But the overreach of taking an incorrect number from some database it has access to is particularly worrying.
I really love this new style of journalism where they bash the AI for hallucinating and making obvious mistakes, then take anything it says about itself at face value.
It’s a number on a public website. The guy googled it right after and found it. It’s simply in the training data; there is nothing “terrifying” about this, imo.
Right. There’s nothing terrifying about the technology.
What is terrifying is how people treat it.
LLMs will cough up anything they have learned to any user. But they do it while giving all the social cues of an intelligent human who knows how to keep a secret.
This often creates trust in the computer that it doesn’t yet deserve.
Examples like this story, which show how obviously misplaced that trust is, can be terrifying to people who fell for modern LLM intelligence signaling.
Today, most chatbots don’t do any permanent learning during chat sessions, but that is gradually changing. This trend should be particularly terrifying to anyone who has previously shared (or keeps habitually sharing) things with a chatbot that they probably shouldn’t.
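To make that concrete, here’s a minimal sketch of how a persistent “memory” feature could work. Everything here (the file name, the remember() helper, the example number) is made up for illustration; it’s not any vendor’s actual implementation:

    import json
    from pathlib import Path

    # Hypothetical per-user memory store that outlives the chat session.
    MEMORY_FILE = Path("user_memory.json")

    def load_memory() -> list[str]:
        """Facts persisted from earlier sessions."""
        if MEMORY_FILE.exists():
            return json.loads(MEMORY_FILE.read_text())
        return []

    def remember(fact: str) -> None:
        """Anything 'remembered' here survives the session."""
        facts = load_memory()
        facts.append(fact)
        MEMORY_FILE.write_text(json.dumps(facts))

    def build_prompt(user_message: str) -> str:
        """Persisted facts are silently prepended to every new prompt."""
        memory = "\n".join(load_memory())
        return f"Known about this user:\n{memory}\n\nUser: {user_message}"

    # Once shared, a detail stops being ephemeral:
    remember("User's phone number is 01632 960123")  # fictitious number
    print(build_prompt("What's my number again?"))

The point of the sketch: once remember() has run, the detail is no longer scoped to the conversation where it was shared; every future prompt carries it, and the model can surface it whenever it seems relevant.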
It’s as if some people will believe any grammatically & semantically intelligible text put in front of their faces.
Especially if it’s anti-AI drivel. People eat this crap up.
Some eat up pro-AI drivel, some others anti-AI drivel. Tech bubbles are a wild ride. At least it’s not a bullshit bubble like crypto or web3/nft/metaverse.
Also, the first five digits were the same between the two numbers. Meta is guilty, but they’re guilty of grifting, not of giving a rogue AI access to some shadow database of personal details… yet? Lol
It’s a case of Gell-Mann Amnesia.