It’s a number on a public website. The guy googled it right after and found it. It’s simply in the training data; there is nothing “terrifying” about this imo.
Right. There’s nothing terrifying about the technology.
What is terrifying is how people treat it.
LLMs will cough up anything they have learned to any user. But they do it while giving all the social cues of an intelligent human who knows how to keep a secret.
This often creates trust in the computer that it hasn’t yet earned.
Examples like this story, which show how obviously misplaced that trust is, can be terrifying to people who fell for modern LLM intelligence signaling.
Today, most chatbots don’t do any permanent learning during chat sessions, but that is gradually changing. This trend should be particularly terrifying to anyone who previously shared (or keeps habitually sharing) things with a chatbot that they probably shouldn’t.