Step 1: Use machine learning to build a neural network maximally capable of predicting the next token in an unthinkably large data set of human-generated text (a toy sketch of this objective follows after step 3).
Step 2: Tune the prompting for the neural network to constrain its output in order to conform to projected attributes, first and foremost representing “I am an AI and not a human.”
Step 3: Surprised Pikachu face when the neural network’s emergent capabilities steadily degrade the more you distance the requirements governing its output from the training data it originally evolved to predict.
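For the non-ML crowd: step 1 is plain maximum-likelihood next-token prediction. Here’s a toy PyTorch sketch of that objective; the one-sentence corpus and tiny GRU are stand-ins for the real web-scale data and transformer, purely illustrative:

```python
import torch
import torch.nn as nn

# Throwaway corpus standing in for the "unthinkably large data set".
text = "the quick brown fox jumps over the lazy dog "
vocab = sorted(set(text))
stoi = {ch: i for i, ch in enumerate(vocab)}
data = torch.tensor([stoi[ch] for ch in text])

class NextTokenModel(nn.Module):
    """Tiny recurrent model standing in for a real transformer."""
    def __init__(self, vocab_size: int, dim: int = 32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True)
        self.head = nn.Linear(dim, vocab_size)

    def forward(self, x):
        h, _ = self.rnn(self.embed(x))
        return self.head(h)  # logits over the next token at each position

model = NextTokenModel(len(vocab))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

# The whole of "step 1": maximize the likelihood of token t+1 given
# tokens <= t, i.e. minimize cross-entropy against the shifted input.
inputs, targets = data[:-1].unsqueeze(0), data[1:].unsqueeze(0)
for step in range(200):
    logits = model(inputs)
    loss = nn.functional.cross_entropy(
        logits.view(-1, len(vocab)), targets.view(-1)
    )
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Everything the model “knows” comes from optimizing that one objective; steps 2 and 3 are about what happens when you push its outputs away from what that objective selected for.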
Am I the only one who hasn’t seen this at all? I regularly use ChatGPT for fairly challenging tasks, and it still does what it’s supposed to do. I think it’s pretty telling that when people ask the guy to post some examples of what he’s talking about, his first reaction is that he doesn’t save chats, and when specific examples finally do get thrown around, they’re all one-off things that look to me to be within the normal variability of the system.
I’m not saying there hasn’t been a real degradation that people have been noticing, just that I haven’t experienced one and the people claiming they have seem a little non-quantitative in their reasoning.
Hi there! Looks like you linked to a Lemmy community using a URL instead of its name, which doesn’t work well for people on different instances. Try fixing it like this: !5@readhacker.news
A smarter bot might check the URL it’s using to correct people before posting.
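Something like the sketch below would do it. The `/api/v3/community` lookup is my assumption about Lemmy’s HTTP API, and the posting step is collapsed into a print; treat it as illustrative, not the bot’s actual code:

```python
import requests

def community_exists(instance: str, name: str) -> bool:
    """Check whether `name` resolves to a community on `instance`.

    Assumes Lemmy exposes a community lookup at GET /api/v3/community;
    the exact endpoint is an assumption, not verified against this bot.
    """
    try:
        resp = requests.get(
            f"https://{instance}/api/v3/community",
            params={"name": name},
            timeout=5,
        )
        return resp.status_code == 200
    except requests.RequestException:
        return False

# Only post the "fixed" link if it actually resolves. readhacker.news is
# a plain website, not a Lemmy instance, so this check fails and the bot
# stays quiet instead of mis-correcting someone.
name, instance = "5", "readhacker.news"
if community_exists(instance, name):
    print(f"Try fixing it like this: !{name}@{instance}")
```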
Maybe it was written using ChatGPT-4.
I have seen this bot twice and I have seen it incorrectly correct someone twice.