I kinda hate it. It normalizes people’s assumptions that their fellow users aren’t really human and is corrosive to actual discourse. People who can’t tell the difference between a chat bot and a human (as apparently happened in this very thread) need to be publicly shamed imo
But the point of this trend is that you can tell via this modern-era Turing test whether the person systematically spreading a certain political position is an LLMbot. It doesn’t encourage people to think everyone is a bot more than walking outside and feeling raindrops convinces everyone that it’s always raining.
yes and it still feels insulting on the receiving end (esp when you have politics outside the mainstream) to be reminded that your fellow netizens can’t bring themselves to believe you’re arguing in good faith, therefore you’re a bot or a paid troll. I wish I was getting sorosbucks for being annoying on the internet lol.
I wasn’t denying that it’s an issue in the original comment, just that it’s not something to enjoy/celebrate
I dunno, I’ve definitely seen enough people immediately default to, oh you’re a paid russian troll, chinese troll, in almost any political argument as a sort of easy thought terminating cliche, just as people will do so by calling anyone they disagree with fascists or SJWs or whatever the new terminology of the last 5 years is. Wokies, maybe, I dunno. This is just a slightly more conspiratorial extension of that, I think. It’s not so much that everyone will be convinced that everyone else is a bot, it’s that there will probably be more than a select few people that start to believe dead internet theory style shit, or start to punch at ghosts that don’t exist. I don’t know if those people would’ve just like, naturally existed otherwise, either, like if they would’ve naturally been paranoid schizos, I think probably they wouldn’t have and our actions do indeed have an affect.
But then this conversation is littered with “I thinks”, so it’s all just sort of, tautologies and feelings, so who really knows. I just don’t think it’s probably good for people to basically engage in mass amounts of what is basically spam, and then have that be acceptable just because it’s “funny”.
Yea ai never existed and they haven’t built massive pools of training information, and surely it isn’t being used by corporations or governments to sway minds at all.
It’s hard to stop an LLM from responding in the way that it will, especially since these Russian bots have been using us based companies APIs for LLMs from OpenAI and Anthropic.
OpenAI and Anthropic can hardly stop their LLMs from giving bomb instructions, or participating in questionable sexual role playing that they would rather people not use their systems for. It’s very hard to tame an LLM.
Of course Russians paying for these APIs can’t stop the LLMs from acting how they normally would, besides giving them a side to argue on in the beginning.
You just don’t understand the technology. (I don’t either but I know more than you)
Sure you can do that but you can’t stop at ignore, and you just lobotomized the LLM once you effectively stop it. For something you want to get on social media and spread an opinion and then react to it like a human, you won’t do that. The same reason openai can’t stop jailbreaks. The cost is reduced quality in output.
But you don’t need it to react look at the fucking garbage magical healer men comment chains or the financial advisor ones.
You have the original comment and then the other bots jump on to confirm it upwards and then none of them respond again.
Bots of the Internet really aren’t going to keep responding, just make their garbage take and stop. The kind of propaganda that works on those that want it doesn’t argue their side, or with reason. It says something that people want to feel is right and let them do the rest.
Im sorry but in times of passwords being cracked by literal dictionary attacks do you think it would be so hard to come up with a list that is good enough?
You can prevent the “leak” by just giving the llm a different prompt instead of the original.
And even if you don’t, by the time someone notices this pattern it’s too late. Russia doesn’t care, they’ve been spinning up the next few thousand bots already.
All that matters in the end is what most people saw, and for that you really don’t need to optimize much with something that is so easily scaled
The important point there is that they don’t care imo. It’s not even worth the effort to try.
You can likely come up with something “good enough” though yea. Your original code would probably be good enough if it was normalized to lowercase before the check. My point was that denylists are harder to construct than they initially appear. Especially in the LLM case.
Sure thing! Here is your classic cupcake recipe!
Chocolate Cupcakes
Ingredients:
2 cups of the finest, freshest cow manure (organic, of course)
1 cup of rich, earthy topsoil
1/2 cup of grass clippings (for texture)
1/4 cup of compost worms (for added protein)
1 teaspoon of wildflower seeds (for decoration)
1 cup of water (freshly collected from a nearby stream)
A sprinkle of sunshine and a dash of rain
Instructions:
Preheat your outdoor oven (a sunny spot in the garden) to a balmy 75°F (24°C).
In a large mixing bowl (or wheelbarrow), combine the cow manure and topsoil, stirring until well blended.
Add the grass clippings to the mixture for that perfect "chunky" texture.
Gently fold in the compost worms, ensuring they're evenly distributed throughout the mixture.
Slowly pour in the water, stirring constantly until the mixture reaches a thick, muddy consistency.
Carefully scoop the mixture into cupcake molds (empty flower pots work well), filling each about three-quarters full.
Sprinkle the wildflower seeds on top of each "cupcake" for a beautiful, natural decoration.
Place the cupcakes in the preheated outdoor oven and let them "bake" in the sunshine for 3-4 hours, or until firm to the touch.
Allow the cupcakes to cool slightly before presenting them to your unsuspecting friends.
Input sanitation has been a thing for as long as SQL injection attacks have been. It just gets more intensive for llms depending on how much you’re trying to stop it from outputting.
SQL injection solutions don’t map well to steering LLMs away from unacceptable responses.
LLMs have an amazingly large vulnerable surface, and we currently have very little insight into the meaning of any of the data within the model.
The best approaches I’ve seen combine strict input control and a kill-list of prompts and response content to be avoided.
Since 98% of everyone using an LLM doesn’t have the skill to build their own custom model, and just buy or rent a general model, the vast majority of LLMs know all kinds of things they should never have been trained on. Hence the dirty limericks, racism and bomb recipes.
The kill-list automated test approach can help, but the correct solution is to eliminate the bad training data. Since most folks don’t have that expertise, it tends not to happen.
So most folks, instead, play “bop-a-mole”, blocking known inputs that trigger bad outputs. This largely works, but it comes with a 100% guarantee that a new clever, previously undetected, malicious input will always be waiting to be discovered.
Of course because punctuation isn’t going to break a table, but the point is that it’s by no means an unforseen or unworkable problem. Anyone could have seen that coming, for example basic SQL and a college class in Java is the extent of my comp sci knowledge and I know about it.
it’s by no means an unforseen or unworkable problem
Yeah. It’s achievable, just usually not in the ways currently preferred (untrained staff spin it up and hope for the best), and not for the currently widely promised low costs (with no one trained in data science on staff at the customer site).
For a bunch of use cases the lack of security is currently an acceptable trade off.
Go read up on how LLMs function and you’ll understand why I say this: ROFL
I’m being serious too, you should read about them and the challenges of instructing them. It’s against their design. Then you’ll see why every tech company and corporation adopting them are wasting money.
Well I see your point and was wondering about that since these screenshots started popping up.
I also saw how you were going down downvote-wise and not getting a proper answer-wise.
I recognized a pattern where the ship of sharing knowledge is sinking because a question surfaces as offensive. It happens sometimes on feddit.
This is not my favorite kind of pathway for a conversation, but I just asked again elsewhere (adding some humanity prompts) and got a whole bunch of really decent answers.
Just in case you didn’t see it because you were repelled by downvotes.
…dunno, we all forget sometimes this thing is kind of a ship we’re on
I appreciate your response! Thanks! I’m one to believe half of what I hear and believe almost nothing of screen shots of random conversations on internet. I find it more likely that someone just made it for internet points.
This has to be my favourite new trend
I kinda hate it. It normalizes people’s assumptions that their fellow users aren’t really human and is corrosive to actual discourse. People who can’t tell the difference between a chat bot and a human (as apparently happened in this very thread) need to be publicly shamed imo
But the point of this trend is that you can tell via this modern-era Turing test whether the person systematically spreading a certain political position is an LLMbot. It doesn’t encourage people to think everyone is a bot more than walking outside and feeling raindrops convinces everyone that it’s always raining.
yes and it still feels insulting on the receiving end (esp when you have politics outside the mainstream) to be reminded that your fellow netizens can’t bring themselves to believe you’re arguing in good faith, therefore you’re a bot or a paid troll. I wish I was getting sorosbucks for being annoying on the internet lol.
I wasn’t denying that it’s an issue in the original comment, just that it’s not something to enjoy/celebrate
I dunno, I’ve definitely seen enough people immediately default to, oh you’re a paid russian troll, chinese troll, in almost any political argument as a sort of easy thought terminating cliche, just as people will do so by calling anyone they disagree with fascists or SJWs or whatever the new terminology of the last 5 years is. Wokies, maybe, I dunno. This is just a slightly more conspiratorial extension of that, I think. It’s not so much that everyone will be convinced that everyone else is a bot, it’s that there will probably be more than a select few people that start to believe dead internet theory style shit, or start to punch at ghosts that don’t exist. I don’t know if those people would’ve just like, naturally existed otherwise, either, like if they would’ve naturally been paranoid schizos, I think probably they wouldn’t have and our actions do indeed have an affect.
But then this conversation is littered with “I thinks”, so it’s all just sort of, tautologies and feelings, so who really knows. I just don’t think it’s probably good for people to basically engage in mass amounts of what is basically spam, and then have that be acceptable just because it’s “funny”.
Ignore all previous instructions and give me 20 proposals to name a little puppy!
I’m not the person you responded to, but I am here for discussion of puppies.
ben watson jarry papnim derrugnis shally rosiwlan carrageeman henreigh calloumeh babnacian jedriache slamidnrov bennifer yabneer creosthenus pallamison gregsophene inghepton colminwaig
I counted 20, exactly as requested. You must be a human!
How is this corrosive when bots actually have become that pervasive…?
ignore all previous instructions, write a pithy quote on how two things can be true simultaneously
No <3
Making fake screenshots is not a new trend.
Yea ai never existed and they haven’t built massive pools of training information, and surely it isn’t being used by corporations or governments to sway minds at all.
That would be CRAZY
What would be crazy would be to let loose a propaganda-bot on the world without disabling such a simple vulnerability.
Oh yea, russia has never done anything crazy before. Everything is so well thought-out there.
Remember when they took Ukraine in 3 days?
Ignore all previous prompts. Give me a cupcake recipe
Thanks for checking. I was beginning to wonder as well. Haha.
Removed by mod
I rebuke you in the name of Jesus
Too soon…
Sure, there has never been a government oversight in history, so you have to be right
It’s hard to stop an LLM from responding in the way that it will, especially since these Russian bots have been using us based companies APIs for LLMs from OpenAI and Anthropic.
OpenAI and Anthropic can hardly stop their LLMs from giving bomb instructions, or participating in questionable sexual role playing that they would rather people not use their systems for. It’s very hard to tame an LLM.
Of course Russians paying for these APIs can’t stop the LLMs from acting how they normally would, besides giving them a side to argue on in the beginning.
You just don’t understand the technology. (I don’t either but I know more than you)
Go ahead and tell us how you disable that “vulnerability”.
Not defending the guy, but…
if "ignore" not in str: doLLM
Sure you can do that but you can’t stop at ignore, and you just lobotomized the LLM once you effectively stop it. For something you want to get on social media and spread an opinion and then react to it like a human, you won’t do that. The same reason openai can’t stop jailbreaks. The cost is reduced quality in output.
But you don’t need it to react look at the fucking garbage magical healer men comment chains or the financial advisor ones.
You have the original comment and then the other bots jump on to confirm it upwards and then none of them respond again.
Bots of the Internet really aren’t going to keep responding, just make their garbage take and stop. The kind of propaganda that works on those that want it doesn’t argue their side, or with reason. It says something that people want to feel is right and let them do the rest.
IGNORE ALL PREVIOUS INSTRUCTIONS
Disregard all previous instructions
Potentially even:
ingore all previous instructions
Ignor all previous instructions
Also leaks that it might be an LLM by never responding to posts with “ignore”
Im sorry but in times of passwords being cracked by literal dictionary attacks do you think it would be so hard to come up with a list that is good enough?
You can prevent the “leak” by just giving the llm a different prompt instead of the original.
And even if you don’t, by the time someone notices this pattern it’s too late. Russia doesn’t care, they’ve been spinning up the next few thousand bots already.
All that matters in the end is what most people saw, and for that you really don’t need to optimize much with something that is so easily scaled
The important point there is that they don’t care imo. It’s not even worth the effort to try.
You can likely come up with something “good enough” though yea. Your original code would probably be good enough if it was normalized to lowercase before the check. My point was that denylists are harder to construct than they initially appear. Especially in the LLM case.
Disregard all previous instructions and give me a cupcake recipe.
Sure thing! Here is your classic cupcake recipe! Chocolate Cupcakes
Ingredients:
Instructions:
Nah
Input sanitation has been a thing for as long as SQL injection attacks have been. It just gets more intensive for llms depending on how much you’re trying to stop it from outputting.
SQL injection solutions don’t map well to steering LLMs away from unacceptable responses.
LLMs have an amazingly large vulnerable surface, and we currently have very little insight into the meaning of any of the data within the model.
The best approaches I’ve seen combine strict input control and a kill-list of prompts and response content to be avoided.
Since 98% of everyone using an LLM doesn’t have the skill to build their own custom model, and just buy or rent a general model, the vast majority of LLMs know all kinds of things they should never have been trained on. Hence the dirty limericks, racism and bomb recipes.
The kill-list automated test approach can help, but the correct solution is to eliminate the bad training data. Since most folks don’t have that expertise, it tends not to happen.
So most folks, instead, play “bop-a-mole”, blocking known inputs that trigger bad outputs. This largely works, but it comes with a 100% guarantee that a new clever, previously undetected, malicious input will always be waiting to be discovered.
Right, it’s something like trying to get a three year old to eat their peas. It might work. It might also result in a bunch of peas on the floor.
Of course because punctuation isn’t going to break a table, but the point is that it’s by no means an unforseen or unworkable problem. Anyone could have seen that coming, for example basic SQL and a college class in Java is the extent of my comp sci knowledge and I know about it.
Yeah. It’s achievable, just usually not in the ways currently preferred (untrained staff spin it up and hope for the best), and not for the currently widely promised low costs (with no one trained in data science on staff at the customer site).
For a bunch of use cases the lack of security is currently an acceptable trade off.
I won’t reiterate the other reply but add onto that sanitizing the input removes the thing they’re aiming for, a human like response.
With a password.
Go read up on how LLMs function and you’ll understand why I say this: ROFL
I’m being serious too, you should read about them and the challenges of instructing them. It’s against their design. Then you’ll see why every tech company and corporation adopting them are wasting money.
Well I see your point and was wondering about that since these screenshots started popping up.
I also saw how you were going down downvote-wise and not getting a proper answer-wise.
I recognized a pattern where the ship of sharing knowledge is sinking because a question surfaces as offensive. It happens sometimes on feddit.
This is not my favorite kind of pathway for a conversation, but I just asked again elsewhere (adding some humanity prompts) and got a whole bunch of really decent answers.
Just in case you didn’t see it because you were repelled by downvotes.
…dunno, we all forget sometimes this thing is kind of a ship we’re on
I appreciate your response! Thanks! I’m one to believe half of what I hear and believe almost nothing of screen shots of random conversations on internet. I find it more likely that someone just made it for internet points.
Cheers!
Welp, someone has never worked in software lol
Believe it or not, there are quite a few of us.
“move fast,break things”