“Newspaper which uses AI to write its articles concocts derogatory term for people who don’t use AI”
This makes about as much sense as calling Linux users “Windows vegans”.
Choosing not to use AI isn’t some wacky contrarian position; it’s a tame position that can easily be justified. (Don’t want to use AI? Then don’t.) If anything, insisting that constantly using AI for everything is the new normal is the wacky position.
I’m also a gun vegan, a car vegan, a Facebook vegan, an exercise vegan (unfortunately), a Windows vegan, … just not an actual vegan.
I feel like that’s a bad way to use the word vegan.
how to belittle and minimize a very serious thing: call any protesters of it a " ___ vegan"
Abstaining from a thing does not make one a vegan. That’s not how any of this works.
I’m sex vegan. Cry about it, virgins.
It’s like how they put the word “gate” after something to signal that it’s a scandal involving that thing.
Some sort of political scandal involving road maintenance? Oh yes, well, that’s roadgate then. Even though the Watergate scandal was in fact a scandal at the Watergate Hotel, rather than a scandal about water.
Aww, so the article author has a vendetta against vegans. Got it.
But it makes people come off as extremely annoying. So that’s working.
I mean, abstaining from animal products makes someone a vegan, right? If you abstain from AI products then it would follow that you’re an “AI vegan”.
Naming them after a maligned (if harmless) group seems like a choice to paint refusing to use AI as annoying, preachy and scorn-worthy.
They seem very determined to pressure people into using AI regardless of its practicality, environmental impact, or anything else. Fuck this shit.
There have been recent pushes in that regard: investment in AI shit has been enormous, but the financial payoff for anyone besides hardware manufacturers remains nonexistent. So investors and corporations have recently redoubled their efforts to get everyone to use it, in the hope that this will somehow make them profitable.
yeah, this is 1000% deliberate manufacturing of consent
i wonder if they came up with the term to mock those who don’t want to use AI, and possibly actual vegans on the side.
We don’t need to invent new terms, like ‘AI Vegan’, when we have a perfectly good term already: Butlerian Jihadist.
“refuse” lol as if there were a general requirement to use this shit
I refuse to use it because it’s shit.
We are not the same.
“AI vegans”? I knew the Guardian was already bought by tech bros, but wtf is that phrasing lmao. I don’t use AI either, simply because it is wrong more often than not and I am still capable of googling things myself, but being cautious equals being vegan in tech-bro eyes?
We let environmentalism become an individual issue, and that was a mistake. Can we not do this for AI? It’s a society-wide problem, not something you can solve by measuring your own personal AI footprint.
The better term would be “LLM gobbling fuckheads” for those who use that stuff and believe it has anything to do with “AI”
This is the dumbest shit I’ve ever read. Refusing to submit to corpo ratfuckery isn’t a lifestyle choice. It’s common sense.
I don’t use A.I. because I’ve had nothing but negative interactions with A.I. Customer service bots that fail to give adequate responses, unhelpful and incorrect search result summaries, and “art” that looks like shit haven’t made me want to sign up for ChatGPT or Gemini. For most people, this isn’t a moral stance; it’s just that the product isn’t worth paying for. Stop framing people who don’t use A.I. as Luddites with an ax to grind just because tech bros spent billions on a product that isn’t good yet.
It’s fair to say that the environmental and ethical concerns are significant, and I wouldn’t look down on anyone refusing to use AI for those reasons. I don’t look down on vegetarians or vegans either - I don’t have to agree with someone’s moral stance or choices to respect them.
But you’re right, LLMs are full of crap.
LLMs definitely are full of crap. But that isn’t the point of them (even if some corporations make it seem like it is)
They are supposed to be used for text generation. And you are supposed to read through everything afterwards to correct any hallucinations.
It can’t work on its own, and it makes mistakes about 30% of the time.
But there are use cases where that isn’t a problem. Use them as inspiration for creative writing prompts for example. They are crazy good at that.
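To make that workflow concrete, here’s a minimal “generate, then human-review” sketch, assuming the OpenAI Python client; the model name and prompt are just placeholders, not a recommendation of any particular setup:

```python
# Minimal "draft, then human review" sketch. Assumes the OpenAI Python client
# is installed and OPENAI_API_KEY is set; the model name is only an example.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat model would do here
    messages=[
        {"role": "user",
         "content": "Give me three unusual prompts for a short sci-fi story."},
    ],
)

draft = response.choices[0].message.content
print(draft)

# The important part happens outside the code: a person reads the output,
# throws away the weak ideas, and fact-checks anything stated as true
# before using any of it.
```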
Truth is definitely a bit of a blind spot for LLMs.
For most people, this isn’t a moral stance, it’s just that the product isn’t worth paying for.
Wait till you see the price of a burger in another five years.
Yea, it’s often really fucking cheap for the value, just like streaming services to an extent
Customer service AI sucks, I think we can all agree to this
But if you really believe that ChatGPT and Gemini are mainly for generating art, then you’re completely wrong.
You only notice AI-generated content when it’s bad/obvious, but you’d never notice the AI-generated content that’s so good it’s indistinguishable from something generated by a human.
I don’t know what percentage of the “good” content we see is AI-generated, but it’s probably more than 0 and will probably go up over time.
Shit take. The more AI-made media is online, the harder it is for AI-developing companies to improve on previous models.
It won’t be indistinguishable from media made with human effort. Unless you enjoy wasting your time on cheap, uninteresting man-made slop, you won’t be fooled by cheap, uninteresting and untrue AI-made slop.
the harder it is for AI developing companies to improve on previous models.
They all use each other’s data to improve. That’s federated learning!
In a way, it’s good because it helps create more competition.
I was talking about AI training on AI output. AI requires genuine data; having a feedback loop makes models regress. See how AI makes yellow-tinted pictures because of the Ghibli AI thing.
Sure, that mainly applies when it’s the same model training on itself. If a model trains on a different one, it might pick up some good features from it, but the bad sides as well.
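For what it’s worth, you can see that same-model feedback loop in a toy example. This is just a Gaussian being refit to its own samples, not a real training run, but it’s analogous to the regression being described:

```python
# Toy illustration of a model repeatedly trained on its own output:
# fit a Gaussian to samples drawn from the previous fit and watch the
# spread shrink. Not a real LLM training run, just an analogy.
import numpy as np

rng = np.random.default_rng(42)
mu, sigma = 0.0, 1.0      # generation 0: the "genuine data" distribution
n_samples = 20            # small finite sample per generation

for gen in range(1, 201):
    samples = rng.normal(mu, sigma, n_samples)   # the model's output
    mu, sigma = samples.mean(), samples.std()    # next model sees only that output
    if gen % 25 == 0:
        print(f"generation {gen:3d}: mu={mu:+.3f} sigma={sigma:.3f}")

# sigma tends toward zero over the generations: the tails of the original
# distribution are the first thing to disappear once the model only ever
# sees its own samples.
```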
AI requires genuine data, period. Go read about it instead of spewing nonsense.
If they weren’t trained on the same data, it ends up similar
Training inferior models on a superior model’s output can narrow the gap between the two. It won’t be optimal by any means and you might fuck up its future learning, but it will work to an extent.
The data you feed it should be good quality though
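If anyone wants to see what “training the smaller model on the bigger one’s output” usually looks like, here’s a rough knowledge-distillation sketch, assuming PyTorch; the model sizes, temperature and random batch are all made up for illustration, not any specific company’s pipeline:

```python
# Rough knowledge-distillation sketch: a small "student" learns to match the
# softened output distribution of a larger "teacher". Everything here
# (sizes, batch, temperature) is illustrative, not a real training recipe.
import torch
import torch.nn as nn
import torch.nn.functional as F

teacher = nn.Sequential(nn.Linear(32, 128), nn.ReLU(), nn.Linear(128, 10))  # "superior" model
student = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 10))    # "inferior" model
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
temperature = 2.0

x = torch.randn(64, 32)  # stand-in batch; a real run would use genuine data here

with torch.no_grad():
    teacher_logits = teacher(x)          # soft targets produced by the big model

student_logits = student(x)
loss = F.kl_div(                         # match the softened distributions
    F.log_softmax(student_logits / temperature, dim=-1),
    F.softmax(teacher_logits / temperature, dim=-1),
    reduction="batchmean",
) * temperature ** 2

optimizer.zero_grad()
loss.backward()
optimizer.step()

# The student inherits the teacher's behaviour, mistakes included, which is
# the "you might fuck up its future learning" caveat above.
```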
Maybe, but that doesn’t change the fact that it was trained on stolen artwork and is being used to put artists out of work. I think that, and the environmental effect, are better arguments against AI than some subjective statement about whether or not it’s good.