https://twipped.social/@twipped/114662771295312758
article they are referencing: https://futurism.com/atari-beats-chatgpt-chess
I find this questionable; people forget that a locally-hosted LLM is no more taxing than a video game.
Why do you believe this? It has continued to get dramatically better over the past 5 years. Look at where GPT2 was in 2019.
It is not consistently usable for coding. If you are hoping this slop-producing machine is consistently useful for anything then you are sorely mistaken. These things are most suitable for applications where unreliability is acceptable.
Do you not see the obvious contradiction here? If you are sure that this is not going to get better and it’s not profitable, then you have nothing to worry about in the long-term about careers being replaced by AIs.
Google did this intentionally as part of enshittification.
So read and learn.
Fair enough. It’s not going to get better, because the fundamental problem is that AI as represented by, say, ChatGPT doesn’t know anything. It has no understanding of anything it’s “saying”. Therefore, any results derived from ChatGPT or its equivalents will need to be double-checked in any serious endeavor. So yes, it can poop out a legal brief in two seconds, but that brief still has to be revised, refined, and inevitably fixed when the model hallucinates precedent citations and just about anything else. That core problem will never get better. It might get faster. It might “sound” “more human”. But it won’t get better.
Well, tell that to the half a million people laid off in the last couple of years. The damage is done. Also, the bubble is still growing, and if you haven’t noticed what AI has done to the HR industry, let me summarize it thusly: it has destroyed it.
Well, yes. Every company that has chosen to promote and focus on AI has done so intentionally. That doesn’t mean it’s good. If AI weren’t the all-hype vaporware it is, this wouldn’t have been an option. If OpenAI had been honest about it and said “it’s very interesting and we’re still working on it” instead of “it’s absolutely going to change the world in six months”, this wouldn’t be the unusable shitpile it is.
I don’t think we disagree that much.
But still, I cringe when someone implies open-model locally-hosted AIs are environmentally problematic. They have no sense of scale whatsoever.
Good, so we agree that there is the potential for long-term damage. In other words, AIs are a long-term threat, not just a short-term one. Maybe the bubble will pop but so did the dotcom bubble and we still have the internet.
No, I think enshittification started well before 2022 (ChatGPT). Sure, even before that, LLMs were churning out SEO garbage webpages that Google was surfacing in results, so you can blame AI in that regard – but I don’t believe for a second that Google couldn’t have found a way to filter those kinds of results out. The user-negative feature was profitable for them, so they didn’t fix it. If LLMs hadn’t been around, they would have found other ways to make search more user-negative (and they probably did indeed employ such techniques).