Something I’ve found disappointing in the “AI conversation” around me …
… there hasn’t been enough honest introspection about how this whole thing feels and likely will feel.
Like, there’s something disturbing in AI’s first “success” being “art” and “music”.
There’s something disturbing about how we were never going to be able to help ourselves & are compelled to make things like LLMs, yet can still be frightened by their implications.
anger v hype leaves all that out
I mean, it’s a little arbitrary to say “AI begins here”. They’re trained off datasets, but I would say that, for example, OCR is very successful, has been around for a while, and definitely uses machine learning.
Ditto for speech recognition.
On the other hand, none of these are capable of generalized problem-solving either, i.e. AGI, which is really the sort of thing I’m usually thinking of as significant.
I feel like people don’t discuss it much, because it would mostly amount to “Capitalism, amiright?”. It’s not surprising that companies have no consideration for their actions beyond how they affect shareholder value.
I’m certainly not fond of artistic careers no longer being viable, especially since LLMs hardly create new jobs to catch the people they displace.
Also really not a fan of all the spam that killed internet search, nor the climate impact. For a moment, I felt like we had the IT industry back on track, after cryptomining folded. Nope, here’s a way to burn tons of energy for you to generate some text or image that is unlikely to contribute much to anything.
At the same time, of course, there are some opportunities there. Mozilla is generating alt texts for images, so that visually impaired folks have a description to go off of. Like, that feels worth it.
If we switch to 100% renewable energy and solve the unemployment problems (which I’m not holding my breath for), then I would also be on board with having some fun with it. Then we can build videogames with tons of text in there, voiced by some AI.
Like, that’s definitely where feeling comes in. If the ethics of it are garbage, then I cannot get excited about dicking around with it. I know many people ignore the ethics, but that is just weird to me.
Personally, I am hopeful about it. For a long time, technology has been a factor in a bigger slow-motion collapse of the economy’s ability to sustain the lives of regular people, and the recent breakthroughs in machine learning do contribute to that collapse. But it’s not totally monopolized: local models are becoming viable and a lot of the work is open sourced, so the power of this stuff goes to anyone who wants it and has an idea of what they would want to do with it. Massive change is inevitable; the question is just what direction it goes in.
While LLMs might not be the path to AGI (though they might be), there’s still quite the difference between GPT-2 and GPT-4. What’s GPT-5 going to be like? Or 8? No one can know how good it can get, even if it’s just faking intelligence. At least one thing is for sure: the current version of it is the worst it’ll ever be.