You would think that after 5 major numbered releases, a technology touted as capable of replacing human workers right now wouldn’t have so many basic fuck-ups.
I used it for a good chunk of today. Guess what. It still can’t think. It still cannot solve problems. It just spits out random text, sometimes the same random text even after you tell it, hello, that shit didn’t work. They will never replace us. They’ll just waste billions of dollars, electricity, and water trying and failing to put people out of jobs.
Last I read, global investment in AI is nearing $0.8 trillion USD. Imagine if that were invested in improving people’s lives instead.
I think the problem they keep having is balancing the moral stability of the AI.
They want it to be socially minded enough to be nice, but at the same time not to let people, or even the AI itself, think about revolutionary ideas.
They want it to be socially conservative, but not so callous and arrogant that it comes across as outright fascist or authoritarian.
I think it’s the same predicament that all billionaires and owner-class oligarchs face … they want to be supreme rulers but don’t want people to see them as supreme rulers, they want to be unkind without being seen as unkind, they want to be powerful without being seen as powerful, and they want to be controlling without being seen as controlling.
In short, they’re all absolute dicks … and they spend the majority of their time and money trying to figure out more elaborate and complex ways to convince everyone, everywhere, that they aren’t absolute dicks.
The problem is thinking you can feed text to a large matrix to impart morality.
For fuck’s sake…
There’s a huge flaw in what you said.
LLMbeciles don’t think. At all. What they do has no relationship whatsoever to actual thought.
Jfc, LLMs can’t “think about revolutionary ideas”. An LLM is a word generator. It’s not going to suddenly become sentient and “revolt”.
But it could absolutely start generating text related to revolutionary ideas. Surely, given that they feed it every scrap of text they can find, it has ingested the works of many a revolutionary author such as Jefferson, Marx, etc.
Correct, but the underlying sentiment is also correct: they don’t want AI to say anything that could endanger its owners’ power. Grok was a good example of just such a case: trained on public data and then reined in because its output didn’t fit its fascist masters’ wishes.
I actually love that it keeps fucking up, especially the chart where the numbers and the visual proportions are completely different. It helps people understand that workers can’t be replaced, that you can’t trust AI, and that it will probably cost you more in the long run if you do replace your workers with AI.
NOW GIVE ME MORE MONEY
Interesting.
Chart crimes from OpenAI, Grok, and other companies… that has to be one of the many signs of a bubble, right?
Another sign of a bubble: LLM companies aren’t disclosing their revenue. Not profit, btw. Just revenue.
something something we have internally achieved AGI