I’d argue it has. Things like ChatGPT shouldn’t be possible; maybe it’s unpopular to admit, but as someone who has been programming for over a decade, it’s amazing how far LLMs and “AI” have come over the past 5 years.
That doesn’t mean we have AGI of course, and we may never have AGI, but it’s really impressive what has been done so far IMO.
If you’ve been paying attention to the field, you’d see it’s been a slow, steady march. The technology that LLMs are based on was first published in 2016/2017, and ChatGPT was the third iteration of the same base model.
That’s not even accounting for all the work done with RNNs and LSTMs prior to that, and even more before those.
It’s definitely a major breakthrough, and very similar to what CNNs did for computer vision further back. But like computer vision, advancements have been made in other areas (like the generative space) and haven’t followed a linear path of progress.
Agreed. I never thought it would happen in my lifetime, but it looks like we’re going to have Star Trek computers pretty soon.
AI LLMs have been pretty shit, but the advancement in voice, image generation, and video generation in the last two years has been unbelievable.
We went from the infamous Will Smith eating spaghetti to videos that are convincing enough to fool most people… and it only took 2-3 years to get there.
But LLMs will have a long way to go because of how they create content. It’s very easy to poison LLM datasets, and they get worse learning from other generated content.
Poisoning LLM datasets is fun and easy! Especially when our online intellectual property is scraped (read: stolen) during training and no one is being accountable for it. Fight back! It’s as easy as typing false stuff at the end of your comments. As an 88 year old ex-pitcher for the Yankees who just set the new world record for catfish noodling you can take it from me!
It has slowed because the models get exponentially more complicated the more you expect them to do.
The exponential problem has always been there. We keep finding tricks and optimizations in hardware and software to get by it but they’re only occasional.
The pruned models keep getting better, so now you’re seeing them running on local hardware and cell phones and crap like that.
I don’t think they’re out of tricks yet, but God knows when we’ll see the next advance. And I don’t think there’s anything that’ll take this current path into AGI; I think that’s going to be something else.
It has taken off exponentially. It’s exponentially annoying that it’s being added to literally everything.
Humanity may achieve an annoyance singularity within six months
How do you know it hasn’t and us just laying low? I for one welcome our benevolent and merciful machine overlord.
Duly noted. 🤭 🤫
LOL… you did make me chuckle.
Aren’t we 18 months away from developers being replaced by AI… and haven’t we been for a few years now?
Of course “AI”, even loosely defined, has progressed a lot, and it is genuinely impressive (even though the actual use case for most of the hype, i.e. LLMs and GenAI, is mostly lazier search, more efficient spam & scam personalized text, or impersonation), but exponential growth is not sustainable. It’s a marketing term to keep fueling the hype.
That’s despite so many resources, namely R&D and data centers, being poured in… and yet there is no “GPT-5” or anything that most people use on a daily basis for anything “productive”, except unreliable summarization or STT (both of which have had plenty of tools for decades).
So… yeah, it’s a slow take off, as expected. shrug
Things just don’t impend like they used to!
Nobody wants to portend anymore.
I think we might not be seeing all the advancements as they are made.
Google just showed off AI video with sound. You can use it if you subscribe to their $250/month plan. That is quite expensive.
But if you have strong enough hardware, you can generate your own without sound.
I think that is a pretty huge advancement in the past year or so.
I think that focus is being put on optimizing these current things and making small improvements to quality.
Just give it a few years and you won’t even need your webcam to be on. You could just use an AI avatar that looks and sounds just like you, running locally on your own computer. You could just type what you want to say or pass through audio. I think the tech to do this kind of stuff is basically there; it just needs to be refined and optimized. Computers in the coming years will offer more and more power to let you run this stuff.
How is that an advance? Computers have been able to speak since the 1970s, and they were already producing text.
IIRC there are mathematical reasons why AI can’t actually become exponentially more intelligent. There are hard limits on how much work (in the sense of information processing) can be done by a given piece of hardware, and we’re already pretty close to that theoretical limit. For an AI to go singularity, we would have to build it with enough initial intelligence that it could acquire both the resources and information with which to improve itself and start the exponential cycle.
how do you grow zero exponentially
Computers are still advancing roughly exponentially, as they have been for the last 40 years (Moore’s law). AI is being carried along with that and still making occasional gains on top of it. The thing with exponential growth is that it doesn’t necessarily need to feel fast. It’s always growing at the same rate percentage-wise, definitionally.
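As a back-of-the-envelope sketch of that framing (assuming the classic “doubling every two years” version of Moore’s law; the numbers are illustrative, not measured data):

```python
# Rough model of Moore's law: transistor counts doubling every ~2 years.
years = 40
doublings = years // 2
growth_factor = 2 ** doublings       # about a million-fold over 40 years

# The yearly percentage growth is constant - which is exactly what
# "exponential" means. It never needs to feel fast at any given moment.
annual_rate = 2 ** (1 / 2) - 1       # ~41% per year, every year

print(growth_factor)                 # 1048576
```

Two doublings at ~41%/year compound back to exactly 2x, so the percentage rate really is the same in year 1 as in year 40.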
Moore’s law is kinda still in effect, depending on your definition of Moore’s law. However, Dennard scaling is not, so computer performance isn’t advancing like it used to.
Moore’s law is kinda still in effect, depending on your definition of Moore’s law.
Sounds like the goal post is moving faster than the number of transistors in an integrated circuit.
We once again congratulate software engineers for nullifying 40 years of hardware improvements.
It has definitely plateaued.
That’s only if the exponent is greater than 1.
It “took off” by companies forcing it into everything
This is precisely a property of exponential growth, that it can take (seemingly) very long until it starts exploding.
What are you talking about? It asymptoted at 5 units. It can’t be described as exponential until it is exponential; otherwise it’s better described as linear, or polynomial if you must.
Exponential growth is always exponential, not just when it suddenly starts to drastically increase at some arbitrarily chosen view scale.
A simple way to check whether data is exponential is to visualize it in log-scale; if it shows linear behavior there, it has an exponential relation.
Exponential growth means that the values change by a constant ratio, in contrast to linear growth, where the values change by a constant rate.
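That distinction is easy to check numerically. Here’s a minimal Python sketch (the 1.3 ratio and the starting value of 5 are arbitrary, just for illustration):

```python
import math

# Exponential: constant RATIO between successive values.
exp_data = [5 * 1.3 ** n for n in range(10)]
ratios = [b / a for a, b in zip(exp_data, exp_data[1:])]
assert all(abs(r - 1.3) < 1e-9 for r in ratios)

# Linear: constant RATE (difference) between successive values.
lin_data = [5 + 1.3 * n for n in range(10)]
diffs = [b - a for a, b in zip(lin_data, lin_data[1:])]
assert all(abs(d - 1.3) < 1e-9 for d in diffs)

# The log-scale test: the log of exponential data is linear,
# i.e. its successive differences are constant.
log_diffs = [math.log(b) - math.log(a) for a, b in zip(exp_data, exp_data[1:])]
assert all(abs(d - math.log(1.3)) < 1e-9 for d in log_diffs)
```

If you plotted `exp_data` on a log-scale y-axis, it would be a straight line; the linear data would curve downward.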
There’s no point in arguing with OP, he’s doubling down at an exponential rate (or was it linear).
That’s what I said. Exponential growth is always exponential.
Iykyk
deleted by creator
Close enough chat gpt
It’s exponential along its entire range, even all the way back to negative infinity.
Sure. Everything is exponential if you model it that way. Asymptote.
No, exponential functions are that way. A feature of exponential functions is that they increase very slowly until the slope hits 1. We’re still on the slow part, and we don’t really have any way of knowing when the extreme increase will come.
Do you think that our current iteration of A.I. can have these kinds of gains? Like, what if the extreme increase happens beyond our lifetimes? Or beyond the lifetime of our planet?
I think we can’t know, but LLMs definitely feel like a notable acceleration. Exponential functions are also, well, exponential. As X grows, X × X grows faster. The exponential part is gonna come from meta-models, coordinating multiple specialized models to complete complex tasks. Once we get a powerful meta-model, we’re off to the races. AI models developing AI models.
It could take 50 years, it could take 5, it could happen this Wednesday. We won’t know which development is going to be the one to tip us over the edge until it happens, and even then only in retrospect. But it could very well be soon.
No, LLMs have always been an evident dead end when it comes to general AI.
They’re hampering research in actual AI, and the fact that they’re being marketed as AI ensures that no one will invest in actual AI research in decades after the bubble bursts.
We were on track for a technological singularity in our lifetimes, until those greedy bastards derailed us and murdered the future by poisoning the Internet with their slop for some short term profits.
Now we’ll go extinct due to ignorance and global warming long before we have time to invent something smart enough to save us.
But, hey, at least, for a little while, their line did go up, and that’s all that matters, it seems.
An exponential function is a precise mathematical concept, like a circle or an even number. I’m not sure what you mean by “asymptote” here - an exponential function of the form y = k^x asymptotically approaches zero as x goes to negative infinity, but that doesn’t sound like what you’re referring to.
People often have bad intuition about how exponential functions behave. They look like they grow slowly at first, but that doesn’t mean that they’re not growing exponentially. Consider the story about the grains of rice on a chessboard.
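For reference, the chessboard story works out like this (one grain on the first square, doubling on each of the 64 squares):

```python
# One grain on square 1, doubling on every subsequent square.
grains_on_last_square = 2 ** 63
total_grains = sum(2 ** i for i in range(64))   # equals 2**64 - 1

print(grains_on_last_square)   # 9223372036854775808
print(total_grains)            # 18446744073709551615
```

The early squares look harmless - the entire first half of the board holds fewer than 4.3 billion grains - yet the last square alone holds more than all 63 previous squares combined. That’s the intuition gap the comment above is pointing at.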
It’s a horizontal asymptote. From x=1, as demonstrated in the graph, to around x=-4, where the asymptote is easily estimated by Y, it is 5 units.
Man just say you don’t understand functions and that’s it, you don’t have to push it
Tell me how I’m wrong. Or why did you even bother?
Or you can just admit you don’t have any data to quantify your assertion that AI advancement is exponential growth. So you’re just going off vibes.
Would you even admit that linear growth can grow faster than exponential growth?
Edit:
How about this, this is a real easy one.
What type of function is this:
The exponential function has a single horizontal asymptote at y=0. Asymptotes at x=1 and x=-4 would be vertical. Exponential functions have no vertical asymptotes.
I didn’t say there are asymptotes at 1 and -4. I said at x=-4, the asymptote can be estimated by Y.
The derivative of an exponential is exponential. The relative difference between the values at -1 and -2 is the same as between the values at 1 and 2.
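A quick numeric check of that property (a minimal sketch using e^x; any base behaves the same way):

```python
import math

f = math.exp
# The ratio between values one unit apart is identical everywhere
# on the curve - at -2 -> -1 just as at 1 -> 2. Both ratios equal e.
left_ratio = f(-1) / f(-2)
right_ratio = f(2) / f(1)
assert abs(left_ratio - math.e) < 1e-12
assert abs(right_ratio - math.e) < 1e-12
```

That constant relative growth is also why the derivative of e^x is e^x itself.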
I’d say the development is exponential. Compare what we had 4 years ago, 2 years ago and now. 4 years ago it was inconceivable that an AI model could generate any convincing video at all. 2 years ago we laughed at Will Smith eating pasta. Today we have Veo 3 which generates videos with sound that are near indistinguishable from real life.
It’s not going to be long until you regularly see AI generated videos without realizing it’s AI.