A prevailing sentiment online is that GPT-4 still does not understand what it talks about. We can argue semantics over what “understanding” truly means. I think it’s useful, at least today, to draw the line at whether GPT-4 has successfully modeled parts of the world. Is it just picking words and connecting them with correct grammar? Or does the token selection actually reflect parts of the physical world?
One of the most remarkable things I’ve heard about GPT-4 comes from an episode of This American Life titled “Greetings, People of Earth”.
LLMs do not grow up. Without training they don’t function properly. I guess in this aspect they are similar to humans (or dogs or anything else that benefits from training), but that still does not make them intelligent.
What does it mean to “grow up”? LLMs get better at their tasks during training, just as humans do while growing up. You have to clearly define the terms you use.
You used the term, and I was using it in the same sense you were. Why are you quibbling over semantics here? It doesn’t change the point.
Yes, I used the term because “growing up” has a well-defined meaning with humans. It doesn’t with LLMs, so I didn’t use it with LLMs.
Did you have a point or are you only trying to argue semantics?
You should ask yourself that question.
So, no point? Cool.
I don’t know what your point was in asking that question; you should know yourself.