Thanks for this very yummy response. I’m having to read up on the technicalities you’re touching on, so bear with me!
According to Wikipedia, the neocortex is only present in mammals, but as I’m sure you’re aware, mammals are not the only creatures to exhibit intelligence. Are you arguing that only mammals are capable of “general intelligence”? I can get on board with what you’re saying as *one way* to develop AGI - work out how brains do it and then copy that - but I don’t think it’s a given that that is the *only* way to AGI, even if we were to agree that only animals with a neocortex can have “general intelligence”. So the fact that a given class of machine architecture doesn’t replicate a neocortex would not, in my mind, make that architecture incapable of ever achieving AGI.
As for your point about the importance of sensorimotor integration, I don’t see that being problematic for any kind of modern computer software - we can easily hook up any number of sensors to a computer, and likewise we can hook the computer up to electric motors, servos and so on. We could easily “install” an LLM inside a robot and allow it to control the robot’s movement based on the sensor data. Hobbyists have done this already, many times, and it would not be hard to add a sensorimotor stage to an LLM’s training.
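Just to make the sense-act idea concrete, here’s a minimal sketch of the kind of loop a hobbyist build would use: read sensors, describe them to the model in text, and turn the reply into a motor command. Every function name here (`read_sensors`, `query_llm`, `drive_motors`) is an illustrative stand-in, not a real robot or LLM API - the point is just how simple the plumbing is.

```python
# Hypothetical sense-act loop wrapping an LLM. All three helper
# functions are stand-ins so the sketch runs on its own.

def read_sensors():
    # Stand-in for real sensor reads (distance sensor, camera, IMU, ...).
    return {"distance_cm": 42, "heading_deg": 90}

def query_llm(prompt):
    # Stand-in for a call to an actual LLM; a trivial rule plays the
    # model's role here so the example is self-contained.
    if "distance_cm: 42" in prompt:
        return "forward"
    return "stop"

def drive_motors(action):
    # Stand-in for sending commands to servos / a motor controller.
    return f"motors set to: {action}"

def sense_act_step():
    readings = read_sensors()
    prompt = ("Sensors -> "
              + ", ".join(f"{k}: {v}" for k, v in readings.items())
              + ". Reply with one of: forward, left, right, stop.")
    action = query_llm(prompt)
    return drive_motors(action)

print(sense_act_step())
```

In a real build you’d swap `query_llm` for an API call and `drive_motors` for whatever your motor controller exposes, but the loop itself doesn’t change.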
I do like what you’re saying and find it interesting and thought-provoking. It’s just that what you’ve said hasn’t convinced me that LLMs are incapable of ever achieving AGI for those reasons. I’m not of the view that LLMs *are* capable of AGI, though - it’s more that I don’t feel well enough informed to have a firm view either way. It does seem unlikely to me that we’ve currently reached the limits of what LLMs are capable of, but who knows.
I think current LLMs are already intelligent. I’d also say cats, mice, fish, and birds are intelligent - to varying degrees, of course.
If you’re referring to my comment about hobbyist projects, I was just thinking of the sorts of things you’ll find by searching sites like YouTube - perhaps this one is a good example (but I haven’t watched it, as I’m avoiding YouTube). I don’t know if anyone has tried to incorporate a “learning to walk” type of stage into LLM training, but my point is that it would be perfectly possible, if there were reason to think it would give the LLM an edge.
The matter of how intelligent humans are is another question, and it’s relevant because, AFAIK, when people talk about AGI now, they mean an AI that can do better on average than a typical human at any arbitrary task. That’s not a particularly high bar; we’re not talking about superintelligence, I don’t think.