I’m getting tired of repeating this, but language models are incapable of doing math. They generate text that has the appearance of a mathematical explanation, but there is no incentive or mechanism for it to be accurate.
Hikaru Nakamura tried to play a game of chess against ChatGPT, and it started making illegal moves after about 10 moves. When he tried to correct it, it apologized, gave the wrong reason for why the move was illegal, and then followed up with another illegal move. That’s when I knew that LLMs were just fragile toys.
It is after all a Large LANGUAGE Model. There’s no real reason to expect it to play chess.
There is. The general media keeps calling these LLMs "AI," and AIs have been playing chess and winning for decades.
Yeah, for that we’d need a Gigantic LANGUAGE Model.