AbuTahir@lemm.ee to Technology@lemmy.world · English · 16 hours ago
Apple just proved AI "reasoning" models like Claude, DeepSeek-R1, and o3-mini don't actually reason at all. They just memorize patterns really well. (archive.is)
cross-posted to: apple_enthusiast@lemmy.world
MangoCats@feddit.it · English · 20 hours ago
My impression of LLM training and inference is that the work is massively parallel in nature - it could in principle be executed one instruction at a time, but in practice it isn't.
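To illustrate that point (my own sketch, not something from the comment): the matrix-vector products at the core of a transformer layer can be computed one multiply-accumulate at a time or as a single vectorized operation; the result is identical, only the execution strategy differs. A minimal NumPy example with made-up shapes:

```python
import numpy as np

# Toy stand-in for one layer of an LLM forward pass: a matrix-vector product.
# The 512-dimensional shapes here are arbitrary, purely for illustration.
W = np.random.randn(512, 512)   # hypothetical weight matrix
x = np.random.randn(512)        # hypothetical activation vector

# "One instruction at a time": a plain double loop of multiply-accumulates.
# Mathematically identical to the parallel version, just far slower.
y_serial = np.zeros(512)
for i in range(512):
    for j in range(512):
        y_serial[i] += W[i, j] * x[j]

# Massively parallel in practice: the same computation expressed as one
# matmul, which SIMD units or a GPU execute concurrently.
y_parallel = W @ x

assert np.allclose(y_serial, y_parallel)
```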