AbuTahir@lemm.ee to Technology@lemmy.world · English · edited, 8 hours ago

Apple just proved AI "reasoning" models like Claude, DeepSeek-R1, and o3-mini don't actually reason at all. They just memorize patterns really well. (archive.is)

cross-posted to: apple_enthusiast@lemmy.world
MangoCats@feddit.it · English · 11 hours ago

My impression of LLM training and deployment is that the workload is massively parallel in nature: it could be executed one instruction at a time, but in practice it isn't.
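To illustrate the point about parallelism (a toy sketch, not from the thread): the core operation in an LLM layer is a matrix-vector product, and each output element depends only on one weight row and the input, so the rows are independent. The `dense_layer` function below is a hypothetical example computing it strictly one multiply-add at a time, which gives the same result a parallel implementation would.

```python
# Toy dense layer y = W @ x, computed sequentially.
# Each y[i] uses only W[i] and x, so the outer loop's
# iterations are independent and could run in parallel.

def dense_layer(W, x):
    y = [0.0] * len(W)
    for i, row in enumerate(W):      # independent across i: parallelizable
        for w_ij, x_j in zip(row, x):
            y[i] += w_ij * x_j       # one multiply-add "instruction" at a time
    return y

W = [[1.0, 2.0], [3.0, 4.0]]
x = [10.0, 20.0]
print(dense_layer(W, x))  # [50.0, 110.0]
```

In practice, GPU kernels execute thousands of these independent multiply-adds simultaneously rather than looping, which is the point of the comment above.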