AbuTahir@lemm.ee to Technology@lemmy.world · English
Apple just proved AI "reasoning" models like Claude, DeepSeek-R1, and o3-mini don't actually reason at all. They just memorize patterns really well. (archive.is)
cross-posted to: apple_enthusiast@lemmy.world
auraithx@lemmy.dbzer0.com · English
Define reason. Like humans? Of course not. Models lack intent, awareness, and grounded meaning. They don't "understand" problems, they generate token sequences.
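For what "they generate token sequences" means in practice, here is a minimal sketch of autoregressive next-token generation. The model, prompt, and library here (GPT-2 via Hugging Face transformers) are illustrative assumptions, not anything referenced in the thread or the article.

```python
# Minimal sketch: an LLM repeatedly predicts a distribution over the next
# token and appends the chosen token to the sequence, one step at a time.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "If I have 3 apples and eat one, I have"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

# Greedy decoding: pick the most likely next token at each step.
output_ids = model.generate(input_ids, max_new_tokens=10, do_sample=False)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Whatever looks like "reasoning" in the output is produced by this same next-token loop; whether that counts as reasoning is exactly what the thread is arguing about.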
As it is defined in the article.