AbuTahir@lemm.ee to Technology@lemmy.world · English · edited 26 days ago
Apple just proved AI "reasoning" models like Claude, DeepSeek-R1, and o3-mini don't actually reason at all. They just memorize patterns really well. (archive.is)
cross-posted to: apple_enthusiast@lemmy.world
reksas@sopuli.xyz · 7 days ago:
does ANY model reason at all?

MrLLM@ani.social · 7 days ago:
I think I do. Might be an illusion, though.

4am@lemm.ee · 7 days ago:
No, and to make that work using the current structures we use for creating AI models, we'd probably need all the collective computing power on Earth at once.

Miles O'Brien@startrek.website · 7 days ago:
… So you're saying there's a chance?

(reply, unattributed):
10^36 FLOPS, to be exact.

Refurbished Refurbisher@lemmy.sdf.org · 7 days ago:
That sounds really floppy.

auraithx@lemmy.dbzer0.com · edited 7 days ago:
Define reason. Like humans? Of course not. Models lack intent, awareness, and grounded meaning. They don't "understand" problems; they generate token sequences.

(reply, unattributed):
as it is defined in the article
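The "they generate token sequences" point can be made concrete with a toy sketch: an autoregressive generator that only ever predicts the next token from learned statistics. The hardcoded bigram table below is a made-up stand-in for real model weights, and greedy decoding stands in for real sampling; this is an illustration of the mechanism, not how any production LLM is implemented.

```python
# Toy autoregressive generation: nothing here "understands" the prompt;
# it just looks up which token most often follows the current one.
# BIGRAMS is a hypothetical stand-in for billions of learned weights.
BIGRAMS = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.9, "ran": 0.1},
    "sat": {"down": 1.0},
    "dog": {"ran": 1.0},
    "ran": {"away": 1.0},
}

def generate(prompt: str, max_tokens: int = 5) -> str:
    tokens = prompt.split()
    for _ in range(max_tokens):
        candidates = BIGRAMS.get(tokens[-1])
        if not candidates:
            break  # no learned continuation for this token
        # Greedy decoding: append the single most probable next token.
        tokens.append(max(candidates, key=candidates.get))
    return " ".join(tokens)

print(generate("the"))  # → the cat sat down
```

Scaled up with enough parameters and data, this kind of pattern completion can look a lot like reasoning, which is exactly the distinction the thread is arguing about.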