Clinicallydepressedpoochie@lemmy.world to Showerthoughts@lemmy.world · edited, 7 days ago:
If AI was going to advance exponentially I'd have expected it to take off by now.

justOnePersistentKbinPlease@fedia.io · 7 days ago:
And the single biggest bottleneck is that none of the current AIs "think". They. Are. Statistical. Engines.

Caveman@lemmy.world · 7 days ago:
How closely do you need to model a thought before it becomes the real thing?

justOnePersistentKbinPlease@fedia.io · 7 days ago:
Need it to not exponentially degrade when AI content is fed in. Need creativity to be more than random chance deviations from the statistically average result in a mostly stolen dataset taken from actual humans.

daniskarma@lemmy.dbzer0.com · 6 days ago:
Maybe we are statistical engines too. When I hear people talk, they are also mostly repeating the most common sentences they've heard elsewhere anyway.

themurphy@lemmy.ml · 7 days ago:
And it's pretty great at it. AI's greatest use case is not LLMs; people treat it like that because it's the only kind they can relate to. AI is so much better at many other tasks.

YesButActuallyMaybe@lemmy.ca · 7 days ago:
Markov chains with extra steps

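For readers unfamiliar with the quip: a first-order Markov chain picks each next word using only the single previous word, whereas an LLM conditions on a long context window (the "extra steps"). A minimal sketch of the Markov-chain version, with an invented toy corpus for illustration:

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    chain = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        chain[prev].append(nxt)
    return chain

def generate(chain, start, length=10):
    """Walk the chain, sampling each next word from observed successors."""
    out = [start]
    for _ in range(length - 1):
        successors = chain.get(out[-1])
        if not successors:
            break  # dead end: the last word never had a successor
        out.append(random.choice(successors))
    return " ".join(out)

corpus = "the cat sat on the mat and the dog sat on the rug"
chain = build_chain(corpus)
print(generate(chain, "the"))
```

Every generated transition is just a frequency-weighted lookup in the training text, which is the sense in which both systems are "statistical engines".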
Xaphanos@lemmy.world · 7 days ago:
You're not going to get an argument from me.

moonking@lemy.lol · 7 days ago:
Humans don't actually think either; we're just electricity jumping across nearby neural connections that formed through repeated association. Add to that that there's no free will, and you start to see how "thinking" is an immeasurable metric.