Lugh to Futurology (English) · 7 months ago

**Evidence is growing that LLMs will never be the route to AGI. They are consuming exponentially increasing energy, to deliver only linear improvements in performance.** (arxiv.org)
General_Effort@lemmy.world · 7 months ago

> In theory there’s an inflection point at which models become sophisticated enough that they can self-sustain with generating training data to recursively improve

That sounds surprising. Do you have a source?