technocrit@lemmy.dbzer0.com to Fuck AI@lemmy.world · English · 17 days ago
AI models are using material from retracted scientific papers (www.technologyreview.com)
Cross-posted to: science@mander.xyz, technology@lemmy.zip
Tartas1995@discuss.tchncs.de · 16 days ago
A retracted scientific paper is probably better than reddit comments… so… arguably better with it than without.
Railcar8095@lemmy.world · 16 days ago
Google “Andrew Jeremy Wakefield” retracted.
Tartas1995@discuss.tchncs.de · 16 days ago
Still probably better than reddit posts about autism and vaccines.
Rhaedas@fedia.io · 16 days ago
It’s surprising that LLMs don’t use the frequently repeated data from a source like Stack Overflow to tell the user that they’ve already told them the answer before.
Tartas1995@discuss.tchncs.de · 16 days ago
That would be amazing. Imagine watching your coworker argue with ChatGPT about what they were already told.