• blargerer@kbin.social
    10 months ago

    Hallucination isn’t a solvable quirk of GPTs; it’s their function. You can’t get rid of it by throwing more money at the problem, you’d need another idea.

    • JTugger@lemmy.world
      10 months ago

      There are tools to manage major hallucinations, and more are coming: automated fact-checking, pattern analysis, multi-layer analysis, etc.

      Yes, there are functional mechanisms that power hallucinations, especially in the probability models. But there are some powerful tools that automate analysis of the outputs and rework them for accuracy. Those are likely to improve enough to eventually reach a level of trust that is sufficient for many business use cases.
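
      To make that concrete, here’s a rough sketch of the kind of generate-check-rework loop I mean. Everything in it is a hypothetical stand-in: in a real pipeline, generate() would be an actual LLM API call and find_unsupported_claims() a retrieval- or rules-based checker.

      ```python
      # Minimal sketch of an automated output-verification loop.
      # All functions are hypothetical stand-ins, not a real library's API.

      def generate(prompt: str) -> str:
          """Hypothetical model call; returns a draft answer."""
          return f"Draft answer for: {prompt}"

      def find_unsupported_claims(answer: str) -> list[str]:
          """Hypothetical checker; returns claims it could not verify."""
          return []  # an empty list means every claim passed the check

      def answer_with_checks(prompt: str, max_rounds: int = 3) -> str:
          """Generate an answer, then rework it until the checker
          passes or the retry budget runs out."""
          answer = generate(prompt)
          for _ in range(max_rounds):
              flagged = find_unsupported_claims(answer)
              if not flagged:
                  return answer  # checker found nothing to object to
              # Feed flagged claims back in and ask for a corrected draft.
              answer = generate(
                  f"{prompt}\n\nRevise this draft; these claims failed "
                  f"verification: {flagged}\n\nDraft:\n{answer}"
              )
          return answer  # best effort after exhausting the budget

      if __name__ == "__main__":
          print(answer_with_checks("What year was the Eiffel Tower completed?"))
      ```

      The point isn’t that this loop is foolproof; it’s that the checking layer sits outside the model, so it can improve independently of the generator.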