• kwedd@feddit.nl · 10 months ago

    Is there no risk of the LLM hallucinating cases or laws that don’t exist?

    • Bipta@kbin.social · 10 months ago

      GPT-4 is dramatically less likely to hallucinate than GPT-3.5, and we're barely starting the exponential growth curve.

      Is there a risk? Yes. Humans do it too though, if you think about it, and all AI has to do is be better than humans, a milestone that's already within sight.