• Allseer · 1 year ago

    I like your definition of these AI tools. It feels broad enough to cover all of the recent accomplishments so many are praising.

    Many people aren’t able to recognize that the software is just a tool, and that only gets harder as it becomes more autonomous.

    • agent_flounder@lemmy.one · 1 year ago

      I think what gets lost in translation with LLMs (and machine vision and similar ML tech) is that it isn’t magic and it isn’t emergent behavior. It isn’t truly intelligent.

      LLMs do a good job of tricking us into thinking they are more than they are. They generate a seemingly appropriate response to input based on training, but it’s nothing more than a statistical model of the most likely chain of words in response to another chain of words, learned from questions and “good” human responses.

      There is no understanding behind it. No higher cognitive process. Just “what words go next, based on Q&A training data.” Which is why we get well-written answers that are often total bullshit.
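
      The “what words go next” idea can be sketched with a toy bigram model. To be clear, this is just an illustration of the statistical principle, not how real LLMs are built (they use neural networks over huge token vocabularies), and the corpus here is made up:

      ```python
      # Toy next-word predictor: count which word follows which in a
      # training corpus, then always emit the most frequent continuation.
      from collections import Counter, defaultdict

      corpus = "the cat sat on the mat the cat ate the fish".split()

      # follows[prev] counts every word observed right after `prev`.
      follows = defaultdict(Counter)
      for prev, nxt in zip(corpus, corpus[1:]):
          follows[prev][nxt] += 1

      def most_likely_next(word):
          # Most frequent word seen after `word` in the training text.
          return follows[word].most_common(1)[0][0]

      print(most_likely_next("the"))  # -> "cat" ("the cat" appears twice)
      ```

      Scale that same idea up by billions of parameters and you get fluent text with zero understanding attached.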

      Even so, the tech could easily upend many writing careers.

      • Allseer · 1 year ago

        I’ve had the GPT-3.5 model give me a made-up source for research. Either that, or it told me the source material was related to what I was researching when it wasn’t. Either way, it was a BS moment; I believe that’s called a hallucination.