• JDubbleu@programming.dev

    The thing is, LLMs are extremely useful at aiding humans. I use one all the time at work, and it has made me faster at my job, but left unchecked they do really stupid shit.

    • CoderKat@lemm.ee

      I agree they can be useful (I’ve found intelligent code snippet autocompletion to be great), but it’s really important that the humans using the tool are very skilled and aware of the limitations of AI.

      E.g., my usage generates only very small amounts of code (usually a few lines), and I have to read those lines very carefully to make sure they're correct. It never generates anything innovative; it simply guesses what I was going to type anyway, so it only saves me the time spent typing, and the AI is by no means in charge of logic. It's also wrong a lot of the time. Anyone who lets AI generate a substantial amount of code, or code they don't understand thoroughly, is both a fool and a danger.

      It does save me time, especially on boilerplate and common constructs (a rough example of what I mean is below), but it's certainly not revolutionary, and it's far too inaccurate to do the kinds of things non-programmers tend to think AI can do.
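
      To give a concrete (and entirely made-up) sense of the kind of few-line boilerplate completion I'm talking about: you type the predictable part yourself, the tool suggests the obvious remainder, and you still read every guessed line before accepting it. Something like this Python sketch, where the class, fields, and function name are all invented for illustration:

      ```python
      from dataclasses import dataclass

      @dataclass
      class User:
          id: int
          name: str
          email: str

      # Typing just "def to_dict" is often enough for the completion to
      # suggest the obvious field-by-field body. It only saves keystrokes;
      # I still verify that every field it guessed actually exists.
      def to_dict(user: User) -> dict:
          return {
              "id": user.id,
              "name": user.name,
              "email": user.email,
          }
      ```

      Nothing here is logic the AI "decided"; it's exactly the code I was about to type anyway, which is why reviewing it takes seconds rather than minutes.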