The research from Purdue University, first spotted by news outlet Futurism, was presented earlier this month at the Computer-Human Interaction Conference in Hawaii and looked at 517 programming questions on Stack Overflow that were then fed to ChatGPT.

“Our analysis shows that 52% of ChatGPT answers contain incorrect information and 77% are verbose,” the new study explained. “Nonetheless, our user study participants still preferred ChatGPT answers 35% of the time due to their comprehensiveness and well-articulated language style.”

Disturbingly, programmers in the study didn’t always catch the mistakes being produced by the AI chatbot.

“However, they also overlooked the misinformation in the ChatGPT answers 39% of the time,” according to the study. “This implies the need to counter misinformation in ChatGPT answers to programming questions and raise awareness of the risks associated with seemingly correct answers.”

  • chaosCruiser

    That’s not very far in the future, so it’s going to be really exciting to see how that works out.

    • explodicle@sh.itjust.works

      Maybe only 51% of the code it writes needs to be good before it can self-improve. In which case, we’re nearly there!

      • AIhasUse@lemmy.world

        We are already past that. The 48% figure is from a version of ChatGPT (3.5) that came out a year ago; there has been a lot of progress since then.