• PixelPilgrim@lemmings.world · 10 hours ago

    I contributed some open-source code to GitHub that I made via an LLM AI. I still had to test it and figure out the architecture myself.

  • Dave@lemmy.nz · 1 day ago

    Wasn’t there recently an article on Lemmy about all the bullshit AI pull requests that FOSS maintainers have to put up with?

  • pixxelkick@lemmy.world · 2 days ago

    All over tbh, most devs are using it to some degree now.

    However, it’s not an “AI” FOSS contribution; it’ll be Doug contributing, or Pardep, or whoever.

    They just quietly used AI like a normal person for some basic parts of the code to get it done faster, then tweaked it to look better.

    I’d expect most FOSS projects with contributions in the past six months have bits and pieces, a line here or there, written with AI.

    That’s the thing: when the AI performs well, you wouldn’t even be able to tell AI was used.

    • jagged_circle@feddit.nl · 20 hours ago

      Yeah, I mean most FOSS projects have had code copied from Stack Exchange for decades.

      AI mostly just copies from Stack Exchange too, so it’s really just copying from Stack Exchange with extra steps.

    • akademy@lemm.ee · 1 day ago

      Most?

      Where did you get that information from?

      From my observations, it’s barely anyone.

      • pixxelkick@lemmy.world · 17 hours ago

        Then you are deeply out of touch with those communities. I rarely encounter anyone who hasn’t used LLMs in their coding workflow in some way.

    • technocrit@lemmy.dbzer0.com · 1 day ago

      “AI” is nowhere because it doesn’t exist. Sure, there are programs that are good at summarizing Stack Exchange, but is that really so amazing? Maybe it saves devs a few seconds? Do we credit “AI” with amazing writing when people use grammar correction? The hype is so inane. Don’t feed into it with this nonsense.

      As the article explains, they haven’t been able to find any meaningful contributions to actual problems. I’m sure that plagiarized summaries can help with your boilerplate, etc., but that’s not “AI”.

      • FrederikNJS@lemm.ee · 11 hours ago

        “AI” is a very broad term. Back when I went to university, my AI course started out with Wumpus World. While this is an extremely simple problem, it’s still considered “AI”.

        The enemies in computer games that are controlled by the computer are also considered “AI”.

        Machine learning algorithms, like recommender systems and image recognition, are also considered “AI”.

        LLMs like ChatGPT and Claude are also “AI”.

        None of these things are conscious, self-aware, or intelligent, yet they are part of the field called “AI”.

        These are, however, not “AGI” (Artificial General Intelligence). AGI is when the machine becomes conscious and self-aware. This is the scenario that all the sci-fi movies portray. We are still very far away from this stage.
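        To make the game-enemy point concrete, here’s a hypothetical sketch (the function and grid coordinates are invented for illustration): the kind of rule-based logic that ships as game “AI” is often nothing more than a few if-statements, with no learning or awareness involved.

        ```python
        # Hypothetical example: classic "chase the player" game AI.
        # No learning, no awareness -- just deterministic rules on a grid.
        def enemy_step(enemy_pos, player_pos):
            """Move the enemy one tile toward the player."""
            ex, ey = enemy_pos
            px, py = player_pos
            # Close the horizontal gap first, then the vertical one.
            if ex != px:
                ex += 1 if px > ex else -1
            elif ey != py:
                ey += 1 if py > ey else -1
            return (ex, ey)
        ```

        Trivial as it is, logic like this falls squarely inside the academic field called “AI”, which is the point: the label covers far more than conscious machines.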

      • jagged_circle@feddit.nl · 20 hours ago

        That’s like saying search engines don’t exist.

        AI definitely exists. It’s basically just a slightly faster way to get code from Stack Exchange, except with less context and more uncertainty.

      • pixxelkick@lemmy.world · 1 day ago

        If the only point you can make is picking apart whether LLMs “count” as AI, then sorry mate, but 2022 called, it wants its discussion back.

        No one really cares about this distinction anymore. It’s like literally vs figuratively.

        LLMs are branded under the concept of AI; arguing that they don’t count is not a discussion people in the industry really care about anymore.

        • bcovertigo@lemmy.world · 21 hours ago

          Yes, we do care that it’s unintelligent, because that’s the reason it can’t be trusted with anything important. This is not being pedantic. This technology is unreliable dogshit. We’ll still be having this conversation in 2030 if it hasn’t cooked us all or lost its undeserved hype.

          • pixxelkick@lemmy.world · 18 hours ago

            we do care that it’s unintelligent

            Lol, wait, is that why you were contending the “AI” title?

            lmao

    • pemptago@lemmy.ml · 1 day ago

      quietly used AI like a normal person

      Is AI use normal, though? Maybe for you and many others, but the existence of these communities and articles, and of folks who just don’t get much out of it despite the industry cramming it down everyone’s throat, would suggest it’s anything but normal.

      • pixxelkick@lemmy.world · 1 day ago

        I work at a very big company with hundreds of developers.

        We got mandatory training a while ago, and it’s now very much normalized as a concept.

        It’s no longer a question of whether a dev uses it; it’s a question of how much.

        Some use it rarely, others a lot.

        • pemptago@lemmy.ml · 1 day ago

          I don’t doubt it’s normalized in big companies. I imagine the bigger the company, the more AI they use. Big companies have the most to gain from the reduced-workforce AI sales pitch, and the biggest (Meta, Google, Microsoft, etc.) need a return on their AI investment (I’ve yet to hear of any demonstrable ROI).

          It makes sense that anyone in those companies would see it as normal, but it strikes me as observer bias or frequency illusion. There’s so much AI hype. That is, after all, where the ad money and investments are flowing. But I also see a ton of skepticism, fatigue, and general disenchantment with it, which aligns with my experience: it doesn’t compare to a good system of books, notes, and bookmarks, and that’s not even considering the costs (monetary, environmental, social, and political), which seem completely oversized. So that’s why I remain skeptical of the claim that normal people use AI.

          • pixxelkick@lemmy.world · 18 hours ago

            I also participate in nearly a dozen different coding-oriented Discord channels covering numerous frameworks/languages.

            Across countless individuals, ranging from total newbs to professionals, basically everyone uses LLMs in some manner for coding, at least to some degree, and it’s an openly discussed and common topic.

            You are deeply out of touch with what programmers are actually doing if you seriously think that folks who still haven’t worked an LLM into their process aren’t the minority.

            • pemptago@lemmy.ml · 17 hours ago

              Confirmation bias and anecdotal information, but hey, feel free to speak in absolute terms. If you read the article, you’d realize you’re making a claim that not even Mark Zuckerberg, one month ago, was making.

    • i_stole_ur_taco@lemmy.ca · 2 days ago

      That’s exactly it and why I can’t take this article very seriously.

      Just because AI is writing some code doesn’t mean it gets credit as the developer. A human still puts their name beside it. They get all the credit, and all the responsibility.

      A piece of code I struggled with for days and some vibe-coded slop look identical in a PR.

      And for that reason we can be certain that tons and tons of FOSS projects are using it. And the maintainers might not even know it.

      • technocrit@lemmy.dbzer0.com · 1 day ago

        A piece of code I struggled with for days and some vibe-coded slop look identical in a PR.

        TBF that doesn’t say much for your coding.

        Just because people use generated slop, that doesn’t mean “AI” exists, much less that it’s making valuable contributions beyond summarizing/plagiarizing Stack Exchange.

      • pixxelkick@lemmy.world · 2 days ago

        Well there’s a huge difference between “slop” and actually fine code.

        As long as the domain space isn’t super esoteric, and the framework is fairly mature, most LLMs will generate not half bad results, enough to get you 90% of the way there.

        But then that last 10% of refining and cleaning up the code, fixing formatting issues, tweaking names, etc., is what separates the slop from the “you can’t even tell an AI helped with this” code.

        I have projects where probably a good 5% to 10% of the code is AI-generated, but you’d never know, because I still did a second pass over it to sanity-check it and make sure it’s good.
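        As a hypothetical illustration of that last 10% (both functions are invented examples, not from any real project): the first is the kind of working-but-generic code an LLM often emits; the second is the same logic after a human cleanup pass.

        ```python
        # Typical raw LLM output: correct, but generic names and nested conditions.
        def process_data(data):
            result = []
            for item in data:
                if item is not None:
                    if item > 0:
                        result.append(item * 2)
            return result

        # The same logic after the human refinement pass: descriptive name,
        # flattened conditions, idiomatic comprehension. In a PR, nothing
        # about this hints that an AI wrote the first draft.
        def double_positive_values(values):
            return [v * 2 for v in values if v is not None and v > 0]
        ```

        Both behave identically; the second is just the version a reviewer would wave through.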

        • ragepaw@lemmy.ca · 1 day ago

          100%

          I took some code and scripts I wrote and passed them through AI. A lot of it was tightened up, and even better, it added comments and turned some things into functions so they were reusable.

          I reviewed everything it did to sanity-check it. Really, use it like a junior developer: “Hey helper, write me a piece of code that does X.” You always double-check the junior.
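          A hypothetical before/after sketch of the kind of cleanup described (the helper name and behavior are invented for illustration): file-reading boilerplate that a script repeated inline, pulled out into one commented, reusable function.

          ```python
          # Before: every script repeated this inline.
          #   f = open("somefile.txt")
          #   lines = [l.strip() for l in f if l.strip()]
          #   f.close()
          #
          # After: extracted into a documented, reusable helper.
          def read_nonempty_lines(path):
              """Return the stripped, non-empty lines of a text file.

              Pulled out of the scripts so the open/strip/filter
              boilerplate lives in one place instead of being copied.
              """
              with open(path, encoding="utf-8") as f:
                  return [line.strip() for line in f if line.strip()]
          ```

          The AI suggests the extraction; the human still reads it, confirms the behavior matches the old inline code, and signs off, exactly as with a junior’s PR.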