You can hardly get online these days without hearing some AI booster talk about how AI coding is going to replace human programmers. AI code is absolutely up to production quality! Also, you’re all…
All over tbh, most devs are using it to some degree now.
However, it won’t show up as an “AI” FOSS contribution; it’ll be Doug contributing, or Pardep, or whoever.
They just quietly used AI like a normal person for some basic parts of the code to get it done faster, then tweaked it to look better.
I’d expect most FOSS projects with contributions in the past 6 months have bits and pieces, a line here or there, written by AI.
That’s the thing, when the AI performs well, you wouldn’t even be able to tell AI was used
Yeah, I mean most FOSS projects have had code copied from Stack Exchange for decades.
AI mostly just copies from Stack Exchange too, so it’s really just copying from Stack Exchange with extra steps.
Most?
Where did you get that information from?
From my observations, it’s barely anyone.
Then you are deeply out of touch with communities. I rarely encounter anyone who hasn’t used LLMs in their coding workflows in some way at all.
Is ai use normal though? Maybe for you and many others, but the existence of these communities, articles, and folks who just don’t get much out of it, despite the industry cramming it down everyone’s throat, would suggest it’s anything but normal.
I work at a very big company with hundreds of developers.
We got mandatory training a while ago, and it’s now very much normalized as a concept.
It’s no longer a question of whether a dev uses it; it’s a question of how much.
Some use it rarely, others a lot.
I don’t doubt it’s normalized in big companies. I imagine the bigger the company, the more ai they use. Big companies have the most to gain from the reduced-workforce ai sales pitch, and the biggest (meta, google, microsoft, etc) need a return on their ai investment (I’ve yet to hear of any demonstrable roi).
It makes sense that anyone in those companies would see it as normal, but it strikes me as observer bias or frequency illusion. There’s so much ai hype; that is, after all, where the ad money and investments are flowing. But I also see a ton of skepticism, fatigue, and general disenchantment with it, which aligns with my experience: it doesn’t compare to a good system of books, notes, and bookmarks, and that’s before even considering the costs (monetary, environmental, social, and political), which seem completely outsized. That’s why I remain skeptical of the claim that normal people use ai.
I also participate in nearly a dozen different coding-oriented Discord channels across numerous frameworks/languages.
Across countless individuals ranging from total newbs to professionals, basically everyone uses LLMs in some manner for coding, at least to some degree, and it’s an openly discussed and common topic.
You are deeply out of touch with what programmers are actually doing if you seriously think folks who still haven’t worked an LLM into their process somehow aren’t the minority.
Confirmation bias and anecdotal information, but hey, feel free to speak in absolute terms. If you read the article you’d realize you’re making a claim that even Mark Zuckerberg, one month ago, wasn’t making.
“AI” is nowhere because it doesn’t exist. Sure, there are programs that are good at summarizing Stack Exchange, but is that really so amazing? Maybe it saves devs a few seconds? Do we credit “AI” with amazing writing when people use grammar correction? The hype is so inane. Don’t feed into it with this nonsense.
As the article explains, they haven’t been able to find any meaningful contributions to actual problems. I’m sure plagiarized summaries can help with your boilerplate/etc, but that’s not “AI”.
“AI” is a very broad term. Back when I went to university, my AI course started out with Wumpus World. While this is an extremely simple problem, it’s still considered “AI”.
The enemies in computer games that are controlled by the computer are also considered “AI”.
Machine learning algorithms, like recommender algorithms and image recognition, are also considered “AI”.
LLMs like ChatGPT and Claude are also “AI”.
None of these things are conscious, self-aware, or intelligent, yet they are all part of the field called “AI”.
These are, however, not “AGI” (Artificial General Intelligence): the hypothetical machine that becomes conscious and self-aware. This is the scenario that all the sci-fi movies portray. We are still very far from that stage.
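To illustrate how low the bar for “AI” sits in that taxonomy, here’s a hypothetical sketch (not from any actual game, just an illustration) of the kind of hard-coded rules that count as game-enemy “AI”; nothing here learns or understands anything:

```python
# A minimal, hypothetical game-enemy "AI": nothing but hand-written rules.
# In the games industry this kind of if/else logic is routinely called AI.

def enemy_action(enemy_hp: int, distance_to_player: float) -> str:
    """Pick an action from simple fixed rules."""
    if enemy_hp < 20:
        return "flee"    # self-preservation rule kicks in first
    if distance_to_player < 1.5:
        return "attack"  # player is within melee range
    return "chase"       # otherwise, close the distance

print(enemy_action(100, 0.5))  # attack
print(enemy_action(10, 5.0))   # flee
```

The point is that “AI” labels the whole spectrum, from three if-statements up to LLMs, without implying intelligence anywhere along it.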
That’s like saying search engines don’t exist.
AI definitely exists. It’s basically just a slightly faster way to get code from Stack Exchange, except with less context and more uncertainty.
If the only point you can make is picking apart whether LLMs “count” as AI, then sorry mate, but 2022 called, it wants its discussion back.
No one really cares about this distinction anymore. It’s like literally vs. figuratively.
LLMs are branded under the concept of AI; arguing that they don’t count is not a discussion people in the industry really care about anymore.
Yes, we do care that it’s unintelligent, because that’s the reason it can’t be trusted with anything important. This is not being pedantic. This technology is unreliable dogshit. We’ll still be having this conversation in 2030 if it hasn’t cooked us all or lost its undeserved hype.
Lol, wait, is that why you were contending the “AI” title?
lmao
Quiet you, I can’t hear the echoes in this chamber anymore
That’s exactly it and why I can’t take this article very seriously.
Just because AI is writing some code doesn’t mean it gets credit as the developer. A human still puts their name beside it. They get all the credit and all the responsibility.
A piece of code I struggled with for days and some vibe-coded slop look identical in a PR.
And for that reason we can be certain that tons and tons of FOSS projects are using it. And the maintainers might not even know it.
The credit should go to the author on Stack Exchange.
TBF that doesn’t say much for your coding.
Just because people use generated slop doesn’t mean “AI” exists, much less that it’s making valuable contributions beyond summarizing/plagiarizing Stack Exchange.
Well there’s a huge difference between “slop” and actually fine code.
As long as the domain space isn’t super esoteric, and the framework is fairly mature, most LLMs will generate not-half-bad results, enough to get you 90% of the way there.
But then that last 10% of refining and cleaning up the code, fixing formatting issues, tweaking names, etc. is what separates the slop from the “you can’t even tell an AI helped with this” code.
I have projects where prolly a good 5% to 10% of the code is AI generated, but you’d never know cuz I still did a second pass over it to sanity check and make sure it’s good.
100%
I took some code and scripts I wrote and passed them through AI. A lot of it was tightened up, and even better, it added comments and turned some things into functions so they were reusable.
I parsed everything it did to sanity check it. Really, use it like a junior developer: “Hey helper, write me a piece of code that does X.” You always double-check the junior.
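The kind of tightening-up described above might look like this; a hypothetical before/after (the file names and function are invented for illustration), where a twice-pasted snippet becomes one commented, reusable function that you then review by hand:

```python
# BEFORE (hypothetical script): the same parsing logic pasted twice inline.
#   with open("a.csv") as f:
#       rows_a = [line.strip().split(",") for line in f if line.strip()]
#   with open("b.csv") as f:
#       rows_b = [line.strip().split(",") for line in f if line.strip()]

# AFTER: the duplicated block factored into one documented, reusable
# function, the sort of refactor described above. Review it like you
# would a junior dev's PR before trusting it.
def read_rows(path: str) -> list[list[str]]:
    """Read a comma-separated file into rows, skipping blank lines."""
    with open(path) as f:
        return [line.strip().split(",") for line in f if line.strip()]
```

The behavior is unchanged; the win is that the logic now has a name, a docstring, and a single place to fix bugs, which is exactly what makes the human review pass feasible.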