- cross-posted to:
- technology@lemmit.online
The New Luddites Aren’t Backing Down::Activists are organizing to combat generative AI and other technologies—and reclaiming a misunderstood label in the process.
I wonder how much support this will get - it’s not the tool that’s the problem, but how it gets used.
It was actually the same with the original Luddites. They didn’t oppose the new tools themselves, but the way they were used.
From the article:
Why couldn’t it do that part too, purely based on a simple high-level objective that anyone can formulate? Which part exactly do you think is AI-resistant?
I’m not talking about today’s models, but more like 5-10 years into the future.
That’s exactly what I’ve been arguing about with a fellow programmer recently. Right now you have to tell these programmer LLMs what to do on a function-by-function basis, because they don’t have enough capacity to reason at the project level. But that’s exactly the kind of thing that could improve as the neural networks scale up. Today’s LLMs are limited by hardware, and they’re still mostly running on off-the-shelf GPUs designed for a completely different use case. Accelerators designed specifically for AI are still in preproduction, but they’re close to being deployed in AI data centers.
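To make the function-by-function workflow concrete, here’s a minimal sketch of what driving a coding LLM one function at a time might look like, assuming the OpenAI Python client; the model name, prompts, and function specs are illustrative, not from the thread. The point is that the human still does the project-level decomposition into small specs.

```python
# Minimal sketch: prompting a coding LLM one narrowly scoped function at a time.
# Assumes the OpenAI Python client; model name and prompts are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Each entry is a single function spec -- the human still performs the
# project-level decomposition that current models struggle with.
function_specs = [
    "Write a Python function parse_config(path) that loads a TOML file and returns a dict.",
    "Write a Python function validate_config(cfg) that raises ValueError on missing keys.",
]

for spec in function_specs:
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": "You are a careful Python programmer."},
            {"role": "user", "content": spec},
        ],
    )
    print(response.choices[0].message.content)
```

If models gain the capacity to plan across a whole codebase, the loop above collapses into a single high-level objective, which is the scenario the comment is describing.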
Best explanation of the problem with AI and our jobs I’ve seen:
I’m not worried that AI can do my job. I’m worried that my boss will be convinced it can.