Artists hold leverage that they can wield if they organize strategically along material lines. What if illustrators unionized to mandate human oversight in AI-assisted comics? What if musicians demanded royalties each time their style trains a model?
The answer to this latter question would seem to be an enormous centralisation of royalties in the hands of a few artists whose style is in style at any given time, and no more actual work for many future artists. Not a ton different from the current state of affairs, just further accelerated.
I can’t object to the broad strokes of the argument. It’s a fair point. It still feels premature to use the term “AI”, but I don’t have any objections to the field of machine learning in general. We just happen to be living through a period in which some of the most gormless motherfuckers on this earth are using it for nefarious purposes.
I think I disagree with the idea of “democratizing creativity” that the article is predicated on. Perhaps I’m misunderstanding what is meant by this, but to my eyes LLMs can be a tool within creative processes; they don’t democratize creativity. I might be way off base here, so read my words critically. I’ve also only read the first couple of paragraphs, so again, read my words critically!
To my understanding of creativity, LLMs cannot democratize creativity because they remove/replace the creative process. If I want to create an image, and I prompt some software to make it for me, then I have not actually created anything. I have not been creative. It is functionally the same as asking someone else to draw a thing for me.
The article gives some examples, though:
They enable a nurse to visualize a protest poster, a factory worker to draft a union newsletter, or a tenant to simulate rent-strike scenarios.
For the protest poster: the nurse must already have had an idea of the poster, what it should say, what it should look like, what it should communicate, something about it. Even setting aside the central issue of LLMs not actually understanding anything and just being a statistical association of tokens, the LLM doesn’t aid in creativity; it serves as a place to outsource creativity to. The nurse may use its output in further creative processes, but that output itself is not the nurse’s creativity.
Given the starting point of the nurse first having ideas for the poster (a creative process) and then prompting the LLM (an outsourcing process), there are a few ways forward. One is to basically copy/paste the output to the posterboard, such as by writing it onto the board. This is a mechanically creative process (the nurse has created a new physical thing) but not an idea-level creative process (the nurse has not created a new idea). Another is to take the output and modify it, adapt it, change it. This is not a mechanically creative process (no new thing has been created) but is an idea-level creative process (a new idea has been created or modified). But to my eyes this latter scenario is composed of two separate creative processes and one outsourcing process. It can look like a single creative process, but it really consists of multiple processes (at least to my eyes and understanding). It is functionally the same as asking your friend to look things up for you on the internet, except that because of how LLMs work you’re getting back some mishmash of all the things the friend found instead of the things themselves.
As for drafting a union newsletter and simulating rent-strike scenarios, I’m not going to dig into them because it’s the same argument as the protest poster. The newsletter would be, e.g., feeding recent union events into an LLM and asking it to summarize (again, you have not been creative; you have outsourced your creativity). And as for simulating a rent strike, well, try asking an LLM to be a GM to see how well its simulations work. LLMs are statistical models; they are not capable of reason or logic. The appearance of such is due to statistical associations resembling logic, not an underlying “p→q” logical process.
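To make the “statistical association of tokens” point concrete, here is a toy sketch. This is a simple bigram model, vastly cruder than a real LLM (which I name only as an illustrative assumption, not a description of any actual system’s internals), but it shows the principle: the model only tracks which token tends to follow which, so its output can mimic the surface shape of logical text without applying any rule of inference.

```python
import random
from collections import defaultdict

# Toy bigram "language model": count which word follows which.
def train(text):
    counts = defaultdict(lambda: defaultdict(int))
    words = text.split()
    for a, b in zip(words, words[1:]):
        counts[a][b] += 1
    return counts

def generate(counts, start, length=8):
    out = [start]
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:
            break
        # Sample proportionally to observed frequency -- no logic,
        # just "which token statistically followed this one".
        words, freqs = zip(*followers.items())
        out.append(random.choices(words, weights=freqs)[0])
    return " ".join(out)

corpus = "if p then q . if q then r . if p then r ."
model = train(corpus)
print(generate(model, "if"))
```

The output can look like a valid inference chain (e.g. a string resembling “if p then r”) purely because such sequences were frequent in the training text, not because any “p→q” rule was ever applied.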
Basically, to my eyes LLMs are a tool that can be utilized as part of an overarching creative process composed of subprocesses, but they are not themselves creative. The purpose of creativity is to be creative! That’s not to say that nothing that touches an LLM is creative, but rather that the LLM use is an outsourcing of parts of the creative process.
Ok, reading further, I think I agree with the author on the way creativity functions and perhaps misunderstand what they mean by democratizing creativity. That being said, there are some places where I think the author is really overestimating what LLMs are capable of. E.g.:
A camera captures the world as it exists while AI visualizes worlds that could be.
This is a pretty sweeping statement that I think is wrong. An LLM cannot visualize worlds that could be. It’s a statistical model throwing together statistically linked output. The LLM (I hate that the author uses the word AI; it’s become a meaningless term) spits out the statistically linked tokens. It doesn’t go in with an idea of a world or visualize a world that could be; it shoves together the tokens that best match its input.
Idk, maybe I’m way off base here.
No I agree with your initial criticisms of the little essay.
Copying the most relevant bit from another comment I left here:
Pushing people to rely on unreliable tools is a bad idea. Why should “a factory worker draft a union newsletter” with an LLM when the newsletter will inherently be worse for having been made by an LLM that produces flat-toned bullshit? You lose the human and personal voice, the fire and zeal that organizing needs, by passing it through an LLM. “Rent-strike scenario simulations” done by an LLM are completely unreliable and worthless. And “enabl[ing] a nurse to visualize a protest poster” with an LLM, to emphasize the importance of the skilled human labor that nurses provide? An absurd and self-undermining tactic.
Basically, to my eyes LLMs are a tool that can be utilized as part of an overarching creative process composed of subprocesses, but they are not themselves creative. The purpose of creativity is to be creative! That’s not to say that nothing that touches an LLM is creative, but rather that the LLM use is an outsourcing of parts of the creative process.
Here are a couple of examples of “creative use of an LLM” that I thought were fine and truly creative:
- Having a generative model generate video scenes following a theme someone wanted, then stitching them together into a music video with a coherent theme for a song he wrote. That took effort, and the LLM couldn’t have made the whole music video.
- Using one of those voice models trained on celebrities’ voices to make Taylor Swift sing “Get Low” by Lil Jon. It’s a funny idea someone wanted to see happen, and they used the tools to make it happen. Now, there are major ethical problems with literally putting words into someone else’s mouth, but that’s a separate issue from whether it is a creative endeavor or not.
AI has its uses and applications.
For example, I have to write reports at work that I know for a fact NO ONE reads. The very definition of meaningless busywork, and I’m sure there are other office jobs where stuff like this happens. So instead of spending hours writing them like some co-workers, I just let AI write them, go over the result in half an hour, and am done. Of course my job isn’t something important or essential, but if there’s one thing capitalism isn’t lacking, it’s bullshit jobs.
Funny that the article mentions soul and AI, because as a spiritual person I consider AI art completely soulless, and I say this as someone who has seen AI pieces I thought were “nice”.
This is not even me saying don’t use AI for your one-person indie project, but I do think it cheapens the final product.
LLMs do not generally democratize creativity, they undermine creativity and prevent people from developing the skills to exercise their creativity.
When it comes to creative fiction:
IP be damned, generative LLMs (I refuse to call these "AI"s because not only are they not intelligent, they’re a dead end that will never lead to actual AIs) produce shit results, and that’s my biggest problem with them. They do not innovate; they merely regurgitate and repackage the most statistically common things in their training set. I’ve read short stories by LLMs, and their style and tone are terrible. There’s nothing of substance to them. Reading their stories is like eating a box of Krispy Kreme doughnuts, which are simultaneously light and fluffy and full of poison. The first one is ok. The second one is just more of the same. The third is nauseating.
Ask an LLM to write a story about the pope, then ask the LLM to write a story fictionalizing a chess game, and it will feel exactly the same. It’s extremely superficial and just has nothing interesting to say.
When it comes to visual art: you ask some generative model to generate you an image of something and it will usually look wrong and will have an overused style that has been ruined by being used for so much low quality slop. I get disgusted seeing anything that looks LLM generated because it looks like overused slop, like eating too much of the same food and then not being able to make myself eat it again for months. And again, there’s no stylistic innovation in regurgitation.
Both of these criticisms can be applied to human-made writing and human-made visuals, and I do apply them as well! There’s so much slop out there that people write and draw which is boring or offputting, and the problem is that LLMs are trained on that so they reproduce that!
When it comes to educational text, programming, and providing factual answers: LLMs are bullshit machines, which means they’re optimized to sound plausible, not to provide correct answers. And mathematically it is impossible to solve the bullshitting problem (again, these are called “hallucinations”, but that makes it sound like a mistake or an aberration rather than what it actually is: an unavoidable artifact of how these things work). But since there are no tells to warn you that they’re wrong, none of their output can be trusted.

I have used LLMs to write me disposable or simple code I was too lazy to write myself, but only because I have the expertise to vet it and confirm by reading that it’s not going to do something horrible. Even then, if I’ve wanted to keep it or incorporate it into something I was doing, I’ve often had to clean it up myself, or change the comments because the comments are just wrong, and the style is all inconsistent, and in those cases it’s easier to just rewrite it from scratch.

But when I’ve tried to quality-check ChatGPT by asking it technical questions I know the answers to, it gives me a mix of truth and lies that sound equally plausible unless you know better. The most dangerous kind of lie is that kind.
Pushing people to rely on unreliable tools is a bad idea. Why should “a factory worker draft a union newsletter” with an LLM when the newsletter will inherently be worse for having been made by an LLM that produces flat-toned bullshit? You lose the human and personal voice, the fire and zeal that organizing needs, by passing it through an LLM. “Rent-strike scenario simulations” done by an LLM are completely unreliable and worthless. And “enabl[ing] a nurse to visualize a protest poster” with an LLM, to emphasize the importance of the skilled human labor that nurses provide? An absurd and self-undermining tactic.
I don’t know or care whether it’s inherently immoral for leftists to use LLMs, but I do care that it is just a bad idea. I wouldn’t make a Marxist Case for investing in NFTs!
I’ve used Deepseek, by the way, which is perhaps a more party-line-approved LLM, and it was still awful in all the ways ChatGPT was.
I respect the attempt and the premise that Marxists never oppose tech on principle, but this article arrives after the war is over. Deep-learning-based data generation technology has reached stagnation, and now we know that:
- it’s quicker to code by yourself, Copilot is shit and vibe coding produces unusable crap
- AI images will always have this distinct shitty look and be unusable due to inconsistency
- their writing will always be sloppy and incoherent in the long run
- they can’t analyse a big data set for you; all they do is hallucinate shit from the data they’re trained on
So the best thing an AI can do is help you with inspiration, a bit like tossing a coin when you can’t choose between two dishes for lunch. It’s a technological brainstorming helper. It’s not a game changer by any means.
it’s quicker to code by yourself, Copilot is shit and vibe coding produces unusable crap
It really depends on the developer. Some can certainly produce work faster with these tools, while they slow others down. The research on this really isn’t there yet: the METR study found a decrease and the Cui study found an increase (although both have some pretty major flaws, especially the METR study’s sample size of 16).
Images haven’t reached stagnation yet, and video generation is getting really good. You have almost certainly seen AI videos or images without realizing it.
You’re right that it’s overhyped, but not nearly to the extent that you’re suggesting.