Lots of people on Lemmy really dislike AI’s current implementations and use cases.
I’m trying to understand what people would want to be happening right now.
Destroy gen AI? Implement laws? Hoping all companies use it for altruistic purposes to help all of mankind?
Thanks for the discourse. Please keep it civil, but happy to be your punching bag.
Lots of copyright comments.
I want those building it at scale to stop killing my planet.
I want disclosure. I want a tag or watermark to let people know that AI was used. I want to see these companies pay dues for the content used, in a similar vein to how we have to pay for higher learning. And we need to stop calling it AI as well.
Serious investigation into copyright breaches by AI creators. They ripped off images and texts, even whole books, without the copyright owners' permission.
If any normal person broke the laws like this, they would hand out prison sentences till kingdom come and fines the size of the US debt.
I just ask for the law to be applied to all equally. What a surprising concept…
We are filthy criminals if we pirate one textbook for studies. But when Facebook (Meta) pirates millions of books (anywhere between 30 million and 200 million ebooks, depending on their file size), they are a brilliant and successful business.
People have negative sentiments towards AI under a capitalist system, where "most successful" means "most profitable," and that does not translate into "most useful for humanity."
We have the technology to feed everyone, and yet we don't. We have the technology to house everyone, and yet we don't. We have the technology to teach everyone, and yet we don't.
Capitalist democracy is not real democracy.
This is it. People don’t have feelings for a machine. People have feelings for the system and the oligarchs running things, but said oligarchs keep telling you to hate the inanimate machine.
Rage Against the Inanimate Machine
Other people have some really good responses in here.
I’m going to echo that AI is highlighting the problems of capitalism. The ownership class wants to fire a bunch of people and replace them with AI, and keep all that profit for themselves. Not good.
Nobody talks how it highlights the success of capitalism either.
I live in SEA, and AI is incredibly powerful here, giving anyone the opportunity to learn. The net positive of this is incredible, even if you think that copyright is good and intellectual property needs government protection. It's just that lopsided of an argument.
I think western social media is spoiled and angry at the wrong thing, but fighting these people is entirely pointless, because you can't reason someone out of a position they didn't reason themselves into. Big tech == bad, blah blah blah.
You don’t need AI for people to learn. I’m not sure what’s left of your point without that assertion.
You're showing your ignorance if you think the whole world has access to a fit education. And I say fit because there's a huge difference between learning from books made for Americans and AI experiences tailored just for you. The difference is insane, and anyone who doesn't understand that should really go out more, and I'll leave it at that.
Just the amount of friction that AI removes makes learning so much more accessible for a huge percentage of the population. I'm not even kidding: as an educator, the LLM is the best invention since the internet, and this will be very apparent in 10 years. You can quote me on this.
You shouldn’t trust anything the LLM tells you though, because it’s a guessing machine. It is not credible. Maybe if you’re just using it for translation into your native language? I’m not sure if it’s good at that.
If you have access to the internet, there are many resources available that are more credible. Many of them free.
You shouldn’t trust anything the LLM tells you though, because it’s a guessing machine
You trust tons of other uncertain probability-based systems though. Like the weather forecast, we all trust that, even though it ‘guesses’ the future weather with some other math
That’s really not the same thing at all.
For one, no one knows what the weather will be like tomorrow. We have sophisticated models that do their best. We know the capital of New Jersey. We don’t need a guessing machine to tell us that.
For things that require a definite, correct answer, an LLM just isn't the best tool. However, if the task is something with many correct answers, or no correct answer, like writing computer code (if it's rigorously checked, it's actually not that bad) or analyzing vast amounts of text quickly, then you could make the argument that it's the right tool for the job.
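For what it's worth, "rigorously checked" can be as simple as refusing to trust generated code until it passes a battery of assertions. A minimal sketch in Python (the `generate_code` stub and the `is_even` task are invented here for illustration, not a real LLM API):

```python
def generate_code():
    # Stand-in for an LLM call; imagine this string came back from a
    # prompt like "write a function is_even(n)".
    return "def is_even(n):\n    return n % 2 == 0"

def check_generated(source, tests):
    """Execute generated source and verify it against known test cases."""
    namespace = {}
    # NOTE: real use needs proper sandboxing, not a bare exec.
    exec(source, namespace)
    fn = namespace["is_even"]
    return all(fn(arg) == expected for arg, expected in tests)

tests = [(2, True), (3, False), (0, True)]
print(check_generated(generate_code(), tests))  # → True
```

The point is that the LLM's output is treated as untrusted input: it only gets used if it demonstrably behaves correctly on cases you wrote yourself.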
Again, you're just showing your ignorance of how available this actually is to people outside of your immediate circle. Maybe you should travel a bit and open up your mind.
Nobody talks how it highlights the success of capitalism either.
it definitely does both
I’d like to have laws that require AI companies to publicly list their sources/training materials.
I’d like to see laws defining what counts as AI, and then banning advertising non-compliant software and hardware as “AI”.
I’d like to see laws banning the use of generative AI for creating misleading political, social, or legal materials.
My big problems with AI right now are that we don't know what info has been scooped up by these models, and that companies are pushing misleading products as AI, constantly overstating the capabilities and under-delivering, which will damage the AI industry as a whole. I'd also want to see protections to keep stupid and vulnerable people from believing AI-generated content is real. Remember, a few years ago we had to convince people not to eat Tide Pods. AI can be a very powerful tool for manipulating the ranks of stupid people.
Long, long before this AI craze began, I was warning people, as a young 20-something political activist, that we needed to push for Universal Basic Income, because the inevitable march of technology would mean that labor itself would become irrelevant in time. We needed to hash out a system to maintain the dignity of every person now, rather than wait until the system is stressed beyond its ability to cope with massive layoffs and entire industries taken over by automation/AI. When the ability of the average person to sell their labor becomes fundamentally compromised, capitalism will collapse in on itself. I'm neither pro- nor anti-capitalist, but people have to acknowledge that nearly all of western society is based on capitalism, and if capitalism collapses, then society itself is in jeopardy.
I was called alarmist, that such a thing was a long way away and we didn’t need “socialism” in this country, that it was more important to maintain the senseless drudgery of the 40-hour work week for the sake of keeping people occupied with work but not necessarily fulfilled because the alternative would not make the line go up.
Now, over a decade later, generative AI has completely infiltrated almost all creative spaces, nobody except tech bros and C-suite executives is excited about that, and we still don't have a safety net in place.
Understand this - I do not hate the idea of AI. I was a huge advocate of AI, as a matter of fact. I was confident that the gradual progression and improvement of technology would be the catalyst that could free us from the shackles of the 9-to-5 career. When I was a teenager, there was this little program you could run on your computer called Folding At Home. It was basically a number-crunching engine that used your GPU to fold proteins, and the data was sent to researchers studying various diseases. It was a way for my online friends and me to flex how good our PC specs were with the number of folds we could complete in a given time frame, and we got to contribute to a good cause at the same time. These days, they use AI for that sort of thing, and that's fucking awesome. That's what I hope to see AI do more of - take the rote, laborious, time-consuming tasks that would take one or more human beings a lifetime to accomplish using conventional tools, and have the machine assist in compiling and sifting through the data to find the most important aspects. I want to see more of that.
I think there's a meme floating around that really sums it up for me. Paraphrasing, but it goes: "I thought that AI would do the dishes and fold my laundry so I could have more time for art and writing, but instead AI is doing all my art and writing so I have time to fold clothes and wash dishes."
I think generative AI is both flawed and damaging, and it gives AI as a whole a bad reputation because generative AI is what the consumer gets to see, and not the AI that is being used as a tool to help people make their lives easier.
Speaking of that, I also take issue with the fact that we are more productive than ever before, and AI will only continue to improve that productivity margin, but workers and laborers across the country will never see a dime of compensation for it. People might be able to do the work of two or even three people with the help of AI assistants, but they certainly will never get the salary of three people, and it means that two out of those three people probably don't have a job anymore if demand doesn't increase proportionally.
I want to see regulations on AI. Will this slow down the development and advancement of AI? Almost certainly, but we’ve already seen the chaos that unfettered AI can cause to entire industries. It’s a small price to pay to ask that AI companies prove that they are being ethical and that their work will not damage the livelihood of other people, or that their success will not be born off the backs of other creative endeavors.
Fwiw, I’ve been getting called an alarmist for talking about Trump’s and Republican’s fascist tendencies since at least 2016, if not earlier. I’m now comfortably living in another country.
My point being that people will call you an alarmist for suggesting anything that requires them to go out of their comfort zone. It doesn’t necessarily mean you’re wrong, it just shows how stupid people are.
Did you move overseas? And if you did, was it expensive to move your things?
It wasn’t overseas but moving my stuff was expensive, yes. Even with my company paying a portion of it. It’s just me and my partner in a 2br apartment so it’s honestly not a ton of stuff either.
That stealing copyrighted works would be as illegal for these companies as it is for normal people. Sick and tired of seeing them get away with it.
My fantasy is for “everyone” to realize there’s absolutely nothing “intelligent” about current AI. There is no rationalization. It is incapable of understanding & learning.
ChatGPT et al are search engines. That’s it. It’s just a better Google. Useful in certain situations, but pretending it’s “intelligent” is outright harmful. It’s harmful to people who don’t understand that & take its answers at face value. It’s harmful to business owners who buy into the smoke & mirrors. It’s harmful to the future of real AI.
It’s a fad. Like NFTs and Bitcoin. It’ll have its die-hard fans, but we’re already seeing the cracks - it’s absorbed everything humanity’s published online & it still can’t write a list of real book recommendations. Kids using it to “vibe code” are learning how useless it is for real projects.
Admittedly very tough question. Here are some of the ideas I just came up with:
Make it easier to hold people or organizations liable for mistakes made because of haphazard reliance on LLMs.
Reparations for everyone ever sued for piracy, and completely do away with intellectual property protections for corporations, but independent artists get to keep them.
Public service announcements campaign aimed at making the general public less trustful of LLMs.
Strengthen consumer protection such that baseless claims of AI capabilities in advertising or product labeling are legally dangerous to make.
Fine companies for every verifiably inaccurate result given to a customer or end user by an LLM.
I’d want all of these, and some way to prevent companies from laying off so many people and replacing them with AI - maybe some government-based incentives for having actual employees.
Reduce global resource consumption with the goal of eliminating fossil fuel use. Burning nat gas to make fake pictures that everyone hates is just the worst.
They have to pay for every copyrighted material used in the entire model whenever the AI is queried.
They are only allowed to use data that people opt into providing.
There's no way that's even feasible. Instead, AI models trained on publicly available data should be considered part of the public domain. So, any images that anyone can go and look at without a barrier in the way would be fair game, but the model would be owned by the public.
There’s no way that’s even feasible.
It’s totally feasible, just very expensive.
Either copyright doesn’t exist in its current form or AI companies don’t.
It's only not feasible because it would kill AIs.
Large models have to steal everything from everyone to be baseline viable
No, it’s not feasible because the models are already out there. The data has already been ingested and at this point it can’t be undone.
And you can't exactly steal something that is infinitely reproducible and doesn't destroy the original. I have a hard time condemning model creators for training their models on images of Mickey Mouse while I have a Plex server with the latest episodes of Andor on it. Once something is put on display in public, its creator should just accept that they have given up total control of it.
Ah yes, the "it's better to beg forgiveness than to ask permission" argument.
no way that’s even possible
Oh no… Anyway
Public Domain does not mean being able to see something without a barrier in the way. The vast majority of text and media you can consume for free on the Internet is not in the Public Domain.
Instead, “Public Domain” means that 1) the creator has explicitly released it into the Public Domain, or 2) the work’s copyright has expired, which in turn then means that anyone is from that point on entitled to use that work for any purpose.
All the major AI models scarfed up works without concern for copyrights, licenses, permissions, etc. For great profit. In some cases, like at least Meta, they knowingly used known collections of pirated works to do so.
I am aware and I don’t expect that everything on the internet is public domain… I think the models built off of works displayed to the public should be automatically part of the public domain.
The models are not creating copies of the works they are trained on any more than I am creating a copy of a sculpture I see in a park when I study it. You can’t open the model up and pull out images of everything that it was trained on. The models aren’t ‘stealing’ the works that they use for training data, and you are correct that the works were used without concern for copyright (because the works aren’t being copied through training), licenses (because a provision such as ‘you can’t use this work to influence your ability to create something with any similar elements’ isn’t really an enforceable provision in a license), or permission (because when you put something out for the public to view it’s hard to argue that people need permission to view it).
Using illegal sources is illegal, and I’m sure if it can be proven in court then Meta will gladly accept a few hundred thousand dollar fine… before they appeal it.
Putting massive restrictions on AI model creation is only going to make it so that the most wealthy and powerful corporations will have AI models. The best we can do is to fight to keep AI models in the public domain by default. The salt has already been spilled and wishing that it hadn’t isn’t going to change things.
I don’t have much technical knowledge of AI since I avoid it as much as I can, but I imagined that it would make sense to store the training data. It seems that it is beneficial to do so after all, so I presume that it’s done frequently: https://ai.stackexchange.com/questions/7739/what-happens-to-the-training-data-after-your-machine-learning-model-has-been-tra
My understanding is also that generative AI often produces plagiarized material. Here’s one academic study demonstrating this: https://www.psu.edu/news/research/story/beyond-memorization-text-generators-may-plagiarize-beyond-copy-and-paste
Finally, I think that whether putting massive restrictions on AI model creation would benefit wealthy corporations is very debatable. Generative AI is causing untold damage to many aspects of life, so it certainly deserves to be tightly controlled. However, I realize that it won’t happen. Just like climate change, it’s a collective action problem, meaning that nothing that would cause significant impact will be done until it’s way too late.
What about models folks run at home?
Careful, that might require a nuanced discussion that reveals the inherent evil of capitalism and neoliberalism. Better off just ensuring that wealthy corporations can monopolize the technology and abuse artists by paying them next-to-nothing for their stolen work rather than nothing at all.
I think if you’re not making money off the model and its content, then you’re good.
I would make a case for the creation of datasets by an international institution like UNESCO. The data used would be representative of world culture, and creation of the datasets would have to be sponsored by whoever wants to build models from them, so that licensing fees can be paid to creators. If you wanted to make your mark on global culture, you would have an incentive to offer training data to UNESCO.
I know, that would be idealistic and fair to everyone. No way this would fly in our age.
This definitely relates to moral concerns. Are there other examples like this of a company that is allowed to profit off of other people’s content without paying or citing them?
Hollywood, from the very start. It's why it's on the opposite side of the US from New York: to get outside the legal reach of the Broadway show companies they stole from.
Stack overflow
Reddit.
Google.
So likely nothing gonna happen it seems. Business as usual.
What do I really want?
Stop fucking jamming it up the arse of everything imaginable. If you asked me for a genie wish: make it illegal to be anything but opt-in.
I think it’s just a matter of time before it starts being removed from places where it just isn’t useful. For now companies are just throwing it at everything to see what sticks. WhatsApp and JustEat added AI features and I have no idea why or how it could be used for those services and I can’t imagine people using them.
The technology side of generative AI is fine. It’s interesting and promising technology.
The business side sucks, and the AI companies are just the latest continuation of the tech grift: trying to squeeze as much money as possible from the latest hyped tech, laws or social or environmental impact be damned.
We need legislation to catch up. We also need society to be able to catch up. We can’t let the AI bros continue to foist more “helpful tools” on us, grab the money, and then just watch as it turns out to be damaging in unpredictable ways.
I agree, but I’d take it a step further and say we need legislation to far surpass the current conditions. For instance, I think it should be governments leading the charge in this field, as a matter of societal progress and national security.
Like a lot of others, my biggest gripe is the accepted copyright violation for the wealthy. They should have to license data (text, images, video, audio) for their models, or use material in the public domain. With that in mind, in return I'd love to see pushes to drastically reduce the duration of copyright. My goal is less about destroying generative AI, as annoying as it is, and more about leveraging the money behind it to change copyright law.
I don’t love the environmental effects but I think the carbon output of OpenAI is probably less than TikTok, and no one cares about that because they enjoy TikTok more. The energy issue is honestly a bigger problem than AI. And while I understand and appreciate people worried about throwing more weight on the scales, I’m not sure it’s enough to really matter. I think we need bigger “what if” scenarios to handle that.