If we allow the far-right to continue merging political power with AI without guardrails, we will see the rise of a system where freedom is algorithmically rationed.
I didn’t say AI would solve that, but I’ll reiterate the point I’m making differently:
Spreading awareness of how AI operates, what it does and doesn’t do, what it’s good and bad at, and how it’s changing (such as knowing there are hundreds, if not thousands, of regularly used AI models out there: some owned by corporations, others open source, and still others somewhere in between) reduces misconceptions and makes people more skeptical when they see material that might have been AI generated or AI assisted being passed off as real. This is especially important to teach during transition periods such as now, when AI material is still relatively easy to distinguish from real material.
_
People who create a hostile environment where AI isn’t allowed to be discussed, analyzed, or used in ethical and good-faith ways make it more likely that some people who desperately need to be aware of #1 stay ignorant. They will just see AI as a boogeyman, failing to realize that, e.g., AI slop isn’t the only kind of material AI can produce. That makes them more susceptible to seeing something made by AI and misjudging whether it’s real.
_
Corporations, and others without an incentive to use AI ethically, will not be bothered by #2, and will even rejoice that people aren’t spending time on #1. It will make it easier for them to claw AI technology for themselves through obscurity, legislation, and walled gardens, and the less knowledge there is in the general population, the more easily the technology can be used to influence people. Propaganda works; the propagandist is always looking for technology that lets them reach more people, and ill-informed people are easier to manipulate.
_
And lastly, we must reward those who try to achieve #1 and avoid #2, while punishing those in #3. We must reward those who use the technology as ethically and responsibly as possible, since any prospect of completely ridding the world of AI is futile at this point, and a lot of care will be needed to avoid the pitfalls where #3 gains the upper hand.
An ethical, open-source alternative is going to usurp google gemini? What are you talking about?
Did you respond to the wrong person? Neither the article nor my comment was about one specific AI model.
How… does… AI… solve the problems of corporate AI? What is the strategy?