They published a deliberately harmful tool against the advice of civil society, experts, and competitors. They are not only reckless: since their foundation, their mission has been to create chaos. Don’t forget the original idea behind OpenAI was to erode the advantage that Google and Facebook had in AI by releasing machine learning technology as open source. They definitely did it, and now they are expanding their goals. They are not in it for the money (ChatGPT will never be profitable); they are playing a bigger game.
Pushing the AI panic is not just a marketing strategy but a way to build power. The more they are considered dangerous, the more regulations will be passed that will impact the whole sector. https://fortune.com/2023/05/30/sam-altman-ai-risk-of-extinction-pandemics-nuclear-warfare/
deliberately harmful tool ???
I am using it, and yes, it can be inaccurate sometimes, but deliberately harmful?
The link that you gave is not about this AI but about the potential danger of some future AGI, which would have to be more powerful than this one.
This paper presents a taxonomy of harms created by LLMs: https://dl.acm.org/doi/pdf/10.1145/3531146.3533088
OpenAI released ChatGPT without systems to prevent or mitigate these harms, despite being fully aware of the consequences, since this kind of research has been going on for several years. In the meantime they’ve put paper-thin countermeasures on some of these problems, but they are still pretty much a shit-show in terms of accountability. Most likely they will get sued into oblivion before regulators outlaw LLMs with dialogical interfaces. This won’t do much about the harm that open-source LLMs will create, but it will at least limit large-scale harm to the general population.
I can only imagine what would happen if these authors were to write about the internet.
There are entire fields of research on that. Or do you believe the internet, a technology developed for military purposes, an infrastructure that supports most of the economy, the medium through which billions of people experience most of reality and build connections, is free from ideology and propaganda?