Recently, there has been considerable interest in large language models: machine learning systems which produce human-like text and dialogue. Applications of these systems have been plagued by persistent inaccuracies in their output; these are often called “AI hallucinations”. We argue that these falsehoods, and the overall activity of large language models, are better understood as bullshit in the sense explored by Frankfurt (On Bullshit, Princeton, 2005): the models are in an important way indifferent to the truth of their outputs. We distinguish two ways in which the models can be said to be bullshitters, and argue that they clearly meet at least one of these definitions. We further argue that describing AI misrepresentations as bullshit is both a more useful and more accurate way of predicting and discussing the behaviour of these systems.
Interesting…
How does a country like Vietnam or China tackle AI?
Frankly, AI might have its uses, and I’ve found it useful here and there, but perhaps the cons outweigh the pros…
If I used a knife to write, I shouldn’t be surprised that I don’t get good results, and the same goes the other way around.
Present AIs/LLMs, like any other tool, have places where they shine and places where you shouldn’t even think about using them. But as a new tool, we’re still figuring everything out while new versions keep appearing. So we should be careful about how we use them; where they work great, they should be used like any other tool, and hopefully more and more for the people rather than for capital.
edit to better reply to you: The biggest con of AIs right now is capitalism.
I don’t know how Vietnam or China are using it, but I’ll give a few ideas about how AIs could be used to help improve a country:
A simple use is to try to eliminate as many human hours of work as possible and distribute the freed-up time to the whole population to enjoy life. In that case it would be much easier to ask everyone to work hard at producing new data to train the AIs that will replace them: writing everything down as if explaining it to a new worker, giving the AI your emails, or even recording your conversations with coworkers. More good-quality data means a smaller workload for everyone.
Also, if a country were to ask each person to write a text about their life, their strengths, and their problems, and an AI were trained on it, then the government, and the people, could ask it what should be fixed in the country, how things could be fixed, and so on. Even if LLMs were simple “next token predictors” (there’s a sketch of what that means below), asking one to explain what people see as problems and how they think those problems could be solved could help governance in a Communist country massively.
I’m sure there are even better things to do though.
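For anyone wondering what “next token predictor” actually means, here is a minimal sketch, assuming the Hugging Face transformers library and the small public gpt2 checkpoint (both just my choices for illustration, not anything specific to this discussion). The model assigns a score to every possible next token given the text so far, and generation is just repeatedly picking from those scores:

```python
# Minimal sketch of next-token prediction, assuming the Hugging Face
# `transformers` library and the public `gpt2` checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

text = "The biggest problem in my city is"
input_ids = tokenizer(text, return_tensors="pt").input_ids

with torch.no_grad():
    # The model outputs a score (logit) for every token in its vocabulary;
    # the scores at the last position are for the *next* token.
    logits = model(input_ids).logits[0, -1]

# Greedy decoding: take the single most likely next token.
next_id = int(torch.argmax(logits))
print(text + tokenizer.decode(next_id))
```

Generating a whole answer is just this step in a loop, feeding each chosen token back in as input.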
All good points.