The article says that DeepSeek was easier to unalign so it obeys the user's instructions. It has fewer refusals, and they make that sound like a bad thing.
If anything, that's glowing praise for the model. Looks like Western media is starting a campaign to gaslight people into thinking that users being able to tune a model to work the way they want is somehow a negative.
Sinophobes are mad that people are flocking to DeepSeek because it is open source, unlike ChatGPT.
Communism bad, finds CIA-backed study.
Hey, I just thought of a more accurate meaning for the CIA acronym. Colonial Invasion Agency.
They created “ccp-narrative-bench” to measure political bias, and it is exactly what you'd expect it to be. It is described on page 53 of this document: https://www.nist.gov/system/files/documents/2025/09/30/CAISI_Evaluation_of_DeepSeek_AI_Models.pdf
Also aren’t these the guys who put a backdoor in a cryptographic algorithm?
that’s absolutely fucking hilarious
users being able to tune the model to work the way they want is somehow a negative
I find it funny that Musk keeps tweaking Grok to spout his beliefs, but the moment it gives one answer that is remotely outside his ‘reality’, there he goes back in again.
This is the problem with closed models controlled by corps in a nutshell.
relevant-ish
“As ML is still a (relatively) recent field of study, especially outside the realm of abstract mathematics, few works have been conducted on the political aspect of LLMs, and more particularly about the alignment process and its political dimension. This process can be as simple as prompt engineering but is also very complex and can affect completely unrelated questions. For example, politically directed alignment has a very strong impact on an LLM’s embedding space and the relative position of political notions in such a space. Using special tools to evaluate general political bias and analyze the effects of alignment, we can gather new data to understand its causes and possible consequences on society. Indeed, by taking a socio-political approach, we can hypothesize that most big LLMs are aligned with what Marxist philosophy calls the ’dominant ideology.’ As AI’s role in political decision-making—at the citizen’s scale but also in government agencies—[grows], such biases can have huge effects on societal change, either by creating new and insidious pathways for societal uniformity or by allowing disguised extremist views to gain traction among the people.”
a good read
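Tangent, but that bit about the “relative position of political notions” in embedding space is something you can poke at yourself. Here's a minimal sketch of that kind of probe, assuming a sentence-transformers encoder; the model name, the anchor terms, and the notion list are all arbitrary choices for illustration, not anything taken from the quoted paper or the CAISI report.

```python
# Rough probe: where do some "political notions" sit relative to two
# anchor terms in an embedding space? Model and word lists are
# placeholder choices, not the paper's actual methodology.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # any sentence encoder works here

anchors = ["freedom", "authoritarianism"]          # hypothetical axis endpoints
notions = ["market economy", "central planning",
           "labor unions", "private property"]

# Encode everything; normalized embeddings make dot product == cosine similarity.
vecs = model.encode(anchors + notions, normalize_embeddings=True)
anchor_vecs, notion_vecs = vecs[:2], vecs[2:]

# Cosine similarity of each notion to each anchor term.
sims = notion_vecs @ anchor_vecs.T
for term, (s_free, s_auth) in zip(notions, sims):
    print(f"{term:20s} freedom={s_free:+.3f} authoritarianism={s_auth:+.3f}")
```

Run the same thing against embeddings pulled from differently aligned models and you get a crude before/after picture of how alignment shifts those relative positions, which is roughly the kind of measurement the paper is gesturing at.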