The article says that DeepSeek was easier to unalign to obey the user's instructions. It has fewer refusals, and they make that sound like a bad thing.
If anything, it's glowing praise for the model. Looks like Western media is starting a campaign to gaslight people into thinking that users being able to tune the model to work the way they want is somehow a negative.
“It’s unsafe (to us) because it lets people (the riff raff) use it in a way we do not approve of”
bingo
Another unbiased study by the Burger Institute for Preserving Burger Hegemony
It's hilarious how they can't complain that the model is controlled by evil see see pee since it's open, so now they're complaining that users being able to tune it the way they like is somehow nefarious. What happened to all the freeze peach we were promised?
deleted by creator
Likely the ones you’re hallucinating.
The article says that DeepSeek was easier to unalign to obey the user's instructions. It has fewer refusals, and they make that sound like a bad thing.
From a state control perspective, it is. It's unreliable for state purposes: the AI is less able to stick to a programmed narrative.
heh, like other models are safe and reliable ;-)
You should only use models that are safely and reliably tuned to spew capitalist talking points.