The article says that DeepSeek was easier to unalign so that it obeys the user's instructions. It has fewer refusals, and they make that sound like a bad thing.
If anything, that's glowing praise for the model. It looks like Western media is starting a campaign to gaslight people into thinking that users being able to tune a model to work the way they want is somehow a negative.
The whole thing once again highlights the importance of this tech being developed in the open and outside Western corporate control.
I don't think having buggy alignment is something to celebrate. People who don't understand how these things work (granted, that's all of us to a degree; they are black boxes) can over-rely on them, and an unaligned model can do real damage in the hands of people who want to harm others or themselves. The solution to this is obviously a functional social safety net, not better model alignment (although a band-aid isn't harmful). Also, alignment issues aren't a uniquely Chinese problem: Gemini has alignment bugs, and Grok's alignment is impressively dismal.