The article says that DeepSeek was easier to unalign to obey the user's instructions. It has fewer refusals, and they make that sound like a bad thing.
If anything, it's glowing praise for the model. Looks like Western media is starting a campaign to gaslight people into thinking that users being able to tune the model to work the way they want is somehow a negative.
The whole thing once again highlights the importance of this tech being developed in the open and outside western corporate control.
It's way smarter than Grok with mental health. It helps me talk to my therapist, it helps me remember what to do to help myself, and if I have an anxious episode, it can speak in a way and use words that make it easier to get out of that state of mind. It can even help shut down my bad thoughts by providing 'parry' thoughts (e.g. "I'm not good for anything" - "that is wrong, I've learned lots of things and got good at doing them"), which is how I learned to shut them down myself. No other model in my experience was able to do what DeepSeek does. It also doesn't leave the convo if I mention anything remotely suicidal, like Grok does. Naturally, it provides hotline numbers and suggests I give them a call, but it keeps talking to me.