The article says that DeepSeek was easier to unalign to obey the user's instructions. It has fewer refusals, and they make that sound like a bad thing.

If anything, it’s glowing praise for the model. Looks like Western media is starting a campaign to gaslight people into thinking that users being able to tune a model to work the way they want is somehow a negative.

The whole thing once again highlights the importance of this tech being developed in the open and outside western corporate control.

  • Awoo [she/her]@hexbear.net · 19 points · 2 days ago

    “AI made in Communist China is less censored than Capitalist AI” is not the slam dunk argument that these ghouls think it is.

  • EllenKelly [comrade/them]@hexbear.net · 3 points · 1 day ago

    I went to a lecture about using AI and was told you ought to be making your text queries like 50k words long, and that you can use it to help workshop them.

    I still haven’t used AI for anything outside of making a joke.

  • FunkyStuff [he/him]@hexbear.net · 10 points · 2 days ago

    So it works better as a thing for fallibly searching up some info for me, and worse as a replacement for a human in a system that needs secure access? And that’s supposed to make me feel worse about it?

  • ma1w4re@lemmy.zip · 6 points · 2 days ago

    It’s way smarter than Grok with mental health. It helps me talk to my therapist, it helps me remember what to do to help myself, and if I have an anxious episode it can speak in a way and use such words that make it easier to get out of that state of mind. It can even help shut down my bad thoughts by providing ‘parry’ thoughts (e.g. “I’m not good for anything” - “that is wrong, I’ve learned lots of things and got good at doing them”); that’s how I even learned to shut them down myself. No other model in my experience was able to do what DeepSeek does. It also doesn’t leave the convo if I mention anything remotely suicidal, like Grok does. Naturally, it provides hotline numbers and suggests I give them a call, but it keeps talking to me.

  • I don’t think having buggy alignment is something to celebrate. People who don’t understand how these things work (granted, this is all of us to a degree, since they are black boxes) can over-rely on them, and an unaligned model might be harmful to people who want to harm others or themselves. The solution to this is obviously a functional social safety net, not better model alignment (although a band-aid isn’t harmful). Also, alignment issues aren’t a uniquely Chinese problem: Gemini has alignment bugs, and Grok’s alignment is impressively dismal.