The article says that DeepSeek was easier to unalign to obey the user's instructions. It has fewer refusals, and they make that sound like a bad thing.

If anything, that's glowing praise for the model. It looks like Western media is starting a campaign to gaslight people into thinking that users being able to tune a model to work the way they want is somehow a negative.

  • Tofutefisk @lemmygrad.ml
1 day ago

    relevant-ish

    Propaganda is all you need

    “As ML is still a (relatively) recent field of study, especially outside the realm of abstract mathematics, few works have been conducted on the political aspect of LLMs, and more particularly about the alignment process and its political dimension. This process can be as simple as prompt engineering but is also very complex and can affect completely unrelated questions. For example, politically directed alignment has a very strong impact on an LLM’s embedding space and the relative position of political notions in such a space. Using special tools to evaluate general political bias and analyze the effects of alignment, we can gather new data to understand its causes and possible consequences on society. Indeed, by taking a socio-political approach, we can hypothesize that most big LLMs are aligned with what Marxist philosophy calls the ‘dominant ideology.’ As AI’s role in political decision-making (at the citizen’s scale but also in government agencies) [grows], such biases can have huge effects on societal change, either by creating new and insidious pathways for societal uniformity or by allowing disguised extremist views to gain traction among the people.”