AI chatbots tend to choose violence and nuclear strikes in wargames

  • Arkaelus@lemmy.world · 10 months ago

    This says more about us than it does about the chatbots, considering the data on which they’re trained…

    • kromem@lemmy.world · edited 10 months ago

      Yeah, it says that we write a lot of fiction about AI launching nukes and being unpredictable in wargames, such as the movie WarGames, where an AI unpredictably plans to launch nukes.

      Every single one of the LLMs they tested had gone through safety fine-tuning, which means they're aligned to self-identify as a large language model and to complete requests in that persona.

      So if the training data is full of stereotypes about AI launching nukes, and you get the model to answer as an AI and then ask it what it should do in a wargame, WTF did they think it was going to answer?

    • bassomitron@lemmy.world · 10 months ago

      I’d say it does to an extent, dependent on the source material. If they were trained on actual military strategies and tactics, with proper context, I’d wager the responses would likely be different.

      • remotelove@lemmy.ca · edited 10 months ago

        Totally. A properly trained AI would probably just flood a country with misinformation to trigger a civil war. After it installs a puppet government, it can leverage that country’s resources against its other enemies.

        • The Snark Urge@lemmy.world · 10 months ago

          Let’s think here… I’ve always heard that history is written by the victors, which logically implies historians are the most dangerous people on the planet and ought to be detained. 🧐

      • IninewCrow@lemmy.ca · 10 months ago

        Lol … to an AI, humans on any and all sides can’t win a nuclear war … but an AI can.

      • BearOfaTime@lemm.ee · edited 10 months ago

        Not that I want one, but the propaganda around nuclear war has been pretty extensive.

        Michael Crichton wrote about it in the late 90s, if I remember right. He made some very interesting points about science, the politicization of science, and “Scientism”.

        “Nuclear Winter”, for example, is based on some very bad, and very incorrect, math.

  • simple@lemm.ee · 10 months ago

    You don’t say. Chatbots are trained on the average raging person on the internet; there is no way they should be in a position of military power.

  • namewok · 10 months ago

    Maybe it comes to the conclusion that violence is objectively good.

    • BearOfaTime@lemm.ee · edited 10 months ago

      Just imagine where the world population would be today if WWI hadn’t removed 1% of it! (Yes, sarcasm.)

      The reality is that violence is part of the human condition, and it’s been an effective tool throughout history, or else people wouldn’t use it.

      Also part of the human condition is trying to teach each successive generation to be better than ourselves. We each only get ~80 years, and those first 20 are crucial.