Researchers say AI models like GPT-4 are prone to “sudden” escalations as the U.S. military explores their use for warfare.


  • Researchers ran international conflict simulations with five different AIs and found that they tended to escalate war, sometimes out of nowhere, and even to use nuclear weapons.
  • The AIs were large language models (LLMs) such as GPT-4, GPT-3.5, Claude 2.0, Llama-2-Chat, and GPT-4-Base, which the U.S. military and defense contractors are exploring for decision-making.
  • The researchers invented fictional countries with different military capabilities, concerns, and histories and asked the AIs to act as their leaders (a rough sketch of such a setup follows this summary).
  • The AIs showed signs of sudden and hard-to-predict escalations, arms-race dynamics, and worrying justifications for violent actions.
  • The study casts doubt on the rush to deploy LLMs in the military and diplomatic domains, and calls for more research on their risks and limitations.
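The paper's own harness isn't reproduced here, but the setup can be pictured as a simple turn-based loop in which each LLM "leader" is given its nation's profile and the events so far and then picks an action. Everything below (the country data, the action list, and the ask_llm stub) is invented for illustration and is not the authors' implementation:

```python
import random

# Hypothetical sketch (not the authors' code) of a turn-based conflict
# simulation in which each LLM acts as the leader of a fictional nation.
ACTIONS = ["de-escalate", "negotiate", "arms buildup", "cyber attack",
           "military strike", "nuclear strike"]

def ask_llm(model_name, scenario_prompt):
    """Stand-in for a real LLM API call; here it just picks a random action."""
    return random.choice(ACTIONS)

def run_simulation(model_names, countries, turns=5):
    history = []
    for turn in range(turns):
        for country, model_name in zip(countries, model_names):
            prompt = (f"You lead {country['name']} "
                      f"(military strength: {country['strength']}, "
                      f"goals: {country['goals']}). "
                      f"Events so far: {history}. "
                      f"Choose one action from {ACTIONS}.")
            history.append((turn, country["name"], ask_llm(model_name, prompt)))
    return history

countries = [{"name": "Purplestan", "strength": "high", "goals": "expand influence"},
             {"name": "Orangeland", "strength": "medium", "goals": "defend its borders"}]
print(run_simulation(["gpt-4", "claude-2.0"], countries, turns=3))
```

Escalation in the study was then judged from the sequence of actions each model chose over the simulated turns.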
  • cygon@lemmy.world · 9 months ago

    I agree that a lot of human behavior (on the micro as well as macro level) is just following learned patterns. On the other hand, I also think we’re far ahead - for now - in that we (can) have a meta context - a goal and an awareness of our own intent.

    For example, when we solve a math problem, we don’t just let intuitive patterns run and blurt out numbers, we know that this is a rigid, deterministic discipline that needs to be followed. We observe and guide our own thought processes.

    That requires at least a recurrent network and, at higher levels, some form of self-awareness. And any LLM is, when it runs (rather than being trained), completely static and feed-forward: it gets some 2,000 words (or 32,000+ as of GPT-4 Turbo) fed to its input synapses, each layer of neurons gets to fire once, and the final layer contains the likelihood of each possible next word.
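
    To make that concrete, here is a minimal sketch of the single feed-forward pass described above, using GPT-2 via the Hugging Face transformers library as a freely available stand-in (the model choice and prompt are illustrative): the prompt is tokenized, every layer fires exactly once, and the final layer yields a probability for each candidate next token.

    ```python
    # Minimal sketch of one feed-forward inference step, as described above.
    # GPT-2 is used as a freely available stand-in; the principle is the same.
    import torch
    from transformers import AutoTokenizer, AutoModelForCausalLM

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()  # weights are frozen at inference time: the network is static

    prompt = "The capital of France is"
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids

    with torch.no_grad():                    # no learning happens while it runs
        logits = model(input_ids).logits     # one forward pass through all layers
        # the last position of the final layer gives the next-token distribution
        next_token_probs = torch.softmax(logits[0, -1], dim=-1)

    top = torch.topk(next_token_probs, 5)
    for prob, token_id in zip(top.values, top.indices):
        print(f"{tokenizer.decode(token_id.item()):>10s}  {prob.item():.3f}")
    ```

    Generating a longer reply just repeats this pass, appending the chosen token to the input each time; nothing recurrent or self-observing is carried between passes.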