- cross-posted to:
- technology@lemmit.online
Last year marked the 40th anniversary of the almost-apocalypse.
On Sept. 26, 1983, Soviet Lt. Col. Stanislav Petrov received warning data indicating an inbound U.S. nuclear strike. Suspecting the information was false, he declined to report it to his superiors. His inaction prevented a Soviet retaliatory strike and the global nuclear exchange it would have precipitated. He thus saved billions of lives.
Today, the job of Petrov’s descendants is much harder, chiefly due to rapid advancements in artificial intelligence. Imagine a scenario where Petrov receives similar alarming news, but it is backed by hyper-realistic footage of missile launches and a slew of other audio-visual and text material portraying the details of the nuclear launch from the United States.
It is hard to imagine Petrov making the same decision. This is the world we live in today.
Recent advancements in AI are profoundly changing how we produce, distribute and consume information. AI-driven disinformation has affected political polarization, election integrity, hate speech, trust in science and financial scams. As half the world heads to the ballot box in 2024 and deepfakes target everyone from President Biden to Taylor Swift, the problem of misinformation is more urgent than ever before.
False information produced and spread by AI, however, does not just threaten our economy and politics. It presents a fundamental threat to national security.
Although a nuclear confrontation based on fake intelligence may seem unlikely, the stakes during crises are high and timelines are short, creating situations where fake data could well tilt the balance toward nuclear war.
The evolution of nuclear systems has led to further ambiguity in crises and shorter timeframes for verifying intelligence. An intercontinental ballistic missile from Russia could reach the U.S. within 25 minutes. A submarine-launched ballistic missile could arrive even sooner. Many modern missiles carry ambiguous payloads, making it unclear whether they are nuclear-tipped. AI tools for verifying the authenticity of content are not sufficiently reliable, making this ambiguity difficult to resolve in a short window.
The likeliest nuclear hotspots are also arenas where trust between the parties is low, whether the U.S. facing China or Russia, or India facing Pakistan. Even if communication is established in a narrow time frame, leaders will be forced to weigh compelling disinformation as evidence against an opposing government's calls to stand down.
Even if the U.S. can guard against such disinformation, there is no guarantee that other nuclear-armed states would have the technical capacity to do so. A single strike from another actor could precipitate global nuclear war, even if the U.S. had done its due diligence in rejecting the spurious intelligence.
The national security risks extend beyond nuclear exchange.
The concern among U.S. officials about Russia's continuing disinformation campaign about American military-biological labs in Ukraine stems not only from its potential to delegitimize the Ukrainian war effort but also from something more sinister. If Ukrainians started getting sick from a novel pathogen in Donetsk, and it began to spread across Europe, Putin's regime could leverage the last 18 months of propaganda to assign blame to the U.S., making the attribution of biological attacks, already a difficult task in conflict zones, that much harder.
The problem of fake information is also relevant at the level of response. As COVID-19 demonstrated, the proliferation of misinformation leads to a less effective public health response and many more infections and deaths. A future response could be significantly hampered by ordinary citizens' newfound ability to manufacture compelling false information about a pathogen's origins and remedies, mirroring the quality and style of a scientific journal.
In cybersecurity, spearphishing, the practice of deceiving a specific target with tailored false information or a false claim of authority, likewise proliferates with the emergence of advanced generative AI systems. State-of-the-art AI systems allow more actors with less technical expertise to craft believable narratives about their identities and requests, letting them extract information from unwitting victims.
The same tactics deployed for financial schemes have been used against personnel occupying important positions in government. Some of these efforts were successful. With ever-improving AI systems lowering the barriers to carrying out such attacks, they may become far more effective and frequent.
Clearly, AI-powered disinformation is a fundamental risk to safety and security. A central strategy to mitigate this threat must start at the source. The most powerful systems — produced by a handful of tech companies — must be scrutinized for such disinformation risks before they are developed and deployed. Systems presenting the potential for such harm must be prevented from release until safeguards are in place to eliminate these risks.
Such a strategy is not only necessary to protect our democracy and economy. It is crucial for the protection of our national security and the safety of all Americans.
And yet everyone knows that.
The real threat isn’t our credulity. It’s our cynicism.