GPS News  
NUKEWARS
By 2040, artificial intelligence could upend nuclear stability
by Staff Writers
San Francisco CA (SPX) Apr 25, 2018


A new RAND Corporation paper finds that artificial intelligence has the potential to upend the foundations of nuclear deterrence by the year 2040.

While AI-controlled doomsday machines are considered unlikely, the paper argues that the real hazard artificial intelligence poses to nuclear security lies in its potential to encourage humans to take apocalyptic risks.

During the Cold War, the condition of mutual assured destruction maintained an uneasy peace between the superpowers by ensuring that any attack would be met by a devastating retaliation. Mutual assured destruction thereby encouraged strategic stability by reducing the incentives for either country to take actions that might escalate into a nuclear war.

The new RAND publication says that in coming decades, artificial intelligence has the potential to erode the condition of mutual assured destruction and undermine strategic stability. Improved sensor technologies could introduce the possibility that retaliatory forces, such as submarine-launched and mobile land-based missiles, could be located, targeted and destroyed.

Nations may be tempted to pursue first-strike capabilities as a means of gaining bargaining leverage over their rivals even if they have no intention of carrying out an attack, researchers say. This undermines strategic stability because an adversary can never be certain those capabilities will not be used.

"The connection between nuclear war and artificial intelligence is not new; in fact, the two have an intertwined history," said Edward Geist, co-author on the paper and associate policy researcher at the RAND Corporation, a nonprofit, nonpartisan research organization. "Much of the early development of AI was done in support of military efforts or with military objectives in mind."

He said one example of such work was the Survivable Adaptive Planning Experiment in the 1980s that sought to use AI to translate reconnaissance data into nuclear targeting plans.

Under fortuitous circumstances, artificial intelligence also could enhance strategic stability by improving accuracy in intelligence collection and analysis, according to the paper. While AI might increase the vulnerability of second-strike forces, improved analytics for monitoring and interpreting adversary actions could reduce miscalculation or misinterpretation that could lead to unintended escalation.

Researchers say that, given future improvements, AI systems could eventually develop capabilities that, while fallible, would be less error-prone than their human alternatives and therefore be stabilizing in the long term.

"Some experts fear that an increased reliance on artificial intelligence can lead to new types of catastrophic mistakes," said Andrew Lohn, co-author on the paper and associate engineer at RAND. "There may be pressure to use AI before it is technologically mature, or it may be susceptible to adversarial subversion. Therefore, maintaining strategic stability in coming decades may prove extremely difficult and all nuclear powers must participate in the cultivation of institutions to help limit nuclear risk."

RAND researchers based their perspective on information collected during a series of workshops with experts in nuclear issues, government branches, AI research, AI policy and national security.

The perspective is part of a broader effort to envision critical security challenges in the world of 2040, considering the effects of political, technological, social, and demographic trends that will shape those security challenges in the coming decades.

Funding for the Security 2040 initiative was provided by gifts from RAND supporters and income from operations.

The research was conducted within the RAND Center for Global Risk and Security, which works across the RAND Corporation to develop multi-disciplinary research and policy analysis dealing with systemic risks to global security. The center draws on RAND's expertise to complement and expand RAND research in many fields, including security, economics, health, and technology.

The paper, "Will Artificial Intelligence Increase the Risk of Nuclear War?", is available from the RAND Corporation.


Related Links
RAND Corporation













