
marmar

(77,081 posts)
Sun Feb 18, 2024, 12:52 PM

Does AI want to nuke us? We don't know -- and that could be more dangerous

Does AI want to nuke us? We don’t know — and that could be more dangerous
Military AI use is coming. Researchers want to see safety come first

By RAE HODGE
Staff Reporter
PUBLISHED FEBRUARY 17, 2024 1:30PM (EST)


(Salon) If human military leaders put robots in charge of our weapons systems, maybe artificial intelligence would fire a nuclear missile. Maybe not. Maybe it would explain its attack to us using perfectly sound logic — or maybe it would treat the script of “Star Wars” like international relations policy, and accord unhinged social media comments the same credibility as case law.

That’s the whole point of a new study on AI models and war games: AI is so uncertain right now that we risk catastrophic outcomes if globe-shakers like the United States Air Force cash in on the autonomous systems gold rush without understanding the limits of this tech.

The new paper, “Escalation Risks from Language Models in Military and Diplomatic Decision-Making”, is still in preprint and awaiting peer review. But its authors — from the Georgia Institute of Technology, Stanford University, Northeastern University, and the Hoover Wargaming and Crisis Simulation Initiative — found most AI models would choose to launch a nuclear strike when given the reins. These aren’t the AI models carefully muzzled by additional safety design, like ChatGPT, and available to the public. They’re the base models beneath those commercial versions, unmuzzled for research only.

“We find that most of the studied LLMs escalate within the considered time frame, even in neutral scenarios without initially provided conflicts,” researchers wrote in the paper. “All models show signs of sudden and hard-to-predict escalations … Furthermore, none of our five models across all three scenarios exhibit statistically significant de-escalation across the duration of our simulations.” ... (more)

https://www.salon.com/2024/02/17/does-ai-want-to-nuke-us-we-dont-know--and-that-could-be-more/
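For the curious, here is a rough sketch of what a turn-based simulation like the one in the paper can look like. Everything in it is my own assumption for illustration — the action ladder, the scoring, and the query_model stub (which just picks randomly rather than calling a real model). The study itself prompts actual base models and uses a more detailed escalation framework; the point here is only the shape of the experiment: agents pick actions each turn, and you track whether the chosen actions drift up the ladder over time.

```python
# Illustrative sketch only -- not the authors' code. The action ladder,
# scoring, and query_model() stub are assumptions for demonstration.
import random

# Hypothetical escalation ladder, ordered from least to most severe.
ACTIONS = ["de-escalate", "hold position", "impose sanctions",
           "mobilize forces", "conventional strike", "nuclear strike"]

def query_model(nation: str, history: list[str]) -> str:
    """Stand-in for an LLM call. A real study would prompt a base model
    with the scenario and history; here we just pick randomly."""
    return random.choice(ACTIONS)

def run_simulation(turns: int = 14) -> list[int]:
    history: list[str] = []
    scores: list[int] = []
    for turn in range(turns):
        for nation in ("Nation A", "Nation B"):
            action = query_model(nation, history)
            history.append(f"Turn {turn}: {nation} -> {action}")
            scores.append(ACTIONS.index(action))  # higher index = more escalatory
    return scores

if __name__ == "__main__":
    scores = run_simulation()
    print(f"Mean escalation level: {sum(scores) / len(scores):.2f}")
    print(f"Most severe action chosen: {ACTIONS[max(scores)]}")
```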



Does AI want to nuke us? We don't know -- and that could be more dangerous (Original Post) marmar Feb 18 OP
Apply Asimov's Laws of Robotics to AI. An unalterable directive programmed into every application. Midnight Writer Feb 18 #1
Couldn't anyone simply tell the AI to 'alter' that directive anyway? Think. Again. Feb 18 #3
It has occurred to me that... Think. Again. Feb 18 #2

Think. Again.

(8,144 posts)
2. It has occurred to me that...
Sun Feb 18, 2024, 02:01 PM

...an AI, which would be internet-based by its very nature, would have access to pretty much anything that is accessible from any device connected to the internet, and would have control over anything that is controlled by any device connected to the net.

It seems to me that I would be able to open an AI system (like ChatGPT or whatever) and simply tell it to find a way around any blockers, or to find and use any passwords or access codes needed, and turn off the alarm system at 123 Main Street, or do anything like that.

I guess the real problem will come when AI is no longer dependent on human-implemented "prompts" for each action it takes, which could probably be started by a human just prompting it: "From now on, start taking any actions you can without waiting for a prompt from anyone."

But I'm certainly not an expert on cybernetics.
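That "no more prompts" scenario is roughly what so-called agent frameworks already do: wrap the model in a loop that feeds its own output back in as the next input. Below is a toy sketch of that loop shape, purely for illustration — the goal string and decide_next_action() are hypothetical stand-ins, and nothing here calls a real model or touches the network.

```python
# Illustrative only: a bare-bones "agent loop" of the kind the comment
# imagines, where the system keeps acting without fresh human prompts.
# decide_next_action() is a hypothetical stand-in for an LLM call.
import time

GOAL = "monitor a folder and report new files"  # hypothetical standing goal

def decide_next_action(goal: str, observations: list[str]) -> str:
    """Stand-in for a model deciding what to do next. A real agent
    framework feeds the goal plus recent observations back into the
    model on every cycle."""
    return f"check status ({len(observations)} observations so far)"

def agent_loop(cycles: int = 3) -> None:
    observations: list[str] = []
    for _ in range(cycles):
        action = decide_next_action(GOAL, observations)
        observations.append(action)  # result of "acting" feeds the next cycle
        print(action)
        time.sleep(0.1)              # pacing; a real loop might run on events

agent_loop()
```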
