If an autonomous machine kills someone, who is responsible?

parasearchers Donating Member (264 posts) Sat Aug-22-09 02:31 AM
Original message
If an autonomous machine kills someone, who is responsible?
Within a decade, we could be routinely interacting with machines that are truly autonomous – systems that can adapt, learn from their experience and make decisions for themselves. Free from fatigue and emotion, they would perform better than humans in tasks that are dull, dangerous or stressful.

Already, the systems we rely on in our daily lives are being given the capacity to operate autonomously. On the London Underground, Victoria line trains drive themselves between stations, with the human "driver" responsible only for spotting obstacles and closing the doors. Trains on the Copenhagen Metro run without any driver at all. While our cars can't yet drive themselves, more and more functions are being given over to the vehicle, from anti-lock brakes to cruise control. Automatic lighting and temperature control are commonplace in homes and offices.

The areas of human existence in which fully autonomous machines might be useful – and the potential benefits – are almost limitless. Within a decade, robotic surgeons may be able to perform operations much more reliably than any human. Smart homes could keep an eye on elderly people and allow them to be more independent. Self-driving cars could reduce congestion, improve fuel efficiency and minimise the number of road accidents.

But automation can create hazards as well as removing them. How reliable does a robot have to be before we trust it to do a human's job? What happens when something goes wrong? Can a machine be held responsible for its actions?


http://parasearcher.blogspot.com/2009/08/if-autonomous-machine-kills-someone-who.html
Ozymanithrax Donating Member (1000+ posts) Sat Aug-22-09 02:35 AM
Response to Original message
1. Until the machine is recognized as an individual under law...
the owner of the machine will be responsible.
 
create.peace Donating Member (1000+ posts) Sat Aug-22-09 02:49 AM
Response to Reply #1
2. and if a corporation owns it? n/t
 
Ozymanithrax Donating Member (1000+ posts) Sat Aug-22-09 03:04 AM
Response to Reply #2
3. Corporations must follow laws.
They have corporate officers that give orders and set policy.

If corporate officers can go to jail for swindling millions from workers and investors, and can be held responsible for poorly designed products, then they can go to jail for setting loose a killing machine.
 
baldguy Donating Member (1000+ posts) Sat Aug-22-09 06:34 AM
Response to Reply #2
6. The president of the corporation is responsible.
 
create.peace Donating Member (1000+ posts) Sat Aug-22-09 01:02 PM
Response to Reply #6
8. are you positive about that?
 
orleans Donating Member (1000+ posts) Sat Aug-22-09 03:34 AM
Response to Original message
4. is there going to be a quiz on this? n/t
 
parasearchers Donating Member (264 posts) Sat Aug-22-09 03:54 AM
Response to Reply #4
5. If a machine leaves a train station at forty miles an hour
then vaporizes New Jersey at 12 o'clock, how many AIs were needed to change the light bulb?
 
Meldread Donating Member (1000+ posts) Sat Aug-22-09 09:46 AM
Response to Original message
7. Its owner.
It's the same as if your dog attacks someone: you're responsible. It doesn't matter that the dog is an autonomous animal; as its owner, you're expected to have some measure of control over it.
 
Posteritatis Donating Member (1000+ posts) Sat Aug-22-09 01:15 PM
Response to Original message
9. I'd treat an autonomous robot the same as a pet or working animal in that regard
If the neighbor's dog attacks me, like the little pest tried to yesterday, the neighbors are responsible. If I'm attacked by an animal in a zoo, the zoo's owners are responsible.

Depending on the level of autonomy, there would probably be exceptions. If I undertook great effort to climb into a tiger cage, then I'd probably be asking for it; my city's animal bylaws don't consider an animal dangerous if it attacks a human in defense of its litter or in response to being attacked itself. (This is a wonderful thing, IMO; I approve of animal bylaws that also protect the animals.) If I attack a forestry or construction robot composed entirely of chainsaws without good reason, I probably deserve at least some of whatever I get.

If we're talking something that's developed enough to have actual self-awareness or independence - a machine autonomous enough to, say, be paying its own apartment rent or something - then it's responsible for its own actions. Of course, that's a little ways off yet.
 
parasearchers Donating Member (264 posts) Sat Aug-22-09 09:59 PM
Response to Reply #9
10. Do you think you would have
Something like Turing agents (as in William Gibson's Neuromancer) that would police the behaviour of high-level, self-aware AIs?
 
Posteritatis Donating Member (1000+ posts) Sun Aug-23-09 12:18 AM
Response to Reply #10
11. Possibly, though then we'd be off in a different enough world that I could only guess
If we're talking human or near-human level intelligence and reasoning ability, there's no reason they wouldn't be subject to the same laws as, say, I am, with the same enforcement mechanisms (police, the legal system, social pressures, etc.). In practice it'd obviously be a little more nuanced than that, and I'd assume (or hope!) there'd be some training involved for what might be very different thought processes.

It'd also depend a lot on what we'd allow them to do; a sapient AI with the right to own itself, move freely, hold property and vote would be very different (and to me, less troubling) than one which could keep up an abstract conversation with me but was required by law to have a human owner. The former I wouldn't mind seeing dealt with by traditional, if specially trained, police; the latter I wouldn't mind seeing its rights agitated for via the courts.

If they were below human-level intelligence, it would probably be people whose job was policing us, to make sure we were handling, training and using our charges properly. I can see an organization calling itself the ASPCAI showing up.
 
parasearchers Donating Member (264 posts) Sun Aug-23-09 12:58 AM
Response to Reply #11
12. then the real question is
would AIs be outfitted with something approximating the Three Laws of Robotics? Or has that already gone by the wayside with the first-generation combat robots (albeit primitive as they are) that already exist?
 
Posteritatis Donating Member (1000+ posts) Sun Aug-23-09 01:53 PM
Response to Reply #12
13. Probably both and other options
The Three Laws themselves could actually be exceedingly dangerous; there's been some fiction exploring what happens if robots interpret the First Law too literally and decide we all have to be locked up in ergonomic padded containers or something. If something got autonomous enough, there'd probably be some safeguards, at least until it was developed enough to grok law.
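
To make that "too literal" failure mode concrete, here's a minimal hypothetical sketch in Python (the Action fields, risk numbers and option names are all invented for illustration, not any real safety system). A checker that reads the First Law with zero tolerance for estimated harm rejects every ordinary action and permits only the most restrictive one:

# Hypothetical sketch of a naive, literal-minded First Law evaluator.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harm_if_done: float      # estimated probability the action harms a human
    harm_if_not_done: float  # estimated probability humans come to harm if we don't act

def permitted(action: Action) -> bool:
    # "A robot may not injure a human being..."
    if action.harm_if_done > 0.0:
        return False
    # "...or, through inaction, allow a human being to come to harm."
    if action.harm_if_not_done > 0.0:
        return False
    return True

# Real-world options always carry some nonzero estimated risk, so a
# zero-tolerance reading leaves only the maximally restrictive choice.
options = [
    Action("drive passenger to work", harm_if_done=0.0001, harm_if_not_done=0.0),
    Action("do nothing", harm_if_done=0.0, harm_if_not_done=0.00005),
    Action("confine all humans in padded containers", harm_if_done=0.0, harm_if_not_done=0.0),
]
print([a.name for a in options if permitted(a)])
# -> ['confine all humans in padded containers']

(The robot's own harm estimate for that last option is the punchline: under a literal reading, it scores confinement as perfectly safe.)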

I don't think something like that could go by the wayside just based on current precedent - if the US military is designing robots in one direction there's no reason that, say, the Polish military, or Japanese civilians, or anyone else couldn't come up with some that have different codes of behavior.

Generally speaking I think any intelligent-robot-inhabited future is going to be a lot more complex and nuanced than discussions of the idea usually expect, and they'd get more complex and more nuanced the more intelligent the machines got, especially if they did so in different ways than we do.
 
slackmaster Donating Member (1000+ posts) Sun Aug-23-09 02:27 PM
Response to Original message
14. The last human who had the power to stop it
And didn't.
 