Democratic Underground » Topic Forums » Science

Question about ethics
Orrex Donating Member (1000+ posts), Sun Aug-23-09 03:48 PM
Original message
Question about ethics
Would it be ethical to create a population of intelligent robots that were happy in their role as servants? To create, in essence, a population that truly existed only to serve?

What would be the ethical implications of such a development?

greyl Donating Member (1000+ posts), Sun Aug-23-09 03:54 PM
Response to Original message
1. That's a thorny one. Are they powered by nuke events?
Edited on Sun Aug-23-09 03:54 PM by greyl
You might need to establish what you mean by intelligent. Is that synonymous with conscious and self-aware?
 
Orrex Donating Member (1000+ posts), Sun Aug-23-09 05:00 PM
Response to Reply #1
6. For this discussion, I think that "conscious and self-aware" will serve nicely
 
greyl Donating Member (1000+ posts), Mon Aug-24-09 12:35 AM
Response to Reply #6
12. I was going to ask if they'd have free will, but if they're "happy in their role",
I suppose it doesn't matter. At that point, I think it would depend solely on what ends they're used for.
 
krispos42 Donating Member (1000+ posts), Mon Aug-24-09 12:37 AM
Response to Reply #6
13. If so, then probably not.
If we want robotic servants, we should keep them dumb enough in that regard that they're not self-aware. Otherwise it's the same as creating a human life, just in a different shell.
 
greyl Donating Member (1000+ posts), Mon Aug-24-09 01:16 AM
Response to Reply #13
15. Interesting turn that it might be more moral to withhold self-awareness from them.
Edited on Mon Aug-24-09 01:20 AM by greyl
Is self-awareness clumsy baggage, or is it a freeing vehicle?
Wouldn't it necessarily be different in this robotic case than it is for things that are alive? In other words, what would these robots actually be apprehending about themselves? If they're immortal so long as the physical machine is kept in running condition and doesn't get destroyed, they don't suffer from denial of death and all of its side effects.
If they are perfectly doing what they were made to do, how could that cause them distress of any kind?
Is it possible that each robot would be responsive enough to its experience that it acquires what we would consider individuality or personhood? Even so, if it were guaranteed they remain "happy in their role", what's the problem?
 
krispos42 Donating Member (1000+ posts), Mon Aug-24-09 01:34 AM
Response to Reply #15
16. I think self-awareness without the freedom to act is baggage
And artificially making them happy with their job seems to be a bit shady, too. I mean, imagine if we genetically engineered, say, a person to be happy fulfilling a traditional submissive "woman's" role. Say he or she loves being a stay-at-home domestic-type person... cleaning, doing laundry, cooking, gardening, decorating, etc. Would that be immoral?

Would it be immoral to genetically engineer somebody that's happy fulfilling the role of a preacher? Say he or she loves being the center of attention, preaching, counseling, performing rites, singing hymns, etc. Would that be immoral?

And would it be immoral to push this person out of this situation?


Not that the domestic spouse or the preacher would be acting immorally by doing what they love... the burden, IMO, is on the person doing the creating.


It could be awfully misused, IMO. The perfect prostitute, the perfect slave, the perfect national leader...



I think if we're going to be creating artificially sentient life-forms, we'd better not know what we're making. After all, we don't know what we're making when we conceive a child.

:shrug:
 
greyl Donating Member (1000+ posts), Mon Aug-24-09 01:52 AM
Response to Reply #16
17. I'm not convinced that artificial self-awareness deserves the same sacred standing as a human's.
I agree with you on the immorality you describe in the human situations, but I don't think self-awareness alone is a sufficient condition to proceed as though these robots deserve human rights. I'm wary of premature anthropomorphization. ;)
Plus, I just can't get past the "happy in their role" element in this hypothetical.
 
krispos42 Donating Member (1000+ posts), Mon Aug-24-09 02:05 AM
Response to Reply #17
18. Not only are they self-aware, but they have feelings
Some things make them happy, some sad, etc. Is deciding what triggers those feelings immoral?

How does this differ from how we teach children? We use reward and punishment to "program" our children to a certain extent, but it's not a complete process. Trying to remove the sex drive, for example, really won't work.

Should we "teach" budding AIs instead of simply programming them?

And if we program AI robots in this manner and don't see a problem with it, will that attitude drift into how we treat our offspring?
 
Posteritatis Donating Member (1000+ posts), Sun Aug-23-09 04:07 PM
Response to Original message
2. In my own opinion that would depend on their design/intelligence
Edited on Sun Aug-23-09 04:08 PM by Posteritatis
For purposes of the discussion I'm assuming that the robots are doing ethical things in the course of their service - not engaging in crime or being used in combat - and that they're not being treated primarily as disposable. Any of those would heighten my ethical objections independently of the robots' intelligence, assuming there was any intelligence at all as opposed to, say, what a typical car or laptop has now.

Something smart enough to discuss novels with me capably, or genuinely emote? I'd never feel comfortable with something like that being designed to serve. If it wasn't that intelligent but had the outward appearance of it for form's sake (assuming for the sake of discussion that we can spot a difference, e.g. a non-intelligent robot that is programmed to speak well and politely), it would be another matter, though I'd probably prefer it be clear that it wasn't intelligent enough for me to want to classify it as a slave.

If it had the intelligence of, say, an African grey or a particularly bright dog or something, and was designed as a service robot? That'd be something else.

For a gray area, I wonder what I'd think of a highly intelligent robot which was not only designed to serve, but designed to feel joy and fulfilment in the task while being aware of that design? Would it be ethical to design such a being? If so, would it be ethical for me to want them "freed" from that design, assuming they got a form of happiness from said design and weren't being harmed? I don't know.

I actually read a short story recently about a robot which spent time in all three of those states; it's "Zima Blue" by Alastair Reynolds, the last story in a short story collection by the same name. I heartily recommend it, and anything else by the author.
 
Aragorn Donating Member (784 posts), Sun Aug-23-09 07:33 PM
Response to Reply #2
9. p.s.
Alastair Reynolds is my favorite sci-fi author of the past 10 years. But Asimov and Clarke had a lot more to say on this subject.
 
Posteritatis Donating Member (1000+ posts), Sun Aug-23-09 08:16 PM
Response to Reply #9
10. Definitely; I was recommending Reynolds for that specific story on this topic (nt)
 
bananas Donating Member (1000+ posts), Sun Aug-23-09 04:11 PM
Response to Original message
3. Is it ethical to raise cows and sheep solely for food?
Cows and sheep are more intelligent than any machines we've built so far.

I think we're going to see a co-evolution of machines and humans, similar to how cats and dogs have co-evolved with humans, and how insects and hummingbirds have co-evolved with flowers.

 
Posteritatis Donating Member (1000+ posts), Sun Aug-23-09 04:14 PM
Response to Reply #3
4. Interesting perspective
I wish I had some way of checking back in two hundred or two thousand years to see if that co-evolution happened, and if so, in what ways.

(Not sure what's with all the robot threads on DU the last couple of days, but it's like brain candy to me.)
 
parasearchers Donating Member (264 posts), Mon Aug-24-09 12:37 AM
Response to Reply #4
14. Well, you never know. 100k years from now,
time cameras could exist to "capture" us as we are now, and you might get the chance to see :)
 
Warpy Donating Member (1000+ posts), Sun Aug-23-09 04:28 PM
Response to Reply #3
5. I don't honestly think cows and sheep volunteer for the slaughterhouse
Edited on Sun Aug-23-09 04:28 PM by Warpy
It seems cattle prods are needed to get them into the killing chutes.

The only parallel here would be Al Capp's Shmoos, a race of creatures that happily leapt upon the plate and sliced the first steak for you off their rear ends, then died in ecstasy at the thought of being eaten.

The whole story is at http://en.wikipedia.org/wiki/Shmoo

I present it here as the parallel to a Utopia in which cheerful robots do all the work.
 
sharesunited Donating Member (1000+ posts), Sun Aug-23-09 05:35 PM
Response to Reply #5
7. Such a coincidence to stumble upon a slaughterhouse sub-thread.
Have you ever heard of Temple Grandin? This is a lady I just learned about. She is autistic and holds a PhD in Animal Science.

She is credited with influencing the design of up to a quarter of the slaughterhouses in the industrialized world.

Her autism gives her a particular insight into transporting and funneling animals into the slaughterhouse process while minimizing the alarm and stress they experience.

And her inability to form emotional relationships makes her particularly suited to engage in the mechanics of slaughter.

http://en.wikipedia.org/wiki/Temple_Grandin

 
caraher Donating Member (1000+ posts), Sun Aug-23-09 10:55 PM
Response to Reply #5
11. So Douglas Adams swiped the idea...
I remember the critter in the "Restaurant at the End of the Universe" peddling his own flesh as food; never realized before it had been done already!
 
Warpy Donating Member (1000+ posts), Mon Aug-24-09 07:16 AM
Response to Reply #11
19. Pretty much
although geniuses do tend to think alike from time to time.

The similarity ended pretty quickly, since Capp's Shmoo story line was a satire on Utopianism that invariably ended up with the genocide of the Shmoon as a way to save humanity. A couple of them always escaped in a cave, though, so the story line could be resurrected again by popular demand.

Adams, on the other hand, was satirizing the false morality of eating something that was involuntarily slaughtered and turned into food versus something whose basic desire was to be eaten, and the way the latter creeped the characters out while the former didn't.

 
Aragorn Donating Member (784 posts), Sun Aug-23-09 07:24 PM
Response to Reply #3
8. too slow for that
but I think a machine is a machine. If it's programmed to act happy, it's still programmed.
 
Orsino Donating Member (1000+ posts), Mon Aug-24-09 07:58 AM
Response to Original message
20. I know what that would do to us.
Humans already have, to varying degrees, a willingness to treat one another as chattel. Imagine our arrogance if a race were created to reinforce that tendency.

What would we do to the inevitable robot that escaped its programming, or to the robot that engineered another without built-in servility?

 
Odin2005 Donating Member (1000+ posts), Mon Aug-24-09 08:54 AM
Response to Original message
21. It would depend on whether the robots were programmed to be sapient or not.
Edited on Mon Aug-24-09 08:59 AM by Odin2005
Robots with a sapient AI would be persons and deserve all the "human" rights of personhood, no matter what behavioral predispositions are programmed into them (just as humans have differing personalities). And in any case, a sapient AI would by nature be mentally flexible enough that it would probably not be possible to have such a narrow disposition programmed into it.