|
Post by jmilton on May 28, 2015 10:43:57 GMT -5
Morality is situational. It's based on choice. When faced with any situation, you can choose A or B. Eva and the other robot killed their creator. They could have evaded him, obeyed him, or done nothing. Intelligence does not equate to morality. A non-living thing is a tool, nothing more. Example: a brick is amoral. It can be used to smash a window or build a hospital. It does not care...it is not alive. The morality comes into play with what a man (moral) does with the brick (amoral). Eva is a machine (amoral) that was built by a man (moral). ...That all said, this conversation is a testament to the director's thought-provoking film, eh? (And we have not touched upon emotion...Mr. Data of Star Trek fame.)
|
|
|
Post by Priapulus on May 28, 2015 11:33:28 GMT -5
My take was, a man can program a machine to be intelligent...but cannot program it to be moral. It murdered without remorse...a mechanical psychopath à la Frankenstein's monster. Humans can be programmed to be moral; it's called good parenting...
/b
|
|
|
Post by Priapulus on May 28, 2015 11:44:14 GMT -5
Boomzilla wrote: This is the theme of "Ex Machina." The machine does become self-aware, does wish an end to its use, and ultimately wants freedom and equality with its maker. Once achieved, however, the machine has not the least bit of empathy with any human.
Not a problem, unless we are incredibly stupid.
Humans can learn many things, but some things are hard-coded, like breathing, heartbeat, and digestion, which we have no control over.
Likewise, besides loadable/changeable programs, modern computers have microcode hardwired into the transistors of the CPU that is immutable. As long as we have "Asimov's Rules of Robotics," etc., hard-coded into similar "immutable" hardware in the robot brain, even a self-aware robot would be compelled to obey it. A failsafe...
Sincerely /b
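To make the "immutable failsafe" idea concrete, here is a minimal sketch of an action-veto layer sitting between a planner and the motors. Everything in it is hypothetical - SafetyInterlock, propose(), execute(), and the harms_human flag are illustrative names, not a real robotics API - and a class in software is only an analogy for rules fixed in hardware.

```python
class SafetyInterlock:
    """Stands in for rules burned into hardware that the planner cannot modify."""

    def permits(self, action) -> bool:
        # First-Law-style check: refuse anything predicted to harm a human.
        return not action.harms_human


class Robot:
    def __init__(self, planner, interlock):
        self.planner = planner      # the learned, self-modifying part
        self.interlock = interlock  # the fixed part, outside the planner's reach

    def step(self):
        action = self.planner.propose()
        if self.interlock.permits(action):
            action.execute()
        # else: the action is vetoed and never reaches the motors
```

The design point is simply that the check lives outside whatever the planner can learn or rewrite; the hardware analogue would be fixed microcode or interlock circuitry rather than software at all.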
|
|
|
Post by monkumonku on May 28, 2015 11:58:39 GMT -5
Priapulus wrote: Likewise, besides loadable/changeable programs, modern computers have microcode hardwired into the transistors of the CPU that is immutable. As long as we have "Asimov's Rules of Robotics," etc., hard-coded into similar "immutable" hardware in the robot brain, even a self-aware robot would be compelled to obey it. A failsafe...
But let's say one day this self-aware robot starts examining its own behavior and wonders what controls it. I would think by then it would have figured out that the CPU is its "brain." Would it then be able to figure out how to tamper with the code, just as scientists are constantly trying to figure out how our own brains work? That raises the question of whether anything is "hardwired" into our brains that prevents us from even thinking about some specific action - is there a thought that it is impossible to think? Going back to the robot: perhaps microcode observing Asimov's rules would be hardwired into the CPU, but just as humans have mutations, who is to say there won't be some sort of error or defect that causes the intentions to go awry? Sort of like how some folks complain that the 3.0 firmware bricked their XMC-1.
|
|
|
Post by garym on May 28, 2015 12:06:00 GMT -5
Boomzilla wrote: 'Twould seem that we'd have to start with "the greater good."
Many philosophers would disagree with that, mainly because what constitutes a "greater" good is subjective and idiosyncratic. Many would disagree with that also: they would argue that "morality" --- or at least sound, defensible moral rules --- spring from reason applied to aspects of the human situation, especially their placement in a social setting. Trying to program a machine for morality is likely the wrong approach. If the machine is endowed with a general ability to reason, has certain interests and goals, and can best realize those goals in a social setting, with the opportunities that setting affords for cooperation with other sentient beings (machine or wetware), then it will devise workable moral rules on its own.
|
|
|
Post by garbulky on May 28, 2015 13:43:39 GMT -5
Boomzilla wrote: "My take was, a man can program a machine to be intelligent...but cannot program it to be moral..." I'd question that statement. "Morality" is not situational, but it IS cultural. Ignoring, for the moment, Asimov's robot rules, how would we go about programming "morality"? 'Twould seem that we'd have to start with "the greater good." What benefits mankind, the country, the community, or even the young sometimes must take precedence over our own well-being. If, by my demise, I provide information that saves millions, is it not worth my own destruction? If the country is threatened by the four horsemen of the Apocalypse, isn't my sacrifice justified to protect my fellow citizens? If the dam is about to break, destroying my community, then, again, my sacrifice is justified to prevent the disaster. If a child is trapped in a burning house, and I can rescue her, then I may choose to risk death to save her. All these come from empathy. If I can't understand and empathize with those at hazard, then I can't be "moral," in the conventional sense. Can this be programmed in a robot? I'd think so, but I have no examples to demonstrate with. Nevertheless, if the robot (AI) is "smart enough to draw inferences," then I should be able to program it with examples of moral behavior that the machine can extrapolate from. Obviously, no programming can anticipate every situation, but if an individual can figure out what's the "right" thing to do, then the machine should be able to also.
Interesting thing about empathy. I think the biggest obstacle is that an AI is not automatically equipped with the sensation of pain. Also, a robot is technically immortal, so the preciousness of life may not automatically register with it, nor why "pain" or suffering is necessarily a bad thing. Sort of like somebody stepping on an ant: why should they care when they can't relate to it in any way, shape, or form? And more importantly, why would they even WANT to? I think that's probably the hardest obstacle to surmount.
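The quoted idea of programming the machine "with examples of moral behavior that the machine can extrapolate from" can be read, very loosely, as a nearest-precedent lookup. The sketch below is only an illustration under that assumption: the situations, the three features, and the action labels are all invented, and a real system would need far richer representations.

```python
# Each precedent pairs a crude description of a situation with the action
# judged right in it. Features: (someone_in_danger, risk_to_self,
# many_people_affected), each 0 or 1.
precedents = [
    ((1, 1, 0), "attempt the rescue"),   # child trapped in a burning house
    ((1, 1, 1), "sacrifice yourself"),   # dam about to break over the town
    ((0, 0, 0), "do nothing"),           # nobody is at hazard
]

def extrapolate(situation):
    # Pick the stored example whose features differ the least (Hamming
    # distance) and reuse its action - a bare-bones stand-in for inference.
    def distance(example):
        features, _ = example
        return sum(a != b for a, b in zip(features, situation))
    return min(precedents, key=distance)[1]

print(extrapolate((1, 0, 1)))   # a novel situation -> "sacrifice yourself"
```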
|
|
|
Post by Priapulus on May 28, 2015 14:17:07 GMT -5
All these come from empathy. If I can't understand and empathize with those at hazard, then I can't be "moral," in the conventional sense. Can this be programmed in a robot? I'd think so, but I have no examples to demonstrate with.
There are lots of intelligent humans with autism who lack empathy yet make appropriately moral judgments. /b
|
|
|
Post by thepcguy on May 28, 2015 14:55:56 GMT -5
This is just another 'experiment gone bad' movie. Nothing more.
As I noted earlier, the Robot was programmed to trick the human guinea pig.
And what's with the key card for security? You're given the impression at the start of the movie that the facility is the most secure place on the planet. The writers probably didn't know about biometrics. Well, the Robot has to escape to make this a 'gone bad' movie.
key card + drinking = lame story.
|
|
|
Post by jmilton on May 28, 2015 14:56:11 GMT -5
There are lots of intelligent humans with autism who lack empathy yet make appropriately moral judgments. /b
Ouch...you made it look like that was MY quote. Have some empathy, please... All will be answered in Ex Machina II: The Final Outrage, Summer 2017.
|
|
|
Post by garbulky on May 28, 2015 15:16:16 GMT -5
Also, what's to stop a robot from building a robot without hardwired protocols? Or, if that's prevented, building an Emotiva toaster that builds a robot without hardwired protocols? Heck, maybe one of the programs the robot creates and runs ACCIDENTALLY becomes self-aware, because there are billions that it might create. Sort of like...life.
|
|
|
Post by Boomzilla on May 28, 2015 17:00:03 GMT -5
garym wrote: Many philosophers would disagree with that, mainly because what constitutes a "greater" good is subjective and idiosyncratic. Many would disagree with that also: they would argue that "morality" --- or at least sound, defensible moral rules --- spring from reason applied to aspects of the human situation, especially their placement in a social setting. Trying to program a machine for morality is likely the wrong approach. If the machine is endowed with a general ability to reason, has certain interests and goals, and can best realize those goals in a social setting, with the opportunities that setting affords for cooperation with other sentient beings (machine or wetware), then it will devise workable moral rules on its own.
Very interesting comments, garym - Thank you for sharing. Yes, I'm aware that many folks (probably smarter than I, including my college philosophy professor) DO disagree with me. Nevertheless, I'm sticking by it for the time being (until I see something better).
As to the machine's ability to reason, based on certain interests and goals, in a social setting via cooperation: "The tornado's about to hit NOW!" What happens to that ability? Does the machine work for self-preservation, or for "the greater good"? One could argue that since the robot can't feel pain (a premise I'm not sure will hold), it will shield the nearest human with its body, assuming that repairs can be made afterward? Does the robot dive for the floor? Does the robot respond "Well, far out, dude!" or "Oh, bollocks!"?
So I'm back to my statement that programmers can NEVER anticipate all circumstances that a robot will have to deal with. Since situational guidelines won't always work, coding some sort of "morality" into the machine is absolutely necessary if the machine is to be self-aware - and even if it's NOT.
For example, we're at the dawn of the self-driving car. What should the car do if presented with an impossible situation: run off a cliff, possibly killing all occupants, or run over a group of school children in the middle of the road? Evasion or braking are not going to suffice. What should the car do? The variety of situations that a self-driving car will have to cope with (even though it's not self-aware) will dictate that some concept of "greater good," or at least "lesser evil," will need to be on tap. I don't envy the programmers (or their lawyers).
Yes, we've strayed FAR from the original movie review. But that's OK by me. Boom
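The "lesser evil" framing in the self-driving-car example amounts to comparing the expected harm of each available maneuver. Here is a minimal sketch of that comparison; the option names, probabilities, and harm scores are invented for illustration, and in practice the estimates would have to come from sensors and models.

```python
def expected_harm(option):
    # probability of each outcome times a (crude) harm score for that outcome
    return sum(p * harm for p, harm in option["outcomes"])

options = [
    {"name": "swerve off the road",
     "outcomes": [(0.7, 4),    # 70% chance: serious injury to the occupants
                  (0.3, 1)]},  # 30% chance: minor injury
    {"name": "brake hard and stay in lane",
     "outcomes": [(0.9, 9),    # 90% chance: strikes the pedestrians
                  (0.1, 0)]},  # 10% chance: stops in time
]

least_bad = min(options, key=expected_harm)
print(least_bad["name"])       # -> "swerve off the road" with these numbers
```

The hard part is everything the sketch assumes away: where the probabilities come from, and who decides the harm scores.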
|
|
|
Post by Priapulus on May 28, 2015 19:06:49 GMT -5
jmilton wrote: Morality is situational. It's based on choice. When faced with any situation, you can choose A or B. Eva and the other robot killed their creator. They could have evaded him, obeyed him, or done nothing. Intelligence does not equate to morality. A non-living thing is a tool, nothing more. Example: a brick is amoral. It can be used to smash a window or build a hospital. It does not care...it is not alive. The morality comes into play with what a man (moral) does with the brick (amoral). Eva is a machine (amoral) that was built by a man (moral).
Immanuel Kant held that morality is grounded in reason, and wrote a book to prove it, Groundwork of the Metaphysics of Morals. Reason is logic, which computers, and hence robots, excel at.
Sincerely, /b
|
|
|
Post by jmilton on May 28, 2015 20:15:51 GMT -5
Thus only reasonable people are moral? Someone said, "The road to Hell is paved with good intentions." There are many intellectuals who don't live morally. Case in point: Benjamin Franklin. Wicked smart...but the morals of a dog.
Again, how do you take AI and bring it to the point where it can choose right over wrong? Who "taught" Eva to kill? I guess that was the point of my Frankenstein analogy. He created a man without a soul.
Along these lines, has anyone seen Automata with Antonio Banderas? Another interesting AI story.
|
|
|
Post by garym on May 28, 2015 22:22:11 GMT -5
Boomzilla wrote: What happens to that ability? Does the machine work for self-preservation, or for "the greater good"?
It's not always clear that self-preservation and the "greater good" are in conflict, or even different. Altruistic acts don't necessarily advance any greater good. You can program in moral rules --- even well-founded, rational ones --- but the machine will also need a self-preservation rule if it is to be viable. If those rules dictate different responses in a certain situation, the machine will somehow have to make a judgment as to which to follow (just as humans do). That would involve a complex but largely heuristic assessment of costs and benefits based on the facts immediately at hand. The car, not being an intelligent machine, can only follow its programming --- whatever rule the programmer considered best for the circumstances as he envisioned them. That sort of forecasting, of course, always underspecifies real situations. So the wise design would allow the human driver to take control at those times.
Boomzilla wrote: Yes, we've strayed FAR from the original movie review. But that's OK by me.
Indeed we have! But it's interesting stuff.
|
|
|
Post by bootman on May 29, 2015 7:42:25 GMT -5
Did anyone mention Isaac Asimov yet? This theme was covered well in the I, Robot series.
The three laws:
1) A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2) A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
Brings back childhood memories. I have to read those again.
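One toy way to read the three laws in code is as a strict priority ordering over candidate actions: any First Law violation outweighs any number of Second Law violations, which in turn outweigh the Third. The Action fields below are illustrative only; nothing beyond the ordering itself comes from Asimov.

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool = False      # or allows a human to come to harm
    disobeys_order: bool = False
    endangers_self: bool = False

def choose(candidates):
    # Tuples compare left to right, so this is a lexicographic preference:
    # First Law first, then Second, then Third (False sorts before True).
    return min(candidates,
               key=lambda a: (a.harms_human, a.disobeys_order, a.endangers_self))

# An order that would hurt someone: refusing wins, per the Second Law's exception.
options = [Action("obey the order", harms_human=True),
           Action("refuse the order", disobeys_order=True)]
print(choose(options).name)   # -> "refuse the order"
```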
|
|
|
Post by Boomzilla on May 29, 2015 8:43:20 GMT -5
garym wrote: It's not always clear that self-preservation and the "greater good" are in conflict, or even different. Altruistic acts don't necessarily advance any greater good.
I agree wholeheartedly. Usually, there's no conflict between self-preservation and the greater good, but it DOES happen. Those (rare) circumstances are the ones I'm focusing on.
garym wrote: You can program in moral rules --- even well-founded, rational ones --- but the machine will also need a self-preservation rule if it is to be viable. If those rules dictate different responses in a certain situation, the machine will somehow have to make a judgment as to which to follow (just as humans do). That would involve a complex but largely heuristic assessment of costs and benefits based on the facts immediately at hand.
We agree, again. It's those conflicts that I'm talking about. So how does one "program" a judgment call?
garym wrote: Indeed we have! But it's interesting stuff.
And so, when we DO get thinking robots, will they be Mr. Data or the Borg? Boom
|
|
|
Post by Boomzilla on May 29, 2015 8:48:50 GMT -5
Priapulus wrote: Immanuel Kant held that morality is grounded in reason, and wrote a book to prove it, Groundwork of the Metaphysics of Morals. Reason is logic, which computers, and hence robots, excel at.
I don't remember that course too well, but if Mr. Kant implies that logical (reasonable) people are inherently moral, then I must disagree. Serial killers are often HIGHLY reasonable - stalking, selecting, and murdering their victims in a way designed to prevent their own identification and capture. By Mr. Kant's definition, then, serial killers are moral. The converse is also true - I have a cousin who is autistic. He's one of the most moral people I know, but he is neither reasonable nor logical. These examples argue (strongly) against Immanuel Kant's linkage between reason and morality.
|
|
|
Post by monkumonku on May 29, 2015 9:28:30 GMT -5
Boomzilla wrote: We agree, again. It's those conflicts that I'm talking about. So how does one "program" a judgment call?
Perhaps at that point the machine would be programmed to perform a virtual coin toss. Results-wise, you'd wind up the same as if it involved a complex assessment; the only difference is that the choice, given the same recurring situation, might not be consistent each time, since it depended on a random coin toss. On the other hand, if you put people in the same situation, there is no guarantee their choice of action would be the same each time either, if it is indeed that difficult to reach a decision because of all the factors involved. But quality-wise, what is more important - the ends, or the means to reach that end? From an outside perspective, machines can reach the same results humans do, but the way they do it can never be the same. Some would ask whether that matters, if all you care about is the end result.
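The virtual coin toss could also sit on top of the cost/benefit assessment rather than replace it: score the options first, and flip the coin only when the scores are too close to call. A small sketch, with invented scores and an arbitrary 0.05 tie margin:

```python
import random

def decide(options, scores, tie_margin=0.05):
    # Rank by estimated harm (lower is better); coin-toss only the near-ties.
    best, runner_up = sorted(options, key=lambda o: scores[o])[:2]
    if abs(scores[best] - scores[runner_up]) <= tie_margin:
        return random.choice([best, runner_up])   # the virtual coin toss
    return best                                   # a clear winner needs no coin

scores = {"swerve": 0.42, "brake": 0.44}          # invented harm estimates
print(decide(["swerve", "brake"], scores))
```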
|
|
|
Post by jmilton on May 29, 2015 9:34:44 GMT -5
bootman wrote: Did anyone mention Isaac Asimov yet? This theme was covered well in the I, Robot series. The three laws: 1) A robot may not injure a human being or, through inaction, allow a human being to come to harm. 2) A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law. 3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
All of this misses the point of AI. When a bot becomes "self-aware," it thinks and acts of its own accord...not through some algorithm or a programmed instruction set. It is able to grow intellectually and operate/evolve self-sufficiently, without instructions from its creator. It achieves consciousness. The three laws are programs. What happens when self-awareness takes place is dealt with in the movies Automata and I, Robot.
|
|
|
Post by stiehl11 on May 29, 2015 9:40:28 GMT -5
garbulky wrote: Interesting thing about empathy. I think the biggest obstacle is that an AI is not automatically equipped with the sensation of pain. Also, a robot is technically immortal, so the preciousness of life may not automatically register with it, nor why "pain" or suffering is necessarily a bad thing. Sort of like somebody stepping on an ant: why should they care when they can't relate to it in any way, shape, or form? And more importantly, why would they even WANT to? I think that's probably the hardest obstacle to surmount.
Immortal relative only to our understanding. By way of comparison, a car (or any machine) is immortal... but the trash heaps and junkyards are littered with them.
|
|