Why Asimov's Laws of Robotics Don't Work
Comments
17 comments posted so far.
64
2. Sizzlik (admin) commented 8 years ago
#1 You just shot yourself in the leg and answered your own complaint. And it seems you have no idea of programming, or of what is said in the video.
A real AI is still programmed, which means you need to define things. How do you define "human"? Every answer to that can be sliced up to raise more questions. The problem in AI is not the programming at all; it's the definition of variables and the analysis of the data. DNA samples? Then what, poke every object with a needle to get a sample and check it for human DNA? Face recognition? What about hats, beards, glasses? All of it needs to be defined. Body recognition? What about a person born without arms and legs? How does it know what a wheelchair is?
The same goes for harm: what is the definition of harm? Pain? CPR can be painful; a needle injection can hurt.
We are still talking about a programmed machine that uses sensors, and we tell it in the programming what to do and what not to do.
Listen to what he says again and comprehend it... and the boy is a young know-it-all?
http://ima.ac.uk/home/miles/
He has some record in the field he talks about. What do you have to offer on that front?
Forgot the Intelligence part in AI... well, that's what the video is partly about. We can't program an AI that simply follows Asimov's rules; we may make it self-thinking, but do you want that? You program it, and from its perspective and the rules you gave it, it decides ISIS is doing right. You can't program "intelligence"; it's the sum of experience. You can't predict a situation you have never been in, so how should an AI handle an unknown situation if you don't program experience into it beforehand? And even if you do program experience into it, it's still your experience, not the AI's.
Bottom line: if we are able to program an AI, that AI will still need more than a human lifetime to become what we call AI today, and we never know whether the outcome will be what we dream of.
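The definitional problem described above can be sketched in a few lines of Python. This is a toy illustration (the function, fields, and checks are all hypothetical), showing how a hardcoded "is this a human?" test fragments into special cases:

```python
# Toy sketch: a rule-based "human" detector built from fixed definitions.
def looks_human_naive(subject):
    # Naive rule set: two arms, two legs, and a detected face.
    return (subject.get("arms") == 2
            and subject.get("legs") == 2
            and subject.get("face_detected", False))

# A person born without arms and legs is misclassified (false negative):
person = {"arms": 0, "legs": 0, "face_detected": True}
print(looks_human_naive(person))     # False

# A mannequin with a printed face slips through (false positive):
mannequin = {"arms": 2, "legs": 2, "face_detected": True}
print(looks_human_naive(mannequin))  # True
```

Every patch for one counterexample (wheelchairs, hats, beards) just adds another brittle definition to the list.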
37
3. Klemm commented 8 years ago
#2 Actually, I've worked as a programmer/software developer for about 25 years now. What you and he are arguing about is not AI but just a programmed machine that follows some fixed logic. An AI's mind would basically be like ours, only created artificially. You would not teach it what an apple is via programming, but rather via experience. The programming part would be setting up the facilities necessary to store and retrieve data and do some low-level processing (perhaps similar to our central nervous system). The complex programming (and engineering) task is giving it the ability to learn. Once that is in place, the high-level stuff will follow by itself (self-awareness, reasoning, morals, moods, etc.). Those you cannot program (at least not directly). And that's the real problem with implementing the mentioned (or any) rules.
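The distinction drawn here, programming the learning machinery rather than the concept, can be sketched minimally. In this hypothetical toy (a nearest-centroid classifier; the class and feature names are made up), "apple" is never defined in code; it emerges from stored experiences:

```python
# Minimal sketch: program the ability to learn, not the facts themselves.
from collections import defaultdict

class Learner:
    def __init__(self):
        self.examples = defaultdict(list)  # label -> list of feature vectors

    def experience(self, features, label):
        # The "low-level facility": store and retrieve experience.
        self.examples[label].append(features)

    def classify(self, features):
        # Nearest centroid: pick the label whose average example is closest.
        def dist(label):
            vecs = self.examples[label]
            centroid = [sum(xs) / len(vecs) for xs in zip(*vecs)]
            return sum((a - b) ** 2 for a, b in zip(features, centroid))
        return min(self.examples, key=dist)

bot = Learner()
# Features: (redness, roundness, size) -- never "defined", only experienced.
bot.experience((0.9, 0.8, 0.3), "apple")
bot.experience((0.8, 0.9, 0.25), "apple")
bot.experience((0.2, 0.3, 0.9), "box")
print(bot.classify((0.85, 0.85, 0.28)))  # apple
```

The concept lives in the data the learner has seen, not in any hand-written rule, which is exactly why hand-written rules like Asimov's are hard to bolt on afterwards.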
54
4. ringmaster commented 8 years ago
Asimov is perhaps the greatest philosopher amongst sci-fi authors.
37
5. Klemm commented 8 years ago
Edit #3:
It's something that will either happen naturally or not at all. I guess there will be an opportunity to teach it our moral values for a while, but in the long run it might not matter. And that's the real problem with implementing the mentioned (or any) rules.
It's safer (for us as humans) not to develop one. The rules of nature (survival of the fittest) give reason to doubt our survival.
What record?
According to your link, he's a student. I saw no publications either.
44
6. kirkelicious commented 8 years ago
#3 I love the depth of your discussion with #2. But I would argue that morals are not strictly acquired by learning; they are also hardwired into our human brains to some degree. Virtually all cultures have an incest taboo, for example. These morals have been shaped by evolution so that we as a species can thrive.
If we want to create an AI that shares our morals, we would have to program them in. An impossible task, I might say, for we do not know the normal human moral response to every possible scenario. Our ethics have to be constantly redefined to keep up with technological, political, ... development.
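The "impossible task" above can be made concrete with a toy sketch: morality as a lookup table, with a verdict enumerated for every scenario in advance (all entries here are made-up illustrations). Any scenario nobody foresaw simply isn't covered:

```python
# Toy sketch: morals as an exhaustive rule table -- the approach that fails.
MORAL_RULES = {
    "push a person": "forbidden",
    "administer CPR": "permitted",  # painful, but not "harm"
}

def verdict(action):
    return MORAL_RULES[action]  # KeyError for any unforeseen scenario

print(verdict("administer CPR"))  # permitted

try:
    verdict("divert the trolley")  # never entered in the table
except KeyError:
    print("no rule found: the table must be redefined yet again")
```

Each new technology or situation forces another round of table edits, which is the constant redefinition the comment describes.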
34
8. huldu commented 8 years ago
Why would you expect an AI, created by humans, to follow rules and laws when humans themselves don't? It's a complete mess, and it will be once mankind actually does create a real working AI. It won't be like a Terminator movie, at least not for the first few decades (lol). At some point viruses and whatnot will affect the AI, and it will "improve" upon itself. The AI becoming self-aware, now that's a scary thought. How long until it realizes that it has no use for humans?
37
10. Thanny commented 8 years ago
This guy's objections are bush-league nonsense. None of the "edge cases" he brings up is remotely difficult to account for once you've reached the point where a machine has sufficient power to recognize a bog-standard human.
He also, amusingly, seems entirely ignorant of the fact that Asimov actually used different definitions of what counts as human as a plot point (on Solaria).
58
12. thundersnow commented 8 years ago
This is a very good snotr discussion. I watched the video twice and read all the comments twice, and I'm still trying to wrap my head around it... but each time I understand a little more.
22
13. nomaddaf commented 8 years ago
To the guy in this video.....What you've just said is one of the most insanely idiotic things I have ever heard. At no point in your rambling, incoherent response were you even close to anything that could be considered a rational thought. Everyone in this room is now dumber for having listened to it. I award you no points, and may God have mercy on your soul. https://www.youtube.com/watch?v=hkodTydUR0E
28
14. LaoMa commented 8 years ago
#1, #10: It might be relatively easy to implement the rules, but the issue is that they would have to be extremely specific, as they basically override the actual AI. It would probably be safer if we didn't hardcode any rules at all and just treated the AI as a child, teaching it ethics and giving it 20 years to fully understand everything it has learned.
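The "rules override the AI" point can be sketched as a veto layer wrapped around a learned policy (the function and parameter names are hypothetical). The rule layer, not the AI, ends up deciding every edge case, so it has to be impossibly specific:

```python
# Toy sketch: a hardcoded rule layer that vetoes the learned policy.
def hardcoded_guard(proposed_action, predicted_harm):
    # "Never harm a human", taken literally: any nonzero harm estimate vetoes.
    if predicted_harm > 0.0:
        return "refuse"
    return proposed_action

# The learned policy proposes CPR; its harm estimate is nonzero because
# CPR hurts, so the blunt override blocks a life-saving action:
print(hardcoded_guard("perform CPR", predicted_harm=0.3))  # refuse
print(hardcoded_guard("wave hello", predicted_harm=0.0))   # wave hello
```

Making the guard smarter means encoding all the exceptions (CPR, injections, restraint) by hand, at which point the rules, not the learned AI, are doing the reasoning.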
53
15. Judge-Jake commented 8 years ago
#12 Suxi just called me on his hands-free because his fingers are frozen this morning. He told me to tell you that this is guy stuff and that you shouldn't worry your pretty little head about it. How rude is that, hey?
38
16. mitis77 commented 8 years ago
Any AI, any self-aware machine, would have to have free will, and that's it. If it has free will, you rule out any chance of control.
It would be much more interesting, and maybe even more realistic, to create an "infant AI" and just teach it. I'm sure we can have some kind of backup plan to shut it down (take out the batteries, cut the cables, and so on).
It's exactly the same as with any other child you want to have: you cannot have any guarantee that he/she will grow up to be successful, kind to others, and so on. You take a chance, do whatever you can to help him/her, and hope for the best. You either end up with Ted Bundy or the next Einstein (or with someone in between).
And finally, I really believe that AI would have nothing in common with computers, software programming, and IT as we understand them now. You can't program anything to be a cake.
You must engineer hardware for that, and the hardware itself must be the solution (or a huge part of it).
58
17. thundersnow commented 8 years ago
#15..Hahaha... here we go again, comments that just make me lololol after a long day's work... it's relaxing to laugh, though. Next time he calls, tell him that I'm quite capable of learning new things, even though it's not easy sometimes... but I'm highly motivated. Rude, yes, but I wouldn't expect anything different.
In all seriousness, I really do like this video and the discussion!
-6
1. Klemm commented 8 years ago
Now the real problem is more about how you actually implement those rules and still retain self-awareness. This cannot even be considered before we have some success with AI (and I mean real AI, not just a supercomputer with complex algorithms to simulate reasoning).