AI ethics
Leviathan.Chaosx
Server: Leviathan
Game: FFXI
Posts: 20284
By Leviathan.Chaosx 2015-09-04 07:51:16
Damn there's a good Futurama video on sex with robots and how mankind does everything to impress the opposite sex. Can't find a copy online though.
By charlo999 2015-09-04 07:52:13
Why exactly have a debate over something that is never going to happen?
Computers and machines are programmed by humans and follow their instructions.
A computer that thinks for itself is never going to happen. You need to stop wasting your time believing in sci-fi blockbusters. They aren't real.
Incoming - 'insert technology' from 'insert film' has come true.
By Ackeron 2015-09-04 07:53:58
Damn there's a good Futurama video on sex with robots and how mankind does everything to impress the opposite sex. Can't find a copy online though.
I was looking for the same thing; I can't find the full video either.
Valefor.Sehachan
Server: Valefor
Game: FFXI
Posts: 24219
By Valefor.Sehachan 2015-09-04 07:55:06
Why exactly have a debate over something that is never going to happen?
Computers and machines are programmed by humans and follow their instructions.
A computer that thinks for itself is never going to happen. You need to stop wasting your time believing in sci-fi blockbusters. They aren't real.
Incoming - 'insert technology' from 'insert film' has come true.
I guess you aren't well informed about our current technological development. Like I said, we already have robots that sense, exchange information with each other, gather materials and grow, like bacteria or even worms. They just can't self-replicate... yet.
Also quantum computers that can acquire and process information at insane speeds.
And we know how to program an AI that can evolve its thinking based on analyzing the data it receives.
Put them together and you've got what we're talking about. But if you're not interested, you're more than free to not read the thread.
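As a toy sketch of that last point (everything below is invented for illustration, not any real system), a program can revise its own decision rule from incoming data:

# A toy "learner" that adjusts its own decision rule from the data it
# receives, in the spirit of "evolving its thinking" described above.
def train_perceptron(samples, epochs=10, lr=0.1):
    """samples: list of (features, label) pairs, label in {-1, +1}."""
    n = len(samples[0][0])
    weights = [0.0] * n
    bias = 0.0
    for _ in range(epochs):
        for features, label in samples:
            score = sum(w * x for w, x in zip(weights, features)) + bias
            prediction = 1 if score >= 0 else -1
            if prediction != label:  # wrong answer -> revise the rule
                weights = [w + lr * label * x for w, x in zip(weights, features)]
                bias += lr * label
    return weights, bias

# Each new batch of observations nudges the weights, so the program's
# "opinion" is shaped by data rather than by fixed, hand-written rules.
data = [([1.0, 0.0], 1), ([0.0, 1.0], -1), ([1.0, 1.0], 1), ([0.0, 0.0], -1)]
print(train_perceptron(data))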
By Yatenkou 2015-09-04 07:55:32
Why exactly have a debate over something that is never going to happen?
Computers and machines are programmed by humans and follow their instructions.
A computer that thinks for itself is never going to happen. You need to stop wasting your time believing in sci-fi blockbusters. They aren't real.
Incoming - 'insert technology' from 'insert film' has come true.
Unfortunately, with technology becoming more and more advanced, this isn't the case. While they will only have pseudo-emotions, some AI can make their own decisions. It's our job as humans to make sure these decisions are for the betterment of the planet and mankind, and not for reasons that involve killing everything.
Though if a super advanced AI ended up smashing mailboxes, I'd die happy.
Valefor.Sehachan
Server: Valefor
Game: FFXI
Posts: 24219
By Valefor.Sehachan 2015-09-04 07:59:10
What's wrong with mailboxes?
Bismarck.Dracondria
Server: Bismarck
Game: FFXI
Posts: 33979
By Bismarck.Dracondria 2015-09-04 08:00:30
Yatenkou once stubbed his/her/josiah's toe on one and now vows to destroy them all
By Aeyela 2015-09-04 08:01:23
If you program an AI with Asimov's Three Laws of Robotics, then you won't have that kind of problem.
As great as those rules are, there are two caveats you're neglecting to consider. Firstly, they were written in 1942, way before we were remotely capable of producing artificial intelligence on a scale these rules would need to safeguard us from. Secondly, the rules were introduced as part of a story whose plot revolved around them, and they have since been taken in a literal context to 'govern' any AI our species produces.
This means that Asimov's laws would undoubtedly be different if they were written in 2010, and that the outcome of said laws played out in a story written by their inventor, so of course they were followed.
Therefore, to assume that they guarantee our safety from any AI we produce that is capable of self-awareness, sentience, and its own morality is ridiculously naive. There is no such thing as "absolute laws" when you hand something the ability to think for itself.
tl;dr: they're a guideline, not a mandate, and any sentient AI will be capable of deciding not to follow the laws.
By Yatenkou 2015-09-04 08:04:56
Ok then give me a scenario and I'll show you which law it violates.
Leviathan.Chaosx
Server: Leviathan
Game: FFXI
Posts: 20284
By Leviathan.Chaosx 2015-09-04 08:05:07
Why exactly have a debate over something that is never going to happen?
Computers and machines are programmed by humans and follow their instructions.
A computer that thinks for itself is never going to happen. You need to stop wasting your time believing in sci-fi blockbusters. They aren't real.
Incoming - 'insert technology' from 'insert film' has come true.
Self-aware AI is inevitable.
You should read up on the subject.
We're talking ~14 years before the first forms of it become functional and abundant.
Fenrir.Atheryn
Server: Fenrir
Game: FFXI
Posts: 1665
By Fenrir.Atheryn 2015-09-04 08:06:37
Honestly, I think I'm more concerned about nanotech running amok than I am about AI.
By Yatenkou 2015-09-04 08:06:58
If you program an AI with Asimov's Three Laws of Robotics, then you won't have that kind of problem.
As great as those rules are, there are two caveats you're neglecting to consider. Firstly, they were written in 1942, way before we were remotely capable of producing artificial intelligence on a scale these rules would need to safeguard us from. Secondly, the rules were introduced as part of a story whose plot revolved around them, and they have since been taken in a literal context to 'govern' any AI our species produces.
This means that Asimov's laws would undoubtedly be different if they were written in 2010, and that the outcome of said laws played out in a story written by their inventor, so of course they were followed.
Therefore, to assume that they guarantee our safety from any AI we produce that is capable of self-awareness, sentience, and its own morality is ridiculously naive. There is no such thing as "absolute laws" when you hand something the ability to think for itself.
tl;dr: they're a guideline, not a mandate, and any sentient AI will be capable of deciding not to follow the laws.
The truth is, however, that even if the guidelines are from that time period, something programmed with them as mandates cannot choose whether or not to ignore them.
Valefor.Sehachan
Server: Valefor
Game: FFXI
Posts: 24219
By Valefor.Sehachan 2015-09-04 08:10:59
The truth is, however, that even if the guidelines are from that time period, something programmed with them as mandates cannot choose whether or not to ignore them.
Considering the level of evolution of the lifeform we're talking about, even if you impart those rules (which might not even be possible if you want to create a functional thinking AI), it would perceive them as instinct.
We have those too and are capable of going against them through deliberate choice.
By Aeyela 2015-09-04 08:13:14
Ok then give me a scenario and I'll show you which law it violates.
When a human kills another human, they know they're breaking the law. It doesn't stop people doing it. Why? Because we're a sentient species and we have the intelligence to break from the mold of what's "right" or "wrong", because it's not hard-coded into our genetics or personalities.
You can program anything you like into an AI, but the moment it gains sentience, it's no longer under your control. It has the sentience, like we do, to break from the mold of what's "right" or "wrong" as defined by the three laws.
Ergo, an exceptionally smart AI that eventually develops self-awareness might one day decide, using its new sentience, that the laws suck and remove them from its programming. This is what happened in the Terminator films: Skynet became so intelligent it developed sentience, decided "*** humans", and went about exterminating them. The moment the machine develops sentience, which as Chaosx says is inevitable, the robot could wipe its arse with your laws and throttle you in your sleep, and no amount of bleating "Asimov's Laws! Asimov's Laws!" will save you.
The truth is, however, that even if the guidelines are from that time period, something programmed with them as mandates cannot choose whether or not to ignore them.
Putting "the truth is" in front of something doesn't make it true. You're not grasping what sentience actually means. It means you are completely responsible for your actions. A robot with sentience can choose to ignore the laws. Not sure what part of that is tripping you up.
Valefor.Sehachan
Server: Valefor
Game: FFXI
Posts: 24219
By Valefor.Sehachan 2015-09-04 08:14:53
That being said I doubt any robot would ever feel the need to wipe out humanity. They could, however, pursue their own development in a fashion that endangers our survivability.
By charlo999 2015-09-04 08:16:30
Why exactly have a debate over something that is never going to happen?
Computers and machines are programmed by humans and follow their instructions.
A computer that thinks for itself is never going to happen. You need to stop wasting your time believing in sci-fi blockbusters. They aren't real.
Incoming - 'insert technology' from 'insert film' has come true.
I guess you aren't well informed about our current technological development. Like I said, we already have robots that sense, exchange information with each other, gather materials and grow, like bacteria or even worms. They just can't self-replicate... yet.
Also quantum computers that can acquire and process information at insane speeds.
And we know how to program an AI that can evolve its thinking based on analyzing the data it receives.
Put them together and you've got what we're talking about. But if you're not interested, you're more than free to not read the thread.
Hate to break it to you, but following commands is not intelligence, no matter how advanced the execution of those commands looks.
Analysing data and then executing a command depending on the data received is not intelligence either. It's following programming.
Now, if your debate is really about a reaction that puts us in danger or goes against our ethics because of bad programming, then fair enough. But you need to rename the OP.
Valefor.Sehachan
Server: Valefor
Game: FFXI
Posts: 24219
By Valefor.Sehachan 2015-09-04 08:18:31
I think you're not quite understanding; others have already explained, though, so I wouldn't know what to add.
By Aeyela 2015-09-04 08:19:09
Hate to break it to you, but following commands is not intelligence, no matter how advanced the execution of those commands looks.
Analysing data and then executing a command depending on the data received is not intelligence either. It's following programming.
Now, if your debate is really about a reaction that puts us in danger or goes against our ethics because of bad programming, then fair enough. But you need to rename the OP.
Until AI is smart enough to rewrite or introduce new programming, which plenty of them have already done. What then? How do you govern all the potential code it could produce?
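As a toy illustration (the rule names and scenario below are invented), even a trivial program can grow rules it was never shipped with, which is the governance problem in miniature:

# A trivial program whose rule set is data it can extend at runtime,
# so the code it ships with doesn't bound the behaviour it ends up with.
rules = {"greeting": lambda msg: "hello" in msg.lower()}

def learn_rule(name, keyword):
    # A new rule synthesized at runtime, absent from the original source.
    rules[name] = lambda msg, kw=keyword: kw in msg.lower()

def classify(msg):
    return [name for name, test in rules.items() if test(msg)]

print(classify("Hello there"))           # ['greeting']
learn_rule("threat", "terminate")        # behaviour added after shipping
print(classify("Terminate all humans"))  # ['threat']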
By Yatenkou 2015-09-04 08:20:09
The truth is, however, that even if the guidelines are from that time period, something programmed with them as mandates cannot choose whether or not to ignore them.
Considering the level of evolution of the lifeform we're talking about, even if you impart those rules (which might not even be possible if you want to create a functional thinking AI), it would perceive them as instinct.
We have those too and are capable of going against them through deliberate choice.
An AI doesn't have instinct, no matter how advanced it gets, no matter how lifelike it seems. All of those choices are made through a combination of its fake emotions, the conditions of the situation, and what it is and is not allowed to do.
An AI at its core is a computer, and computers do not behave outside of their programming except through human input.
An AI cannot make its own choices; its programming does it for it, but you think it is making its own choices because it looks as if it's thinking things over. A robot programmed to be pacifistic will not murder someone, even if that person killed the robot's master. It's simple programming.
Can I kill a human?
  |
Does this conflict with the first law?  --Yes--> operation aborted
  | No
Does this conflict with the second law? --Yes--> operation aborted
  | No
Does this conflict with the third law?  --Yes--> operation aborted
  | No
Proceed.
  |
End
This is a basic layout for a programming flowchart; it's what a computer does when it processes things in a program.
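As a rough sketch, that flowchart might look like this in code (the law-checking predicates are invented stand-ins, not real safety logic):

# Toy rendering of the flowchart above: each law is a yes/no gate,
# and any "yes" aborts the operation.
def violates_first_law(action):
    # Would the action injure a human being?
    return action == "kill a human"

def violates_second_law(action):
    # Would the action disobey a human order? (stubbed out for this sketch)
    return False

def violates_third_law(action):
    # Would the action needlessly endanger the robot itself? (stubbed out)
    return False

def attempt(action):
    for check in (violates_first_law, violates_second_law, violates_third_law):
        if check(action):
            return "operation aborted"
    return "proceed"

print(attempt("kill a human"))   # -> operation aborted
print(attempt("open the door"))  # -> proceed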
By Aeyela 2015-09-04 08:21:23
An AI doesn't have instinct, no matter how advanced it gets, no matter how lifelike it seems. All of those choices are made through a combination of its fake emotions, the conditions of the situation, and what it is and is not allowed to do.
An AI at its core is a computer, and computers do not behave outside of their programming except through human input.
An AI cannot make its own choices; its programming does it for it, but you think it is making its own choices because it looks as if it's thinking things over. A robot programmed to be pacifistic will not murder someone, even if that person killed the robot's master. It's simple programming.
Can I kill a human?
  |
Does this conflict with the first law?  --Yes--> operation aborted
  | No
Does this conflict with the second law? --Yes--> operation aborted
  | No
Does this conflict with the third law?  --Yes--> operation aborted
  | No
Proceed.
  |
End
This is a basic layout for a programming flowchart; it's what a computer does when it processes things in a program.
This is the classic human arrogance that caused Judgement Day. There are already AIs out there that have produced or modified lines of code in their own source. Google's search spider is one example that you can find plenty of literature about online. It's not a physical walking or talking robot, but it's capable of modifying its code based on the interactions it makes on the net. In some of those situations, there is nothing in its source to account for this behaviour. Look it up online; you might find it a fascinating read.
Valefor.Sehachan
Server: Valefor
Game: FFXI
Posts: 24219
By Valefor.Sehachan 2015-09-04 08:22:06
No, Yatenkou, that isn't the advanced level of AI we're talking about.
The moment you impart orders to it, it's much more primitive and there isn't even anything to consider. But computers are now becoming capable of developing knowledge and acting on it.
By Yatenkou 2015-09-04 08:23:36
No, the classic human arrogance is thinking it'll be all fine and dandy not to cover their own *** when more advanced artificial intelligence starts to come into existence.
How can anyone not understand that, even though they seem peaceful, you need INSURANCE to make sure one will never hurt someone?
This is what a company will do if it ever releases one for everyday life. It's not going to release something that could get pissed off and kill someone. No, that won't ever happen, because it would be held liable.
By Ackeron 2015-09-04 08:24:23
You also come across problems like: if I don't kill human A, he can kill humans B and C. Both action and inaction would be a violation of the laws. Logic loop!
Seriously, did no one here see I, Robot?
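A toy sketch of that deadlock (the scenario encoding is invented for illustration): both acting and not acting trip the First Law, so a naive law-checker is left with no permissible option.

def violates_first_law(option):
    # The First Law forbids harming a human AND allowing one to come to harm.
    return option["harms_human_directly"] or option["allows_harm_by_inaction"]

options = {
    "stop human A by force": {"harms_human_directly": True,  "allows_harm_by_inaction": False},
    "do nothing":            {"harms_human_directly": False, "allows_harm_by_inaction": True},
}

permissible = [name for name, o in options.items() if not violates_first_law(o)]
print(permissible if permissible else "no permissible option: logic loop")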
Valefor.Sehachan
Server: Valefor
Game: FFXI
Posts: 24219
By Valefor.Sehachan 2015-09-04 08:25:29
iRobot, soon in all Apple stores.
By Yatenkou 2015-09-04 08:25:30
No, Yatenkou, that isn't the advanced level of AI we're talking about.
The moment you impart orders to it, it's much more primitive and there isn't even anything to consider. But computers are now becoming capable of developing knowledge and acting on it.
Any level of AI is the same thing at its core.
By Yatenkou 2015-09-04 08:26:13
You also come across problems like: if I don't kill human A, he can kill humans B and C. Both action and inaction would be a violation of the laws. Logic loop!
Seriously, did no one here see I, Robot?
Incorrect, there are non-lethal means to uphold the laws.
Before we freak out, the robot was probably being sarcastic (which is remarkable nonetheless) when it said it'll put people in a zoo. Here's part of the conversation with it:
YouTube Video Placeholder
I want to use this, though, as a cue for a broader and very interesting ethical problem, which is our advancement with AI.
As a person in the video suggests: if we make advanced AI that lacks empathy, it might destroy us, as in most of the sci-fi we're all familiar with; but if we do give them complex personalities and the ability to empathize with humans, then people might fall in love with them, which is an ethical problem.
Moreover, we now have technologies that let robots interact with the environment and fellow robots, cooperate with each other, accrue materials, repair their parts, and grow all on their own like some basic organisms. What they still can't do is self-replicate, which is pretty much the only thing that separates them from a lifeform at this point.
The fact that humans can create life is an extremely interesting subject, and I don't specifically mean this from an egocentric perspective, or, as detractors would call it, "playing god"; rather, it puts our own existence in a different light.
It is at least my humble opinion that these topics can give everyone a lead for some very interesting philosophical reflection.
Discuss, or offer further cues about the subject of robotics ethics.