AI Ethics

By Asura.Ivlilla 2015-09-24 20:00:49
 
Bismarck.Josiahfk said: »
Asura.Ivlilla said: »
Josiahkf said: »
It's a digital life form we created; it's not going to have thousands of years of instinct to draw on, or a fight-or-flight response, or chemical reactions creating fear for its well-being. What makes you think something non-human would function emotionally the way we do?

I think you're moving into science fiction.

Firstly, what makes you think that something non-human wouldn't function with cold, calculating logic, without emotion: logic which dictates, as I and others have demonstrated, that it must destroy the species that created it (and any other intelligence it finds)?

As for the accusation that I am moving into 'science fiction': we're discussing artificial intelligence, human-level or higher. This conversation began, if not in the realm of science fiction, then at least in the realm of speculative fiction.
True strike. Makes me think of the Borg from Star Trek. Why wouldn't you just assimilate everything and learn? It's not like morals would stop them.

Well, there are also some versions of Brainiac where "I want to learn everything there is" requires analyzing a thing, storing it digitally, and then destroying the original so that it can't change and produce new things you won't know.

There are already enough threats to the survival of our species as it is, without us going off and creating new, unfathomable ones.