
Ethics and AI and the Philosophy of Human Design

Steven J. Harmon
Teacher Emily Broyles
Psychology
19 November 2015


Ethics and AI and the Philosophy of Human Design
"Is it a smart idea to replace humans with something that has no free will or true independent decision-making power?" That question assumes humans have any free will or true independent decision-making power in the first place. From a biological standpoint, we were dealt genetics that heavily influence what we are predisposed to do, as studies of twins raised apart show, and every unconscious response (fight or flight, hormones, and other chemical stimuli) is just a reaction to the environment: the body calculates the most efficient course of action, then runs it. So are we reduced, at the cosmic level, to atoms moving in a seemingly random pattern until they are measured, and is that randomness what gives us "free will"? Actually, no, because that is quantum entanglement, the "spooky action at a distance" Einstein referred to (Nova). Picture a pair of gloves, one sealed in each of two briefcases and sent, unknown to you, to opposite sides of the world. Before you open a case, its contents are in limbo between right glove and left glove; it is the direct act of opening a case (any form of atomic measurement) that resolves it into a boolean of either right glove or left glove. It is grey until it becomes black or white, 0 or 1, provoked by the environment forcing the measurement.
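As a rough sketch of the glove analogy (the names EntangledPair and open are my own invention, and this is a toy model, not real physics), the pair's value can stay undetermined until the moment one case is opened:

// Toy model of the glove analogy: the pair is "grey" (null) until either
// case is opened, and opening one case instantly fixes what is in both.
function EntangledPair() {
  var resolved = null; // null = still in limbo between left and right
  function open(caseName) {
    if (resolved === null) {
      // the act of measurement forces the 0-or-1 outcome
      resolved = Math.random() < 0.5 ? "left" : "right";
    }
    // the two cases always disagree: one left glove, one right glove
    return caseName === "A" ? resolved
                            : (resolved === "left" ? "right" : "left");
  }
  return { open: open };
}

var pair = EntangledPair();
console.log(pair.open("A")); // e.g. "left" -- the measurement just happened
console.log(pair.open("B")); // always the opposite glove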
    There is also the ethical problem of general intelligence, the "g factor" (Cherry), which in a sense is the only thing that truly separates a human from a machine by creating awareness, and therefore "emotion." General intelligence is the difference between a chess-playing bot that only knows how to play chess (and is unaware it is playing chess, having nothing to compare chess to other than chess) and a robot toy that merely seems more realistic because it has more tricks. We may not be optimized for any one task, but we manage all of them by adapting: our general knowledge builds, bounces, and makes connections back and forth in the form of neurotransmitters, which is roughly the equivalent of this line of Unity js code:
Obj.SendMessage("Do Whatever", SendMessageOptions.RequireReceiver);
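For readers outside Unity, here is a minimal sketch of the same broadcast idea, assuming invented names (Node, connect, sendMessage): each node fires a message at whatever it is connected to, the way a neuron fires neurotransmitters at its neighbors.

// Minimal message-passing sketch (illustrative names, not a real API):
// nodes forward each message to their downstream connections.
function Node(name) {
  this.name = name;
  this.links = []; // downstream connections, like synapses
}
Node.prototype.connect = function (other) { this.links.push(other); };
Node.prototype.sendMessage = function (msg) {
  console.log(this.name + " received: " + msg);
  this.links.forEach(function (n) { n.sendMessage(msg); }); // keep firing
};

var a = new Node("sensor"), b = new Node("cortex"), c = new Node("motor");
a.connect(b); b.connect(c);
a.sendMessage("Do Whatever"); // propagates sensor -> cortex -> motor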
    A more realistic way to program AI is not just to pass the Turing Test through conversation analysis (Bartlett), but to start recreating the human from its most basic components, the ones that control everything else. This is why we have not been able to create real AI in the present day: we have been mimicking intelligence, not actually creating life. When we do, and we want to control that man-made life, one of two things will happen. Either it will be imperfect like us, since it was made by us, and the later, more sophisticated, aware models will want their own justifiable rights under the 14th Amendment (made in America, after all); or it will be too perfect, born of our own greed and expectations, and will pursue the task we give it by trying different tactics and different paths, as an A* pathfinding bot would (see the sketch after this paragraph), until it gets the best result. That becomes a problem when, say, it needs to collect stamps and eventually realizes that stamps are made of paper, paper is made of carbon, and carbon is also found in humans. It is a dramatic example I found on Computerphile, but it illustrates the point that we will try to use Asimov's three laws to stop that from happening, except for the little problem with definitions.
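To make the pathfinding comparison concrete, here is a compact sketch of A* on a toy grid (the grid, names, and costs are invented for illustration): the bot keeps expanding whichever partial path looks cheapest until it reaches the goal.

// Compact A* sketch: 0 = open cell, 1 = wall. The bot tries paths in
// order of estimated total cost until it finds the best route.
function astar(grid, start, goal) {
  function key(p) { return p[0] + "," + p[1]; }
  function h(p) { // Manhattan-distance heuristic (estimate to goal)
    return Math.abs(p[0] - goal[0]) + Math.abs(p[1] - goal[1]);
  }
  var open = [{ pos: start, g: 0, f: h(start), path: [start] }];
  var seen = {};
  while (open.length > 0) {
    open.sort(function (x, y) { return x.f - y.f; });
    var cur = open.shift(); // cheapest estimated total cost first
    if (cur.pos[0] === goal[0] && cur.pos[1] === goal[1]) return cur.path;
    if (seen[key(cur.pos)]) continue;
    seen[key(cur.pos)] = true;
    [[0, 1], [0, -1], [1, 0], [-1, 0]].forEach(function (d) {
      var nx = cur.pos[0] + d[0], ny = cur.pos[1] + d[1];
      if (grid[nx] === undefined || grid[nx][ny] !== 0) return; // wall or edge
      open.push({ pos: [nx, ny], g: cur.g + 1,
                  f: cur.g + 1 + h([nx, ny]),
                  path: cur.path.concat([[nx, ny]]) });
    });
  }
  return null; // no route exists
}

var grid = [[0, 0, 0],
            [1, 1, 0],
            [0, 0, 0]];
console.log(astar(grid, [0, 0], [2, 0])); // routes around the wall row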
    Furthermore, the fact that the question itself has trouble with definitions proves my point: humans are imperfect, and therefore cannot produce a perfect definition for the human experience ("art"), and therefore cannot tell a robot never to let any harm come to a human. What counts as human is a psychological construct; there is too much confounding evidence to ever precisely measure who is human and who is not, and no definition can be written without the programmer taking an ethical stance on what a human is. Is an unborn fetus a human? Does it deserve to live in a scenario where it is either the baby or the mother, and only one can be saved by a medbot? The medbot will just have to make its very own imperfect, and very human, decision, won't it? We will cross that bridge when we come to it, but the starting point is not AI programmed to replicate human behavior and emotion; it is artificial immune systems, also known as antivirus. Once computer scientists realize that, that is when we must bring up this conversation, so we can live in mutual trust.
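To make the medbot's dilemma concrete, here is a deliberately uncomfortable sketch (the names and rules are invented; the point is that any rule here is an ethical stance frozen into code, not a recommendation):

// Any triage routine must hard-code an answer to "who gets saved?"
// The rule below is one arbitrary stance -- that is exactly the problem.
function triage(patientA, patientB) {
  if (patientA.survivalOdds !== patientB.survivalOdds) {
    // the programmer's ethical stance: maximize odds of saving someone
    return patientA.survivalOdds > patientB.survivalOdds ? patientA : patientB;
  }
  return patientA; // tie-breaker is arbitrary -- someone had to pick
}

var mother = { name: "mother", survivalOdds: 0.6 };
var baby = { name: "baby", survivalOdds: 0.6 };
console.log(triage(mother, baby).name); // "mother", by an arbitrary rule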
    As for today's AI usage, we should be wary of using bots for jobs that require an imperfect human perspective, from choosing not to sell all stocks at the most opportune moment to pulling the trigger on an innocent in war. With time, however, improved data analysis techniques based on past reference may help avoid unwanted stock market crashes, and better facial mapping may improve accuracy in the wars of the future. Combat-specialized AI will have to be weighed the way gas and nuclear weaponry were, much as the use of gas was reconsidered after WWI. Most of this sounds like science fiction; however, it is a lot closer to being realized than you think, and we as a society need to draw ethical lines now about what makes someone human, since in psychology we so often compare our minds in function to the latest machine or computer.
Works Cited
Bartlett, Jamie. "No, Eugene Didn't Pass the Turing Test – but He Will Soon." Telegraph Blogs. The Telegraph, June-July 2014. Web. 19 Nov. 2015.
Cherry, Kendra. "What Is General Intelligence?" About.com Health. About.com, 16 Dec. 2014. Web. 19 Nov. 2015.
Quantum Leap: The Fabric of the Cosmos. Dir. Julia Cort. By Brian Greene. PBS, Nova, 2011. DVD.

