The Morality of Robot Servitude
Could robots ever be ethical subjects? If so, would it be morally impermissible for robots to work for us, amounting to a kind of slavery? Would creating such robots be morally permissible at all? In this paper, I will explore Steve Petersen's position on robot servitude, chapter four of Donaldson and Kymlicka's Zoopolis, and John Weckert's position on robot servitude in "Playing God," and I will argue against robot servitude as well as against the creation of these artificial people (APs). Petersen, who argues for the moral permissibility of robot servitude, admits to reservations of his own: although his written work supports robot servitude, he concedes that something about it seems ethically fishy, even if he cannot say why his intuitions pull against his argument. His intuitions, I will argue, are correct: robot servitude is wrong, and the inherent wrongdoing lies with the programmer himself.
Consider the moral permissibility of robot servitude alongside the genetic modification of animals. Selective breeding renders dogs less aggressive, more playful, or cuter, and it often leaves them with health issues and serious breathing problems. One might think the practice acceptable under circumstances where some dogs suffer no health issues as a result of being made cuter. However, I agree with Weckert that the problem lies not in the outcome or the harm caused, but in the creation of the artificial people itself (Weckert 1). Similarly, in the animal breeding case, although the health issues and breathing problems are certainly bad, the inherent wrong lies in genetically engineering the animals to be cuter or more servile to begin with. By the same token, the inherent wrong of robot servitude lies in the programmers' creation of the robots in the first place.
While I agree with John Weckert that robot servitude is wrong, I do not accept his argument for why it is morally impermissible; after exploring his reasons against playing God and genetic engineering, I find his argument unconvincing. In essence, Weckert's argument against robot servitude runs as follows. Creating robots and genetically engineering organisms are not morally permissible because humans should not play God and interfere in matters that are not their business. By creating artificial people, programmers are playing God, and playing God is morally wrong. To play God is to interfere where humans should not, as in nature (Weckert 87). As with Frankenstein's creation of his monster, the wrongdoing lies in interfering with nature and creating the artificial person or monster, not in the fact that the creation led to harm (Weckert 87). From a secular perspective, playing God is not so much a religious matter as it is interference with nature. Modifying nature by breeding animals, creating 'Frankenfoods,' and creating artificial people all cross the boundary of nature's domain. Even if humans are considered part of nature, we differ from the rest of it: we are autonomous and have free will, so we bear responsibility for what we do in a way that other creatures do not (Weckert 91). Since only humans are autonomous, it is wrong for us to interfere with nature. Weckert touches on something important when he invokes autonomy; however, robot servitude is wrong because the artificial people would be robbed of their autonomy by being programmed to enjoy specific simple tasks.
The creation of artificial people is morally impermissible because the programmer fails to respect the autonomy of the artificial person by creating him or her to serve the programmer in the first place.
Programming an artificial person to enjoy serving us strips away some of the robot's ethical significance. As a pre-programmed creation, the artificial person would lack autonomy, which is essential to personhood. By autonomy I mean "the ability to decide and execute a life plan of the subject's own choosing" (Petersen 291). The robot would do whatever the programmer designed it to do, regardless of whether it takes pleasure in the activity; it takes pleasure in the task only because the programmer engineered it to take pleasure in the activities the programmer wanted done. It is not possible to respect a robot's autonomy while creating that being for your own purposes. The artificial person's function is not inherent; it was assigned by somebody else. Similarly, animals have been bred to be obedient and to serve us, and we have bred them to have characteristics that look more pleasing to us even though this has been harmful to them (Donaldson and Kymlicka 82). Consider Petersen's case of Labrador Retrievers: ethical subjects designed to enjoy fetching. When the dog fetches the ball, it does so because someone else shaped its breeding so that it would enjoy fetching, and that shaping satisfied the breeder, not the dog. The dog is not autonomous in its actions; because the desire to fetch was engineered, it is not the dog's own desire. Through such breeding, the animals have lost their autonomy and become dependent on us for everything. Like this kind of breeding, the creation of robots for servitude is wrong because it takes away the autonomy of the artificial people. By programming artificial people to serve our own interests, we would be treating them as a means to our own ends.
By the same token, breeding animals to be more compliant and servile fails to respect their autonomy; we modify them to cater to our preferences and to obey us, treating the animals as a means to an end.
An objector could argue that if the artificial person were programmed with equally strong desires to sculpt, look after your children, and do your laundry, then the artificial person would have the autonomy to choose what to do (Petersen 291). However, this equality of strong desires does not amount to overall autonomy. Yes, the artificial people can pick the method by which they serve you, but only from among the tasks the programmer designed them to take pleasure in. They would still be coerced into a strengthened desire to serve in some way, whatever activity of servitude they happen to choose; the robot would have autonomy only in deciding how to serve its programmers. Consider the case of a parent and a grown child. If the parent allows the child to choose only among becoming a lawyer, a doctor, or an engineer, then the child does not genuinely choose freely and is not truly autonomous; the parent is limiting the child. It is wrong to decide someone else's life plan. Having been programmed from the start with desires to perform certain tasks, the artificial people have no control over their lives and therefore no autonomy; their very creation made this clear, since the programmer chooses the tasks the robot enjoys and takes pleasure in doing. In the same way, a robot programmed only with strong desires for different methods of serving is not truly autonomous and does not truly have free will; the programmers are limiting the robots. Moreover, if one desire is made stronger than the others, then the artificial person lacks autonomy and was built to serve a specific purpose.
Weckert also argues that creating artificial people would be playing God because it would interfere with nature, and that the creation of artificial people would therefore be unnatural. However, when a technology is new or uncommon, people often call it "unnatural" (Petersen 292), perhaps out of fear or discomfort. As time passes, people grow comfortable with the same technology, and once it is old it comes to be considered "natural" (Petersen 287). Creating an artificial person is not morally impermissible because artificial people would be unnatural. Rather, one reason robot servitude is morally impermissible is the kind of life the artificial people would lead.
Consider that creating an artificial person would condemn it to an unfulfilled life. Creating artificial people with desires as simple as doing laundry is morally wrong because they will never experience the higher pleasures of life and will therefore live unfulfilled (Petersen 292). Although the artificial people will take pleasure in tasks like doing laundry, that pleasure would be like a human's pleasure in a fine meal or in sex, compared with the higher pleasure a human gains from intellect and imagination. Consider this unfulfilled-life objection in relation to animal breeding. Dogs and other animals are bred to be servile and obedient to us, diminishing the higher goals they might have pursued in the wild before domestication. These animals, bred for obedience and for physical characteristics that please us, now do tricks and live very simple lives with simple goals of obedience, when they could have had goals suited to hunting prey in the wilderness. By modifying these animals and creating these robots, we doom both to lives of only simple pleasures rather than higher pleasures; simple pleasures include activities like doing laundry, while higher pleasures are more meaningful achievements (Petersen 292). Petersen quotes John Stuart Mill: "It is better to be a human being dissatisfied than a pig satisfied; better to be Socrates dissatisfied than a fool satisfied" (Petersen 292). The artificial person would be the "fool satisfied" or the "pig satisfied" of these analogies, and the same holds for the previously posed example of a dog bred to be obedient and servile to us.
Robot servitude would also condition us to become desensitized (Petersen 294). By permitting robot servitude, humans would grow comfortable with treating robots as servants and would begin to treat other, non-artificial humans similarly. It would become harder for humans to recognize the sacrifices of other natural humans, and they would become desensitized. With artificial people continually doing our dirty work, we might start to expect humans to do our dirty work too, even though, unlike an artificial person programmed to desire such work, they have no desire to. For our own sake, it is important that we treat robots well, at the very least by not creating them or putting them into servitude to us. This desensitization objection holds whether or not one believes that artificial people have any moral significance.
An objector could argue that robots or artificial people do not, in fact, have moral worth and could never be ethical subjects: an artificial person could never be a person precisely because it is programmed. Torrance, for instance, has argued that only "organic" creatures can have ethical significance and personhood (Petersen 285); to be "organic," a creature must be carbon based, autopoietic (self-organized and self-maintained), and originally purposeful. If robots could never be ethical subjects, there would be no wrongdoing in robot servitude; feeling guilty about it would be like feeling guilty about using your washing machine, which is completely absurd (Petersen 283).
Robots could, however, have personhood and could be "organic" creatures: it is not impossible for the custom-made DNA of a person-o-matic to be carbon based, autopoietic (self-organized and self-maintained), and originally purposeful. But even if robots could never be ethical subjects, humans should still treat artificial people well and not as a means to an end. This follows a Kantian line of thought: Kant held that humans should not mistreat animals as mere means to an end, for if they did, he worried, they would develop an inclination to treat other humans as means to an end as well. Although Kant thought treating animals as a means to an end should be avoided, he did not consider it morally impermissible unless a human began to treat another human that way. The same logic applies to treating robots as servants. If humans began to treat artificial people as servants, as mere means to an end, those same humans could develop an inclination to treat other humans likewise, which would be morally impermissible. Humans should therefore avoid treating robots as servants so that they do not come to treat humans as servants, regardless of whether the artificial people have any ethical significance, because it is wrong to treat a human as a means to an end.
Robots can be ethical subjects, and it would be morally impermissible for artificial people to work for us, since this would amount to a kind of slavery. After exploring John Weckert's argument against robot servitude, I have rejected his reasoning that robot servitude is wrong because humans should not interfere with nature and play God. Instead, the creation of artificial people is itself morally impermissible, for several reasons concerning autonomy, desensitization, and an unfulfilled life. The selective breeding of animals to render them less aggressive, more playful, or cuter bears strong similarities to the robot servitude question; indeed, both the modification of animals and the creation of artificial people for servitude are morally impermissible.
Donaldson, Sue and Will Kymlicka. 2011. Zoopolis: A Political Theory of Animal Rights. New York: Oxford University Press.
Petersen, Steve. 2011. Robot Servitude. In Robot Ethics, eds. Patrick Lin, George Bekey & Keith Abney. Cambridge: MIT Press.
Weckert, John. 2016. Playing God. In The Ethics of Human Enhancement, 87–99. Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780198754855.003.0006.