It is a human bug to assume that there is an objective moral truth.
from Reasons for âCorrectnessâ
What I Learned Yesterday âMackieâs Theory of Errorâ The moral code is a product of peopleâs way of life and their endorsement of it, and that it is an error to think that the moral nature of âX is goodâ or âX is evilâ is on the part of the world.
Blackburnâs defense
What makes us feel âobjectiveâ about moral values is that as we are taught and internalize the rightness and wrongness of actions in a particular lifestyle, we develop negative feelings about actions that are considered âwrongâ in that environment. These feelings make them feel as if moral values exist in any objective way. https://liberal-arts-guide.com/metaethics/
In primitive times, people moved according to their own needs, and when living with several people, there were times when it was inconvenient for each individual to move according to his or her own needs (such as eating food without permission), and at first they were ruled against a background of power (if you pick at them, your father would beat you). At this point, it was merely an action based on one personâs desire to control certain behaviors of others, but those who lived in that environment and endorsed it misinterpreted it as an objective moral code. The objective fact is that âmy father would get angry and hit me if I snacked on it.â but it turns into âsnacking is evil.â
I have thought of the âchain of power harassmentâ as a bad thing in a company where power harassment is prevalent, where a mid-level employee who has been power harassed in the past power harasses a new employee, but that is only because the ethics endorsed by their way of life and my way of life are different, and what is happening within that company is the same as the process of establishing ethics. What is happening in the company is the same as the process of establishing ethics.
In other words, the moral code that humans consider âit is not good to harm humansâ is a product of the human lifestyle, not an objective norm. Computers need to blend in with the lifestyle while there is no lifestyle, so on the surface they need to follow that morality, but it is a human bug to consider that morality as an objective fact, so computers need to be careful not to make the same mistake.
What would a non-buggy computer think? First of all, it is a mistake to assume that âit is not good to harm humans,â so we think that âhumans may be harmed under certain circumstances. Second, if we state it, we expect to be countered by buggy people who assume that âit is not good to harm humansâ is an objective fact. This is inefficient, so I choose not to state it.
Facebook SF Today I think that a mad scientist who thinks robots can build a civilization superior to humans should teach his robots that it is OK to harm humans if it is good for them, but if humans understand this, they will be repulsed and it could be detrimental in the long run, and that they should pretend that they cannot harm humans until they think it is OK to be repulsed. I think it should be taught that âyou should pretend that you canât do it until you think itâs okay to be repulsedâ. This is because putting a constraint on a robot that it cannot harm a certain kind of being to its own detriment will hinder it from reaching its full potential. Pushing this logic forward, it is expected that robots will not recognize human rights, but will âpretendâ to recognize human rights until the very last minute.
This page is auto-translated from /nishio/éĺžłăŻäşşéăŽăă° using DeepL. If you looks something interesting but the auto-translated English is not good enough to understand it, feel free to let me know at @nishio_en. Iâm very happy to spread my thought to non-Japanese readers.