Superintelligence as Moral Philosopher

Abstract: Non-biological superintelligent artificial minds are scary things. Some theorists believe that if they came to exist, they might easily destroy human civilization, even if destroying human civilization were not a high priority for them. Consequently, philosophers are increasingly worried about the future of human beings, and much of the rest of the biological world, in the face of the potential development of superintelligent AI. This paper explores whether the increased attention philosophers have paid to the dangers of superintelligent AI is justified. I argue that, even if such a thing is developed, and even if it is able to gain enormous knowledge, there are several reasons to believe that the motivation of such an AI will be more complicated than most theorists have supposed thus far. In particular, I explore the relationship between a superintelligent AI's intelligence and its moral reasoning, in an effort to show that there is a realistic possibility that the AI will be unable to act, due to conflicts between the various goals it might adopt. Although no firm conclusions can be drawn at present, I seek to show that further work is needed and to provide a framework for future discussion.
