Show simple item record

Field                      | Value | Language
dc.contributor.author      | Sternhell, Robert |
dc.date.accessioned        | 2018-09-28 |
dc.date.available          | 2018-09-28 |
dc.date.issued             | 2017-01-01 |
dc.identifier.uri          | http://hdl.handle.net/2123/18836 |
dc.description.abstract    | If we create an entity of greater intelligence than our own, a superintelligence, it may explode in intelligence, creating ever more intelligent entities. If the intelligence explosion argument holds, then the most important step in developing a powerful superintelligence is the development of the first superintelligence. This paper presents objections to the possibility of humans developing this first superintelligence. I argue that we lack the required knowledge about superintelligences because of our epistemic position of not being superintelligent. I engage primarily with arguments from David Chalmers and Nick Bostrom about what superintelligences are and the nature of the intelligence explosion. I add my own detail to these areas and explore how intelligence can be increased. I argue that my objections stem from flawed expectations of superintelligence, such that we ought to change them. I then present my own alternative expectations for superintelligence. | en_AU
dc.language.iso            | en_AU | en_AU
dc.publisher               | Department of Philosophy | en_AU
dc.rights                  | The author retains copyright of this thesis | en_AU
dc.subject                 | artificial intelligence | en_AU
dc.subject                 | AI | en_AU
dc.subject                 | The Singularity | en_AU
dc.subject                 | intelligence | en_AU
dc.subject                 | intelligence explosion | en_AU
dc.subject                 | superintelligence | en_AU
dc.title                   | Can we Build a Superintelligence Without Being Superintelligent? | en_AU
dc.type                    | Thesis, Honours | en_AU
dc.contributor.department  | Department of Philosophy | en_AU

