5/26/2023

Superintelligence (Nick Bostrom)

I made a few references to Friendly AI research in my last post on Existentialism in Design. There, I positioned existentialism as an ethical perspective that contrasts with the perspective taken by the Friendly AI research community, among others. This prompted a response from a pseudonymous commenter (in a sadly condescending way, I must say) who linked me to a post, "Complexity of Value," on what I suppose you might call the elite rationalist forum Arbital. I'll take this as an invitation to elaborate on how I think existentialism offers an alternative to the Friendly AI perspective on ethics in technology, and particularly the ethics of artificial intelligence.

The first and most significant point of departure between my work on this subject and Friendly AI research is that I emphatically don't believe the most productive way to approach the problem of ethics in AI is to consider the problem of how to program a benign superintelligence. This is for reasons I've written up in "Don't Fear the Reaper: Refuting Bostrom's Superintelligence Argument," which sums up arguments made in several blog posts about Nick Bostrom's book on the subject. This post goes beyond the argument in that paper to address further objections I've heard from Friendly AI and X-risk enthusiasts.