Coherent Extrapolated Volition = how the best of us think

Just stumbled upon this while reading the wiki article on Friendly AI.

Yudkowsky advances the Coherent Extrapolated Volition (CEV) model. According to him, coherent extrapolated volition is the set of choices and actions people would collectively take if "we knew more, thought faster, were more the people we wished we were, and had grown up closer together." Rather than a Friendly AI being designed directly by human programmers, it is to be designed by a "seed AI" programmed to first study human nature and then produce the AI that humanity, given sufficient time and insight, would want in order to arrive at a satisfactory answer. The appeal to an objective though contingent human nature (perhaps expressed, for mathematical purposes, as a utility function or other decision-theoretic formalism) as the ultimate criterion of "Friendliness" is an answer to the meta-ethical problem of defining an objective morality: extrapolated volition is intended to be what humanity objectively would want, all things considered, but it can only be defined relative to the psychological and cognitive qualities of present-day, unextrapolated humanity.

Wow, I love it! I love the seed concept too; I've contemplated such things myself many times. Let's do this.

(after googling a bit) Hm, not so easy. Apparently it's a lot of effort (see https://wiki.lesswrong.com/wiki/Coherent_Extrapolated_Volition). Let's still do this; I have some ideas!
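For instance, to make the "utility function" framing above concrete for myself, here is a toy Python sketch of what extrapolation-plus-coherence could look like if you squint. Everything in it (the names, the trivial extrapolate step, the unanimity rule standing in for "coherence") is my own placeholder, not anything from Yudkowsky's actual proposal:

# Toy sketch, not Yudkowsky's proposal: CEV squeezed into "extrapolate each
# person's utility function, then act only where the extrapolations agree".
from dataclasses import dataclass
from typing import Callable, List, Optional

Action = str
Utility = Callable[[Action], float]

@dataclass
class Person:
    name: str
    current_utility: Utility  # what they want right now, unreflectively

def extrapolate(person: Person) -> Utility:
    # Stand-in for "knew more, thought faster, were more the people we
    # wished we were". Here it just returns the current utility unchanged;
    # this function is exactly the part nobody knows how to write.
    return person.current_utility

def coherent_choice(people: List[Person], actions: List[Action]) -> Optional[Action]:
    # Take the action that every extrapolated volition ranks first; if the
    # extrapolations don't cohere on a single action, do nothing.
    favorites = [max(actions, key=extrapolate(p)) for p in people]
    return favorites[0] if len(set(favorites)) == 1 else None

if __name__ == "__main__":
    alice = Person("alice", lambda a: {"cure_disease": 1.0, "paperclips": 0.0}[a])
    bob = Person("bob", lambda a: {"cure_disease": 0.9, "paperclips": 0.1}[a])
    print(coherent_choice([alice, bob], ["cure_disease", "paperclips"]))  # -> cure_disease

Obviously the extrapolate step is doing all the real work here, and that is precisely the "lot of effort" part the LessWrong wiki is talking about.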
