Time for Morality Engines, Ethical Overrides, and Cybernetic Governors

More important than the techie cops in this story from Wired is that cybernetic intelligence needs to be self-governing, and that requires some kind of morality engine. I know the approach in the link above is more concerned with policing the people who are developing AI, but it sparks the conversation for anyone who thinks about the long-term future. This responsibility will grow to encompass enormous undertakings, and we need a neutral arbiter, one that can be trusted by all humans and machines. All the better if such a neutral arbiter exists within each independent AI. A bonus of this process is that we'll learn how to better govern ourselves.

[Edit Nov 3 2016: I now think there is a close relationship between the Nash Equilibrium and the Holy Ghost. Not sure how it works yet, probably involves prime numbers and pi = 4 in an analog-to-digital interface. Carry on.]

[Edit May 25 2017: Also along the way I finally found eigenvectors/values/things, and these are related too. Gadzooks, how are we gonna sort all these insight fragments into a single coherent narrative? Time will do it. Carry on.]

[Edit Nov 30 2019: Jeepers I'm nuts, conflating ideas that have nothing to do with each other as though I am certain they do.]
