Re: Think of it as AGI suiciding, not boxing

From: turin (turin@hell.com)
Date: Tue Feb 21 2006 - 23:30:41 MST


> Odd. The thought of such technologies being under a central agency that
> WASN'T friendly is one of the reasons I feel an AGI is *necessary*!!! Of
> course, it also slightly raises the stakes on getting it correct in the first

I am much more afraid of government and corporate use of very powerful but not really superintelligent AI than of a truly friendly SAI itself. A grassroots sociopolitical movement for an international and transparent SAI seems in many ways more difficult than building friendly SAI. Or maybe I should ask, how friendly do humans have to be in order to make friendly SAI....

Is human friendliness different than SAI friendliness........ I think to make friendly SI we might want to think about, as I was saying earlier, the ways in which it affects our ideas of friendliness. For instance, would a friendly SI force/manipulate/influence humans to be friendly to each other? Of course, it would have to do so in a friendly way. Is it ok for the SI to "trick" us into being friendly to each other? I mean, do we want it to make us friendly to each other with or without our noticing? And more generally, do we want it to tell us what it is doing, even if we can't understand how it is doing it?



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:55 MDT