Re: General summary of FAI theory

From: Anne Corwin (sparkle_robot@yahoo.com)
Date: Wed Nov 21 2007 - 10:42:57 MST


Dan said:
   
> As a reasonably moral person, or at least a person
> who doesn't want to play into the hands of tyrants, should I give up
> my AI research?
   
  Who are you asking?
   
> Or are we in an arms race against unspecified enemies, where the only
> way to be sure that they won't get the superweapon first is to build
> it ourselves, as fast as possible?
   
  Only if you're living in a comic book.
   
> It seems to me that FAI theory, to be successful, must also describe
> ways in which to prevent dictators and other random idiots from
> constructing non-Friendly AGI, once the theory of AGI becomes widely
> known.
   
  Well, there are a lot of dangerous things in the world already today. It's worth looking at how those dangerous things are already kept out of the hands of "dictators and other random idiots", as well as how dictators and random idiots behave when they have access to dangerous things. Nuclear theory is "widely known", but nobody has blown up the world yet. So either we've merely been incredibly lucky, or there are forces (not forces that can be trusted as infallible sentries, but forces nonetheless) at work keeping those who would do great evil from doing it.
   
  One thing to look at in this regard is the psychology of power. Dictators generally want as much power as possible, and in that respect they actually have an incentive not to use powerful destructive technologies -- if there's no "world", there's no-one to have power over!
   
  I don't comment much on this list, but I've been reading for a while, and I get the distinct impression that in this entire AGI discussion, "power" is the central issue, more so than anything that might be termed "intelligence". Basically, when you guys talk about "Friendly AGI" and "Unfriendly AGI", you seem to be referring to entities that will act as *influences* on reality to an as-yet-unprecedented extent.
   
  Regardless of any controversies over the neuropsychological definition of "intelligence", it seems clear that in AGI circles, "intelligence" is very much conflated with a capacity to influence the environment in what humans would term "complex" ways. That is, matter and energy accessible to an intelligent agent are subject to being transmuted into forms quite disparate from the shape in which they were found -- per the will and perceived needs of that agent (e.g., under this formulation, "intelligence" is the property that permits metal ore to be processed into a bicycle or a car).
   
  And despite the fact that humans tend to find cars and bicycles both useful (and dangerous, depending on the context in which they are used), it seems that the primary factor here is not the cognitive process that leads one to posit a bicycle, but the means by which a person executes the act of transmuting ore into bicycles. In short, it's *capacity to influence* that matters in discussions of safety, much more so than *capacity to think*!
   
  You (generic "you", referring to FAGI proponents) want to build what amounts to a "magnifier" for the Forces of Good (however those might be defined), and prevent UFAGI from magnifying the Forces of Evil (or the Forces of Stupid, if you prefer). The commonly-invoked "let's build a Good AI before someone builds a Bad AI!" scenario has always struck me as another way of saying, "let's make sure the power is concentrated in the hands of the Good Guys, so that the Bad Guys don't cause harm".
   
  This isn't a new problem specific to theoretical AGI by any means; it's an ancient problem, and one fraught with the exact same controversies that come up on this list and others over and over and over again. In fact, the only (important?) difference between the "FAGI vs. UFAGI" discussion and other "Good-vs-Evil" discourses is the anticipated magnitude of the power of those whose will ends up comprising the primary inputs to an AGI.
   
  And that brings up another aspect of the power discussion -- one thing I've never seen stated outright by AGI researchers/proponents is whether the intent is for the AGI itself to hold the majority of the power, or for those who "control" or build the AGI to hold it. When bringing up the matter of "dictators and random idiots" getting their hands on AGI, the underlying assumption seems to be that the AGI will magnify the will and/or actions of whoever wields it.
   
  To me, this implies that "AGI" is intended (and perhaps expected) to be not so much autonomous as instrumental -- i.e., a person with evil intentions who "gets their hands on" AGI theory will almost assuredly create an "evil AGI". Is this really the intended conceptualization of AGI? Or is this just an implicit assumption that hasn't been extensively examined? Because if AGI is "nonautonomous" in the sense that it will always reflect the will (or the foolishness) of its creator(s), that would imply that we don't need to "stop evil AGIs", but rather that we need to "stop evil humans from accumulating power". And that is a struggle being waged by people everywhere in the world right now, and one that does not require knowledge of AI theory to participate in.
   
  Now, if we are talking about *autonomous* AGIs (that is, AGIs that will not necessarily reflect, and therefore will not magnify, the will of their creators), it almost seems as if trying to assure "safety" in advance of building them is utterly futile. The closest thing we have right now to the capacity to create "autonomous AGIs" is the capacity to create more humans via reproduction.
   
  No parent can assure that their future offspring will not someday destroy the world, and it would seem rather ridiculous (in the practical, if not the existential-risk, sense) to try to ban people from having kids until it can be assured that no child will ever grow up to destroy the world. Right now, we deal with that kind of quandary through setting up barriers to extreme power accumulation in any one individual, through making and enforcing laws, and through social pressures (e.g., shunning and shaming of persons who commit acts like child abuse).
   
  This system doesn't work perfectly, since abuses of both persons and power still exist (and exist in horrible manifestations at times), but it is better than nothing. With that in mind, perhaps the aim should be not to create "an AGI", but to create a colony or group of AGIs (with differently-weighted priorities) to serve as a "checks and balances" system. The key is to avoid creating a situation that permits extreme concentration of power. "Intelligence" is an afterthought in that regard.
   
  - Anne

"Like and equal are not the same thing at all!"
- Meg Murry, "A Wrinkle In Time"
       


