Re: General summary of FAI theory

From: justin corwin (outlawpoet@gmail.com)
Date: Wed Nov 21 2007 - 12:53:56 MST


Comments below:

On Nov 21, 2007 9:42 AM, Anne Corwin <sparkle_robot@yahoo.com> wrote:
> One thing to look at in this regard is the psychology of power. Dictators
> generally want as much power as possible, and in that respect they actually
> have an incentive not to use powerful destructive technologies -- if there's
> no "world", there's no-one to have power over!

One problem with this kind of safety factor is that it's limited by
attention and motivation. One obvious example: in national
governance, officials are presented with options on such a scale that
decisions get made in which deaths and disadvantages for many are, in
context, weighed as less important than other considerations. The
result is extremely terrible situations created for people, by folk
who would largely consider themselves decent, and who probably would
never intentionally inflict, in the concrete, the situation they
caused in the abstract.

R.J. Smeed proposed that there is a certain amount of risk people are
willing to psychologically tolerate, and that this level of acceptable
risk is invariant with respect to scale and context. This implies,
among other things, that car accidents will remain at a stable level
regardless of regulation (so long as the perception of risk does not
change), and that people will tolerate the same percentage of risk
with national economies that they do with household savings (which is
quite terrifying when corrected for scale). There is some evidence for this.
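For the curious, Smeed's original observation was a fitted formula
relating annual road deaths to vehicle count and population, roughly
D = 0.0003(np^2)^(1/3), if I remember the constant right. A quick
sketch of what that predicts (the example country is made up, and the
constant and exponents should be treated as illustrative, not
authoritative):

    # Rough sketch of Smeed's 1949 fit: annual road deaths D as a function
    # of registered vehicles n and population p. Numbers are from memory;
    # treat them as illustrative only.
    def smeed_deaths(vehicles, population):
        return 0.0003 * (vehicles * population ** 2) ** (1.0 / 3.0)

    # Hypothetical country: 30 million vehicles, 60 million people.
    print(round(smeed_deaths(30e6, 60e6)))  # on the order of ~14,000 deaths/year

The striking part is that regulation and vehicle safety don't appear
in the fit at all, which is the risk-homeostasis point above.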

> I don't comment much on this list but I've been reading for a while, and I
> get the distinct impression that in this entire AGI discussion, "power" is
> the central issue moreso than anything that might be termed "intelligence".
> Basically, when you guys talk about "Friendly AGI" and "Unfriendly AGI", you
> seem to be referring to entities that will act as *influences* on reality to
> an as-yet-unprecedented extent.

As a precedent, human effects upon the environment are fairly widespread.

> You (generic "you", referring to FAGI proponents) want to build what amounts
> to a "magnifier" for the Forces of Good (however those might be defined),
> and prevent UFAGI from magnifying the Forces of Evil (or the Forces of
> Stupid, if you prefer). The commonly-invoked "let's build a Good AI before
> someone builds a Bad AI!" scenario has always struck me as another way of
> saying, "let's make sure the power is concentrated in the hands of the Good
> Guys, so that the Bad Guys don't cause harm".

You can also formulate it as safety features of a new class of
engineering designs, to be more neutral about power politics. To be
honest, though, I've always seen outcompeting bad alternatives as a
much better option than constraining designs. Would you see syringe
exchange programs, or youth basketball programs, or doctor
accreditation (all of which are attempts to construct and deploy
positive alternatives before, in place of, or in a superior position
to bad ones) as power concentration?

> No parent can assure that their future offspring will not someday destroy
> the world, and it would seem rather ridiculous (in the practical, if not the
> e-risk sense) to try and ban people from having kids until it can be assured
> that no child will ever grow up to destroy the world. Right now, we deal
> with that kind of quandary through setting up barriers to extreme power
> accumulation in any one individual, through making and enforcing laws, and
> through social pressures (e.g., shunning and shaming of persons who commit
> acts like child abuse).

I'm fairly certain that most parents of such a hypothetical person
would support their neutralization, if it were possible. And we do, in
fact, indirectly try to prevent certain persons from having children,
by incarcerating them or by removing their children to safer situations.

> This system doesn't work perfectly, since abuses of both persons and power
> still exist (and exist in horrible manifestations at times), but it is
> better than nothing. With that in mind, perhaps the aim should be not to
> create "an AGI", but to create a colony or group of AGIs (with
> differently-weighted priorities) to serve as a "checks and balances" system.
> The key is to avoid creating a situation that permits extreme concentration
> of power. "Intelligence" is an afterthought in that regard.

A checks and balances system of the type you're envisioning, where
differing actors of opposing or orthogonal intent balance each other's
influence, is necessary when certain intents and influences cannot be
removed or nullified, and when the averaged actions of those actors
produce a stable situation preferable to letting those intents run
free. There are many situations, even now, where that's not the case.
You don't create a stable situation among rowdy children by giving the
children equivalent and opposite positions and letting them
self-balance; you'd end up with a messy, violent pecking order of
kids, because that is the stable result of their opposing and
orthogonal intents and the actions they take, regardless of how you
try to organize it.

Similarly, if you assume that AI goal systems, or more concretely AI
actions, must have certain characteristics, that you can't change
them, and that you must set them in opposition to each other to
control the outcomes in a stable fashion, then I think you need to
determine exactly what you think would happen, and why that's better
than trying to remove or change those characteristics, or to provide
an adult supervisor, so to speak.

-- 
Justin Corwin
outlawpoet@hell.com
http://outlawpoet.blogspot.com
http://www.adaptiveai.com

