From: Brian Atkins (firstname.lastname@example.org)
Date: Sun Nov 30 2003 - 12:34:31 MST
Perry E. Metzger wrote:
> I suspect (I'm sorry to say) that assuring Friendliness is impossible,
> both on a formal level (see Rice's Theorem) and on a practical level
> (see informal points made by folks like Vinge on the impossibility of
> understanding and thus controlling that which is vastly smarter than
> you are.) I may be wrong, of course, but it doesn't look very good to
I think you have some misconceptions... First off, the concept isn't to
provide perfect 100% assurance. No one is claiming that can be done.
Although that would be great, it isn't doable in practice or even on
paper... we must settle for a more practical "best shot".
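For what it's worth, Rice's Theorem only rules out a perfect, always-terminating classifier of a program's behavior; it says nothing against a probabilistic "best shot". Here is a minimal Python sketch of the standard reduction showing why a *perfect* decider is impossible: if one existed, it would solve the halting problem. All names here (`is_friendly`, `unfriendly_program`, etc.) are hypothetical stand-ins for illustration, not anything from Friendliness theory itself.

```python
def halting_from_perfect_decider(is_friendly, unfriendly_program, p, x):
    """Reduction sketch: a hypothetical perfect, total decider for any
    non-trivial behavioral property ("is this program Friendly?") would
    also decide whether p halts on x -- which is impossible."""
    def candidate():
        p(x)                          # runs forever iff p never halts on x
        return unfriendly_program()   # reached only if p(x) halts
    # candidate exhibits the unfriendly behavior exactly when p halts on x,
    # so a perfect decider's verdict would answer the halting question.
    return not is_friendly(candidate)

# Toy stand-in, exercised only on a program that does halt, just to show
# how the construction fits together (a real decider cannot exist):
def toy_is_friendly(f):
    return f() != "UNFRIENDLY"        # only terminates when f() does

halts = halting_from_perfect_decider(
    toy_is_friendly,
    lambda: "UNFRIENDLY",             # marker for unfriendly behavior
    lambda x: None,                   # p: a program that halts on any input
    0,
)
```

Nothing in this argument touches approximate or probabilistic assurance, which is the "best shot" being discussed above.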
Secondly, as you say, "controlling" something like this is an
impossibility, which is why we have never talked about attempting such a
thing. You have to build something that will be okay on its own as it
outgrows our intelligence level, or else not build it at all.
Thirdly, be careful not to confuse Friendliness with the Sysop scenario
idea, which is where Google shows me you brought up Rice's Theorem
previously. They are two completely different things: development of an
FAI does not imply that such a scenario will come into existence.
> (I realize that I've just violated the religion many people here on
> this list subscribe to, but I have no respect for religion.)
Historically, the "F word" represents an ongoing attempt to come up with
some technical means to greatly increase the odds of safe AI. The
attempt remains unfinished, and it is an area in which I wish others
were doing more research. It's a technical area that any person or group attempting
work on real AI should face and address before proceeding with
construction of a full attempt. We can argue over the best form of it
(or even whether it really is needed) here just as if we were arguing
whether to use a SCSI or SATA RAID controller in a server we were
constructing, and that is one of the things this list was created for
and has been used for in the past.
So... I don't see any sacred cows to slaughter. Everyone here realizes
(or should realize) that this is still a very new and very unfinished
area of AI research. There are no guarantees it will ultimately pan out.
At most, I think you will find hope among some participants here that
this problem will ultimately be solvable to a degree that will allow
development of AI to proceed with real knowledge of the odds, and that
those odds will be much better than they are at the moment. We all
realize that this may yet turn out to be a dead end, but for now we feel
it is worth continuing to explore.
> Keep in mind that likely, out there, there are intelligent creatures
> created without regard to "Friendliness theory" that whatever you
> create is going to have to survive against. Someday, they'll encounter
> each other. I'd prefer that my successors not be wiped out at first
> glance in such an encounter, which likely requires that such designs
> need to be stupendous badasses (to use the Neal Stephenson term).
> Again, though, I'm probably violating the local religion in saying
Nope, this idea has been brought up at least once before (a couple of
years back, I guess). Specifically, the idea that for some reason an FAI
would be limited or unable to win a conflict with an external UFAI. I
don't see any reason why this would be so, and I don't recall anyone
making a convincing argument for it last time around, but feel free...
-- Brian Atkins Singularity Institute for Artificial Intelligence http://www.intelligence.org/
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:43 MDT