From: Thomas McCabe (email@example.com)
Date: Wed Nov 28 2007 - 19:11:44 MST
On Nov 28, 2007 8:43 PM, Harry Chesley <firstname.lastname@example.org> wrote:
> Robin Lee Powell wrote:
> > On Wed, Nov 28, 2007 at 08:49:39AM -0800, Harry Chesley wrote:
> >> First, to be useful, FAI needs to be bullet-proof, with no way for
> >> the AI to circumvent it.
> > If you're talking about circumvention, you've already missed the
> > point. An FAI no more tries to circumvent its friendliness than you
> > have a deep-seated desire to slaughter babies.
> >> This equates to writing a bug-free program, which we all know is
> >> next to impossible.
> > I don't know who "we all" is there, but they are wrong.
> > http://en.wikipedia.org/wiki/Six_Sigma
> > It's hard, and requires concerted effort, but when was the last time
> > you heard of a bug in an air traffic control program? It happens, but
> > it's an extremely rare thing isolated to *particular* ATC programs;
> > most of them are basically bug free. Same with the space shuttle.
> > Same with most hospital equipment.
> Those techniques work when you have a very well-defined specification of
> the application you're developing, and when you're willing to put lots
> of resources into making the implementation correct. Do you really
> believe that the AIs created in some random research lab or some garage
> will meet either of those criteria?
They had better darn meet both of those criteria, or we're hosed. What
do you think SIAI is for? To develop an AGI which *is* well-defined
and well-built, before some random research lab or garage kills us.
> >> Second, I believe there are other ways to achieve the same goal,
> >> rendering FAI an unnecessary and onerous burden. These include
> >> separating input from output, and separating intellect from
> >> motivation. In the former, you just don't supply any output
> >> channels except ones that can be monitored and edited.
> > OMFG has that topic been done to death. Read the archives on AI
> > boxing.
> And nothing that I've read about it has yet convinced me. What I've seen
> seems to come down to one of two arguments: 1) Intelligence is like a
> corrosive substance that will leak out, overflow, or corrode any
> container. This seems too simplistic an argument to me. Intelligence is
> far too complex to be analyzed as a commodity.
> Or 2) intelligence is
> anthropomorphic in that, like us, it will never stand for being boxed up
> and, being so very smart, will figure out a way out of the box. That
> strikes me as too anthropomorphic. (Anthropomorphism has its place, but
> not every part of it is required in every AI.)
You are correct that "standing" for being "boxed up" is too
anthropomorphic, but acquiring more resources has higher expected
utility under the vast majority of utility functions.
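To make that last claim concrete, here is a toy sketch (my illustration, not something from the thread): model resources as determining which outcomes an agent can reach, with fewer resources reaching a subset of what more resources reach. Then for *any* utility function, more resources can never do worse, and for randomly drawn utility functions they usually do strictly better. All names and numbers below are illustrative assumptions.

```python
import random

def best_achievable(utility, reachable_outcomes):
    """Best utility the agent can secure from the outcomes it can reach."""
    return max(utility(o) for o in reachable_outcomes)

outcomes = list(range(100))       # abstract outcome labels
few_resources = outcomes[:10]     # outcomes reachable with few resources
more_resources = outcomes[:50]    # strict superset: reachable with more

random.seed(0)
favors_more = 0
trials = 1000
for _ in range(trials):
    # Draw a random utility function: arbitrary values over outcomes.
    values = {o: random.random() for o in outcomes}
    u = lambda o: values[o]
    # Weak dominance holds for every utility function, not just most.
    assert best_achievable(u, more_resources) >= best_achievable(u, few_resources)
    if best_achievable(u, more_resources) > best_achievable(u, few_resources):
        favors_more += 1

print(f"{favors_more}/{trials} random utility functions strictly favor more resources")
```

The point of the sketch is the asymmetry: the inequality in the loop holds unconditionally, so "wanting out of the box" needs no anthropomorphic motive, only a goal whose attainment is easier with more options.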
> Nor do I buy the
> argument that a super-AI can talk its way out.
It's already been done, twice, with an ordinary human in place of the AI.
> (I'll leave out 3) that
> it will take over the world to get more computing power, since, although
> an entertaining thought, I don't see it as a serious scenario, more like
> the plot to a science fiction novel -- oh, wait, it's already been done,
> The God Machine by Martin Caidin.)
> > Why should we go to the effort of doing your research for you? How
> > arrogant is *that*?
> Please don't do any more than you feel like. But please do understand
> that my questions to this list are not just an attempt to stir up the
> ant hill (tempting though that may be). I am actively working on AI, and
> though I'm unlikely to create a singularity, I do feel I should worry about
> these issues. At present, my AI architecture has no facilities for FAI
> as discussed here because I think it's a waste of time.
For the love of <whatever deity you do or do not believe in>, stop
working until you get a clear idea of what you've gotten yourself
into. See http://www.sl4.org/wiki/SoYouWantToBeASeedAIProgrammer for
what's necessary to actually program an AI.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:01 MDT