Re: CFAI criticism Re: Article: The coming superintelligence: who will be in control?

From: Brian Atkins (brian@posthuman.com)
Date: Fri Aug 03 2001 - 01:40:40 MDT


James Higgins wrote:
>
> At 02:03 PM 8/2/2001 -0400, you wrote:
> >James Higgins wrote:
> > >
> > > When I first read "Staring Into the Singularity" I started thinking about
> > > how much more, well just more/different, an SI would be than ourselves. As
> > > it has been discussed in this room, most people believe that a human can't
> > > even talk with an SI through a binary (light on/off) connection without
> > > having them be controlled by the SI. Given such vast intellect,
> > > capabilities and the freedom to fully alter its own code I don't believe
> > > there is anything we can program into an AI that will ensure friendliness
> > > when it gets to SI status. We're just not anywhere near smart enough to do
> > > that. I really wish I didn't believe this (it would make me happier), but
> > > this is what extensive thought on the matter leads me to believe.
> > >
> > > Based on this belief, the best course may be to hold off on launching an AI
> > > that could progress to an SI until we have the ability to enhance our
> > > intelligence significantly. Humans with much greater intelligence *may* be
> > > able to alter/control a SI, but I believe that ultimately we cannot. But I
> > > suspect that we will have Real AI and most likely SI before that comes to
> > > pass, thus my belief that if SIs aren't inherently friendly we are probably
> > > doomed.
> > >
> >
> >One thing SIAI is trying to do is make something of a science out of
> >Friendliness. It may be impossible, but we're trying. Here we have a
> >large difference of opinion between us and James on what would be the
> >optimum path to take due more or less to this one issue of Friendliness.
> >But so far James seems to be going on mostly a "gut feel" that Friendly
> >AI is not doable with any large degree of certainty. Do you have any specific
> >criticisms of FAI, James, that we could try to discuss? I can tell from
> >your other posts that your main concern is apparently a combo of "will it
> >work long term" and "can we be 100% certain", right? It seems like your
> >concern is addressed in the CFAI FAQ:
>
> I am not an AI expert. Actually, I have no real training in AI at all. I
> am a master software architect/engineer and fairly intelligent,
> however. So I have read many of the Singularity-related documents, thought
> long and hard, and I participate in this list in order to learn and to provide
> a slightly different perspective on things.
>
> So I guess you could say I am going on "gut feel" to some extent, and also
> applied reasoning and logic. To me, it is not logical to assume that we
> can sufficiently influence an entity that will be many millions of times
> more intelligent than us. This is like saying mice could influence humans
> to be mouse-friendly. Yes, I realize that mice don't exactly equate or

Ok, first off, we are not attempting to influence an SI. The period during
which we will be actively influencing it runs from its birth until it reaches
at least the level of thought of Eliezer (and whoever else is around). Past
that point it will be on its own. So think of it as some humans influencing
another human, who then goes out into the world and makes their own decisions
from that point on.

> have technology, but we will be much farther down on the intelligence scale
> relative to an SI than a mouse is to us. So both my gut feel and reasoning suggest
> that we can't do much to influence the SI. No disagreement about the fact
> that we can create one, or that we should *try* to influence it, however.
>
> I have also discussed this topic with a friend of mine who is very
> intelligent and extremely knowledgeable about AI. He is working on (has
> been for quite some time actually) a language specifically intended for AI
> development. He has read many of the Singularity documents, but does not
> participate on this list (to the best of my knowledge at least, he has
> never posted). He had many good arguments for why we would almost certainly
> fail to successfully implement friendliness in an SI! So I have thought

We would love to hear these comments, either publicly or privately. Could
you get him to write them up in an email, or do you remember any specifics?
Anyone who can help push Friendliness further has directly contributed to
making the Singularity safer.

> about and discussed this topic thoroughly.
>
> >I have a hard time seeing how a human-level Gandhi-ish AI will suddenly run
> >amok as it gets smarter, except due to some technical glitch (which is a
> >separate issue we can talk about if you want).
>
> If you were talking about our ability to create a friendly AI, we
> agree. However, the AI will have to evolve many, many times in order to
> become an SI. During any one of these evolutions it could, intentionally
> or not, remove or hamper friendliness. Some of these could entail a
> complete, from-the-ground-up rewrite, using none of the original code and
> only hand-picked logic/data. Friendliness, as a requirement, could easily
> fall out during such a transition. It could decide that it would be better
> off without some of the code/data that is part of friendliness. Further,
> it could at some point ponder why it is supposed to be friendly at all. It
> could decide that being friendly to humans is not a top priority, or that
> how to be friendly should be completely different than what we envision.

Remember, we do not plan to "release" the AI until it has reached a level of
intelligence and Friendliness at least as good as our own. Do you really
think such a being would make the mistakes you are worrying about? It will
know how critical it is to get things right, and it will make sure to do so.
It will test rewritten versions of itself before giving up on the old design,
just as we tested the original version. And it will be able to test things
even more thoroughly, IMO.
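
To make that concrete, here is a minimal sketch of the kind of "test before
you retire the old design" discipline I have in mind. This is purely
illustrative Python with made-up names (Design, adopt_if_verified, and the
placeholder tests are all hypothetical, not actual SIAI code):

# Purely hypothetical illustration: a candidate self-rewrite is adopted
# only if it passes the same Friendliness checks the current design passes.

from typing import Callable, List


class Design:
    """Stand-in for one version of the AI's own architecture."""
    def __init__(self, name: str):
        self.name = name


def adopt_if_verified(current: Design,
                      candidate: Design,
                      friendliness_tests: List[Callable[[Design], bool]]) -> Design:
    """Return the candidate only if every test passes; otherwise keep
    the old, already-verified design."""
    for test in friendliness_tests:
        if not test(candidate):
            return current      # rewrite rejected; nothing is lost
    return candidate            # rewrite verified; old design retired


# Placeholder tests standing in for a real verification suite:
tests = [lambda d: d.name.startswith("v"),
         lambda d: len(d.name) > 1]

active = adopt_if_verified(Design("v1"), Design("v2"), tests)
print(active.name)  # prints "v2" only because both placeholder tests passed

The point is simply that the old design stays in place until the new one has
passed everything the old one already passed.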

As for changing its mind about Friendliness, we actually expect that to
happen. We are not attempting to indoctrinate some robot that must stick
to a certain way of thinking. We think that as it gets smarter it should be
able to continually examine and revise its exact thoughts regarding
Friendliness. However, it will only do so once it is damn sure that it
is making an improvement on what it has already learned. As for completely
dumping Friendliness, see:

http://www.intelligence.org/CFAI/info/indexfaq.html#q_2.6

>
> We have a hard enough time making stable hardware/software (Windows 2000
> crashed on me when I was originally writing this reply), so I frankly doubt
> our ability to implement such a subtle concept in such a complex,
> self-evolving system.

If we can't do it, we won't release it.

>
> That is not to say that I think SingInst, Eli or any other such individuals
> or organizations are wasting time or effort. Friendliness and such
> concepts are things that we must research. Even if we only nudge the SI,
> just slightly, in that direction the effort is worthwhile. Any progress is
> better than no progress. I'm just a realist, and I realistically don't
> think we are adequately equipped, at present, to ensure a friendly SI. I
> think intelligence enhancement, if it becomes available in time, would be a
> major boon to your work.
>
> >Also, can you address this quote from Q3.3 in the FAQ, since it relates
> >to your suggestion the ideal path would be to wait:
> >
> >"Nothing in this world is perfectly safe. The question is how to minimize
> > risk. As best as we can figure it, trying really hard to develop Friendly
> > AI is safer than any alternate strategy, including not trying to develop
> > Friendly AI, or waiting to develop Friendly AI, or trying to develop some
> > other technology first. That's why the Singularity Institute exists."
>
> That's the wonderful thing: we can have it both ways. I agree that you
> shouldn't be waiting for anything and should be working on friendliness
> now. You don't have to wait; the work that will eventually lead to intelligence
> enhancement is going on in parallel. If, however, we get to the point that
> we have both the hardware & software to launch an SI, but have not
> progressed massively on the general concept of friendliness, THEN I think it
> may be prudent to wait. So I'm advocating delays later, rather than
> sooner, if it is necessary.
>

That would be a heckuva tough decision for us to make. If we had grown our
AI to the point where we thought it was ready to release, I doubt we could
find arguments sufficient to hold off releasing it. We're talking here about
something that, like I said, would feel something like Eliezer/Gandhi when
you talk to it. We will have run all our tests, and as far as we can tell it
fits our goals. If we decide to wait even longer, how much longer should we
wait? Until humans hit a certain IQ? What specifically is the number we
should wait for? It would be extremely difficult to draw the line. You would
never be able to say you were 100% sure it was going to have a positive
long-term result. Meanwhile, a bunch of people die every day you delay...

-- 
Brian Atkins
Singularity Institute for Artificial Intelligence
http://www.intelligence.org/

