RE: SIAI's flawed friendliness analysis

From: Ben Goertzel (ben@goertzel.org)
Date: Sat May 24 2003 - 09:01:27 MDT


hi,

> OK. Let's look at the semi-hard take-off scenario. I assume that what
> you mean is that for a period after the creation of a baby-AGI the
> humans around it will have to do a lot of work to build it up to a
> reasonable level of competence in the real world (lots of training,
> lots of new code development, lots of hand-holding). But at some stage
> all this hard work will come together and the AGI (or a group of AGIs)
> will be able to drive its/their own self-improvement without much input
> from humans. At that point we will get a hard take-off. Ben, have I
> interpreted your views accurately?

Yep.

There are different scenarios regarding mainstream acceptance of the
potential of the baby AI.

One case is where the world realizes a baby AGI is present and a takeoff is
imminent, and governments become involved.

Another case is where the state of progress is intentionally kept secret by
the AGI developers.

Another case is where the developers go public and almost nobody believes
them, or takes them fully seriously about the real potential of their work.

> If this is a reasonable summary, then it seems to me that we have to
> have a very, very reliable guess as to when to expect the transition to
> take-off to begin.

I'm not sure this will be possible. I think we will be able to say when
it's *remotely possible* versus when it's just too early-stage. But
distinguishing "remotely possible" from "reasonably likely" is going to be
hard -- the first time around, which is of course the most important
time...

> My understanding of things is that SIAI feels that we cannot know when
> we are on the safe side of take-off, so friendliness work should be done
> now. Ben, on the other hand (I think), thinks that it would be 100% safe
> to have an early-model baby AGI in existence before much work was
> done on introducing life-compassion.
>
> Whether Ben is right, I think, depends on whether the lead time to go
> from an early-model baby AGI to the point of hard take-off is longer
> than the lead time for AGI development teams and/or society to go
> from a vague idea of what we want in the way of AGI morality to the
> point where we can introduce it tangibly and securely into real AGIs.

I think that the appropriate manner of moving from vagueness to precision in
this matter is not clear right now, and will be much clearer once we have a
baby AGI doing the equivalent of crawling around and going "Goo goo goo."

> My own feeling is that there are lots of issues about what sort of
> morality we think AGIs should have that are not hardware/software
> dependent, and that most likely have a longer lead time than the
> lead time from an early-model baby AGI to the point of hard take-off.

I don't deny that this is a possibility, but I don't clearly see this to be
the case.

I have chosen not to focus my own efforts in this direction, but I am
supportive of such efforts and will participate in pertinent discussions or
projects led by others.

> If that's so then we need to redouble the effort on AGI morality. The
> Singularity Institute has done very valuable work in the area which
> needs to be developed further. But I think there are aspects of the AGI
> morality issue that the Institute itself hasn't even flagged.

Such as?

> It would be quite interesting to conduct some sort of collaborative
> scoping exercise to identify what issues different people think we need
> to look at. If we could produce a single document that had all the big
> issues that each one of us thinks should be considered in the course of
> tackling AGI morality then we might be able to avoid talking past each
> other and from this document we might be able to generate an R&D
> agenda - moving in several directions at once, as I don't anticipate
> that we will all be of one mind.
>
> What do you think of this suggestion?

It sounds like a fine idea to me -- except that I don't have time to write
the document at the moment, because I've prioritized other things ahead of
it. I'll be happy to contribute ideas to such a document, and to review
and discuss it.

-- Ben Goertzel


