Re: How hard a Singularity?

From: Samantha Atkins (samantha@objectent.com)
Date: Thu Jun 27 2002 - 02:50:34 MDT


Ben Goertzel wrote:

>
> Eliezer,
>
> I think that the creation of such an organization right now would be *a bit
> premature*, which is why I said 2-3 years in the future might make sense
> (and that may be overoptimistic, depending on how much progress anyone's AGI
> project makes during that time frame).
>
> However, I really think it's silly to call the idea "stupid" or "suicidal."
> At worst, in my view, it would be a useless sideshow; and at best it could
> serve to infuse a very big decision with some additional wisdom.

Unfortunately, at worst it could easily:

Politicize the entire process;
Freeze all development in bureaucracy and infighting;
Prematurely turn the wrong kind of attention on AGI development,
leading to great fear and persecution;
Result in the work being co-opted for immediately profitable
purposes and/or military use, thus creating a more rather than
less dangerous world.

>
> Remember -- in my proposal, this is an *advisory* group anyway, so it
> wouldn't have real power over what an AGI's owner does with it.... How then
> could it be suicidal?
>

If it had much publicity, honor, clout, or power, all of the
above scenarios would be distinctly possible given normal human
politics. If the world takes it seriously at all, it will have
all of these, plus stiff political competition to get on the
committee and conflicting agendas. I guarantee that the people
(may their black little hearts be praised) and the politicians
(whose even blacker hearts are generally damned) will not let
the nerds put only super-nerds on a committee such as this.

> What you propose instead seems to be: "The world should trust that whomever
> first creates a seed AI, is probably wise enough to make all decisions
> regarding the advent of the Singularity." I think your proposal is pretty
> darn dubious, as it relies on your psychological theory that only a person
> of supreme wisdom can possibly create a seed AI -- and I think this
> psychological theory is pretty darn dubious.
>

It is dubious on one hand. On the other hand, it is less dubious
to expect one wise and capable person or group to achieve the
goal than to expect such wisdom and capability from what will
quickly become a politicized committee. Human perfectibility,
where it exists at all, is usually not found in modern political
committees.

> I don't intend to spend my time currently forming committees, I intend to
> continue to spend it working on designing, engineering, testing and (later)
> teaching an AGI. But if/when Novamente gets to near-human intelligence, I'm
> going to be wise enough not to trust my own wisdom, and I'm going to do what
> I've suggested: assemble a committee of Singularity wizards to help me
> monitor Novababy's progress, help me teach the thing, and help make the deep
> decisions the thing will lead to...
>

I don't think Singularity wizards are the only ones that should
be consulted.

> And I hope very much that, if *your* AGI design/engineering efforts bear
> fruit and produce a near-human-level AGI, you *at that point* will have seen
> the error of assuming that your AGI-creation prowess necessarily implies
> your immense personal wisdom... and I hope that you will, at that point,
> follow some methodology similar to what I've described.
>

I still don't see how or where you are going to find this greater
wisdom, or how what you propose will produce a safer and more
agreeable outcome. We are talking about the innovation to end
all innovations here! We are talking about that which will turn
every institution on Earth inside out and change everything
forever for humankind. Where, on this earth, will you find the
mind or minds ready to take on this responsibility and with the
right balance to carry it off? I doubt that such a mind exists
at all at this time. But if it does exist, I don't expect to
find it on a committee.

- samantha
