Re: nagging questions

From: Samantha Atkins (samantha@objectent.com)
Date: Tue Sep 05 2000 - 19:41:52 MDT


"Eliezer S. Yudkowsky" wrote:
>
> Samantha Atkins wrote:
> >
> > An interesting bit of reasoning. But is an AI singularity the least
> > risky technological revolution?
>
> Yes. Why? Because it negates the risks of the other technological
> revolutions. If we did make it through nanotech okay, we'd then have to deal
> with the AI revolution. As long as Earth remains free and alive, sooner or
> later we're going to have to deal with the issue of AI. But if we make it
> through building a Sysop, then we're safe from nanotech too, and almost
> certainly all the other ultratechnologies out there.
>
> Thus the path of AI first is the path of least risk.

This doesn't deal with the rest of my question. Since a Singularity
class AI is utterly unpredictable, much more so than human beings, and
is much more powerful than mere humans armed with things like nanotech,
exactly why is the AI less dangerous? Your argument above seems to
hinge on the assumption that the AI will be a Sysop that rules over
everything and somehow keeps us from harm. That is a quite questionable
assumption.

- samantha
