Re: nagging questions

From: Samantha Atkins (samantha@objectent.com)
Date: Tue Sep 05 2000 - 11:52:54 MDT


xgl wrote:
>
> On Mon, 4 Sep 2000, Samantha Atkins wrote:
>
> >
> > Even granted that this Power is a much higher sentience, I still sometimes
> > feel as if I am betraying humankind, betraying my own primary motives, in
> > working to bring it about. How do the rest of you deal with this?
> > What am I missing?
> >
> > I know that the Singularity is eventually inevitable for some
> > intelligent species and inevitable for us barring major disaster or some
> > totally unforeseen bottleneck. But how can I be in a hurry to bring it
> > about and still claim I work for the good of humanity?
> >
>
> there's no denying it -- the creation of a yudkowskyian
> transcendent mind (ytm) may be our salvation; it also may well be our
> doom. however, the same can be said for other ultra-technologies,
> especially nanotech. the issue here is mainly one of navigation.
>
> goal: survive the next 50 years;
>
> facts:
> - accelerating technological progress is virtually inevitable;
> - any technological revolution carries significant risk;
> - different technologies differ in risk;
> - while we might not be able to suppress any one technology, we
> may be able to influence the order in which they arrive;
>
> action:
> contribute my effort to increase the probability that the
> technology with the least risk arrives first.
>

An interesting bit of reasoning. But is an AI singularity really the least
risky technological revolution? Such an entity is a totally unpredictable
quantity with unknown needs and goals. Human beings, on the other hand,
are pretty predictable, often woefully so. So which carries less risk:
beings you understand, and know something about influencing, equipped
with super-technological assets; or a piece of super-super-technology
with even greater assets, which you don't and probably can't understand
at all, and have no way of influencing except perhaps at the very
beginning?

> in other words, one doesn't need to be certain of the eventual
> outcome of one's work -- all one needs to believe is that under the
> circumstances, one's present course of action is the most likely to lead
> to a good end.
>
> singularitarians, for instance, believe that the creation of a
> ytm is the best bet for humanity. if ytm indeed arrives first, it will
> trump all risks posed by other ultra-technologies; however, other
> technologies, especially nanotech, have the lead in the race -- hence the
> hurry.
>

Unfortunately, if the work succeeds, it may well trump them by being many
orders of magnitude more dangerously powerful and unpredictable. Ah well,
better the evil we don't know!

- samantha


