[sl4] Advent of Advanced AI Orthogonal to Present Problems?

From: Lee Corbin (lcorbin@rawbw.com)
Date: Tue Jul 01 2008 - 17:32:51 MDT


Bryan wrote

> [Lee wrote]
>
>> suppose that there could exist an entity so many millions of times
>> smarter than humans that it could dominate the world exactly as this
>> story proposes. I will in fact be so bold as to suggest that when he
>> started this list, Eliezer was making exactly this conjecture, and
>> that the whole issue of "whether or not to let an AI out", or, better
>> "whether or not an AI could be contained" still motivates a huge
>> amount of discourse on this list to this very day.
>
> I suspect this particular portion of our discussion should be forked off
> into a new thread,

Here it is.

> but arguably, since I've presented arguments and evidence for
> alternative solutions to the problems of death and of the survival
> of humanity that do not require FAI,

Honestly, I didn't think that was the point of AI or FAI at all.

In fact, I think that our chances of surviving the development of
any kind of superhuman AI are less than 50 percent. But I'm
not really worried, because if we do survive, then we'll probably
survive in incredibly good style, with minor problems like death,
poverty, boredom, true suffering, and so on easily handled for
the likes of us primitive types. (And that's whether or not some
versions of ourselves fork off and become superhuman.)

So for me, AI or FAI is hardly a solution: it's merely a most
likely... occurrence.

> then I argue that the discussion can return to actual
> implementation and elucidation of AI or other SL4-ish topics
> instead of the spooky AI domination scenarios.

How do these ideas that *you* have for quickly dispatching death
and securing human survival bear upon the problem of how we
should try to cope with an inevitable, or nearly inevitable, rise of
superhuman intelligence?

Lee


