Re: The problem of cognitive closure

From: Jimmy Wales (jwales@aristotle.bomis.com)
Date: Fri Mar 16 2001 - 13:24:22 MST


Randal Koene wrote:
> On Fri, 16 Mar 2001, Mitchell Porter wrote:
>
> > The bottom line: If we *don't* pay detailed attention
> > now to what we *don't* know, we may really, really
> > regret it later.
>
> This, in any case, is a statement I can fully agree with.

I tend to agree -- but only in a limited sense.

The trends now taking place in computing, the trends that will
result in superintelligence, are global and fundamental; they are the
outcome of worldwide economic pressures, scientific initiatives,
and similar forces.

We can try to influence things a little bit one way or the other,
but really, the superintelligences will eventually (and "eventually"
means very, very soon once you hit the Singularity!) do whatever the
hell they want to do. Cows might not like human ethics, but they can't
really do anything about it.

I think we can hope, based on our *own* knowledge, that a superintelligence
will be benevolent. But that isn't something we can do anything about. I
don't think that designing in a "prime directive" is really feasible.

-- 
*************************************************
*            http://www.nupedia.com/            *
*      The Ever Expanding Free Encyclopedia     *
*************************************************


This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:36 MDT