Re: Leaving soon

From: Dan Clemmensen (dgc@cox.rr.com)
Date: Sun May 05 2002 - 10:18:57 MDT


Ben Goertzel wrote:

> What continues to amaze me is the solidity and definiteness of belief that
> some folks display, as regards future technological and social
> developments.
>
> It's inevitable that each of us should have some biases and beliefs (one
> guy thinks a hard takeoff will occur in N years, another in M years; one
> guy thinks the global brain will emerge without cranial jacks or radical
> genetic engineering, etc.). But I think we should all take our OWN beliefs
> on such matters with a very large shaker of salt.
>
> Sure, one doesn't want to get so bogged down with doubt that one is unable
> to act productively! But let's have a little respect for the pragmatic
> *unknowability* of what's to come...
>

I concur. Each of us has a favorite scenario. I'm in the internet-based
hard-takeoff camp. However, the crucial point is that there appear
to be a great many feasible scenarios. I think we have each concluded
that a superintelligence will come into existence in the (historical)
near term. This sets us apart from the rest of humanity. I suspect that
most of us think that the only thing that will invalidate our own pet
scenario would be the emergence of an SI by some other method. This
is why we need to worry about the generic SI issues as well as issues
that are specific to each scenario. In this regard, I think that
"friendliness" research and philosophy may be important even if the
SI ultimately derives from a human/computer collaboration or from the
emergent behavior of co-operating programs rather than from an
explicitly-designed AI.
