RE: Leaving soon

From: ben goertzel (ben@goertzel.org)
Date: Fri May 03 2002 - 11:02:11 MDT


What continues to amaze me is the solidity and definiteness of belief that
some folks display, as regards future technological and social
developments.

It's inevitable that each of us should have some biases and beliefs (one
guy thinks a hard takeoff will occur in N years, another in M years; one
guy thinks the global brain will emerge without cranial jacks or radical
genetic engineering, etc.). But I think we should all take our OWN beliefs
on such matters with a very large shaker of salt.

Sure, one doesn't want to get so bogged down with doubt that one is unable
to act productively! But let's have a little respect for the pragmatic
*unknowability* of what's to come...

Ben

-----Original Message-----
From: Ben Goertzel [SMTP:ben@goertzel.org]
Sent: Friday, May 03, 2002 9:15 AM
To: sl4@sysopmind.com
Subject: RE: Leaving soon

> I do believe that an increase in intelligence in the world will
> have a major impact on what it means to be human. However I do
> not believe in the hard take off. So even if you get a transhuman
> AI, unless it takes over the whole internet, I don't believe it
> will go to super intelligence. Just my beliefs, to explain why I
> am leaving. Here is my belief one last time, that AI's without
> wills of their own interacting with human controllers and each
> other in a vast network of people will cause a meta-system
> transition/super organism. Cross between web waking up and
> Intelligence augmentation.

I believe that the metasystem transition to an emergent superorganism is a
viable possibility.

I also believe that the "hard takeoff" in which a self-modifying AI
bootstraps itself to superintelligence is a viable possibility.

Currently, it seems to me that the *second* possibility is likely to become
real before the first. And once the second possibility becomes real, the
whole underpinnings of the first possibility will be revised.

However, it is certainly *rational* at this point to maintain that
achieving "real AI" will take so long that the "emergent superorganism"
phenomenon appears first.

I know there are several others on this list besides myself who have some
sympathy for the "emergent global brain" idea.

-- Ben


