From: Ben Goertzel (firstname.lastname@example.org)
Date: Fri May 03 2002 - 09:14:42 MDT
> I do believe that an increase in intelligence in the world will
> have a major impact on what it means to be human. However, I do
> not believe in the hard takeoff. So even if you get a transhuman
> AI, unless it takes over the whole internet, I don't believe it
> will go to superintelligence. Just my beliefs, to explain why I
> am leaving. Here is my belief one last time: AIs without
> wills of their own, interacting with human controllers and each
> other in a vast network of people, will cause a meta-system
> transition/superorganism -- a cross between the web waking up and
> intelligence augmentation.
I believe that the metasystem transition to an emergent superorganism is a
real possibility.
I also believe that the "hard takeoff," in which a self-modifying AI
bootstraps itself to superintelligence, is a viable possibility.
Currently, it seems to me that the *second* possibility is likely to become
real before the first. And when the second possibility becomes real, the
underpinnings of the first possibility will be entirely revised.
However, it is certainly *rational* at this point to maintain that achieving
"real AI" will take so long that the "emergent superorganism" phenomenon
will occur first.
I know there are several others on this list besides myself who have some
sympathy for the "emergent global brain" idea.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:38 MDT