From: James Higgins (firstname.lastname@example.org)
Date: Fri Aug 02 2002 - 00:32:23 MDT
> On Wed, 2002-07-31 at 11:47, Ben Goertzel wrote:
> It seems overwhelmingly likely to me that there will be multiple super
> intelligences. For there to be only one SI requires such a fast takeoff
> time that it strongly implies that there will be no additional "hard"
> problems in intelligence after we get just past human level. It
> requires that there be no problems as hard as human-level AI itself, no
> places where the blossoming process gets stuck. Because if it does get
> stuck, anywhere, there will soon be other SIs (it is easier to solve a
> problem once you know it has been solved somewhere else, even if you
> don't know exactly how it was solved). As soon as there are other SIs,
> game theory comes back and, with it, our ability to say useful things
> about possible civilizational structures in the post-singularity world.
Yes and no, in my view. I don't agree that multiple SIs are overwhelmingly
likely. The first AI to reach SI capability could easily remove, or at
least halt, all others that lag behind it. In other words, unless two AIs
reach the Singularity at nearly the same time (within days, I'd guess),
there will be only one, assuming the first decides it doesn't want
competition (which we can't know). Thus I find it very unlikely that we
will have multiple SIs, unless the first SI wants it that way.
That said, I do believe there will be multiple SIs. I just don't think
they will be terran. Other civilizations out there *must* be getting to
this point as well. They may be on the other side of the universe, but
that won't have anywhere near the effect on SIs that it does on us. So,
eventually, I believe multiple SIs will encounter one another. Might be
many (real-time) years, or only days. May not even be relevant to us if
the SI just pops out of our universe for something better, leaving us
behind.
Just my 2 cents worth...
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:40 MDT