From: Ben Goertzel (firstname.lastname@example.org)
Date: Sat Jun 22 2002 - 12:50:03 MDT
> > I think the period of transition from human-level AI to
> > superhuman-level AI will be a matter of months to years, not decades.
> I suppose I could see a month, but anything longer than that is pretty
> hard to imagine unless the human-level AI is operating at a subjective
> rate of hundreds to one relative to human thought.
I understand that this is your intuition, but what is the reasoning behind it?
Say we have this AI mind with a nonhuman intelligence, roughly as smart as
Ben or Eliezer. Say this AI mind already uses a huge amount of
computational resources, and obtaining more rapidly is not financially
feasible.
This mind now has to re-engineer its software to make itself smarter.
Maybe there are only a limited number of tweaks it can make to improve its
intelligence, without totally rearchitecting itself.
So, with these tweaks, it becomes a bit smarter than Ben or Eliezer.
OK, what's next? It has to completely rearchitect itself, i.e. come up with
a new and better AI design. Furthermore, it doesn't have that much hardware
available for experimentation, unless it wants to cannibalize its own
brain.
Where do you come up with a "one month upper bound" for this rearchitecture
process?
I think a one month estimate is plausible, but I don't see why "anything
longer than that" should be "hard to imagine."
Maybe it won't go this way -- maybe no conceptual/mathematical/AI-design
hurdles will be faced by a human-level AI seeking to make itself vastly
superhuman. Or maybe turning a human-level mind into a vastly superhuman
mind will turn out to be a hard scientific problem, which takes our
human-level AI a nontrivial period of time to solve....
> > Moravec-and-Kurzweil-style curve-plotting is interesting and
> > important, but nevertheless, the problem of induction remains... All
> > sorts of things could happen. For instance, the superhuman AIs we
> > build may continue to progress exponentially, but in directions other
> > than those we foresee now.
> Even if your goal is to progress exponentially in enlightened spiritual
> directions, exponential physical progress is still a good way to get
> the computing power to support that enlightened spiritual stuff and
> bring others in on the fun.
Perhaps, or perhaps not. Perhaps the super-AI will realize that more
brainpower and more knowledge are not the path to greater wisdom ... perhaps
it will decide it's more important to let some of its subprocesses run for a
few thousand years and see how they come out!
> > In short, as I keep repeating, one of the unknown things about our
> > coming plunge into the Great Unknown is how rapidly the plunge will
> > occur, and the trajectory that the plunge will follow. Dead certainty
> > on these points seems inappropriate to me.
> I often encounter people who are amazed at my dead certainty that life
> evolved rather than being created. Generic arguments against "dead
> certainty" are not relevant.
It is your comment which is not relevant, dude -- because I was not making a
generic argument against dead certainty.
I *could* make such an argument, but it's not the one I was making (it
would just get pedantic, because I know you don't really mean "100%
certain"; you understand that no knowledge is absolutely certain).
I was making a specific argument against dead certainty *in the face of
minimal empirical evidence*, which is the case at hand.
> If you like, don't think of me as being "dead certain" that the
> Singularity will be fast, just "dead certain" of the wrongness of the
> common reasons offered for why the Singularity would happen to run on a
> conveniently human timescale.
I agree that many of the reasons commonly offered for why the Singularity
will be slow are poor reasons.
And I also think it's very likely that at some point, superintelligent AIs
will progress tremendously faster than humans can comprehend.
However, neither of these points gives me any knowledge about how long the
gap between human-level AI and vastly-superhuman-level AI will be.
Nor do your posts, or your intuitions, give me any knowledge about this.
We don't yet fully understand how hard the scientific problem of creating a
human-level AI is. And we don't yet fully understand how hard the
scientific problem of transforming a human-level AI into a vastly
superhuman-level AI is. Until we understand these things, we can't forecast
the end-game of the path to the Singularity in any detail, though we can
certainly huff and puff about it a lot, should we find such an occupation
amusing.
-- Ben G
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:39 MDT