From: Olie Lamb (firstname.lastname@example.org)
Date: Tue Aug 29 2006 - 22:27:07 MDT
Note the difference between the words "would" and "could"
John K Clark <email@example.com> wrote:
>> A very powerful AI may continue its growth exponentially until a certain
>> point, which is beyond our current capability of understanding,
> OK, sounds reasonable.
>> where it concludes that it's best to stop
>Huh? How do you conclude that this un-knowable AI would conclude that it
>would be best if it stopped improving itself?
Not would. Could.
Let's start with the basics:
There is a large set of possible goals for an intelligent entity to
have. A number of us happen to think that no particular goal is
required for intelligence. Some people frequently assert that some
goals are "necessary" for any intelligence. I've yet to have
difficulty finding a counterexample, but I'm not quite sure how to go
about demonstrating my contention...
*Calls for set-logic assistance*
A commonly presumed subset of possible goals is the goal set
"Gain & retain control of X", where X may be any of a number of things.
In particular, it may be an area of space.
If, say, retaining control of a set area of space for a given duration
was incompatible with expanding the space over which one had control,
the best satisfaction of the goal set could be achieved by not
expanding the sphere of influence.
Imagine for a moment an intelligence in an area of limited resources -
say, one stuck on a rock a very very long way from any other materials
(extra-galactic+ long way). That intelligence has discovered that it
can continue operating for a very long time at a given intelligence
(computations per second) "level". However, by consuming the
available energy on the rock at a faster rate, it would be able to
increase its processing ability.
Would it be reasonable for that intelligence to increase its
computation rate, in the hope that it might be able to think itself
out of its predicament? Or /might/ it consider sticking with what it
had for the time being?
Or, perhaps you think that "improve" should be defined in such a way
that it means a reduction in computing power and problem solving
ability is an improvement?
> Is it common for intelligent
>entities to decide that they don't want more control of the universe?
It could be inferred that some 700 million people decided just so. Or
at least a fair percentage of them. Most of them relatively sane, and
a lot more intelligent than is required for using symbolic logic. I
don't know how many humans you would need to match your definition of
On 8/30/06, John K Clark <firstname.lastname@example.org> wrote:
> "Ricardo Barreira" <email@example.com>
> > How do you even know the AI will want any control at all?
> If the AI exists it must prefer existence to non-existence
Not for an ideal superintelligence. Prove that ANY intelligence must
prefer existence to non-existence.
I think I can imagine a few counterexamples, thus disproving the contention.
(Nb: I have noted your comments to )
> > I challenge you to prove otherwise
> Prove? This isn't high school geometry, I can't prove anything about an
> intelligence far, far greater than my own;
> and after that
> it is a short step, a very short step, to what Nietzsche called "the will to
> power"
And, of course, Nietzsche is the icon of understanding
intelligences-in-general, like, say, women... *rolls eyes*
This archive was generated by hypermail 2.1.5 : Tue Jun 18 2013 - 04:00:55 MDT