Re: An essay I just wrote on the Singularity.

From: Tommy McCabe (rocketjet314@yahoo.com)
Date: Wed Dec 31 2003 - 13:04:28 MST


--- "Perry E. Metzger" <perry@piermont.com> wrote:
>
> Tommy McCabe <rocketjet314@yahoo.com> writes:
> > True, but no disproof exists.
>
> Operating on the assumption that something
> which may or may not
> be possible will happen seems imprudent.

It seems very reasonable that what dumb evolution
managed to do in a few gigabytes of DNA, humans can
do in programming code. And if you have Friendly
human-equivalent AI, it will be a very short while
until you have Friendly transhuman AI. There could,
in theory, be some upper bound on intelligence, but
to argue that it sits at exactly the level
represented by Homo sapiens sapiens is ungrounded
anthropocentrism.
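
For scale (a back-of-envelope sketch with my own
figures, not from the original post): the human
genome is roughly 3.2 billion base pairs at no more
than 2 bits per base, so the raw information content
is actually under a gigabyte, which only strengthens
the point:

    # Rough information content of the human genome.
    # Assumptions (mine): ~3.2e9 base pairs, 2 bits
    # per base (A/C/G/T), ignoring compressibility.
    base_pairs = 3.2e9
    bits = base_pairs * 2
    megabytes = bits / 8 / 1e6
    print(f"~{megabytes:.0f} MB raw")  # about 800 MB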
 
> > If anyone thinks they
> > have one, I would be very interested. And there's
> > currently no good reason I can see why Friendly AI
> > shouldn't be possible.
>
> I can -- or at least, why it wouldn't be stable.

Then please, by all means, show me the proof.

> There are several
> problems here, including the fact that there is no
> absolute morality (and
> thus no way to universally determine "the good"),

This is the position of subjective morality, which
is far from proven. It's not a 'fact'; it is a
possibility.

> that it is not
> obvious that one could construct something far more
> intelligent than
> yourself

Perhaps we truly can't construct something vastly
more intelligent than ourselves. But it doesn't take
that: it just takes a seed with decent general
intelligence, code it can reprogram, and
Friendliness; the seed can then improve itself past
our level.

> and still manage to constrain its behavior
> effectively,

You can't 'constrain' a transhuman. If you can tell me
a reasonable proposal for constraining a transhuman, I
will immediately reject it on the grounds that the
'constraints' will turn out to have a simple
workaround that humans, myself and Einstein included,
are too dumb to see. A transhuman, by definition, is
smarter than humans, and thus will almost certainly
find a quick workaround to any constraint we
implement, one we can't see in advance because we're
not smart enough. Read CFAI on the adversarial
attitude. And even if we could 'constrain'
transhumans, what would the world be like if mice
could 'constrain' us? The differential in intelligence
between us and transhumans is far bigger than the one
between us and mice.

> that
> it is not clear that a construct like this would be
> able to battle it
> out effectively against other constructs from
> societies that do not
> construct Friendly AIs (or indeed that the winner in
> the universe
> won't be the societies that produce the meanest,
> baddest-assed
> intelligences rather than the friendliest -- see
> evolution on earth),
> etc.

Battle it out? The 'winner'? The 'winner' in this case
is the AI that makes it to superintelligence first.
Probably the first thing a superintelligence would do
is go to all the unFriendly AI projects and not only
say "This is a really, really bad idea", but persuade
everybody of it. A superintelligent AI would have
better persuasion capabilities than any politician.

> Anyway, I find it interesting to speculate on
> possible constructs like
> The Friendly AI, but not safe to assume that they're
> going to be in
> one's future.

Of course you can't assume that there will be a
Singularity caused by a Friendly AI, but I'm pretty
darn sure I want it to happen!

> The prudent transhumanist considers
> survival in wide
> variety of scenarios.

Survival? If the first transhuman is Friendly,
survival is a given, unless you decide to commit
suicide. If the first transhuman is unFriendly,
you're either dead or facing an INU (Infinite
Negative Utility) future.
