Re: AI-Box Experiment 2: Yudkowsky and McFadzean

From: James Higgins (jameshiggins@earthlink.net)
Date: Tue Jul 16 2002 - 01:16:21 MDT


Sorry, been locked out of this email account for better than a week.
Not that anyone on SL4 probably minded. ;)

Eliezer S. Yudkowsky wrote:
> James Higgins wrote:
> David McFadzean has been an Extropian for considerably longer than I
> have - he maintains extropy.org's server, in fact - and currently works

Ah, ok. I don't get into the Extropian realm much...

> on Peter Voss's A2I2 project. Do you still believe that you would be
> "extremely difficult if not impossible" for an actual transhuman to
> convince?

Yes.

> Are you at least a *little* less confident than before? Am I not having
> any impact here? Will people just go on saying "Well, that's an

Nope. I'm starting to wonder if you're picking people whom you believe,
in advance, you can get to open the jail.

There is no way to say for certain that a transhuman (especially without
defining what exactly is meant by that) could not convince me. However,
I'm absolutely certain an Eliezer Yudkowsky could not convince me.

> Apparently the transhuman AI in your mental picture is not as smart as
> Eliezer Yudkowsky in actuality. You can't imagine a character who's
> smarter than the author and this definitely applies to figuring out
> whether a "transhuman AI" can persuade you to let it out of the box. All
> you can do is imagine whether a mind that's as smart as James Higgins
> can convince you to let it out of the box. You can't imagine anything

I know a mind as smart as JH (me) could never convince me to let it out
of the "box" prematurely. Obviously, at some point the goal is to let it
out of the box, once everything reasonable has been done to ensure that
it is friendly and that its goals match the desirable ones to a reasonable
approximation. Unless your trick is to tell your jailer (out of role)
that this is the case, I don't see how these people could be letting you
out, unless you think beforehand that they are likely to be soft in that area.

> at a level of intelligence above that. If something seems impossible to
> you, it doesn't prove it's impossible for a human who's even slightly

For the record, I don't think anything is impossible, at least in the
long term. Nearly so, maybe, but nothing is impossible given time and
resources.

> That's the problem with saying something like, e.g., "Intelligence does
> not equal wisdom." You *do not know* what intelligence does or does not
> equal. All you know is that the amount of intelligence you currently

Actually, I believe I do know that intelligence does not equal wisdom,
if for no other reason than that the two are different concepts.
Now, I'm not saying intelligence can't get you to wisdom, or at least
reduce the learning curve, but it does not equal nor guarantee wisdom.

> have does not equal wisdom. You have no idea whether intelligence
> equals wisdom for someone even slightly smarter than you. Intelligence

Sorry, wrong again. Intelligence never equals wisdom, no matter how
intelligent. An apple is never an orange, no matter how large it grows.

> Actually, this sounds like a rather adversarial restatement of my
> perspective. What I am saying is that a transhuman AI, if it chooses to
> do so, can almost certainly take over a human through a text-only
> terminal. Whether a transhuman would choose to do so is a separate
> issue.

How transhuman are we talking here? If we move the bar closer to SI
than human, then you're probably correct. However, I personally believe
it will take a substantial improvement to make such things possible,
which will, hopefully, give us enough time to fully evaluate and guide
the development of the AI.

>
>> Thus it is unlikely his team would do this (at least regularly) unless
>> they had a specific reason to do so.
>
>
> The point I am trying to make is that when a transhuman comes into
> existence, you have bet the farm at that point. There are potentially
> reasons for the programmers to talk to a transhuman Friendly AI during
> the final days or hours before the Singularity, but the die has
> almost certainly been cast as soon as one transhuman comes into
> existence, whether the mind is contained in sealed hardware on the Moon
> next to a million tons of explosives, or is in immediate command of a
> full nanotechnological laboratory. Distinctions such as these are only
> relevant on a human scale. They are impressive to us, not to
> transhumans.

Ok, so what *exactly* is your definition of a transhuman? By some
accounts you could qualify as a transhuman, since you are obviously
smarter than the average human. I have no doubt I could stop you if
necessary. So are you using "transhuman" to indicate an AI that is
1,000 times as smart as an average human? Or?

James Higgins
