Re: AI boxing

From: Michael Vassar (michaelvassar@hotmail.com)
Date: Thu Jul 21 2005 - 13:06:37 MDT


No, proof means a proposal for an FAI combined with a formal demonstration
that said FAI will be Friendly.
How about an appeal to ignorance backed by historical precedent? How many
of the things we can do today are magic by earlier standards? How many are
only possible due to our use of principles earlier people didn't know
about? Thinkers such as Bacon, Franklin, and Hooke essentially used the
appeal to ignorance in their arguments that we would eventually live
forever and travel to the moon, and they were blatantly right. So did
early cryonicists. The appeal to ignorance does NOT justify saying that we
will some day do things that seem impossible according to physics as we know
it, but it DOES justify saying we will probably some day do any particular
desirable thing that we currently have no idea how to do. With an SI, someday
probably means in 15 minutes. Remember, any given method of doing something
may be impossible (it really may not be possible to fly by breeding and
riding giant birds), but the goal can often still be achieved. At any rate,
I find it very unlikely that an SI could not build a crude internal
radio transmitter. It is obvious that this particular risk could be
prevented by prior counter-measures, but if you can figure out
counter-measures for EVERY option available to your AI, it must not really be
smarter than you. Anyway, it's a safe bet that in actual implementation,
even countermeasures against techniques that I can think up will not be
implemented.

>From: Daniel Radetsky <daniel@radray.us>
>Reply-To: sl4@sl4.org
>To: sl4@sl4.org
>Subject: Re: AI boxing
>Date: Wed, 20 Jul 2005 21:13:05 -0700
>
>On Wed, 20 Jul 2005 17:31:49 -0400
>"Michael Vassar" <michaelvassar@hotmail.com> wrote:
>
> > I agree that no convincing argument has been made that a deceptive proof
> > could be made, or that a UFAI could exploit holes in our mathematical
> > logic and present us with a false proof. However,
>
>I'm sorry: "proof" means an argument that the AI should be unboxed?
>
> > c) "magic" has to be accounted for. How many things can you do that a
> > dog would simply NEVER think of? This doesn't have to be "quantum cheat
> > codes". It could be something as simple as using the electromagnetic
> > fields within the microchip to trap CO2 molecules in Bose-Einstein
> > condensates and build a quantum medium for itself and/or use
> > electromagnetic fields to guide particles into the shape of a controlled
> > assembler or limited assembler. It could involve using internal
> > electronics to hack local radio traffic. But it probably involves doing
> > things I haven't thought of.
>
>I'm no physicist, so if you think that those are reasonable possibilities,
>then I'll have to take your word for it. However, I don't see how you can
>justify positing magic on the grounds that we haven't considered every
>logical possibility. It is true that what we believe is a box may not be a
>box under magic, if there exists some magic, but you'll have to give a
>better argument for the existence of this magic than an appeal to
>ignorance.



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:51 MDT