Re: Sysop hacking

From: ben goertzel (ben@goertzel.org)
Date: Wed Feb 06 2002 - 09:50:53 MST


Our trying to understand whether hacking the Sysop will be possible is much
like a very smart dog, one that has intuited a little of what human language
is like, trying to make projections about the complex machinations of a
dispute in intellectual property law.

ben g

----- Original Message -----
From: "Eliezer S. Yudkowsky" <sentience@pobox.com>
To: <sl4@sysopmind.com>
Sent: Wednesday, February 06, 2002 8:10 AM
Subject: Re: Sysop hacking

> Gordon Worley wrote:
> >
> > On Wednesday, February 6, 2002, at 03:17 AM, Eliezer S. Yudkowsky wrote:
> >
> > > Mm, that sounds like circular logic to me. The Sysop is what supposedly
> >
> > Yes, I should have made it more clear. I am suggesting that the Sysop
> > is a tautology,
>
> That is impossible. A Sysop is not a philosophy. A Sysop is a physical
> system that may or may not exist at some point in our future.
>
> Our concept of the Sysop may be a tautology. If any of our concepts have
> the cognitive property of tautology, then they must automatically be wrong,
> or at the least must automatically possess no force as a rational
> argument, with respect to the external referent of the Sysop.
>
> We aren't doing philosophical exploration for the sake of philosophical
> exploration, we're taking current knowledge and projecting it forward.
> Arguing that something is demonstrated "by definition" in the Sysop
> Scenario turns it all into a philosophical game; what matters are simply
> those things that are likely to actually appear within our own future.
> This is not an attempt to determine what happens if some hypothetical
> entity existed with an arbitrarily defined set of powers. This is
> an attempt to extrapolate events that may actually occur to humanity at
> some point.
>
> > but this makes sense and isn't really bad since, if the
> > Sysop is basically making it impossible to violate someone's volition
> > (or whatever), then ve is setting up a system where ve is always
> > unhackable. In short, the Sysop sets the rules, so it's very easy to
> > make sure that not being hacked is in those rules.
>
> 1) "In short, the Sysop sets the rules"
>
> 2) "so"
>
> 3) "it's very easy to make sure"
>
> 4) "that not being hacked is in those rules"
>
> and we have the objections:
>
> 1) The degree to which the Sysop sets the rules is precisely the point in
> dispute;
>
> 2) The "so" is therefore arguing from the point of the dispute;
>
> 3) Whether it is "easy" to prevent high-level hacking given sovereignty
> over low-level operations and superintelligence is again precisely the
> point of dispute;
>
> 4) Whether it is possible to implement the high-level goal of "not being
> hacked" in the low-level "rules" is again the dispute.
>
> > But, as I mentioned, just because that's how it works in theory doesn't
> > mean that attacks are impossible.
>
> It can't work that way "in theory" - who invented the theory? What was
> their justification for hypothesizing this as part of a real Sysop
> Scenario? At what point did someone, extrapolating forward from current
> knowledge, say "The Sysop sets the rules"? If you want to rely on "The
> Sysop sets the rules" to argue something, your argument can't be any
> stronger than the reasons behind the person saying "The Sysop sets the
> rules". The first rule of competent philosophy is to never argue from
> definitions. If you want to determine whether the ability to "set the
> rules" is strong enough to accomplish something, you look at the reasons
> to think that a Sysop "setting the rules" is a possibility in the first
> place, and this tells you what you can rationally say about the details
> and specifics of these rules and the way they are set.
>
> We have some grounds to think that a superintelligence might be able to
> get low-level control over all local material reality, because we can
> visualize this as the result of nanotechnological competence by a
> singleton SI. When we say "make the rules", what we really mean is
> "control reality on a low level". We are now asking the question "Does
> the ability to control reality on a low level suffice for immunity to
> perversion attacks?" You cannot generalize "the ability to control
> low-level reality" into "making the rules", and then argue from this
> generalized definition that "prevent perversion attacks" is a subcategory
> of "making the rules". *That* is just a philosophical bait-and-switch.
>
> -- -- -- -- --
> Eliezer S. Yudkowsky http://intelligence.org/
> Research Fellow, Singularity Institute for Artificial Intelligence
>


