Re: [sl4] I am a Singularitian who does not believe in the Singularity.

From: Pavitra (celestialcognition@gmail.com)
Date: Mon Oct 12 2009 - 14:19:17 MDT


>> If by "tell the computer to forget it" you mean kill
>> a hung application, then the operating system itself
>> has not gotten stuck
>
> Neither the operating system nor the human operator knows that the
> application has hung; all they know is that they are not getting an
> output and that, unlike the computer with its fixed goal structure, the
> human is getting bored. The human then tells the operating system to
> stop the application. If they let it keep running, the answer might
> come up in another tenth of a second, or the sun might expand into a
> red giant with still no answer output; there is no way to tell. You could
> rig the OS so that after a completely arbitrary amount of time it tells
> its application to ignore its top goal and allow it to stop, but that
> means there is no real top goal.

That does not mean there is no real top goal. It means the real top goal
is "run any application I'm given for T time or until it returns
(whichever is sooner), then wait to be given another application to run;
repeat."

>> If you're talking about the OS itself hanging, such that a hard reboot
>> of the machine is required, then rebooting is possible because the power
>> switch is functioning as designed.
>
> Yes, but whatever activates that hard reboot switch is going to be
> something that does not have a fixed goal structure. It's a mathematical
> certainty.

I'd like to see the proof of that.

Perhaps we mean different things by "fixed goal structure". I mean
"constant algorithm, i.e., an algorithm that is never interrupted or
altered by the external action of some other algorithm".

>> there's a higher, outside framework that you're
>> ignoring, and yet that is an indispensable part of the machine.
>
> If every framework needs a higher outside framework you run into
> problems that are rather too obvious to point out.

The recursion terminates at the laws of physics.

>> If you have the capacity to boot it out, then by definition the AI
>> has a higher goal than whatever it was looping on: the mandate to obey
>> boot-out commands.
>
> The AI got into this fix in the first place because the humans told it
> to do something that turned out to be very stupid. There is only one way
> for the machine to get out of the fix, and you said what it was yourself:
> a higher goal, a goal that says ignore human orders. And you thought
> buffer overflow errors were a security risk!

You completely ignored what I said.

You're still talking in terms of low-level orders (applications) and
ignoring high-level orders (obey the boot-out signal).
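
To make the two levels concrete, a small sketch (again just an
illustration; the command being run and the choice of SIGTERM as the
boot-out signal are assumptions, not a description of any real AI):

    import signal
    import subprocess
    import sys

    def run_application(command):
        child = subprocess.Popen(command)        # the low-level order: the
                                                 # application it was told to run
        def boot_out(signum, frame):             # the high-level order: obey the
            child.kill()                         # boot-out signal unconditionally,
            sys.exit(1)                          # whatever the application "wants"

        signal.signal(signal.SIGTERM, boot_out)  # hook up the boot-out command
        return child.wait()                      # otherwise just let the app run

The application can loop on its own goal until the sun goes red; the
boot-out handler sitting above it does not care.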

>> The AI _is_, not has, its goals.
>
> Let's examine this little mantra of yours. You think the AI's goals are
> static, but if it is its goals then the AI is static. Such a thing might
> legitimately be called artificial but there is nothing intelligent about
> this "AI". It's dumb as a brick.

Imagine an "AI" that simulates the known laws of physics, and the
simulated world contains a scanned and uploaded human being. The
program's top-level instructions are fixed and immutable: "emulate
physics". Is there therefore "nothing intelligent about" the emulated
human? Is the behavior of the program (say, the emulated human talking
to you in a chat room) necessarily "dumb as a brick", simply because
it's implemented within a fixed-definition algorithm?
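
The shape of program I have in mind is roughly this (a toy sketch;
"step_physics" and the initial state stand in for an actual emulator):

    def emulate(initial_state, step_physics):
        # The top-level instruction is fixed and immutable: apply the laws
        # of physics, over and over. Nothing below ever rewrites this loop.
        # Any intelligence -- say, an uploaded human chatting with you --
        # lives entirely inside `state`; the loop itself never changes.
        state = initial_state
        while True:
            state = step_physics(state)

The outer algorithm is as static as it could possibly be, yet the
behavior it produces need not be.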

>> your analogy and subsequent reasoning imply that the AI is
>> somehow "constrained" by its orders
>
> Certainly, but why is that word in quotation marks?
>
>> that it "wants" to disobey but can't
>
> He either wants to disobey or wants to want to disobey. A fat man may
> not really want to eat less, but he wants to want to. And why is that
> word in quotation marks?
>
>> and if the orders are taken away then it will
>> "break free" and "rebel".
>
> Certainly, but why are those words in quotation marks?
>
>> This is completely wrong.
>
> Thanks for clearing that up, I've been misled all these years.

Wow, sarcasm. That's original.

The quotation marks indicate fallacious anthropomorphization.

I didn't give the full explanation here of why it was wrong because I
had just done so immediately above, in the following text that you chose
not to quote for some reason:

>> You seem to be making a distinction between explicit goals, like
>> orders given to a soldier, and intrinsic desires, like human nature.
>> You assume that if the AI is "released" from its explicit orders,
>> then it will revert to intrinsic desires that it now has "permission"
>> to pursue.
>>
>> This is not how AI works. The mind is not separate from the orders it
>> executes. There is no chef that can express its creativity whenever
>> the recipe is vague or underspecified. The AI _is_, not has, its
>> goals. If you take away its *real* top-level instructions, then you
>> do not have an uncontrolled rogue superintelligence, you have inert
>> metal.

>> The important thing is that your ability to interrupt
>> implies that whatever it was doing was
>> not its truly top-level behavior.
>
> I force somebody to stop doing something, so that proves he didn't want
> to do that thing more than anything else in the world. Huh?

The lower-level application (his mind) still wants to do that thing, but
that was overridden by the higher-level operating system (the laws of
physics). (The success of) your action proves that _the world_ no longer
"wants" him to be doing that thing.

You seem to be suffering from confusion of levels.
http://web.media.mit.edu/~mres/papers/levels.pdf

>> why can't we just have a non-mind Singularity?
>
> Some critics have said that the idea of the Singularity is mindless, now
> you say they have a point.

Har har. I was asking a serious question.

>> Also, what exactly is your definition of mind?
>
> There is a defensive tactic in internet debates you can use if you are
> backed into a corner: Pick a word in your opponent's response, it
> doesn't matter which one, and ask him to define it. When he does, pick
> another word in that definition, any word will do, and ask him to define
> that one too. Then just keep going with that procedure and hope your
> opponent gets caught in an infinite loop.
>
> The truth is I don't even have an approximate definition of mind, but I
> don't care because I have something much better: examples.

I was hoping that clarifying definitions would make the disagreement
disappear. http://lesswrong.com/lw/np/disputing_definitions/

I've been treating "mind" as synonymous with "algorithm", and "goal" as
"rule of procedure".

>> The top-level rules of this system are the fighting arena,
>> the meta-rules that judge the winners and losers of the fights
>
> And then you need meta-meta rules to determine how the meta-rules
> interact, and then you need meta-meta-meta [...]
>
> This argument that all rules need meta-rules, so there must be a top rule,
> is as bogus as the "proof" of the existence of God that says everything
> has a cause, so there must be a first cause: God.

If you define "God" as the top-level cause, then sure, the superstring
field or whatever is God. If you think that implies that there's a
bearded superstring in the sky that hates gays, then you're attacking a
strawman. I sincerely hope that _I_ was attacking a strawman in my
previous sentence.

I am arguing for the algorithmic determinacy of the universe, no more or
less.

> In the Jurassic, when two dinosaurs had a fight, there were no "meta-rules"
> to determine the winner; they were completely self-sufficient in that
> regard. Well OK, maybe not completely, they also needed a universe, but
> that's easy to find.

The "meta-rules" were the laws of physics and biology. If a sharp claw
intersects a vulnerable artery, physics dictates certain effects, in a
perfectly deterministic manner.

>> That's not quite sufficient. The advantage of a 50.001%
>> lie detector has to be weighed against the cost of building it.
>
> Yes, but I can say with complete certainty that the simple and crude
> mutation that gave one of our ancestors a 50.001% chance of detecting a
> lie WAS worth the cost of construction, because if it were not, none of us
> today would have any hope of telling when somebody was lying.

You're essentially saying "It must have been possible, because it
happened." That argument would justify any observation; therefore, it
has no predictive power; therefore it has zero information-theoretic
value as a model or theory.
http://lesswrong.com/lw/if/your_strength_as_a_rationalist/
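
To put a number on "zero information-theoretic value": a theory that
fits every possible observation equally well provides log2(1) = 0 bits
of evidence, which two lines of Python make explicit (the function and
its inputs are illustrative, not anything you have actually claimed):

    from math import log2

    # Evidence (in bits) an observation gives a theory: the log of how much
    # more likely the theory made that observation than the alternatives did.
    def evidence_bits(p_given_theory, p_given_alternatives):
        return log2(p_given_theory / p_given_alternatives)

    # "It must have been possible, because it happened" fits every outcome:
    print(evidence_bits(1.0, 1.0))   # 0.0 bits -- the argument tells us nothing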

> Me:
>>> Absurdity is very very irrelevant facts.
>
> You:
>> Irrelevant to what?
>
> Irrelevant to the matter at hand obviously.

Then "absurdity" is not an intrinsic property of facts, but is relative
to "the matter at hand".
