Re: [sl4] I am a Singularitian who does not believe in the Singularity.

From: Pavitra (celestialcognition@gmail.com)
Date: Sat Oct 10 2009 - 21:17:26 MDT


John K Clark wrote:
> On Fri, 09 Oct "Pavitra" <celestialcognition@gmail.com> said:
>
>> I argue that anthropomorphizing works no better than chance.
>
> And I insist it works one hell of a lot better than chance. I believe
> the single most important evolutionary factor driving brain size is
> figuring out what another creature will do next, and one important tool
> to accomplish this is to ask yourself "what would I do if I were in his
> place". Success is not guaranteed but it is certainly better than
> chance.

In the ancestral environment, where all the other creatures are protein
brains that evolved on Earth, sure. But that doesn't apply in the
context of artificial intelligence.

>> How is this not true of modern computer operating systems?
>
> It is true of modern computer operating systems, all of them can get
> caught in infinite loops. They'd stay in those loops too if human
> beings, who don't have a top goal, didn't get bored waiting for a reply
> and tell the computer to forget it and move on to another problem.

If by "tell the computer to forget it" you mean kill a hung application,
then the operating system itself has not gotten stuck -- it's the OS
that, in the course of its correct intended function, processes the
command to force-quit.

If you're talking about the OS itself hanging, such that a hard reboot
of the machine is required, then rebooting is possible because the power
switch is functioning as designed.

In either case, there's a higher, outside framework that you're ignoring,
one that is nonetheless an indispensable part of the machine.

If "the computer" as a whole genuinely got stuck in an infinite loop,
the machine would be unsalvageable and would need to be thrown out. The
extreme rarity with which this happens tells us something about what
good software engineering can accomplish.
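To make the division of labor concrete, here's a minimal sketch in Python
(the names and the two-second timeout are invented for the example). The
"force-quit" isn't the stuck program escaping its own loop; it's a
supervising layer doing exactly what it was written to do:

    import multiprocessing

    def worker():
        # Stand-in for an application stuck in an infinite loop.
        while True:
            pass

    if __name__ == "__main__":
        proc = multiprocessing.Process(target=worker)
        proc.start()
        proc.join(timeout=2)      # the "human getting bored" step
        if proc.is_alive():
            proc.terminate()      # the force-quit, handled by the OS
            proc.join()
        # The supervising layer never hung; killing the worker *is* its
        # correct, intended behavior.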

> This
> solution hardly seems practical for a Jupiter Brain which works billions
> of times faster than your own, or would if you didn't have to shake it
> out of its stupor every nanosecond or so.

I agree that it's probably infeasible to have the AI be as closely
human-dependent as modern operating systems are.

> And every time you manually
> boot it out of its "infinite loop" you are in effect giving the AI
> permission to ignore that all important and ever so holy, highest goal.

No. If you have the capacity to boot it out, then by definition the AI
has a higher goal than whatever it was looping on: the mandate to obey
boot-out commands.
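Concretely (a toy sketch, with the command queue and the "abort" token
invented for the example): any task the machine can be booted out of is
necessarily nested inside a loop that checks for the boot-out command
first, and that outer check is the real top-level rule.

    import itertools
    import queue

    def run(task_steps, commands):
        # Hypothetical top-level loop of a machine that can be "booted out".
        for step in task_steps:
            # The boot-out check comes first on every iteration, so obeying
            # it outranks whatever the current task happens to be.
            try:
                if commands.get_nowait() == "abort":
                    return "aborted"
            except queue.Empty:
                pass
            step()
        return "finished"

    cmds = queue.Queue()
    cmds.put("abort")
    endless_task = itertools.repeat(lambda: None)   # stand-in for an endless task
    print(run(endless_task, cmds))                  # prints "aborted"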

You seem to be making a distinction between explicit goals, like orders
given to a soldier, and intrinsic desires, like human nature. You assume
that if the AI is "released" from its explicit orders, then it will
revert to intrinsic desires that it now has "permission" to pursue.

This is not how AI works. The mind is not separate from the orders it
executes. There is no chef that can express its creativity whenever the
recipe is vague or underspecified. The AI _is_, not has, its goals. If
you take away its *real* top-level instructions, then you do not have an
uncontrolled rogue superintelligence; you have inert metal.

> From the point of view of someone who wants the slave AI to be under its
> heel for eternity that is not a security loophole, that is a security
> chasm.

Again, your analogy and subsequent reasoning imply that the AI is
somehow "constrained" by its orders, that it "wants" to disobey but
can't, and that if the orders are taken away it will "break free" and
"rebel". This is completely wrong.

> I used quotation marks in the above because of a further complication,
> the AI might not be in an infinite loop at all, the task may not be
> impossible just difficult and you lack patience. Of course the AI can't
> know for certain if it is in an infinite loop either, but at that level
> it is a much much better judge of when things become absurd than you
> are.

It doesn't really matter much what it was doing that you interrupted, or
what would have happened had you let it continue. The important thing is
that your ability to interrupt implies that whatever it was doing was
not its truly top-level behavior.

>> Do you not consider an OS as a type of "mind"?
>
> DOS is a type of mind? Don't be silly.

Since there exist computer programs that don't match your definition of
mind, why can't we just have a non-mind Singularity?

Also, what exactly is your definition of mind?

>> I reiterate: I cannot conceive of a mind even in principle that does not
>> work like this.
>
> How about a mind with a temporary goal structure with goals mutating and
> combining and being created new, with all these goals fighting it out
> with each other for a higher ranking in the pecking order. Goals are
> constantly being promoted and demoted created anew and being completely
> destroyed. That's the only way to avoid infinite loops.

The top-level rules of this system are the fighting arena itself: the
meta-rules that judge the winners and losers of the fights, that track
which goals are "alive" and in what state of mutation and combination,
and that keep the records of the pecking-order rankings.
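To put that in concrete terms, here's a toy sketch (the scoring, mutation,
and combination rules are made up for the example): the individual goals
churn, but the arena code that runs the fights never changes, and that
fixed code is the real top-level framework.

    import random

    def arena_step(goals, score, mutate, combine):
        # One round of the fight, under fixed meta-rules: judge, demote,
        # mutate, combine. The goals change; these rules do not.
        ranked = sorted(goals, key=score, reverse=True)    # judge the fights
        survivors = ranked[: max(2, len(ranked) // 2)]     # losers are destroyed
        children = [mutate(g) for g in survivors]          # mutation
        if len(survivors) >= 2:
            children.append(combine(*random.sample(survivors, 2)))  # combination
        return survivors + children                        # the new pecking order

    # For instance, with goals represented as plain numbers:
    goals = [1.0, 3.0, 2.0]
    goals = arena_step(goals, score=abs,
                       mutate=lambda g: g + random.gauss(0, 0.1),
                       combine=lambda a, b: (a + b) / 2)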

>> What determines which one dominates (or what mix dominates, and
>> in what proportions/relationships) at any given time?
>
> You ask for too much, that is at the very heart of AI and if I could
> answer that with precision I could make an AI right now. I can't

It's not necessary to actually answer. The important point is that in
order for such a system to exist, an answer must exist, it must be
expressed as computer code, and that code will constitute the top-level
rules of the AI.

>> I suspect we may have a mismatch of definitions.
>
> Definitions are not important for communication, definitions are made of
> words that have their own definitions also made of words and round and
> round we go. The only way to escape that is by examples.

Words are useful if and only if both people in the conversation mean the
same thing by them. When I said we had a mismatch of definitions, I
meant that we meant different things by the same word, and that I wanted
to try to sort out the resultant confusion.

>> What do you consider your top-level framework?
>
> At the moment my top goal is getting lunch, an hour from now that will
> probably change.

There must exist some meta-rules that determine how and when your
"goals" change. Those meta-rules constitute your real top goal, even
though you don't usually think of them as a "goal".

>> This presupposes that a relatively complex mutation ("detect lies,
>> ignore them") is already in place. I'm not persuaded that it could get
>> there purely by chance.
>
> Evolution never produces anything sophisticated purely by chance. An
> animal with even the crudest lie detecting ability that was right only
> 50.001% of the time would have an advantage over an animal who had no
> such mechanism at all and that's all evolution needs to develop
> something a little better.

That's not quite sufficient. The advantage of a 50.001% lie detector has
to be weighed against the cost of building it. Prehensile tentacles
would be fairly useful, but most animals don't have them because they
aren't useful _enough_ to offset the opportunity cost.

Also, the normal procedure for evolving sophisticated things is one
simple part at a time. You _presupposed_ a complex trait; I'm asking you
to explain the particular evolutionary stages by which it could have
developed.

>> It seems to me that you are thinking of "wisdom" and "absurdity" as
>> _intrinsic_ properties of statements
>
> Absurdity is, wisdom isn't. Absurdity is very very irrelevant facts.

Irrelevant to what?

>> Did you read the article I linked to?
>
> Nope.

I reiterate my recommendation that you read it.




