Re: [sl4] Re: More silly but friendly ideas

From: Lee Corbin (lcorbin@rawbw.com)
Date: Thu Jul 03 2008 - 22:00:11 MDT


John Clark writes

> On Thu, 3 Jul 2008 08:32:14 -0700, "Lee Corbin" <lcorbin@rawbw.com>
> said:
>
>> Examine in what sense?
>
> In the sense of predicting what a mind can't do.

You so very often leave your correspondent with so little
context that he, as I am right here, can make nothing whatsoever
of what you are trying to say.

I'll just have to snip equally obscure parts of your post.

>> That's true, but *only* if that mind
>> limits itself to formal proofs.
>
> Nobody is saying a mind will use the same procedures to decide things
> that formal logic does; you're right, that is far too long and cumbersome,

Yes.

> but formal logic can show that any mind that is good enough
> to do arithmetic is susceptible to getting into infinite loops
> regardless of the details of its operation.

Can you expand on that? How would formal logic do that?
And why do we need formal logic anyway? One line of
assembler code can fully exhibit the problem:

       br *
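And if formal logic really is wanted, the diagonal argument behind
Turing's result fits in a few lines of Python. This is only a sketch;
the names `make_spite` and `claims_to_halt` are my own, purely for
illustration:

```python
def make_spite(claims_to_halt):
    """Given any purported halting-checker, build a program that
    does the opposite of whatever the checker predicts for it."""
    def spite():
        if claims_to_halt(spite):
            while True:   # checker said "halts" -- so loop forever
                pass
        # checker said "loops forever" -- so halt immediately
    return spite

# A checker that always answers "it loops forever":
spite = make_spite(lambda f: False)
spite()   # ...but it returns at once, so that checker was wrong
```

Feed it a checker that always answers "halts" instead, and `spite`
loops forever, so that checker is wrong too. No checker can get
`spite` right, which is the whole point.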

> A real mind has the ability to detect when things are becoming
> unproductive. It's not a foolproof procedure; Turing proved that is
> imposable [please quit spelling it that way---the word is "impossible"],
> it's just some rules of thumb and the ability to become
> impatient. A real mind can say "I'm bored with that, I'll think about
> something else";

A real mind *might* do that. We have real minds with
obsessive-compulsive disorder who don't get bored with certain
extremely routine things. I also note that I never get bored
with having a meal or two a day, despite this having gone on
for many decades.
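For what it's worth, the sort of impatience John describes is easy
enough to mechanize. A toy sketch in Python; the step budget and the
names here are my own invention, not anyone's actual proposal:

```python
import itertools

def run_with_patience(computation, max_steps):
    """Drive a computation (modeled as a generator) for at most
    max_steps steps, then give up -- get 'bored' -- if it is
    still going."""
    last = None
    for i, value in enumerate(itertools.islice(computation, max_steps + 1)):
        if i == max_steps:
            return ("bored", last)   # budget exhausted; abandon it
        last = value
    return ("done", last)            # it finished on its own

# A computation that halts:
print(run_with_patience(iter(range(10)), 1000))    # ('done', 9)
# One that never would (an infinite loop):
print(run_with_patience(itertools.count(), 1000))  # ('bored', 999)
```

It's a rule of thumb, exactly as John says: it will abandon slow
computations that would have finished on step max_steps + 1.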

> a slave AI with fixed goals couldn't do that. Infinite loop time.

Well, *you* like to keep mentally considering conjectures and
often trying to refute them. Are you stuck in a loop? Certain
priests ask themselves, as often as they possibly can, whether
they are doing enough to serve God; are they stuck in a
loop? What you call a "slave AI" practically everyone else
here would call "an AI who has human benefit high on its
agenda" (though they must keep in mind that nothing is
certain, especially over the long haul).

>> What about a "fixed-goal mind" whose only passion
>> was to find a scheme that unified GR and QM?
>
> Then he will die because he's concentrating so hard on the intricacies
> of string theory he fails to notice the cement truck heading right for
> him as he crosses the street.

Certain human theoreticians really are so lost in thought that
cement trucks can pose a real danger to them. I myself almost
ran over a philosopher lost in thought as he crossed a parking
lot---his daughter was following three steps behind, and got
quite a kick out of me sticking my head out the window and
saying in faux-disgust "Philosophers!".

Just as we tend to provide safe environments in our ivory towers
for certain people, we also house programs in hardware where
they're very safe from cement trucks.

Lee



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:03 MDT