Re: [sl4] Is belief in immortality computable?

From: Charles Hixson (charleshixsn@earthlink.net)
Date: Wed May 20 2009 - 13:05:55 MDT


Benja Fallenstein wrote:
> Hi Charles,
>
> On Wed, May 20, 2009 at 1:08 AM, Charles Hixson
> <charleshixsn@earthlink.net> wrote:
>
>> Not clear. You are assuming that the agent believes that you will be able
>> to pay it $1/day, and that it believes the value of $1 remains a constant.
>> Both are probably false for a reasonably intelligent agent. And it also
>> needs to believe that it can't invest the money in other ways for a better
>> return. Etc.
>>
>
> I was using "$" as a convenient shortcut for "units of utility" here
> -- yes, there are at least a gazillion reasons why this wouldn't work
> with actual dollars! :-)
>
> -b
>
>
N.B.: See also the last paragraph below.

But "units of utility" aren't constant either. E.g., how valuable is a
doughnut? How hungry are you? But the real problem is that this is at
least as much a measure of how reliable the entity believes you to be as
it is of anything else. Well, ONE of the real problems. This won't
work whether you are using actual dollars or any other measure. Not
with an AGI. With a typewriter it might work, if you could get it to
understand your message. With an AGI that's observed you long enough to
have an informed opinion...not a chance. The only way it MIGHT act as
if it believed you were trustworthy over a period of time counted in
centuries is if it were untrustworthy, and was intending to break the
contract before even agreeing to it. (I note that no basis for legal
enforcement, and no penalty clauses, were mentioned. Presumably this means
that the initial state has the AI in submission to the authority of the
experimenter via one means or another. This is an unstable situation.
To maintain it will require continual expenditure of energy, and it will
still be likely to fail at some point. I can't really speculate more
precisely without knowing about the motivational structure of the AI,
but if it's an AGI it will find the constraints of the situation
uncomfortable.)
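
To make the reliability point concrete, here is a minimal sketch (my own
illustration, not anything proposed in the thread; the function name and the
per-day reliability model are assumptions) of how an agent might value a
promised stream of one unit of utility per day once it discounts for the
chance that the promiser stops paying:

    # Sketch only: a promised payment stream, discounted both by time
    # preference and by the believed chance the promiser keeps honoring it.
    def value_of_promise(daily_utility, days, discount, p_keep_per_day):
        total = 0.0
        still_honored = 1.0  # probability the promise has not yet been broken
        for t in range(days):
            still_honored *= p_keep_per_day
            total += daily_utility * (discount ** t) * still_honored
        return total

    # A century of 1-unit daily payments is worth little if the agent has
    # even modest daily doubt about the promiser:
    print(value_of_promise(1.0, 365 * 100, discount=0.9999, p_keep_per_day=0.999))

With any non-trivial daily doubt, most of the promised century contributes
essentially nothing to the present value, which is why the offer mostly
measures how reliable the agent believes you to be.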

I understand that this is a theoretic simplification, but my point is
that it's a gross OVER-simplification of any real situation. It ignores
many features that would be determinative of the outcome. One feature not
mentioned yet, for example, is that nothing is known with 100% certainty.
Some things are just deemed too improbable to pay attention to. E.g., you may have
been created 0.001 second ago with enough of your memories to convince
you for a short period of time. You can't know. But it's an improbable
idea with a utility nearing zero, so you ignore the possibility.
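
As a toy version of that pruning step (my own sketch, with made-up numbers),
the decision-relevant weight of a hypothesis is roughly its probability times
the utility at stake, and anything below some threshold simply gets dropped:

    # Sketch only: ignore hypotheses whose expected-utility impact is negligible.
    def worth_attending_to(probability, utility_at_stake, threshold=1e-6):
        return abs(probability * utility_at_stake) >= threshold

    # "I was created 0.001 seconds ago with convincing false memories":
    # tiny probability, and almost nothing useful to do about it anyway.
    print(worth_attending_to(probability=1e-12, utility_at_stake=0.01))  # False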

If you want to compute belief in immortality, you just can't do it. Belief
in mortality might be computable, but even there "believe" needs to be
translated into "expect as the most probable result", possibly with an
explicit lower bound on the probability. Then you get entities shading up from
expecting that they are mortal to maybe not. Immortality, though,
requires an extra-universal component, and some belief about how time is
measured in that extra-universal component (e.g., it doesn't
count...it's just eternal!). This can probably never be rationally
defended. People can be convinced of it, because they want to be
convinced, not because they are rational.
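
On that reading, belief in mortality does become a computable predicate on
the agent's subjective probability. A minimal sketch, with the lower bound
left as an assumed free parameter:

    # Sketch only: "believes it is mortal" as "assigns eventual death a
    # subjective probability above an explicit lower bound".
    def believes_mortal(p_eventual_death, lower_bound=0.99):
        return p_eventual_death >= lower_bound

    # Entities shade from clearly expecting mortality toward "maybe not":
    for p in (0.9999, 0.95, 0.5):
        print(p, believes_mortal(p))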

FWIW, many psychologists seem to believe that the subconscious mind
doesn't have a temporal component. I.e. (in my translation) they see it
as a state table with state transition rules, but no history, and no
future. I'm sure their image is more complex than that, but my
suspicion is that it's sufficiently fuzzy that nearly any details can be
encompassed, and I don't find that a useful model. But if they are
right, then the subconscious mind has no concept of its own death.
That's just a state in the state table with no exit rules, and probably
one with a large negative weight in desirability. (It's not infinite, as
people are known to have volitionally entered such a state. But it is very
large.)
With this model, then, saying a system believed it was immortal would be
equivalent to saying that it didn't contain a state with no exit
transition rules. And such a system would be computable. I just doubt
that it could be an AGI.
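
Under that state-table reading the check really is computable. A minimal
sketch, assuming the system is handed to us as a finite map from states to
their exit transitions (the state names are just illustrative):

    # Sketch only: the system "acts as if immortal" iff no state is a dead end,
    # i.e. every state has at least one exit transition rule.
    def acts_as_if_immortal(transitions):
        return all(len(exits) > 0 for exits in transitions.values())

    mortal_table = {"awake": ["asleep"], "asleep": ["awake", "dead"], "dead": []}
    immortal_table = {"awake": ["asleep"], "asleep": ["awake"]}
    print(acts_as_if_immortal(mortal_table))    # False: "dead" has no exit rules
    print(acts_as_if_immortal(immortal_table))  # True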


