Re: Shock level 4 (was Re: META SL4)

From: Samantha Atkins (sjatkins@gmail.com)
Date: Sat Apr 26 2008 - 04:21:42 MDT


Matt Mahoney wrote:
> --- Mike Dougherty <msd001@gmail.com> wrote:
>
>
>> I have reviewed Shock Levels. There is currently nothing that mere
>> mortals may discuss that is SL4. I spent a long time waiting for a
>> discussion that was truly on-topic for the list.
>>
>> Is it even possible for an SL4 thread to be discussed?
>>
>> I'll wait for an SL4 topic before posting again.
>>
>
> I also reviewed http://www.sl4.org/shocklevels.html
> I will try. First I adjust my belief system:
>
> 1. Consciousness does not exist. There is no "me". The brain is a computer.
>
Are you a relatively independent and autonomous computational unit?
Then you doubtless have a model of the unit you are, relative to other
units and groups with various characteristics, as well as of the containing
environment. Thus you have some type of self-concept.
> 2. Free will does not exist. The brain executes an algorithm.
>
As a rational intelligence you run various algorithms, with or without
active consciousness of their running (tracing?), that choose the best among
sets of alternatives consistent with your goals within
computational/time/resource limits. Since you are free to run the
algorithms and act on their outcomes, and further free to examine your
algorithms, their implementations, and other possible algorithms, you are to
that extent a being of "free will". You are also free to develop your
knowledge base and reality model to the best of your initiative within
your limits. You are even free to consider the validity and limitations
of your current goals and your understanding of them. This looks like quite
a bit of freedom and a singular lack of coercion.

> 3. There is no "good" or "bad", just ethical beliefs.
>
"Ethical" is all about defining relative "good" or "bad" in a hopefully
grounded fashion. Specifically it is about what is good or bad for
your best functioning and success in achieving your goals in your
context which includes other goal seeking intelligent entities.
> I can only do this in an abstract sense. I pretend there is a version of me
> that thinks in this strict mathematical sense while the rest of me pursues
> normal human goals in a world that makes sense.
How are your normal human goals not the result of unconscious algorithms
and goal structures?

> It is the only way I can do
> it. Otherwise I would have no reason to live. Fortunately human biases
> favoring survival are strong, so I can do this safely.
>
What is this "I" that would have no reason to live? Isn't it already a
construct you have defined to included things that are critical to the
construct and that it thus that it could not survive without? But other
constructs of "I" or self-image might work perfectly fine without some
of these elements. You also might find that the non-human being you
attempted to portray above in fact has most of those characteristics
that are important to your construct but seen a different way.

> My abstract self concludes:
>
> - I am not a singularitarian. I want neither to speed up the singularity nor
> delay it. In the same sense I am neutral about the possibility of human
> extinction (see 3).
>
>
You are perhaps not a singularitarian, yet.
> - AI is not an engineering problem. It is a product of evolution (see 2).
>
>
Engineering is a product of evolution.

> - We cannot predict the outcome of AI because evolution is not stable. It is
> prone to catastrophes.
>
>
Engineering is not evolution. But we cannot predict the post-AGI world in
detail any more than an ant could predict the human mind.

> - "We" (see 1) cannot observe a singularity because it is beyond our
> intellectual capacity to understand at any pre-singularity level of intellect.
>
>
In any real detail, yes. But that does not mean we have no basis for
seeing it as desirable.

> - A singularity may already have happened, and the world we observe is the
> result. We have no way to know.
>
>
We have no evidence that this is the case, and thus we must act as if it is
not. By definition, we can only act rationally on what we have adequate
evidence to be the case.

> Discussions about friendliness, risks, uploading, copying, self identity, and
> reprogramming the brain are SL3. SL4 makes these issues irrelevant.
>

See http://www.sl4.org/intro.html#concept

It is not that strict or that small an eye of a needle.

- samantha
