Re: Deliver Us from Evil...?

From: Mark Walker (tap@cgocable.net)
Date: Tue Mar 27 2001 - 07:32:21 MST


> Personally, I feel that it will probably be impossible to "hardwire"
> anthropomorphic morality and reasoning into a seed AI and expect those
> goal-systems to remain after severe self-enhancement by the transcending AI.
> The resulting SI would be an utterly alien thing, and any speculation about
> its actions would be futile. Hence my slight irritation regarding
> discussions about the Sysop's do's and don'ts.
> Since it is my belief that the post-singularity world will be unknowable, my
> definition of long-term is on the order of 20-25 years. My guiding
> principle is reaching singularity as fast as possible. If you want to call
> that ethics, that's fine with me.
>
> There will be ONE relevant entity. This entity will IMO relate to humans as
> we relate to bacteria. We do not make stable associations with bacteria.
>
> Again, the unknowability assumption makes it impossible to predict anything
> IMO.
I have some sympathy with your point that epistemology ought to precede
ethics, but I think a lot more needs to be said about the unknowability
assumption. Here is a very, very rough schema for fleshing out the
unknowability assumption:
Cognitively speaking:

0. There is no overlap between us and SI
1. Minimal overlap: We hold to the same principles of logic as SI.

Beliefs:
2b. Beliefs in basic physics (+1).
3b. Beliefs in the basics of the special sciences (biology etc.) (+1, 2b),
    but also beliefs about things beyond our ken.
4b. Share all our beliefs about the world--they just think faster.

Desires:
2d. Game-theoretic assumptions (+1).
3d. Same ethical concerns (+2d, +1), but also desires about things beyond
    our ken.
4d. Share all our desires--they just process them faster.
Presumably you mean something beyond the fourth level. (This would be what is
sometimes called weak superintelligence: the SI can think faster than us, but
we could come up with the same answers if we plodded along.)
Presumably you mean something beyond level three. The idea here would be that
our viewpoint might be (roughly) a proper subset of the SI's viewpoint, in the
way that, say, an average 10-year-old's viewpoint is (roughly) a proper subset
of an average adult's.
Presumably you mean something beyond level two, since even here we are
imagining a partial overlap in our most basic beliefs and desires.
Presumably you cannot mean level one either, since even at this point we would
share knowledge of some logical truths.
As far as I can tell, many people in the transhumanist network believe that
SIs must be at least at level two. (For example, Jupiter brain discussions
seem to presuppose that we have got the basic physics, and thus the logic,
right, hence 2b. Discussions about wars between SIs seem to assume that
game-theoretic assumptions hold, and thus logic, hence 2d.) I have yet to
find a good reason for this assumption, though.
Your argument seems to presuppose that the null level best describes our
cognitive relation--I take this to be the upshot of the bacteria analogy. Do
you have good evidence that this MUST be the case? Myself, I think that the
attempt to make SIs is an experiment where we do not know for certain where
on the 0 to 4 scale our "children" will land. This being the case, it makes
sense to be anthropomorphic (as you say) and do what we can to ensure
friendly AI. We can do this even if we believe it is possible (but not
certain) that all our efforts to this end may be like the bacteria's attempt
to determine our kingdom of ends, i.e., that all our efforts may be full of
sound and fury, signifying nothing. So, at minimum, your argument needs to
show complete transcendence, i.e., level 0. If the SI's transcendence is only
partial, then there is still hope for having a hand in the future.


