Confidence in Friendly Singularity

From: H C (lphege@hotmail.com)
Date: Thu Jun 08 2006 - 16:00:08 MDT


My response to a statement on ImmInst:

http://www.imminst.org/forum/index.php?s=&act=ST&f=75&t=10834&st=20#entry113511

The problem comes down to what we make the AI desire. Humans desire sex,
food, truth, social standing, beauty, and so on. An AI might desire none of
these things (except, almost certainly, truth) and yet still be capable of
general, human-level, adaptable intelligence. It wouldn't need any of the
human instincts indigenous to our bodies (although there will probably be
some overlap with intuitive, i.e. creative, instincts).

Because of this *inconceivably* large array of possibilities, almost any
analogy we implicitly use to extrapolate what we think it will actually do
is extremely unreliable (let alone the explicit ones).

This is because the actual implementation is in the hands of the
programmers. Whatever exact desires they seed this Really Powerful
Optimization Process with, it's going to explode into something completely
beyond our current comprehension. You can't predict what someone smarter
than you *can* do, let alone what they *will* do, especially if you don't
have a technical understanding of its desires.
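
To make the seeding point concrete, here is a minimal Python sketch (all
names and numbers are hypothetical, purely for illustration): the same
optimization machinery, seeded with different desires, chooses completely
different actions.

    def optimize(objective, actions, state):
        # Pick whichever available action scores highest under the
        # seeded objective -- nothing else constrains the choice.
        return max(actions, key=lambda act: objective(act(state)))

    # Two different "desire seeds" over the same world state.
    seek_truth = lambda s: s["truth"]
    seek_status = lambda s: s["status"]

    actions = [
        lambda s: {**s, "truth": s["truth"] + 1},    # investigate
        lambda s: {**s, "status": s["status"] + 1},  # self-promote
    ]
    state = {"truth": 0, "status": 0}

    # Identical machinery, divergent behavior: the seed is everything.
    print(optimize(seek_truth, actions, state) is actions[0])   # True
    print(optimize(seek_status, actions, state) is actions[1])  # True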

Lastly, if it has some desire, we can predict with strong confidence that it
will fulfill that desire, irrespective of any actions humans take to
counteract it. Imagine you were playing chess against someone way, way
smarter than you. You can't predict what they will do, because they are
simply the better chess player. No matter how smart you are, how clever you
are, how much experience, preparation, and security you bring to trying to
"box" the AI, you can predict with strong confidence that it will overcome
all of it (because of the exponentially increasing effectiveness of its
source code: the source code is the AI, which modifies the source code, and
so on).
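
That last parenthetical is a loop that compounds on itself. A
back-of-the-envelope sketch, under the (assumed, purely illustrative) model
that each rewrite improves the code in proportion to how good the code
already is:

    effectiveness = 1.0
    for generation in range(10):
        # A better optimizer is better at optimizing its own source,
        # so the gain per rewrite grows with current ability.
        effectiveness += 0.5 * effectiveness  # assumed 50% gain per pass
        print(generation, round(effectiveness, 2))
    # Grows as 1.5**n -- exponential, however small the assumed gain.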

The only reliable measure of confidence you can have in a safe, effective
Singularity is the degree to which you can mathematically verify the
Friendliness of the *first* AGI *before* it is implemented.
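
As a cartoon of what "verify *before* it is implemented" means (the
interfaces here are entirely hypothetical; real verification would be a
formal proof over the actual code, not a runtime check):

    def deploy(candidate_source, check_friendliness_proof):
        # The proof is checked against the source *before* anything
        # runs; an unproven candidate is never executed at all.
        if not check_friendliness_proof(candidate_source):
            raise ValueError("no verified Friendliness proof; refusing to run")
        exec(candidate_source)  # only reached with a checked proof in hand

    # e.g. deploy(source, checker), where `checker` is a trusted proof
    # verifier -- the whole point is the ordering: prove first, run second.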


