Re: [SL4] brainstorm: a new vision for uploading

From: king-yin yan (y.k.y@lycos.com)
Date: Thu Aug 14 2003 - 13:08:00 MDT


>Risky paths are reasonable only if there are no knowable faults with the path.
>Creating an AI without a concrete theory of Friendliness, perhaps because you
>don't think it's necessary or possible to work out anything beforehand, is a
>knowable fault. It is both necessary and possible to work out essential
>things beforehand (eg. identifying "silent death" scenarios where the AI
>you're experimenting with appears to perfectly pick up Friendliness, but
>becomes unFriendly as soon as it's not dependent on humans). You can't work
>out every detail so you'll update and test your theories as evidence from AI
>development comes in.
>
>Creating an AI with the belief that no special or non-anthropomorphic efforts
>are needed for Friendliness, perhaps assuming it'll be an emergent behaviour
>of interaction between altruistic humans and the developing AI or that you
>need only raise the AI 'child' right, is another knowable fault. There are a
>bunch of them, since there are always more ways to go wrong than right.
>
>An AI effort is only a necessary risk given that it has no knowable faults.
>The project must have a complete theory of Friendliness, for instance. If you
>don't know exactly how your AI's going to be Friendly, it probably won't be,
>so you shouldn't start coding until you do. Even then you have to be careful
>to have a design that'll actually work out, which requires you to be
>sufficiently rational and take efforts to "debug" as many human
>irrationalities and flaws as you can.
>
>"AGI project" -> "Friendly AGI project" is not a trivial transformation. Most
>AI projects I know of have not taken sufficient upfront effort towards
>Friendliness, and are therefore "unFriendly AGI projects" (in the sense of
>non-Friendly, not explicitly evil) until they do. You would need pretty
>strong evidence that nothing can be discovered upfront before rejecting the
>conservative decision to work out as much Friendliness as possible before
>starting.
>
>Since an unFriendly AI is one of the top (if not the top) existential risks,
>we're doomed both with and without AGI. For an AGI to have a good chance not
>to destroy us, Friendliness is necessary. Ergo, Friendly AIs are better than
>our default condition. By default an AGI is unFriendly: humane morality is
>not something that simply emerges from code. If the AGI project hasn't taken
>significant up front measures to understand Friendliness, along with
>continuing measures whilst developing the AI, it's not likely to be Friendly.
>
>The unFriendly AI wouldn't destroy literally everything, but would optimise
>nearby matter to better fulfil its goal system. An unFriendly AI doesn't
>care for humans, and so we get used as raw matter for computronium. It's
>this kind of scenario that's the risk.
>
>- Nick

Hi Nick,

I don't have a clear understanding of the big picture yet, but I think I've
spotted a mistake here: "Robust" Friendliness requires non-anthropomorphic,
mathematical / logical precision. Anything less than that would be risky.
However, Friendly to whom? We seem to have already excluded other
primates from consideration, not to mention animals. Even if that is OK,
the definition of Friendliness will become more problematic when uploading
becomes available. Are uploads humans? Do copies each get a separate vote,
or just one vote between them? I'm not sure how a formal system of
Friendliness can deal with such questions.

The second problem is that Friendliness will be designed by a group of
*human* programmers, not by an AI. If we then concentrate all our
computational resources on the FAI, then Friendliness will effectively
become some sort of universal political solution. In other words:
A group of human programmers will have to design a perfect political
system. That sounds very unrealistic...

The question is why we should put ourselves in a vulnerable position,
subordinate to a superintelligence which doesn't even exist yet.

Personally I think the most appealing solution is to let people augment
themselves rather than create autonomous intelligent entities. But we
don't have a direct neural interface to connect our brains to computers.
Unless we get uploaded, we'll have to rely on LANGUAGE to
communicate with computers. This *linguistic bottleneck* is the hardest
problem, I think.

YKY

PS Thanks for everyone else's reply...
