Re: A Sysop alternative

From: James Higgins (jameshiggins@earthlink.net)
Date: Mon Apr 09 2001 - 09:10:20 MDT


At 10:07 AM 4/9/2001 -0400, Gordon Worley wrote:
>At 6:44 AM -0700 4/9/01, James Higgins wrote:
>>Second, I can't personally imagine an implementation to enforce morals
>>that wouldn't require intelligence. Making these decisions requires the
>>ability to reason. A good example of this is some of the latest Airbus
>>planes. They have "enforced morals", as such, in that they prevent the
>>pilot from taking the plane past certain angles they have preset. Now,
>>99.999% of the time that's great, but what if you're about to hit a
>>mountain? I'd rather be uncomfortable for a short time and live, than be
>>comfortable right up to my grave. The plane can't reason, though, so in
>>this scenario you'd be dead (there is no manual override, I understand,
>>btw). This is the kind of thinking I want to avoid. One wrong moral,
>>and it could artificially force the whole group into extinction.
>
>This isn't really a good analogy. Real morals have nothing to do with
>comfort, but with actual survival. You can be incredibly uncomfortable and
>not violate morals. Also, this is why I am wary of actually enforcing
>the morals. This is mostly me trying to find an alternative to a Sysop if
>we have to have some kind of rules. I agree, if we get it wrong just a
>little bit, we're all screwed.

Well, flight guidelines do not compare well with morals, agreed. My point
was to show that a simple (unintelligent) system enforcing rules is
dangerous, and for that purpose I do believe it is a good analogy.
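
To make that concrete, here is a minimal Python sketch of what blind rule
enforcement looks like. It is purely illustrative; the limit, names, and
numbers are made up and have nothing to do with Airbus's actual flight
software:

    # Illustrative only: a hard envelope limit with no situational reasoning.
    MAX_PITCH_DEG = 30.0  # hypothetical preset limit

    def command_pitch(requested_pitch_deg: float) -> float:
        """Clamp the pilot's requested pitch to the preset envelope.

        The clamp is unconditional; the system has no notion of *why* the
        pilot is pulling up (e.g. terrain ahead), so it cannot trade
        discomfort for survival.
        """
        return max(-MAX_PITCH_DEG, min(MAX_PITCH_DEG, requested_pitch_deg))

    # Terrain avoidance might require 45 degrees; the rule silently denies it.
    print(command_pitch(45.0))  # -> 30.0, mountain or no mountain

An intelligent enforcer could weigh the rule against the situation; a
clamp cannot.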

>>I would also say that Intelligence breaking the "natural morals" is
>>probably a good thing, on the whole. Our natural morals are such that we
>>should run around killing & eating animals, and screwing any female we
>>can to propagate the species. Well over 90% of the population has probably
>>never killed anything larger than a mouse; some would say this is a good
>>thing. We have also created trade, computers, video games, written
>>language, etc. These are not, strictly speaking, required by our
>>morals. Morals really represent "common sense" most accurately. They
>>are not, and should not be, hard and fast rules. Rather, they are guidelines.
>
>This is a common misunderstanding, relating morals too closely to changes
>in society. I don't have the time right now to explain this (I'm sort of
>in a hurry but wanted to reply).

The fact is I don't think much of "natural" morals. Such things, if they
can even be said to exist (I disagree with this usage of the term moral),
are strictly related to survival. For example, some species eat each other
when food becomes very scarce. Heck, some species will even eat their own
children if given the opportunity.

>>Having a non-intelligent system that enforces morals would be terribly
>>dangerous for all inside if one or more external SIs existed. In such a
>>situation the free SIs could do anything they wished to the SIs in the
>>moral reality, who would be restrained in protecting themselves. Without
>>a Sysop around to help, they would be at the mercy of the external
>>SIs. Not that I personally love the Sysop scenario, but having a
>>non-intelligent system is even worse.
>
>Well, this is why I don't want any kind of Sysopish thing at all. So, I
>guess we agree on the anarchy scenario then?

Guess so. So far I personally think this is probably the best long-term
solution. The idea that super-intelligent beings need something to enforce
their happiness seems unrealistic. I can see where WE may think we need
this, but I don't believe it makes sense on the other side. As long as
there is a population of SIs, I believe they will self-regulate just fine.

Ah, I may have an interesting compromise. I do agree that we mere mortals
could use something and would be helpless against SIs. How about if the
Sysop ONLY enforced happiness towards humans? Once uploaded, SIs would be
free to do as they pleased, as long as they didn't interact with
humans. When they did interact with humans, they would be forced to follow
a set of friendly rules. Humans would also be subject to these friendly
rules. Eliezer, what do you think about this?
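
In rough sketch form (all names hypothetical, and the friendly rules
reduced to a stub, just to pin the rule down rather than propose a design):

    from dataclasses import dataclass

    @dataclass
    class Agent:
        name: str
        is_human: bool

    def satisfies_friendly_rules(action: str) -> bool:
        # Stand-in for whatever the agreed Friendliness rules turn out to be.
        return action not in {"coerce", "harm"}

    def sysop_permits(actor: Agent, action: str, target: Agent) -> bool:
        """The Sysop arbitrates only interactions that touch a human."""
        if not actor.is_human and not target.is_human:
            return True  # SI-to-SI: no enforcement at all
        return satisfies_friendly_rules(action)

    si_1, si_2 = Agent("SI-1", False), Agent("SI-2", False)
    alice = Agent("Alice", True)
    print(sysop_permits(si_1, "harm", si_2))    # True: SIs left alone
    print(sysop_permits(si_1, "harm", alice))   # False: human involved
    print(sysop_permits(alice, "harm", alice))  # False: humans bound too

The point being that SI-to-SI interactions bypass the check entirely,
while anything touching a human, from either side, goes through the rules.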


