Re: A Sysop alternative

From: James Higgins (jameshiggins@earthlink.net)
Date: Mon Apr 09 2001 - 07:44:12 MDT


At 02:45 AM 4/9/2001 -0400, Gordon Worley wrote:
>Now, what I propose is that, rather than creating a Sysop, we program
>morals into the system as natural laws and have them enforced. In
>other words, the system actively makes it impossible to act outside of
>what is moral by the natural laws. How will we figure those laws
>out? Why, we have some AIs around first that are SIs and are more or less
>Friendly, and maybe some posthumans that we trust not to set up their
>own morals or drag along their human ones, let them live in a system for a
>while, figure out what the naturally emergent laws are, and then implement
>them at the most basic levels of the system. Then, everyone who wants to
>use the Earth-created Singularity technology will have to do so based on
>what we determine to be the natural laws. No more allowing the
>intelligent to break their own laws, we just make it impossible. I
>realize this already shares most of the complaints about the Sysop (SIs
>outside of ver control, bugs, et al.), but it is free from some others, like the
>Sysop's potential to abuse power, ver limited ability to enforce morality
>(after all, there's only so much Sysop to go around), ver need to be
>watched to ensure that there are no bugs and the ability to decide that
>there is a bug, etc.

Well, I can already see the first objection. Go back and look at
Brian/Eliezer's arguments about not uploading a human first. If you need
to have AIs & SIs to do this, how do you prevent them from just taking
over? I'm a little less worried about this than some on the list, but it
is an important question. The first AIs must have something like
Friendliness; "more or less Friendly" just won't cut it.

Second, I can't personally imagine an implementation to enforce morals that
wouldn't require intelligence. Making these decisions requires the ability
to reason. A good example of this is some of the latest Airbus
planes. They have "enforced morals", as such, in that they prevent the
pilot from taking the plane beyond certain preset attitude limits. Now,
99.999% of the time that's great, but what if you're about to hit a
mountain? I'd rather be uncomfortable for a short time and live than be
comfortable right up to my grave. The plane can't reason, though, so in
this scenario you'd be dead (there is no manual override, as I understand
it, btw). This is the kind of thinking I want to avoid. One wrong moral, and
it could artificially force the whole group into extinction.
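
To make the failure mode concrete, here is a minimal sketch of a hard
limit enforced with no override and no ability to reason about context.
This is purely illustrative Python, not real avionics code; the limit
value and names are made up:

    # Hypothetical preset envelope limit (illustrative value only).
    MAX_PITCH_DEG = 30.0

    def apply_envelope_protection(requested_pitch_deg):
        """Unconditionally clamp the pilot's input to the preset limit."""
        return max(-MAX_PITCH_DEG, min(MAX_PITCH_DEG, requested_pitch_deg))

    # The pilot pulls up hard to clear a mountain...
    commanded = apply_envelope_protection(45.0)
    print(commanded)  # 30.0 -- the system never asks *why* 45 was requested

The clamp is applied the same way whether the pilot is showing off or
avoiding a mountain; that indifference to context is exactly the problem
with non-intelligent enforcement.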

I would also say that Intelligence breaking the "natural morals" is
probably a good thing, on the whole. Our natural morals are such that we
should run around killing & eating animals, and screwing any female we can
to propagate the species. Well over 90% of the population has probably
never killed anything larger than a mouse; some would say this is a good
thing. We have also created trade, computers, video games, written
language, etc. These are not, strictly speaking, required by our
morals. Morals are most accurately described as "common sense". They are
not, and should not be, hard and fast rules. Rather, they are guidelines.

Having a non-intelligent system that enforces morals would be terribly
dangerous for all inside if one or more external SIs existed. In such a
situation the free SIs could do anything they wished to the SIs in the
moral reality, who would be restrained from protecting themselves. Without a
Sysop around to help, they would be at the mercy of the external SIs. Not
that I personally love the Sysop scenario, but having a non-intelligent
system is even worse.

Lastly, I'd like to point out that everyone pretty much has different
morals. Some of these are good, some wacky and some just plain wrong (in
my opinion). Morals tie quite closely to beliefs, and I don't think we
have any business going around and forcing people to have certain
beliefs. That said, though, there are some "common" root morals that
should apply to humanity as a whole. I guess my point here is that
"friendly" is less restrictive than "moral", which is probably a good
thing. After all, would you like to be forced to follow a moral saying
that you're not allowed to dance? If I *have* to be forced into anything, I'd
rather be forced into friendliness. Although I still think we need to
consider a future where we don't need to force anyone into anything.

>Also, I've been thinking. Back when the industrial revolution started,
>many people looked at the new technology and wondered how the world would
>ever work unless the state stepped in and managed things (thus the rise in
>the popularity of socialism around these times in various
>locations). Today, it is easy to see how our industrial and now
>informational society could work without state intervention, but the
>future looks uncertain. It seems again that the state (or our equivalent
>of it) will have to step in and regulate to make things work out
>alright. Considering the past, I have to wonder if we'll see the same
>thing again in the future or if these technologies really are going to be
>capable of destroying everything to the extent that whatever bits of
>natural law survive on their own will not be enough to prevent the End of
>Everything.

Interesting comparison.

I personally think we would probably be just fine individually
uploaded. Well, as long as a large group of mostly reasonable people
uploaded at the same time. If SI has a natural attractor for friendliness,
everything is good. If it does not, things continue *mostly* as they are
now. Except that everyone is incredibly intelligent, no one is handicapped
(unless they want to be) and everyone has incredible power over the
physical universe. At first this sounds scary, but most people will want
to do the right thing. So if 1% of the population is inherently
unfriendly, 99% of the population would be able to gang up (as it were) and
protect against that 1%. Maybe putting them in a Sysop-controlled virtual
reality for eternity if they are really bad (i.e., a very nice & cushy
prison, where you can do anything except affect the "real" world).

In a short amount of time I imagine order would emerge and everyone could
be free to live their lives among reasonable peers. Maybe not
"friendly" all the time, but that is good (I'd go nuts if I had to live
inside "It's a Small World" after all).


