Join: Pete & Passive AI

From: P K (kpete1@hotmail.com)
Date: Wed Dec 07 2005 - 13:22:56 MST


Hello.
First, a bit about me. I am currently a college student in Montreal, Canada.
Like many people new to FAI, I have an idea that I think is really cool and
that might solve many problems in this field. Feel free to point out the
flaws if you see any. And now… (drumroll) I present my take on the
problem…

THE FRIENDLINESS PROBLEM:
A very short summary of the problem:
-We want to make an AGI.
-An AGI would be really powerful.
-How do we make sure ve doesn't kill us or do something we really don't
want vim to do?

PROPOSED SOLUTIONS:
I will write it in the form of a "proposed solution" followed by the
"problems" I see with it. I am NOT going to cover ALL the problems since
that would take too long.

1) INACTION:
Proposed solution: Don't make an AGI.

Problems: An AGI is the best way to avoid existential risks (even if it
creates potential risks of its own).

2) "FRIENDLY" AI (FAI):
Proposed solution: Make a "friendly" AGI.

Problems: What is "friendliness"? There doesn't seem to be one clear
definition. All sorts of people claim to have a monopoly on knowing what
SHOULD be done. Also, every time someone comes up with a definition, people
can think of a scenario where everything goes horribly wrong and not at all
like the user intended. It seems humans are not smart enough at this time to
start with a finished, PERMANENT and UNMODIFIABLE goal system.

3) OBEDIENCE (O):
Proposed solution: If the AGI is supposed to act like the programmer(s)
INTEND it to act, then just make the AGI obey their commands. And once ve is
superintelligent, use vim to make a new goal system.

Problems: Should the AGI obey commands literally? The AGI may not act as the
programmer(s) INTEND vim to. Ex: The user tells vim to calculate the nth
digit of Pi and ve converts the entire universe into computronium to do so.

4) CAUTIOUS OBEDIENCE (CO):
Proposed solution: Like "obedience" but with safeguards. Ex: warnings and
confirmation requests when an action will result in a large movement of
matter or energy. And rules like "don't kill."

Problems: Again, this can still be very dangerous. Humanity will only be
safe from the things the programmers thought of. What about the things
they DIDN'T think of?
It could work, if the programmers get this initial goal system right and
cautiously work to make the AI superintelligent. They could then
(cautiously) order it to design a goal system they REALLY want… but it's
risky.

5) EXTRAPOLATED VOLITION (EV):
Proposed solution: Since humans are not smart enough to figure out what they
want, assign someone that IS smart enough to do it: the AGI.

Problems: To extrapolate, you need a superintelligent AI, and to have a
superintelligent AI you need an extrapolated goal system. It's a catch-22,
unless you think you can make an extrapolating AI on the first iteration.
Fat chance. The first AGI will probably be dumber than humans are. Then,
with human assistance, ve will pass into superintelligence and hopefully not
become "evil". So what do you do in that intermediate period when you still
can't extrapolate? You could probably start with Cautious Obedience, get an
AI smart enough, and then extrapolate volition.
However, you get all the disadvantages inherent to CO. Also, if you get a
superintelligence with CO, you could use it to help you figure out how to
continue, instead of using your current pre-superintelligent mind to say EV
is the way.

6) COLLECTIVE VOLITION (CV):
Proposed solution: It is similar to EV but with a twist. The AGI should
extrapolate the volition of ALL humans and somehow combine all that
information and extract a goal system. The programmers don't get to see the
results beforehand and can't change anything afterward, but there is a last
judge who can call everything off. The upshot is that CV will get the same
result from any human(s) that programmed vim, even jerks.

Problems: Same problems as with EV, but with twists. To extrapolate and
combine the volition of all humans, the AI would have to be EXTREMELY
intelligent. Obviously, you can't start with this. Eliezer even admits in
his CV paper that there would have to be an initial dynamic. CV might be
implemented at a very late stage when we already have a superintelligent AI,
but why not just ask the AGI for help in figuring things out? Maybe CV is a
completely flawed concept but we are too dumb to see it. If there is a
superintelligent AGI around, why not ask it for advice?

Of course, to ask for advice we would need an initial dynamic we can trust,
a dynamic without a personal agenda. This brings us to my idea…

7) PASSIVE AI (PAI):
Proposed solution: Since AI can be so dangerous, why not make vim incapable
of "acting" and only capable of "thinking"?

First of all, PAI should not be confused with AI boxing. A boxed AI IS
capable of acting. Vis actions are simply restricted by a digital cage.
Assuming ve wants to escape, ve probably has a very good chance since, by
definition, ve is smarter than vis jailers are. So, from the jailers' point
of view, the cage is a crappy security measure. In fact, this is the wrong
attitude when designing AI. The AI shouldn't be the enemy. But I digress…

The kind of pacification I'm talking about, by analogy, would be like the
jailers removing the part of the prisoner's brain responsible for his will.
The prisoner ceases to be a prisoner because he doesn't WANT to escape (or
anything else, for that matter), and the jailers cease to be jailers because
they don't have to keep him captive. This analogy is pretty gruesome, so
let's get back to AI. (Building a mind from scratch without a piece is not
the same as removing a part from a human's brain, so we won't feel
uncomfortable on ethical grounds.)

Let's say you build an AI without a goal system. What working parts would
that AI have? It would have an inference engine (probably Bayesian), a
memory, etc. Basically, it would have all the parts that PREDICT and help
predict (i.e. S1 → S2). Now you have an empty slot where the goal system
should be. You set up your program such that you can act as a temporary goal
system for the AI by manually feeding it input.
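
To make this more concrete, here is a rough Python sketch of what I mean.
(Purely illustrative: the names PassiveAI and InferenceEngine and the whole
interface are placeholders I made up, not a claim about how a real AGI would
be built.) The point is just that there is a predictor and a memory, the
goal-system slot is deliberately left empty, and the human supplies all the
data and questions:

# Hypothetical sketch of a Passive AI: a predictor plus a memory, with the
# goal-system slot left empty. The human acts as the temporary goal system.

class InferenceEngine:
    """Stands in for whatever does the predicting (e.g. Bayesian inference)."""

    def predict(self, query, evidence):
        # Placeholder: a real engine would infer an answer to the query
        # from the accumulated evidence.
        return "<prediction for %r from %d facts>" % (query, len(evidence))

class PassiveAI:
    def __init__(self):
        self.engine = InferenceEngine()
        self.memory = []          # facts and data fed in by the human
        self.goal_system = None   # deliberately empty: no wants, no plans

    def feed(self, data):
        """The human supplies input; the AI never goes looking for it."""
        self.memory.append(data)

    def ask(self, question):
        """Answer a query if possible; otherwise say what is missing."""
        if not self.memory:
            return "Insufficient parameters. Relevant data required."
        return self.engine.predict(question, self.memory)

Note that nothing in the program chooses its own questions or takes actions;
ask() only ever computes an answer when asked.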

Are humans too slow to act as manual goal systems? Probably slower than the
computer, and some things will be impossible to do this way, but it is still
very useful. I will illustrate this with examples:

Human: What is X?
AI: Insufficient parameters. Equation data required.
Human: X*X = 4
Human: What is X?
AI: X=2 or X=-2

Human: Is global warming real?
AI: Insufficient parameters. Weather data and satellite imagery required.
Human: <input weather data and satellite imagery >
Human: Is global warming real?
AI: :-p

Human: Given universe state S1, what is the next most likely state?
AI: S2

Human: What are the required conditions for S2 to occur?
AI: S1

As you can see, the "predicting" part can solve for things given parameters.
However, it does not choose the question or what actions to take. Moving
along…

Human: What is the best goal system?
AI: Insufficient parameters. Define "best".
<The human keeps asking questions while the AI requests missing information
and points out inconsistencies>

As you can see, the "predicting" part can be used to get the goal system,
and unlike humans, the AI won't make any mistakes and will notice all the
inconsistencies. Also, it is unaffected by human biases. An AI doesn't need
a goal system to do these things. It reacts to input the same way your leg
reacts when a doctor hits it with a hammer: automatically.
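
To tie this back to the sketch from earlier, the console exchanges above
would look roughly like this in code (the replies are placeholder strings,
not real inferences):

ai = PassiveAI()
print(ai.ask("What is X?"))   # -> "Insufficient parameters. Relevant data required."
ai.feed("X*X = 4")
print(ai.ask("What is X?"))   # -> with a real inference engine: "X=2 or X=-2"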

Note: I do not claim to know how an AI would answer in these examples, since
I am not superintelligent, nor do I claim that the interface will be exactly
like this (console chat).

Problems: Pending…



