Building a friendly AI from a "just do what I tell you" AI

From: sl4.20.pris@spamgourmet.com
Date: Sat Nov 17 2007 - 14:46:24 MST


Building a friendly AI (FAI) from a "just do what I tell you" AI (OAI =
obedient AI).

I know that OAIs have been discussed recently on this forum, but read
on before you dismiss this.
To avoid any possibility of danger, we program the OAI to perform no
actions other than answering with text and diagrams (other media such
as sound and video would be a possibility too). In essence, what we
would have is a glorified calculator. I think this avoids the dangers
of the AI following orders literally, with unintended consequences.
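None of this exists today, but as a toy illustration of the "answers
only, no actions" restriction, here is a minimal sketch in Python. The
OracleCore class and its answer() method are hypothetical stand-ins for
whatever the OAI's reasoning machinery would actually be; the only
point is that the wrapper's single output channel is a returned string,
with no access to actuators, networks, or files.

    class OracleCore:
        """Hypothetical reasoning core; stands in for the actual OAI."""
        def answer(self, question: str) -> str:
            # A real OAI would reason here; this stub just echoes the question.
            return "ANSWER: " + question

    class ObedientAI:
        """Wrapper whose only side effect is returning text to the caller."""
        def __init__(self, core: OracleCore):
            self._core = core

        def ask(self, question: str) -> str:
            reply = self._core.answer(question)
            if not isinstance(reply, str):
                raise TypeError("the OAI may only emit text")
            return reply

    if __name__ == "__main__":
        oai = ObedientAI(OracleCore())
        print(oai.ask("Tell me how I can build a friendly AI in a manner "
                      "that I can prove and understand that it will be friendly."))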

So we go to the OAI and say: "Tell me how I can build a friendly AI in
a manner that I can prove and understand that it will be friendly."

The OAI will think and give you a detailed blueprint, a proof, and so on.

You then analyse the documents until you understand them. You could
also ask the OAI for further clarification.
Someone might raise the objection: how can you be sure that there
aren't any backdoors or problems in the blueprints? This would also be
a problem if you came up with your own way of making an FAI. The only
answer is: you have to be very careful! The point of using an OAI is
the same as the point of using a calculator: to make things easier.

Then you build the FAI.

Of course, the real thing may be a bit more complicated. For example,
we could make the OAI first generate plans for a more intelligent OAI,
and so on: several OAI enhancement steps until we are finally able to
make an FAI.
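Purely as an illustration of that enhancement chain (nothing here
corresponds to real code), the loop might look like the sketch below.
The names bootstrap_fai and build_from_blueprint are hypothetical, and
ask() is the same text-only interface as in the earlier sketch.

    def bootstrap_fai(initial_oai, build_from_blueprint, steps):
        """Hypothetical sketch of the multi-step enhancement idea:
        each OAI designs a somewhat more capable OAI, humans verify the
        blueprint, and only then is the next OAI built."""
        oai = initial_oai
        for _ in range(steps):
            blueprint = oai.ask("Design a more capable obedient AI, with a "
                                "proof that it remains a pure answerer.")
            # Human review of the blueprint happens here, outside the code.
            oai = build_from_blueprint(blueprint)
        # The final, most capable OAI is asked the original question.
        return oai.ask("Tell me how I can build a friendly AI in a manner "
                       "that I can prove and understand that it will be friendly.")

The essential point is that every step still goes through the same
narrow text channel and a human verification step.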

On a very basic level, our present-day computers are OAIs.

Comments?

Roland.
