Re: AI, just do what I tell you to

From: Stathis Papaioannou (stathisp@gmail.com)
Date: Wed Oct 31 2007 - 00:29:37 MDT


On 31/10/2007, Peter de Blanc <peter@spaceandgames.com> wrote:

> I think you're imagining an AI that implements some, but not all, of
> human morality. This is the real problem. If an increase in optimization
> power is hurting you, then you're optimizing for the wrong thing.

That's right, but it gets very tricky when the AI can see things more
clearly and further into the future than you can. The two courses of
action we are considering are for it to (a) estimate your CEV on the
matter in question and act accordingly, or (b) give you the relevant
information and then follow your instructions. To a certain extent,
(a) and (b) will coincide, since if, for example, you get really
annoyed that the AI never seems to do what you want or limits your
freedom, that will skew your CEV towards
making it more obedient. But I worry that the person I might become in
a thousand years' time could get more of a vote in my estimated CEV
than I get today, especially if that potential future me is modified
by the AI to be "better" than I am now. Just as I may regret the
actions of my past selves, I'm sure some of those past selves wouldn't
have liked the way I've turned out today, and would have been very
indignant if I had reached back from the future to control their
lives. That's why I'd rather just have the AI follow my instructions,
which may from time to time include instructions such as "do whatever
you think is best for my long-term future".
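
To make the contrast between (a) and (b) concrete, here's a toy
Python sketch. Every name in it is my own hypothetical placeholder,
not anyone's actual design:

class Person:
    def __init__(self, name):
        self.name = name

    def instruct(self, briefing):
        # Stand-in for me reading the briefing and deciding; the
        # instruction may itself delegate back to the AI.
        return "do X, given " + briefing

class AI:
    def estimate_cev(self, person, matter):
        # (a) Guess what the person's extrapolated volition
        # would want on this matter.
        return "what " + person.name + "-in-the-limit wants re " + matter

    def brief(self, person, matter):
        # (b, step 1) Hand over the relevant information.
        return "everything the AI foresees about " + matter

    def act(self, goal):
        return "acting on: " + goal

def policy_a(ai, person, matter):
    # (a) Estimate CEV and act accordingly.
    return ai.act(ai.estimate_cev(person, matter))

def policy_b(ai, person, matter):
    # (b) Inform first, then follow the actual instruction.
    return ai.act(person.instruct(ai.brief(person, matter)))

stathis = Person("Stathis")
print(policy_a(AI(), stathis, "a medical decision"))
print(policy_b(AI(), stathis, "a medical decision"))

My worry above is exactly that in policy_a the "in-the-limit"
estimate, not the present person, supplies the goal.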

-- 
Stathis Papaioannou

