From: Tim Freeman (email@example.com)
Date: Thu Nov 22 2007 - 11:28:27 MST
From: "Wei Dai" <firstname.lastname@example.org>
>1. Suppose a human says to the AI, "please get an apple for me." In your
>scheme, how does the AI know what he really wants the AI to do? (Buy or
>pick, which store, etc.)
My previous answer to this was no good; I misread the question as
"what will the AI think he wants the AI to do?"
The AI will come up with multiple utility functions that explain the
human's choices so far, and assign probabilities to them according to
the speed prior. Each of these utility functions will assign a
utility to the consequences of possible AI actions, such as the AI
picking the apple, the AI buying the apple from store X, the AI buying
the apple from store Y, the AI ignoring the request, the AI buying a
banana, etc.  Weighting each utility by its a-priori probability,
we'll get an expected utility for each action, and the AI will take
the action that has the greatest expected utility.
So if the guy usually buys apples from a particular store, and he
never steals them from the neighbor's tree even though that would be
more convenient, the AI will probably buy the apple from that same store.
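The weighting step above can be sketched in a few lines of Python.
Everything here is illustrative: the candidate utility functions and
their probabilities are made up by hand, standing in for hypotheses
inferred from the human's choices and weighted by the speed prior.

```python
# Candidate actions the AI is considering.
actions = ["pick_apple", "buy_store_x", "buy_store_y",
           "ignore", "buy_banana"]

# Candidate utility functions that explain the human's choices so far,
# each paired with its a-priori probability.  In the scheme described,
# the probabilities would come from the speed prior; here they are
# hard-coded for illustration.  The first (most probable) hypothesis
# reflects a preference for buying from store X.
hypotheses = [
    (0.6, {"pick_apple": 0.2, "buy_store_x": 1.0, "buy_store_y": 0.7,
           "ignore": 0.0, "buy_banana": 0.1}),
    (0.3, {"pick_apple": 0.9, "buy_store_x": 0.8, "buy_store_y": 0.8,
           "ignore": 0.0, "buy_banana": 0.1}),
    (0.1, {"pick_apple": 0.1, "buy_store_x": 0.5, "buy_store_y": 0.5,
           "ignore": 0.3, "buy_banana": 0.9}),
]

def expected_utility(action):
    # Sum each hypothesis's utility for the action, weighted by the
    # hypothesis's prior probability.
    return sum(prob * util[action] for prob, util in hypotheses)

# The AI takes the action with the greatest expected utility.
best = max(actions, key=expected_utility)
print(best)  # -> buy_store_x
```

With these made-up numbers, buying from store X wins (expected
utility 0.89) because the most probable hypothesis strongly favors it,
which mirrors the store example above.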
-- Tim Freeman http://www.fungible.com email@example.com
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:00 MDT