From: Matt Mahoney (firstname.lastname@example.org)
Date: Wed Apr 16 2008 - 21:02:37 MDT
--- Tim Freeman <email@example.com> wrote:
> From: Matt Mahoney <firstname.lastname@example.org>
> > Are we asking "what should we do?" or "what will we do?"?
>
> I'm interested in answers to the question "What do we want the AI to do?"

In which context? "What will we want the AI to do?" We will want it to grant
our wishes, to make us happy. So that is what we will build. But our evolved
utility function does not maximize fitness when we can have everything we
want. We will upload into fantasy worlds with magic genies. We will
reprogram our brains to experience a million permanent orgasms. We will go
"What should we want the AI to do?" If I pick up a piece of trash, hold it
for t seconds and throw it down, then for what value of t is it littering? If
I make an exact copy of you, wait t seconds, then kill one copy, then for what
value of t is it murder? If you were uploaded with made-up memories, would
you care? Is human extinction a bad thing? It is pointless to argue such
questions. Good and bad are only defined in the context of ethical beliefs.
Ethical beliefs evolve to increase the fitness of the group.

-- Matt Mahoney, email@example.com

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:02 MDT