From: Tim Freeman (firstname.lastname@example.org)
Date: Thu Apr 17 2008 - 08:07:20 MDT
Matt Mahoney <email@example.com> wrote:
> > I'm interested in answers to the question "What do we want the AI
> > to do?"
> In which context?
Right now, what do we want the AI to do?
>"What will we want the AI to do?"
No, I'm not interested in extrapolating what I will want in the
future. I'll deal with that in the future.
>We will want it to grant our wishes, to make us happy. So that is
>what we will build. But our evolved utility function does not
>maximize fitness when we can have everything we want. We will upload
>into fantasy worlds with magic genies. We will reprogram our brains
>to experience a million permanent orgasms. We will go extinct.
Do you want us to go extinct? If not, then the scenario you describe
isn't what you want the AI to do. If you do want us to go extinct,
then I hope you're in the minority.
You are making a valid point that we want the AI to have a planning
horizon that's large compared to the time required for addiction to
form, and perhaps also large compared to the time required for us to
go extinct. Otherwise we can't tell the AI we don't want addiction or
extinction and expect it to help.
>"What should we want the AI to do?"
No, I didn't ask that and am not interested in that. We want what we
want. So far as I can tell, talk about what we should want just leads
to hypocrisy. People say they should want things that sound nice but
are entirely unconnected with what they do want, and there's no way to
form a connection, so it's just hot air.
-- Tim Freeman http://www.fungible.com firstname.lastname@example.org
This archive was generated by hypermail 2.1.5 : Tue Jun 18 2013 - 04:01:01 MDT