Re: [sl4] to-do list for strong, nice AI

From: J. Andrew Rogers (andrew@ceruleansystems.com)
Date: Sat Oct 17 2009 - 00:13:33 MDT


On Oct 16, 2009, at 10:54 PM, Pavitra wrote:
> Matt Mahoney wrote:
>> To satisfy conflicts between people
>> (e.g. I want your money), AI has to know what everyone knows. Then it
>> could calculate what an ideal secrecy-free market would do and
>> allocate resources accordingly.
>
> Assuming an ideal secrecy-free market generates the best possible
> allocation of resources. Unless there's a relevant theorem of ethics
> I'm not aware of, that seems a nontrivial assumption.

What is your definition of "best possible allocation"? Matt is making
a pretty pedestrian decision-theoretic assertion about AGI. It would
outperform real markets in terms of distribution of resources, but
the allocation would still be market-like because resources would
still be scarce. It would be as though a smarter version of you were
making decisions for you.
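
To make that concrete, here is a toy sketch of what "market-like
allocation under perfect information" could mean (my illustration,
not anything from Matt's post; the agent names and log-utility
weights are made up). It greedily hands each unit of a scarce
resource to whichever agent values the next unit most, which with
concave utilities equalizes marginal utilities much the way a
competitive market with no secrecy would:

# Toy illustration: a fully informed allocator distributing a scarce
# resource by equalizing marginal utility across agents. With concave
# utilities, this greedy result matches what an ideal secrecy-free
# market would reach (first welfare theorem, informally).
import math

def marginal_utility(weight, holding):
    # Value of one more unit under log utility u(x) = weight * ln(1 + x).
    return weight * (math.log(holding + 2) - math.log(holding + 1))

def allocate(weights, units):
    holdings = {name: 0 for name in weights}
    for _ in range(units):
        # Give the next unit to whoever values it most at the margin.
        best = max(holdings,
                   key=lambda n: marginal_utility(weights[n], holdings[n]))
        holdings[best] += 1
    return holdings

# Hypothetical preference weights the AGI is assumed to know exactly.
print(allocate({"alice": 3.0, "bob": 1.0, "carol": 2.0}, 12))

The point being: even with perfect knowledge, the output is still an
allocation of scarce units, i.e. market-like. What changes is that it
is computed directly from everyone's actual preferences rather than
discovered through trading.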

Ethics has little to do with it. If the allocation is already optimal
given everyone's preferences, ethics could only enter by deviating
from that optimum. How much suboptimality should the AGI
intentionally insert into its decisions, and how does one objectively
differentiate nominally "good" suboptimality from nominally "bad"
suboptimality?


