From: Matt Mahoney (firstname.lastname@example.org)
Date: Sat Oct 17 2009 - 17:15:55 MDT
From: Pavitra <email@example.com>
Matt Mahoney wrote:
>> Pavitra wrote:
>>> A[ ] Develop a mathematically formal definition of Friendliness.
>> In order for AI to do what you want (as opposed to what you tell it),
>> it has to at least know what you know, and use that knowledge at
>> least as fast as your brain does.
> Doesn't this imply that the relevant data and algorithms are
> incompressible? In particular, it's possible that _lossy_ compression
> may be acceptable, provided edge cases are properly handled; I can
> predict the trajectory of a cannonball to an acceptable precision
> without knowing the position or trajectory of any of its individual atoms.
Yes. We will arrive at approximate solutions before exact ones, because the problem is hard.
>> To satisfy conflicts between people
>> (e.g. I want your money), AI has to know what everyone knows. Then it
>> could calculate what an ideal secrecy-free market would do and
>> allocate resources accordingly.
> Assuming an ideal secrecy-free market generates the best possible
> allocation of resources. Unless there's a relevant theorem of ethics I'm
> not aware of, that seems a nontrivial assumption.
I define "best" to mean the result that an ideal secrecy-free market would produce. Do you have a better definition, for some definition of "better definition"?
>> One human knows 10^9 bits (Landauer's estimate of human long term
>> memory). 10^10 humans know 10^17 to 10^18 bits, allowing for some
>> overlapping knowledge.
> Again, where are you obtaining your estimates of degree-of-compressibility?
The U.S. Dept. of Labor estimates that it costs on average $15K to replace an employee. This is about 4 months of U.S. per capita income, or roughly 0.5% of life expectancy. This means that, on average, nobody else knows more than 99.5% of what you need to know to do your job. It is reasonable to assume that as the economy grows and machines take over our more mundane tasks, jobs will become more specialized and the fraction of shared knowledge will decrease. It is already the case that higher paying jobs cost more to replace, e.g. 1-2 years.
Turnover cost is relevant because the primary function of AI will be to make humans more productive, at least initially. Our interest is in the cost of work-related knowledge.
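The arithmetic behind these estimates can be checked in a few lines. This is only a sketch of the reasoning above; the 0.5% figure is taken as a lower bound on each person's unique job knowledge, which is my reading of the turnover-cost argument, not an established fact:

```python
# Back-of-envelope check of the knowledge estimates above.
# Assumed inputs: Landauer's ~10^9 bits of long-term memory per person,
# ~10^10 people, and the ~0.5% turnover-cost figure read as a lower
# bound on the fraction of each person's knowledge that nobody shares.

bits_per_person = 1e9        # Landauer's estimate of long-term memory
population = 1e10            # ~10 billion people
unique_fraction = 0.005      # >= 0.5% of job knowledge is not shared

# If no two people shared any knowledge at all: 10^19 bits.
upper_bound = bits_per_person * population

# If only the unique fraction is distinct and the rest fully overlaps:
lower_bound = bits_per_person * population * unique_fraction

print(f"upper bound: {upper_bound:.0e} bits")   # 1e+19
print(f"lower bound: {lower_bound:.0e} bits")   # 5e+16, i.e. ~10^17
```

The lower bound lands at about 10^17 bits, consistent with the 10^17 to 10^18 range quoted above.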
> The Turing test is probably not suitable.
> Chatterbots have been found to improve their Turing Test performance
> significantly by committing deliberate errors of spelling and avoiding
> topics that require intelligent or coherent discourse.
I agree. Turing was aware of the problem in 1950 when he gave an example of a computer taking 30 seconds to give the wrong answer to an arithmetic problem. I proposed text compression as one alternative. http://mattmahoney.net/dc/rationale.html
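The idea behind the compression test can be illustrated with a toy example: a compressor embodies a predictive model of its input, so data with learnable structure shrinks far more than unpredictable data. This sketch uses stdlib zlib purely as a stand-in; the benchmark at the link above uses much stronger compressors:

```python
import os
import zlib

# Toy illustration of compression as a proxy for prediction: text with
# learnable structure compresses far below its original size, while
# random bytes are essentially incompressible. The compressed size
# measures how predictable the data is to the model inside the
# compressor. (zlib is a weak stand-in for real benchmark compressors.)

structured = b"the cat sat on the mat. " * 100   # highly predictable
random_ish = os.urandom(len(structured))         # unpredictable noise

def ratio(data: bytes) -> float:
    """Compressed size as a fraction of the original size."""
    return len(zlib.compress(data, 9)) / len(data)

print(f"structured text: {ratio(structured):.2f}")  # well under 1.0
print(f"random bytes:    {ratio(random_ish):.2f}")  # ~1.0
```

Better prediction means smaller output, so ranking systems by compressed size sidesteps the chatterbot tricks mentioned above: deliberate spelling errors only make the output larger.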
>>> C->D[ ] Develop an automated comparison test that returns the more
>>> intelligent of two given systems.
>> How? The test giver has to know more than the test taker.
> Again, this seems more a criticism of C than of D.
It depends on what you mean by "intelligence". A more general definition might be making more accurate predictions, or making them faster. But it raises the question of how the evaluator can know the correct answers unless it is more intelligent than the evaluated. If your goal is to predict human behavior (a prerequisite for friendliness), then humans have to do the testing.
> The whole point of Singularity-level AGI is that it's a nonhuman
> intelligence. By hypothesis, "humanity" ⊉ "intelligence".
Nonhuman intelligence = human extinction.
I don't mean this in a good or bad way, as "good" and "bad" are relative to whatever populates the world after humans are gone. It might be your goal to have these agents preserve human memories, but it might not be *their* goal, and it's their goals that count. They might rationally conclude that if your memories were somebody else's, you wouldn't notice.
You could argue that with somebody else's memories you wouldn't be "you". But what are you arguing? If a machine simulates you well enough that nobody can tell the difference, is it really you? Would you kill yourself and expect your soul to transfer to the machine?
> The goal, then, would be to ensure that the Singularity will be Friendly.
Define "Friendly" in 10^17 bits or less.
-- Matt Mahoney, firstname.lastname@example.org
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:05 MDT