Re: [sl4] Potential CEV Problem

From: Matt Mahoney (matmahoney@yahoo.com)
Date: Thu Oct 23 2008 - 09:52:21 MDT


--- On Thu, 10/23/08, Edward Miller <progressive_1987@yahoo.com> wrote:

> I am assuming that to successfully determine the
> extrapolated volition of the human race, it would take an
> enormous extra amount of computational power. Before the CEV
> is determined, I am assuming that the AGI would be agnostic
> on the matter. Thus, its first task would be to acquire as
> much computing power as possible and then CEV might turn out
> to be one of those not-so-great ideas.

In my most recent AGI proposal at http://www.mattmahoney.net/agi2.html I estimate that the knowledge needed to model all human brains (from which a model of friendliness could be derived) amounts to 10^17 to 10^18 bits, and that acquiring this knowledge would currently cost about US $1 quadrillion. The estimate makes optimistic assumptions: hardware is free, the AI problems of language and vision are solved, and we are willing to live in a world where everything we say and do is public knowledge, instantly accessible to everyone, because pervasive surveillance is the cheapest way to acquire that knowledge.
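A back-of-the-envelope check on those numbers (in Python; this is just the ratio implied by the figures above, not an independent estimate):

    # Implied acquisition cost per bit of human knowledge,
    # using the estimates above.
    knowledge_bits = (1e17, 1e18)   # low and high estimates
    total_cost_usd = 1e15           # about US $1 quadrillion

    for bits in knowledge_bits:
        print(f"{bits:.0e} bits -> ${total_cost_usd / bits} per bit")
    # 1e+17 bits -> $0.01 per bit
    # 1e+18 bits -> $0.001 per bit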

This is not CEV. It is a market-driven consensus of what we want right now. I believe it is what we will actually build, because people won't like the idea of machines telling them what's best for them.

I have outlined some long-term threats in section 5 of my proposal. I believe the biggest threat to humanity is uploading. In a world of extensive surveillance and the AI technologies that make it useful, uploading requires no additional technology such as brain scanning: anyone could already make a plausible simulation of you, good enough to fool your friends and relatives. The problem comes when people simulate themselves and transfer legal rights to their simulations after they die. Our standard of living improves as long as economic growth (currently 5% per year) exceeds population growth (currently 1.5%). But uploads don't have the cognitive limitations of human brains. They can be smarter, reproduce rapidly, evolve, and compete with humans for computing resources. Humans are at a distinct disadvantage.
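The growth arithmetic can be made concrete with a toy model (a sketch under my own assumptions: the 5% and 1.5% rates are from the text above, while the upload replication rate and start year are purely illustrative):

    # Toy model: per-capita share of the economy rises while economic
    # growth outpaces population growth, then collapses once cheaply
    # copied uploads join the population. All rates are annual.
    economy = 1.0
    population = 1.0
    ECON_GROWTH = 0.05     # current economic growth (text above)
    HUMAN_GROWTH = 0.015   # current human population growth
    UPLOAD_GROWTH = 0.50   # hypothetical upload replication rate

    for year in range(1, 31):
        economy *= 1 + ECON_GROWTH
        # Suppose uploads appear in year 10 and copy themselves freely.
        rate = UPLOAD_GROWTH if year >= 10 else HUMAN_GROWTH
        population *= 1 + rate
        if year % 10 == 0:
            print(f"year {year}: per-capita share = {economy/population:.3f}")

The exact numbers don't matter; any replication rate above the growth rate of the economy drives the per-capita share toward zero.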

CEV says (something like) that AGI should grant our volition as if we could think faster, knew more, and were more the people we wanted to be. But it doesn't define "we". Do "we" become the programs that initially simulate us? Is this extinction or not?

-- Matt Mahoney, matmahoney@yahoo.com
