Re: [sl4] What is the probability of a positive singularity?

From: Matt Mahoney (matmahoney@yahoo.com)
Date: Wed Jul 23 2008 - 19:40:38 MDT


-- Matt Mahoney, matmahoney@yahoo.com

--- On Wed, 7/23/08, Nick Tarleton <nickptar@gmail.com> wrote:

> On Wed, Jul 23, 2008 at 5:04 PM, Matt Mahoney <matmahoney@yahoo.com> wrote:
>
> > Another possible scenario is that once we have the technology to
> > reprogram our brains (either in-place or uploaded), a fraction of
> > humans won't go along. The brain is programmed to find the state x
> > that maximizes utility U(x). In this state, any perception or thought
> > will be unpleasant because it would result in a different mental state.
> >
>
> To say the brain is "programmed" to do anything really stretches the
> metaphor; and more importantly, the fact that this is intuitively
> undesirable suggests that the human utility function, to the extent such
> a thing exists, is over histories rather than timeslices. (At
> least the 'utility function' of the subself writing this -
> other subselves might have preferences over timeslices.)

You're right, utility = accumulated reward. Substitute happiness = dU/dt and my argument is the same. My point is that having a magic genie that will grant all your wishes (1) won't make you any happier and (2) will result in a degenerate mental state. Evolution will favor those who don't succumb to the temptation, provided that not everyone does.
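
To make that concrete, here is a minimal sketch, assuming a toy one-dimensional utility function and an agent that can set its own mental state directly (the magic genie case):

    import numpy as np

    def U(x):                          # toy utility function, peak at x* = 1
        return -(x - 1.0) ** 2

    states = np.linspace(-3, 3, 601)   # hypothetical 1-D space of mental states
    x = -3.0                           # starting state
    prev_u = U(x)
    for t in range(5):
        x = states[np.argmax(U(states))]   # magic genie: jump straight to argmax U
        happiness = U(x) - prev_u          # discrete stand-in for dU/dt
        prev_u = U(x)
        print(t, float(x), float(U(x)), float(happiness))

The first step prints a one-time burst of reward (the jump itself); every step after that prints happiness = 0. Maximum utility and zero happiness coexist, which is the degenerate state I have in mind.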

> > The fraction that realizes utopia = death, who realize that evolution
> > is smarter than you are, will be the ones that pass on their genes.
> > There is a good reason that humans fear death and then die, but not
> > all of us realize it (including SIAI, it seems).
> >
>
> ?

I disagree with SIAI that we should be "working toward" a singularity, with their tempting but false utopian view of uploading and immortality. That view confuses what is best in our ethical system with what is best for the species (e.g. http://www.intelligence.org/blog/2007/06/16/transhumanism-as-simplified-humanism/).

Agents that can reprogram themselves cannot be the dominant intelligence, for two reasons. I just gave one. The other is the lack of non-evolutionary models of recursive self-improvement (RSI). It would be an important advance if we could discover such a model, for example a proof that P != NP, or a provably secure cryptosystem with short keys (so that agents could test their offspring). But I think such a model is unlikely, and we should prepare for that outcome.
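
For concreteness, here is a minimal sketch of the verification asymmetry that the cryptosystem example relies on, assuming (not proving) that factoring is hard; the helper names are only illustrative:

    from sympy import randprime

    def make_challenge(bits=64):
        # Parent picks two secret primes; their product is the test it poses.
        p = randprime(2 ** (bits - 1), 2 ** bits)
        q = randprime(2 ** (bits - 1), 2 ** bits)
        return p * q                 # assumed hard to factor, cheap to construct

    def verify(n, p, q):
        # Checking a claimed solution costs one multiplication, no matter how
        # much work it took the offspring to find p and q.
        return p > 1 and q > 1 and p * q == n

    n = make_challenge()
    # An offspring that returns p, q with verify(n, p, q) == True has shown an
    # ability the parent can check but cannot reproduce.

The point is only the asymmetry between solving and checking. A provably secure cryptosystem with short keys would turn that asymmetry from an assumption into a theorem, which is why it would matter for testing offspring.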


