From: Matt Mahoney (firstname.lastname@example.org)
Date: Sun Mar 16 2008 - 19:01:49 MDT
--- Mark Waser <email@example.com> wrote:
> > There are a number of ways in which humans could become extinct without
> > our goals being stomped on. Human goals are appropriate for survival in
> > a primitive world, not a world where we can have everything we want. If
> > you want 1000 permanent orgasms or a simulated fantasy world with a
> > magic genie, then the nanobots go into your brain and your wishes are
> > granted. What difference does it make to you if your brain is
> > re-implemented more efficiently as gray goo and your body and world are
> > simulated? You're not going to know. Does this count as extinction?
> If the person being "re-implemented" believes so, then yes. In that
> case, you are clearly interfering with their goal of not becoming
> extinct. You can't be absolutely sure that your "re-implementation"
> truly is exactly the same and that the subject wouldn't know or realize.
> Doing this against someone's will is evil.
Getting the re-implementation correct is an intelligence problem. If you
still have any doubts, the nanobots will go into your brain and remove them.
You will believe whatever you are programmed to believe. If you are opposed
to being reprogrammed, the nanobots will move some more neurons around to
change your mind about that too. It can't be evil if everyone is in favor of
it.
Look, the AI is smarter than you and knows what's best for you. If you
persist in your stubbornness and continue to waste the AI's resources, it will
simply declare you UnFriendly and do it anyway.
> > But we don't really have a choice over whether there is competition
> > between groups or not. My bigger concern is the instability of
> > evolution, like a plague or population explosion that drastically
> > changes the environment and reduces the diversity of life. Some of the
> > proposals for controlling the outcome of a singularity depend on a
> > controlled catastrophe by setting the initial dynamic in the right
> > direction. This is risky because catastrophes are extremely sensitive
> > to initial conditions. But of course we are in the midst of one now, a
> > mass extinction larger than any other in the last 3.5 billion years.
> > We lack the computing power to model it, and there is no way to acquire
> > it in time because the process itself is needed to produce it. So it
> > always stays a step ahead. Sorry for the bad news.
> Um, I'm missing the bad news (or, at least, how it relates to my proposal).
> Could you please clarify?
Yes: our long discussion about whether it is better to have one group of
cooperating agents or competition between groups is moot. Whether there is
one group or many depends on the outcome of a catastrophe. We don't have a
choice in the matter.
-- Matt Mahoney, firstname.lastname@example.org
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:02 MDT