Re: Arbitrarily decide who benefits (was Re: Bounded population)

From: Matt Mahoney (matmahoney@yahoo.com)
Date: Thu Apr 17 2008 - 20:39:06 MDT


--- Lucas Sheehan <lucassheehan@gmail.com> wrote:
> On Thu, Apr 17, 2008 at 12:03 PM, Matt Mahoney <matmahoney@yahoo.com> wrote:

> > I think the majority do not want human extinction (even though you
> > would not know the difference. Extinction is not death, it is the
> > lack of birth). But if enough people believe that AI will result in
> > human extinction (as I do), then it is sure to be outlawed.
>
> Do you then think we should stop its pursuit? Is your goal to
> hinder/avoid/outlaw AI?

No. I think AI will result in humans being replaced by something "better"
or more intelligent (or perhaps coexisting with, but unaware of, the AI). I
mentioned it because most people do not want to risk human extinction. So
far, bans on AI exist only in fiction, e.g. Herbert's Dune: "thou shalt not
make a machine in the likeness of a human mind". It is possible that, as
more people become aware of the singularity, many will wish to avoid it. We
have not solved the friendliness problem, and many possible bad outcomes
have been discussed. A singularity is inherently unpredictable, which is a
problem for AI research.

My position is strictly neutral. I am interested in forecasting where AI
will lead us, which means understanding not just technology and the dynamics
and limits of computation, but also how human motivation and ethics will
drive the design. I won't say that any particular outcome is good or bad,
because that would just be a statement about my own beliefs and ethics, which
are irrelevant to the outcome.

-- Matt Mahoney, matmahoney@yahoo.com

