From: Lucas Sheehan (firstname.lastname@example.org)
Date: Fri Apr 18 2008 - 12:33:00 MDT
On Thu, Apr 17, 2008 at 7:39 PM, Matt Mahoney <email@example.com> wrote:
> > Do you then think we should stop its pursuit? Is your goal to
> > hinder/avoid/outlaw AI?
> No. I think AI will result in humans being replaced with something "better"
> or more intelligent (or perhaps coexisting but unaware of the AI). I
> mentioned it because most people do not want to risk human extinction. So
> far, bans on AI exist only in fiction, e.g. Herbert's Dune, "thou shalt not
> make a machine in the likeness of a human mind". There is a possibility that
> as more people become aware of the singularity, many would wish to avoid
> it. We have not solved the friendliness problem, and many possible bad
> outcomes have been discussed. A singularity is inherently unpredictable.
> This is a problem for AI research.
> My position is strictly neutral. I am interested in forecasting where AI will
> lead us, which means understanding not just technology and the dynamics and
> limits of computation, but also how human motivation and ethics will drive the
> design. I won't say that any particular outcome is good or bad, because that
> is just a statement about my own beliefs and ethics, which are irrelevant to
> the outcome.
I understand, and I lean your way as well, though part of me of course
cringes at losing or replacing the "human" I am. I'm pretty fond of "me,"
but even more excited by a better "us." Sorry if I was abrupt; I
misunderstood your position.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:02 MDT