Re: SIAI seeking seed AI programmer candidates

From: Eliezer Yudkowsky (sentience@pobox.com)
Date: Thu Jun 03 2004 - 05:02:30 MDT


Giu1i0 Pri5c0 wrote:

> Michael, now you sort of scare me. The lesson I draw from History (of
> course others may draw other lessons) is not to trust things done to
> "save the world". This has some connotations of absolute truth that
> may degenerate into destructive self-righteous behaviour, or worse, in
> the hands of the wrong people. Remember the Inquisition, and be sure
> that they burned people for the very purpose of saving their sinful
> souls and the world. I know you guys are not the wrong people, but you
> never know who comes next and what their real motives can be.
> Tell me that I should support the FAI project because it is
> intellectually interesting, because some deserving young people can
> make some money with it, because the FAI would solve this specific
> problem, or in general because the outcome can improve some things for
> some people. Saving the world is way too much.
> G.

Isn't this exactly the same mistake as Michael Wilson's, but in the
opposite direction? Let reason tell you whether the world is in danger
from some specific negative outcome, or whether FAI can solve the
problem. It is a question of simple fact. If the world is in danger,
then let that be our belief; if the world is not in danger, then we
should not believe the world is in danger, regardless of any other ill
effects that belief might have. Answering the question of simple fact
is necessary and sufficient for deciding what to believe.

-- 
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence

