Re: Threats (was Theoretical question for the list: publicity?)

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sun Apr 08 2001 - 09:50:57 MDT


Arctic Fox wrote:
>
> Perhaps Brian and Eliezer could give us some background on this. Have
> there been any threats or communications from extremist or religious
> groups etc. regarding the Singularity Institute so far?

I receive an angry, badly spelled letter every month or so regarding one
of my personal sites; so far, SIAI hasn't even had that level of problem.

Of course, this doesn't mean there isn't a threat.
The wise falcon hides his talons before he strikes.

> If the research (i.e. AI coding) had to go underground, how difficult
> would that be to do? I presume there won't be too much physical
> equipment involved (unlike, say, a cloning laboratory), so would it be
> possible to relocate, and for the group to split up and share data
> securely over the internet?

See above. <grin>.

If you were to ask me this as an entirely theoretical question involving
some *other* AI project, then I'd say that it would be seriously difficult
to take it underground in such a way that the world at large knew nothing
about it, and you'd probably have to wait for at least another few
iterations of Moore's Law before you could work on AI with that level of
diminished resources. A cloning laboratory can relocate to the Third
World but people will probably still know where it is; running a seed AI
project over FreeNet would be possible but very, very difficult until
Moore's Law caught up. So "relinquishment" wouldn't work forever, but it
wouldn't be a trivial obstacle either.

-- -- -- -- --
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence


