From: Eliezer S. Yudkowsky (firstname.lastname@example.org)
Date: Fri Apr 06 2001 - 01:24:26 MDT
James Higgins wrote:

> As long as the Friendly AI people want to keep making it sound like
> everything is going to go like clockwork and be perfect, I'm going to
> continue to point out the opposite viewpoint.

Now, in point of fact, I think there's a quite significant probability
that everything will "go like clockwork and be perfect" - that the whole
issue will be resolved cleanly, safely, and without once ever coming close
to the boundary of the first set of safety margins.

That said, any damn fool can build a Friendly AI if nothing goes wrong.
Which is why the "Friendly AI" paper is currently more than 600K long.

> We are playing with incredibly dangerous technology here. Not once have I
> seen the powers that be on this list stop and ask "should we do this?"

I have to echo Brian on this. The point of doubts is that they lead to
questioning, and thence to ANSWERS. And we didn't start doing this
yesterday. All known doubts have been taken into account and resolved
into our current course of action, so we are unlikely to engage in
spontaneous self-questioning unless there's a new fact, experience, or
realization to act as a trigger factor. I'm sorry if this makes us look
overconfident, but what are we supposed to do? Pretend to engage in
spontaneous self-questioning for the PR benefit?
-- -- -- -- --
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence