Re: When does it pay to play (lottery)?

From: Brian Atkins (brian@posthuman.com)
Date: Sun Jan 23 2005 - 11:34:51 MST


David Clark wrote:
> The chance that *any* implementation of AI will
> *take off* in the near future is absolutely zero. We haven't the foggiest
> clue exactly what it will take to make a human level AI, let alone an AI
> capable of doing serious harm to the human race.

Your two statements of certainty are in direct conflict with each other,
so I don't see how you can hold both at the same time.

If, on the one hand, you claim the chance is absolutely zero, then you
must know with 100% certainty, and in complete detail, what "take off"
requires. Conversely, if you don't have that perfect knowledge, how can
you claim a zero chance?
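
To put the point in probability terms (my framing, not David's): by
the law of total probability,

    P(take off) = sum_i P(take off | H_i) * P(H_i)

where the H_i are the rival hypotheses about what a human-level AI
actually requires. That sum comes out to exactly zero only if every
hypothesis you give any weight at all assigns zero probability, which
is just another way of saying you would need complete knowledge of
all of them.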

Of course the answer is that no one should be claiming a zero chance.
The other answer is that we need to keep gaining knowledge. But because
of the unknowns, we should gain that knowledge in as safe a manner as
possible.

For some reason this reminds me of the worry many of the physicists
had at the time of the a-bomb: whether it would accidentally ignite
the entire atmosphere. They didn't know at first what would happen,
and most of them would not have told you that the possibility was
"absolutely zero". Even after they ran some numbers on paper, they
still weren't absolutely certain, but by that point the odds were
sufficiently in their favor to proceed. Yet even when it was finally
tested, they were still heard betting with each other on the outcome.
There was never absolute certainty.

-- 
Brian Atkins
Singularity Institute for Artificial Intelligence
http://www.intelligence.org/

