From: Brian Atkins (firstname.lastname@example.org)
Date: Mon Jan 24 2005 - 08:19:19 MST
David Clark wrote:
>>If on one hand you claim there is an absolutely zero chance, then you
>>must know in extremely amazing detail with 100% certainty what it takes
>>for "take off". Or conversely, if you don't know with perfect knowledge
>>what it takes for "take off", then how can you claim a zero chance?
> If a person *absolutely* doesn't know how to do something, why wouldn't that
> give them zero chance of accomplishing the goal? Are you saying that I
> might just luck out and stumble upon the answer and therefore can't be 100%
> certain of failure? Your logic escapes me. Can you imagine accidentally
> making a car? A car is far more likely to be created by accident than an AI
> would be. I might agree on the possibility of an accidental take off if
> *any* AI project was even close to a human level, but sadly that is
> definitely not the case.
It's interesting that you are using the exact same flawed analogy class
as Ben did in his response. The answer is the same as I gave him: your
analogy is incorrect because we know with great detail using existing
physics whether a given car design will actually result in a working car
or not. With a given AGI design we do not know at all (you may think you
know, but if I asked you to prove it you would be unable to provide
anything near the level of concrete proof of design-worthiness that a
car designer could provide using a physics model). Therefore your
statement that a car is more likely to be created by "accident" by
knowledgeable designers is incorrect in my opinion.
The whole point here is that you have no way of telling with any
absolute certainty whether a project that believes its design will
lead to real AGI is close to taking off, or at what exact point it
may do so. This is why I see a large conflict in your original
statements. For my analysis to be incorrect you would need to have as
much concrete knowledge about how AGIs work as a physicist has about
physics - and no one has that much knowledge yet.
>>Of course the answer is that no one should be claiming a zero chance.
>>And the other answer is that we need to continue gaining more knowledge.
>>But because of the unknowns, gaining that knowledge should be done in as
>>safe a manner as possible.
> Safety is something you will *absolutely* get if you never write any code.
> I couldn't agree more with the need for more knowledge. I am only
> disagreeing with the method for getting it. The only way I have ever found
> to really know a software algorithm is to make a successful program of it.
> I have been surprised many times by thinking a technique would work or be
> fast enough, only to find out just how wrong I was.
And perhaps you'll be surprised someday when your shiny new AGI
prototype takes off. All I'm attempting to point out is that you should
realize you are working with much greater uncertainty than your original
statements implied you recognized.
>>For some reason this reminds me of the worries at the time of the A-bomb
>>that many of the physicists had about whether it would accidentally
>>ignite the entire atmosphere. They didn't know at first what would
>>happen, and most of them would not have told you that the possibility
>>was "absolutely zero". In fact even after they ran some numbers on
>>paper, they still weren't absolutely certain. But they did by that point
>>have enough odds in their favor to proceed. But even at the end when it
>>was tested they were still heard betting with each other on the outcome.
>>There was never absolute certainty.
> How is it any comparison? The people working on the A-bomb didn't just send
> the first bomb over Hiroshima without any successful tests. They had many
> years of very expensive and extensive tests that led up to the ignition of
> the first A-bomb. How does that compare with the effort at SIAI? I am not
> putting down the effort made so far by the SIAI but please don't make
> comparisons with the development of the A-bomb. Your group isn't close to
> that development group by many orders of magnitude. Who knows what you
> might be if the government of the USA spent the same amount of money through
> the SIAI as was spent in developing the A-bomb?
The point I was attempting to make was not regarding SIAI, although you
can be sure our test methods will be at least as rigorous as the rather
lackadaisical ones used in the A-bomb project. The point was that even
after all their testing and design they realized that they still weren't
absolutely sure what would happen. So again I was attempting to show
that statements of absolute certainty like you made are not warranted.
-- Brian Atkins Singularity Institute for Artificial Intelligence http://www.intelligence.org/
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:50 MDT