Re: When does it pay to play (lottery)?

From: David Clark (clarkd@rccconsulting.com)
Date: Mon Jan 24 2005 - 10:59:13 MST


I will pursue this line of thinking only in this final email on the
topic.

----- Original Message -----
From: "Brian Atkins" <brian@posthuman.com>
To: <sl4@sl4.org>
Sent: Monday, January 24, 2005 8:19 AM
Subject: Re: When does it pay to play (lottery)?

> It's interesting that you are using the exact same flawed analogy class
> as Ben did in his response. The answer is the same as I gave him: your
> analogy is incorrect because we know with great detail using existing
> physics whether a given car design will actually result in a working car
> or not. With a given AGI design we do not know at all (you may think you
> know, but if I ask you to prove it you will be unable to provide me
> anything near the level of concrete proof of design-worthiness that a
> car designer could provide using a physics model). Therefore your
> statement that a car is more likely to be created by "accident" by
> knowledgeable designers is incorrect in my opinion.

I have had literally hundreds of end users of my programs swear to me that
some data field was changed by a program all by itself. I have done global
searches and can prove, at least to myself, that the program had absolutely no
way to change that particular field. I have explained that software doesn't
just work one way 10,000 times and then, for no reason at all, do
something entirely different when it has no code with which to do
it. I am not talking about the occasional hardware malfunction. I am not
talking about the occasional garbage that might be dumped in any random
place on a hard drive.
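
To make "global search" concrete, here is a minimal sketch of the kind of
check I mean. The field name, source directory, and write patterns are
hypothetical stand-ins, not taken from any actual project of mine:

import re
from pathlib import Path

# Hypothetical field the user claims "changed by itself".
FIELD = "credit_limit"

# Any statement that could write to the field: a direct assignment,
# an SQL UPDATE, or a setter call.
WRITE_PATTERNS = [
    re.compile(rf"\b{FIELD}\s*=(?!=)", re.IGNORECASE),       # assignment
    re.compile(rf"\bUPDATE\b.*\b{FIELD}\b", re.IGNORECASE),  # SQL update
    re.compile(rf"\bset_{FIELD}\s*\(", re.IGNORECASE),       # setter
]

hits = []
for path in Path("src").rglob("*.py"):   # walk the whole source tree
    for lineno, line in enumerate(
            path.read_text(errors="ignore").splitlines(), 1):
        if any(p.search(line) for p in WRITE_PATTERNS):
            hits.append(f"{path}:{lineno}: {line.strip()}")

print("\n".join(hits) if hits
      else f"No code anywhere writes to {FIELD!r}.")

If that search comes back empty, the program has no path by which to modify
the field, no matter what the user remembers seeing.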

If software could just do whatever it wanted, then absolutely none of the
projects I have built could ever have worked. There are times when problems
occur because of a memory leak, concurrency issues, etc.,
but those are not what we are talking about. If I had designed a program
to learn on its own, develop its own goals, be creative, spread to other
machines, and so on, then your argument might have some weight, but an AGI
wannabe program without these attributes has *no chance* of just gaining
these abilities by itself. Software just doesn't work like that. The
probability of a supremely complicated system appearing *by mistake* falls
as its complexity rises: the more complex, the less likely it could
just happen. If you are talking about a *very* complicated system, the
probability approximates zero.
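
To put a toy number on that, here is a rough back-of-the-envelope sketch.
The model is mine and deliberately crude: treat the unintended capability
as requiring some specific set of n bits of program logic to come out
exactly right by accident, each bit independently being a coin flip:

import math

# If an accidental capability needs n specific bits of logic to land
# exactly right, and each bit is a coin flip, the chance is 2**-n.
for n_bits in (8, 64, 1024, 1_000_000):
    log10_p = -n_bits * math.log10(2)   # log10 of 2**-n
    print(f"{n_bits:>9} bits of accidental structure: "
          f"probability ~ 10^{log10_p:.0f}")

Even a few hundred bits of "accidental" structure already puts the odds
below one in 10^300; on the scale of the faculties an AGI wannabe program
would be missing, the chance is, for practical purposes, zero.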

> The whole point here is that you have no way of telling with any
> absolute certainty whether or not a project that thinks its design will
> lead to real AGI is close or not to taking off, or what exact point it
> may do so. This is why I see a large conflict in your original
> statements. For my analysis to be incorrect you would need to have as
> much concrete knowledge about how AGIs work as a physicist has about
> physics - and no one has that much knowledge yet.

I need the *concrete* knowledge to succeed, not to fail. I have *never*
created a working program by accident. It is only with perfect
knowledge (in that small domain) that I have ever gotten any programs to work
correctly. I have *never* been pleasantly surprised by the resulting
program; quite the opposite.

It might be that you are thinking of people who believe that intelligence is
emergent from some complex froth of computation and that this emergent
intelligence (which was never programmed for directly) could get away from
its designer. I don't believe intelligence is emergent. I can't imagine
making a program based on that philosophy. I imagine an AI that is
programmed in the manner in which conventional computers are programmed, and
my arguments are directed at that view.

If emergent intelligence is what your arguments are directed at, I would
point out that even if human intelligence does emerge from
our complex neuron froth, babies take at least 20-odd years to show even
normal human-level intelligence. Regardless of how complex a human brain
is, without contact with other humans over a huge amount of time, no
complex intelligence has ever been detected.

> And perhaps you'll be surprised someday when your shiny new AGI
> prototype takes off. All I'm attempting to point out is that you should
> realize you are working with much greater uncertainty than your original
> statements implied you recognized.

Uncertainty in the design causes bugs and failures, not smarter-than-human
superintelligences with the capability of annihilating the human race.

-- David Clark


