From: Anthony Berglas (email@example.com)
Date: Wed Jun 25 2008 - 03:16:35 MDT
Thanks for your feedback, responses below...
At 03:05 PM 25/06/2008, Bryan Bishop wrote:
> > prove complex mathematical theorems. It is difficult to predict
> > future progress, but if a computer ever became about as good at
> > programming computers as people are, then it could program a copy of
> > itself.
>No, it could just copy bits and bytes, nothing about programming is
>needed for copying from one machine to another.
The point about a copy of itself is just that working on its own actively
running program might be like trying to do brain surgery on
oneself. But the program can simply be copied, much as we use an
operating system to build a new version of itself.
The ability to make new physical hardware might eventually limit
exponential growth of intelligence, but at a point so far in the
distance that I do not think that it is relevant.
> > Hardware has certainly become much, much faster, but software has
> > just become much, much slower to compensate. We think we understand
> > computers and the sort of things they can do.
>Are you a programmer and have you any idea ?
Yes. But in any case, you know that modern machines do not feel any
faster in everyday use, because the software has slowed to match the hardware.
>What we know,
>as of now, is that the brain is doing something awesome, and that we
>want to figure out how to do it in other areas too.
Certainly the brain is cleverly "designed". But it does not contain
a large amount of arbitrary complexity: it is specified by only
about 750 megabytes of DNA.
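The figure is easy to check back-of-envelope. Assuming roughly 3 billion base pairs in the human genome and 2 bits per base (A, C, G, T), the raw, uncompressed information content comes out at about 750 megabytes:

```python
# Rough information content of the human genome.
# Assumptions: ~3e9 base pairs, 2 bits per base (4 possible bases).
base_pairs = 3e9
bits = base_pairs * 2            # 2 bits encode one of A, C, G, T
megabytes = bits / 8 / 1e6       # bits -> bytes -> megabytes
print(round(megabytes))          # 750
```

And only a fraction of that is thought to be functional, which makes the point even stronger.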
>It doesn't matter if the ai runs for a few billion years or
>for a few seconds.
For an AI to compete with man, it would probably need to run at
least as fast as a man thinks.
>Actually, you might be interested in knowing that it has been shown that
>in the human brain there are only maybe up to 100 neurons in a path from
>input to output,
Interesting. But there can only be about 100 neurons in a path
because neurons only fire a few hundred times per second. A longer
chain would simply be too slow.
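The arithmetic behind that claim is simple. Assuming (illustratively) a firing rate of around 200 Hz and a typical human response time of about half a second:

```python
# Why a signal path through the brain can only be ~100 neurons deep.
# Assumed numbers: firing at ~200 Hz (5 ms per step in the chain),
# and a typical human response time of ~0.5 s.
firing_rate_hz = 200
step_time_s = 1 / firing_rate_hz        # ~5 ms per neuron in the chain
response_time_s = 0.5
max_depth = response_time_s / step_time_s
print(max_depth)                         # 100.0 sequential neurons
```

So the 100-neuron depth falls directly out of the firing rate, as argued above.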
> > One major driver will be the need for practical intelligence as
> > robots leave the factory and start to interact with the real world.
>No, that's more a 'driver' for people to come to terms with the problems
>and realize that they might be interested in working on them, it's
>nothing about the actuality of solving them
Exactly. And then, by trying, they will probably have some success.
> > In particular cars can already drive themselves over rough desert
> > tracks and down freeways.
>You're talking about physical manufacturing and mechanics, tasks that
>machines can already do. Intelligence isn't really needed for those
Actually, the machine needs to be able to see and sense the environment,
determine a route through it, and react to changes. That is certainly not
full AI, but it is much, much smarter than the program that tots up your
bank balance.
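The loop such a vehicle runs can be sketched in a toy form. Everything here is invented for illustration (a one-dimensional road, a single obstacle): the point is only the structure of sense, plan, react that separates it from a batch accounting program.

```python
# Toy sense/plan/act loop, the shape of what a self-driving vehicle does.
# All names and numbers are illustrative; the "road" is one-dimensional.

def sense(world):
    """Return the obstacle positions the sensors currently report."""
    return world["obstacles"]

def plan(position, goal, obstacles):
    """Pick the next step toward the goal, refusing to hit an obstacle."""
    if position == goal:
        return 0
    step = 1 if goal > position else -1
    if position + step in obstacles:
        step = 0  # wait rather than collide
    return step

def act(position, step):
    return position + step

world = {"obstacles": {1}}   # something blocks the road just ahead
pos, goal = 0, 5
for t in range(10):
    pos = act(pos, plan(pos, goal, sense(world)))
    if t == 0:
        world["obstacles"] = set()  # the obstacle clears; sensing catches it
print(pos)  # 5: waited one tick, then drove to the goal
```

The real systems do this with lidar, vision, and probabilistic planners, of course, but even this skeleton is a different kind of program from one that sums a ledger.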
>Who cares if you are out of work? The machines are taking care
>of the necessities of life anyway, yes? Then what's the big deal?
People pay me because I can do valued work. If they do not pay me, I
may starve. Socialism is all very well, but I would prefer not to bet on it.
> > Philosophers have asked whether an artificial intelligence has real
> > intelligence or is just simulating intelligence. This is actually a
> > non-question, because those that ask it cannot define what measurable
> > property "real" intelligence has that simulated intelligence does not
> > have. It will be "real" enough if it dominates the world and
> > destroys humanity.
>No, that "real enough" is re: any existential threat. That's completely
>different from the concept of intelligence. Whether or not it is
>intelligent is the issue ... not whether or not the result of death
>is ... sigh. There's so many complex strands of bullshit running
>through that paragraph of yours. It's not your fault, but I'm not
>prepared to go through it entirely. Let me try, but I can't guarantee
>anything here. Look: you are proposing that ai could end up with
>domination and death, and then you proceed to say that if the result is
>ending with domination or death and so on that then it was "real", even
>though we're talking about *intelligence*, not about your inability to
>plan for existential threats.
I am just addressing the issue of "Real AI" that is sometimes
raised. The second sentence is the response. The third sentence in
my paragraph is a bit of a joke, not meant to be taken as a literal
part of the argument.
Dr Anthony Berglas, firstname.lastname@example.org Mobile: +61 4 4838 8874
Just because it is possible to push twigs along the ground with one's nose
does not necessarily mean that is the best way to collect firewood.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:03 MDT