From: James Higgins (firstname.lastname@example.org)
Date: Tue Jul 31 2001 - 17:34:22 MDT
At 05:00 PM 7/31/2001 -0500, you wrote:
>James Higgins wrote:
>But the relatively small *degree* of increased intelligence of Ben over
>a normal human could give rise to an extraordinary difference in *kind*
>of thinking.
>I think that this is what Gordon meant. If you have a .5 human
>intelligence, and you just speed it up by a factor of 2, then you've now
>given your idiot the ability to think up more stupid thoughts more
>quickly. This doesn't mean that it will make any real progress, since it
>will still not be able to correctly judge which ideas are right.
Very true. But if we can get an AI to at least a 1.0 level, then give it
sufficient processing power so that it is much, much faster than a human,
it will progress on its own. Because it will make tiny advances over time
(on its time scale), which would lead to 1.1, 1.2, 1.5, 2.0, etc. If 1.5
takes more processing power, it could slow itself down some but improve the
quality of its thought.
>Still, I agree with this:
> > The big hurdle is getting general AI that is roughly equivalent to a
> > human scientist implemented and working. If we can do that I believe the
> > Singularity will be inevitable.
>It will be interesting to see if there is a long delay between building
>our first "human level dumbass" and our first "human level scientist",
>keeping in mind that building even 1 billion human level dumbasses isn't
>likely to help us much in getting that scientist built.
Well, I think we will learn a huge amount even by building 1 human level
dumbass. It, itself, won't help us make any progress. But having a bunch
of those dumbasses around to try *our* theories out on would make progress
much easier.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:37 MDT