Re: Floppy take-off

From: James Higgins (jameshiggins@earthlink.net)
Date: Wed Aug 01 2001 - 18:40:05 MDT


At 02:02 PM 8/1/2001 +0200, you wrote:
>Seriously speaking, you didn't address the other possibility. What if
>it needs to be seven times as smart as a human in order to improve its
>own code? Let's assume that there is no codic cortex. Let us also
>assume that Ben or Eli manage to create a human-level AI. What if it
>looks at its own code, just goes "Oh, wow, you've done some really cool
>stuff here" and then ~can't~ improve the code? If it takes two or more
>~intelligent~ people to create an AI equivalent to the ~average~
>human, what's to say that the AI can create a ~trans-human~ AI? Isn't
>that a leap of faith?

Personally, I'm thinking it should be on the level of Ben, Eli or a similar
AI researcher. I believe I stated that it needs to be equivalent to one AI
researcher. So it may need to be somewhat more intelligent than the
average human, but it's hard to say. I believe it will fail if it is less
intelligent than that, however.

It doesn't need to understand absolutely everything, and it certainly
doesn't need to know how to make massive improvements. But it would be in
a unique position to make minor changes and experience the
difference. Then, through practice and continued study, it would get
better. It would start making minor improvements, which would then make it
smarter or at least faster. These improvements would help it make further
improvements, and the effect would snowball.
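
(In case it helps to see the shape of the loop I have in mind, here is a toy
sketch in Python. Everything in it is made up for illustration; the real
thing would be benchmarking and editing its own code, not a list of numbers.)

    import random

    # Toy stand-ins: the "system" is just a list of numbers and benchmark()
    # scores it; in the real case these would be the AI's own code and its
    # test/benchmark suite.
    def benchmark(system):
        return sum(system)

    def propose_minor_change(system):
        # Pick one spot and a small random tweak.
        i = random.randrange(len(system))
        return i, random.uniform(-1.0, 1.0)

    def self_improvement_loop(system, iterations=1000):
        baseline = benchmark(system)
        for _ in range(iterations):
            i, delta = propose_minor_change(system)
            candidate = list(system)
            candidate[i] += delta
            score = benchmark(candidate)
            if score > baseline:          # keep only changes that help
                system, baseline = candidate, score
            # Each kept change raises the baseline; in the real case a
            # smarter (or faster) system would also generate better
            # proposals next time, which is where the snowball comes from.
        return system

    print(self_improvement_loop([0.0, 0.0, 0.0]))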

>Gordon Worley:
> > I'm willing to bet that, given enough time, Ben
> > could keep making improvements, though as
> > time goes on it will be harder. Of course, for
> > an AI this isn't a problem since it's dynamic
> > [...]
>
>As time goes on, it will be harder, I agree. So hard, in fact, that it
>might prove to be impossible for humans to create an intelligence
>seven times smarter than themselves. Of course, if we can get the
>positive feedback loop started, there's no telling how intelligent an
>AI can get. But how do we start it if the AI takes one look at its own
>code and just gives up?

I find this highly unlikely. It would at least start trying things, most
of which would probably fail. By the way, has anyone given serious thought
as to how the AI is going to experiment on itself? It is going to need, at
least, some method to automatically restore its previous state for when it
really screws up. I might suggest something like how the resolution
settings work in Windows. You make changes, hit apply, it brings up the
new settings with a dialog that says "keep these?" and if there is no
response in some period of time it restores the old settings.
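
Just to make that concrete, here's a rough Python sketch of the
apply-then-confirm-or-revert idea. The class and all the names are made up
purely for illustration; a real seed AI would obviously need something far
more robust than a timer and a deep copy.

    import copy
    import threading

    class RevertingState:
        """Apply a change, then automatically roll back unless confirmed."""

        def __init__(self, state, timeout_seconds=30):
            self.state = state
            self.timeout = timeout_seconds
            self._timer = None
            self._backup = None

        def apply(self, change):
            self._backup = copy.deepcopy(self.state)   # snapshot old state
            change(self.state)                          # try the new settings
            self._timer = threading.Timer(self.timeout, self._revert)
            self._timer.start()                         # revert unless confirmed

        def confirm(self):                              # the "keep these?" click
            if self._timer:
                self._timer.cancel()
            self._backup = None

        def _revert(self):                              # no response in time
            self.state.clear()
            self.state.update(self._backup)

    # Usage sketch:
    #   settings = {"resolution": "800x600"}
    #   rs = RevertingState(settings, timeout_seconds=15)
    #   rs.apply(lambda s: s.update(resolution="1024x768"))
    #   ...if the system is still responsive, call rs.confirm(); otherwise
    #   the timer restores the old resolution automatically.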

>I realise that if I'm right, humanity is doomed, which is why I want
>someone to very clearly state why I'm wrong.

You're wrong, because if you were right we would all be doomed. And, humans
having the nature that we do, we won't give up until we succeed.

It won't take an AI that is 7 times smarter than the average human to
succeed. If that were the case, we would never get the original AI up and
running. At most we need an AI equivalent to the smarter humans who do
AI research. If the AI is not very fast, we could network together many of
them to work as a team. As long as we can in fact create a general
intelligence at least as smart as a human, we will succeed. Of course,
this is all just based on my personal opinion. There won't be any
supporting facts until after we succeed. ;)

James Higgins


