From: Herb Martin (HerbM@LearnQuick.Com)
Date: Fri Dec 02 2005 - 14:17:21 MST
> From: H C
> >From: "Herb Martin" <HerbM@LearnQuick.Com>
> >Or, it's open source? How do you stop some human from taking
> >it offline and tinkering with it until it is able to do battle
> >again with the Supreme Friendly AI running the world.
> >People cannot even agree on what would constitute 'friendly'
> >or 'unfriendly' behavior -- if it is powerful enough to prevent
> >any competitors it will by definition have to PROTECT itself
> >and, almost by definition, have to evolve to counter new threats,
> >at which point you cannot count on ANY pre-programmed friendly
> >behavior being permanent.
> Stop here.
> You are making a very critical and very subtle implicit
> assumption here.
No, I was making no such assumption. It was rather you who
were assuming that the word 'evolution' always means "by
natural selection," or that it works as we expect NATURAL
evolution to work.
> AI's exponential increase in intelligence is absolutely nothing like
While what you said about the Singularity was not generally
incorrect, it had little to do with my point above: any AI
that is going to defend itself from other, unfriendly AIs
will have to evolve, in the sense of developing new
capabilities.
That this will occur more rapidly and by means other than
natural selection should go without saying among those who
follow this topic.
Now you have indirectly made my point about friendly AI:
once we reach the Singularity, we cannot be assured of
a 'friendly AI,' nor can we have any control over, or
effect on, the developing and evolving AI.
Guaranteed control is an illusion.
Much like believing you can keep terrorists from taking
down an airplane by taking away sewing scissors from
passengers.
-- Herb Martin
This archive was generated by hypermail 2.1.5 : Sat May 18 2013 - 04:00:48 MDT