From: Ben Goertzel (email@example.com)
Date: Tue Jun 25 2002 - 17:04:41 MDT
> Suppose I did think that I was uniquely suited to playing a key role in
Then you'd almost surely be deluded, at least if you *literally* thought
this. There are 6 billion folks in the world, so how could you possibly
know you were *uniquely* suited for this purpose?
If you merely thought you were *very well suited* for it, that would be a
lot more rational-sounding...
> Anyway, I still think you're confusing "You are wrong" with "I am right".
I don't think so. There are plenty of other people in the world who think
I'm wrong in my theories. Many of them think I'm *much wronger than you
do*! Many of them think that I'm wrong that AGI is possible, or wrong that
such a thing as the Singularity will ever happen! Yet the fact that these
people think I'm wrong does not lead me to ascribe to them an
overconfidence in their opinions...
Anyway, I suppose that the issue of Eli's psychology and Ben's
interpretation of it is probably not very interesting to others; therefore I
suggest we terminate this thread and, if we wish, continue it in private!
Back to the usual technophilosophy, I say ;)
> The strongest statement I would make about Friendly AI is "I've been
> at this section of floor for two years, I have my trap detectors shoved
> to absolute maximum, and I still haven't detected any basic flaws, so at
> this point, even taking into account how much is at stake, I'm ready to
> put one foot down and start shifting my weight over."
Well, I and others have tried to point out some basic flaws in your thinking
on the topic for some time now, but you just won't listen!! (Or rather, you
just don't agree ;).
I still don't believe that you are anywhere near to understanding the
conditions under which a human-Friendly AGI goal system will be stable under
successive self-modifications... even mild and minor ones...
-- ben g
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:39 MDT