From: Stuart Armstrong (firstname.lastname@example.org)
Date: Wed Jun 25 2008 - 02:52:52 MDT
>> Well, yes. The options seem to be
>> 1) A slave AI.
>> 2) No AI.
>> 3) The extinction of humanity by a non-friendly AI.
> #3 is bullshit. Just escape the situation. Yes, change sucks. Yes,
> there's the Vingean ai that chases after you, but not running just
> because it might eventually catch you is kind of stupid. Kind of
Are you saying that we shouldn't worry about starting a nuclear war,
because we can try and run away from the blasts?
>> Since "no AI" doesn't seem politically viable, the slave AI is the
>> way to go.
> Way to go for what? Are you thinking that ai is something that can only
> appear once on the planet here? That's completely absurd. Look at the
> trillions of organisms (ignore the silly single-ancestor hypotheses).
I refer to Eliezer's papers, and various others that argue that a
high-level AI will so dominate the planet that other AIs will only come
into existence with its consent. You can argue that they are wrong;
but if you want to do that, do that. The idea is not intrinsically
absurd; and if the speed of intelligence increase, as well as the
return on intelligence, is what they claim it is, then the idea is sound.
>> Of course there may be grey areas beyond those three possibilities -
>> but hideously smart and knowledgeable people argue that there are no
>> such grey areas. Even if there are, a non-lethal AI would be much
>> closer to "slave" than "non-friendly".
> Are they then hideously smart?
>> > To hell with this goal crap. Nothing that even approaches
>> > intelligence has ever been observed to operate according to a rigid
> goal hierarchy, and there are excellent reasons from pure
>> > mathematics for thinking the idea is inherently ridiculous.
>> Ah! Can you tell me these? (don't worry about the level of the
>> conversation, I'm a mathematician). I'm asking seriously; any
>> application of maths to the AI problem is fascinating to me.
> Have you seen the name Bayes thrown around here yet?
Yes, I've seen it thrown around a lot. The name "Bayes", that is; the
mathematics of it never seems to darken this list at all. Can you
explain how a method for updating probability estimates based on
observations is incompatible with a rigid goal structure?
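For concreteness, a Bayesian update is only a few lines; here is a minimal sketch in Python (the coin hypotheses and likelihood numbers are invented for illustration, not taken from anyone's post). Note that nothing in the update rule itself mentions goals at all:

```python
# Minimal sketch of Bayesian updating: posterior is proportional to
# likelihood times prior. Hypothetical example: two hypotheses about a coin.
priors = {"fair": 0.5, "biased": 0.5}

# Assumed likelihood of observing heads under each hypothesis.
likelihood_heads = {"fair": 0.5, "biased": 0.9}

def update(priors, likelihoods):
    """Return posterior probabilities after one observation."""
    unnorm = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(unnorm.values())
    return {h: p / total for h, p in unnorm.items()}

posterior = update(priors, likelihood_heads)
# Observing heads shifts probability mass toward the "biased" hypothesis.
```

The rule only redistributes probability over hypotheses as evidence comes in; whatever goal structure then acts on the posterior is a separate component entirely.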
And unless you are quoting current cutting-edge research in the area,
my level of knowledge is enough to understand all the maths, so don't
hold back.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:03 MDT