Re: [sl4] Re: More silly but friendly ideas

From: Bryan Bishop (kanzure@gmail.com)
Date: Wed Jun 25 2008 - 08:27:33 MDT


On Wednesday 25 June 2008, Stuart Armstrong wrote:
> >> Well, yes. The options seem to be
> >> 1) A slave AI.
> >> 2) No AI.
> >> 3) The extinction of humanity by a non-friendly AI.
> >
> > #3 is bullshit. Just escape the situation. Yes, change sucks. Yes,
> > there's the Vingean AI that chases after you, but not running just
> > because it might eventually catch you is kind of stupid. Kind of
> > deadly.
>
> Are you saying that we shouldn't worry about starting a nuclear war,
> because we can try and run away from the blasts?

I'm saying you shouldn't be putting all your eggs in one basket. Yes, a
UFAI starting a thermonuclear war will kill you. But saying "the UFAI
can run after us if we try to escape before the nuclear war" is also
stupid. You're going to have to try. Not trying means death. Trying
means a 50/50 chance of death. That sort of thing. ;-) And I must ask
you to please not start nuclear wars, as it would be a significant
detriment to my plans, but I understand that you might do so anyway, in
which case it is my fault for not building spaceships fast enough.

> >> Since "no AI" doesn't seem politically viable, the slave AI is the
> >> way to go.
> >
> > Way to go for what? Are you thinking that AI is something that can
> > only appear once on the planet here? That's completely absurd. Look
> > at the trillions of organisms (ignore the silly single-ancestor
> > hypotheses).
>
> I refer to Eliezer's papers, and various others that argue that a
> high-level AI will so dominate the planet that other AIs will only
> come into existence with its consent. You can argue that they are
> wrong; but if you want to do that, do that. The idea is not
> intrinsically absurd; and if the speed of intelligence increase, as
> well as the return on intelligence, is what they claim it is, then
> the idea is true.

Political domination is not the same thing as dominating the underlying
technological capacity, the physical basis of reality, and so on. Has
this been addressed in a paper yet, and if so, can you point me to it?

> >> > To hell with this goal crap. Nothing that even approaches
> >> > intelligence has ever been observed to operate according to a
> >> > rigid goal hierocracy, and there are excellent reasons from pure
> >> > mathematics for thinking the idea is inherently ridiculous.
> >>
> >> Ah! Can you tell me these? (don't worry about the level of the
> >> conversation, I'm a mathematician). I'm asking seriously; any
> >> application of maths to the AI problem is fascinating to me.
> >
> > Have you seen the name Bayes thrown around here yet?
>
> Yes, I've seen it thrown around a lot. The name "Bayes" that is; the
> mathematics of it never seem to darken this list at all. Can you
> explain how a method for updating probability estimates based on
> observations is incompatible with a rigid goal structure?

Who cares about a rigid goal structure? Last time I checked my
neuroscience, nobody actually understands what the hell intelligence
is, except that the brain is doing it. See my comments in the last
email re: folk psych.
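
For reference, since the name gets thrown around more than the
mathematics ever does: the updating rule being described above is just
Bayes' theorem,

  P(H|E) = P(E|H) P(H) / P(E)

i.e., the posterior probability of a hypothesis H given evidence E is
the prior P(H) reweighted by how well H predicts E. Any claim about
goal structures has to be argued on top of that rule; it isn't in the
rule itself.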

- Bryan
________________________________________
http://heybryan.org/


