Re: More silly but friendly ideas

From: Krekoski Ross (rosskrekoski@gmail.com)
Date: Thu Jun 05 2008 - 21:04:23 MDT


>
> To hell with this goal crap. Nothing that even approaches intelligence
> has ever been observed to operate according to a rigid goal hierarchy,
> and there are excellent reasons from pure mathematics for thinking the
> idea is inherently ridiculous.
>

You just made my day.

>
> I have already shown that a program just 3 or 4 lines long can be
> completely unpredictable, and yet you claim that nowhere in a trillion
> line AI program will there be anything surprising, a program that grows
> larger every hour of every day. I think that's nuts.
>

Twice in the same day!
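
The point about short programs is easy to make concrete: whether the loop
below halts for every positive integer is the Collatz (3n+1) conjecture,
open since 1937. A minimal sketch in Python (the function name is just
illustrative):

    def collatz_steps(n):
        # No proof exists that this loop terminates for every n >= 1.
        steps = 0
        while n != 1:
            n = 3 * n + 1 if n % 2 else n // 2
            steps += 1
        return steps

    print(collatz_steps(27))  # 111 steps before reaching 1

A handful of lines, and its behavior in general is beyond anything we can
currently prove.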

> Yes, I think that is what most members of this list want, so let's
> start acting like adults and retire that silly euphemism "friendly" and
> call it what it really is, a slave.
>

Seconded.

>
> And do you honestly think that the stupid and the weak ordering around
> the incredibly brilliant and astronomically powerful is a permanently
> stable configuration? And do you honestly think it is anything less than
> grotesque?

No, not stable. No, nothing less than grotesque. However, a caveat: in its
early stages, an AI still in an embryonic form could conceivably have
greater-than-human intelligence yet be quite naive in other ways, which
could have disastrous results for us, and also for it. The larger notion of
"friendly" that you're attacking is, I agree, ill-conceived and
unrealistic, but there should be some provision for ensuring stability and
well-roundedness in a developing AI, to prevent a holocaust of some sort.

Ross


