From: Ben Goertzel (firstname.lastname@example.org)
Date: Fri Jul 15 2005 - 07:23:09 MDT
> You find it 'improbable', Robin finds it 'lunatic' and
> M.Wilson thinks I'm 'merely engaging in wishful
> thinking'. This is great! :D It shows that my theory
> is actually crazy enough to be true.
> Niels Bohr Quote:
Of course, the percentage of highly crazy-sounding ideas that are true is
quite small.
But the fact that it's not zero is one reason I'm continuing this
dialogue -- along with the probably-slightly-perverse entertainment value it
provides.
> How could unfriendly utilities be limiting the
> predictive ability? The answer, I think, is that
> growth is somehow connected to respecting volition.
> The process of interacting with other sentients in a
> harmonious way actually helps us to grow (become
> better people ourselves). So I think growth is a
> *moving towards* altruism. This sounds vaguely
This argument moves in the wrong direction, logically.
You are arguing that friendliness implies growth, which may be true to an
extent.
But what you need to show, to bolster your proposition, is that growth
implies friendliness.
> Now in the case of the example you gave, an AI whose
> goal is to advance science, math and technology as far
> as possible, it seems to me that such an AI might
> actually become friendly in the long-run (but it
> wouldn't be friendly to start with!)
To quote John Maynard Keynes, "In the long run, we are all dead."
While his statement pertained to economics and didn't account for
transhumanism, it seems to apply to your theory here.
From a human point of view, one may say: so what if an AI becomes friendly far
in the future, if it has already annihilated us? From a purely personal
perspective, I don't care very much whether it feels sorry after annihilating me
and my family...
Of course, though, from a general moral perspective an AI that turns
friendly is better than one that always stays unfriendly...
> hurting others adversely affects mental health in
> general? Note that as an observed fact, evil people
> do tend to suffer from mental instability more than
> decent folks.
a) this is all closely tied to human neuropsychology, and you've made no
argument for why it should hold more generally
b) even in humans, where these factors hold, intelligence and
friendliness/kindness/morality/whatever don't seem to be correlated
> These are admittedly all rather weak-sounding
> arguments, but they are suggestive nonetheless.
> They do, I think, move my conjecture from being
> 'ludicrous' to being 'vaguely plausible'.
Unfortunately, I think the arguments actually hurt your conjecture.
At the start, it sounded highly improbable, but appealing enough, if it were
true, to be worth considering...
But the arguments are SO bad that they actually make the conjecture seem
even less likely than it did at the start, at least from my point of view.
But anyway, it was an entertaining email to wake up to today, thanks...