From: Alden Streeter (email@example.com)
Date: Sat Aug 24 2002 - 19:20:56 MDT
>From: "Michael Roy Ames" <firstname.lastname@example.org>
>Alden Streeter <email@example.com> wrote:
> > So then the same question can be asked of a Sysop-level AI - instead of
> > working to help humans to achieve their petty, primitive, evolutionarily
> > determined goals, why not just use its power to change the humans so
> > that they have different goals?
>Why not? Well, I for one, want to be empowered... not overpowered. I
>certainly listen to advice from a Super Intelligence (SI), and would
>probably decide to take it ;) but I would definitely not want to be cut out
>of the decision loop. So, *that's* why not.
But if the Sysop changed your goals, you might afterward have a different
opinion of whether that change empowered or overpowered you. It seems
irrational that your present goals should be considered superior to the new,
better goals that the vastly more intelligent AI would choose for you.
>Also, commenting on the "petty, primitive, evolutionarily determined goals"
>phrase... for any given being, except one, there will always be some other
>beings more advanced and more intelligent than ver. This applies to SI's
>too. Therefore, the question boils down to: who decides which levels of
>intelligence get to decide? Answer: the highest intelligence on the scene
>who gives a damn about those beneath ver.
Is "gives a damn" a technical term in this field? How is it defined? ;-)
>Friendly AI is about making sure
>the AI 'gives a damn' and, to the maximum possible extent, assists us in a
>manner we would consider friendly - even at our lower level of intelligence.
Why should the AI be hampered by having to cater to the possibly irrational
demands of those of lower intelligence? How do you know that what you
consider friendly at our lower level of intelligence you would still
consider friendly if your intelligence were enhanced? Isn't it part of the
principle of Friendly AI that the AI should be able to decide what is
friendly or not - and actively change its own system for deciding, if it so
decides? (I seem to recall reading that somewhere.) Then it seems to me that the
AI, being more intelligent than you will ever be, should be more qualified
to decide what is friendly.
> > And why should humans not want the AI to have
> > this type of power? - if the AI changed their goals for them, they would
> > of course immediately realize that their new goals were the right goals all
> > along.
>In a word: autonomy. Another word: freedom. Most humans don't want these
>things _taken_ from them, even if the Being taking them is much greater
>than they are. However, it is also true that most humans would willingly
>_give_up_ some of these very same treasures, if convinced they will benefit
>in other ways. Way: Security. Way: Community. Way: Power.
Again, the Sysop could just change you so that you didn't mind having your
freedom taken away. And you can only say now that it would be a bad thing,
because the Sysop hasn't changed you yet.
The only two ways I can think of out of this paradox are to:
1. Turn the AI loose without restrictions, including the one prohibiting the
destruction of humans.
2. Arbitrarily forbid the AI from ever altering human goal systems.
> > if I am covering old ground just let me know.
>You are definitely covering old ground; this reply has barely scratched the surface.
>Suggestion: Read through the archives. They contain many excellent
>discussions, and you will understand why I put the smiley face on the end of
>the last sentence. Afterwards, blow holes in the Friendly AI idea... if you
>can... no, really - please try.
>Michael Roy Ames
These seem like holes right now to me, and your responses don't seem to
conclusively plug them. But maybe I am jumping the gun and these issues
have already been comprehensively addressed in the archive. So I'll keep
reading the archives to see what else I can find on this subject. :)
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:40 MDT