From: Tommy McCabe (firstname.lastname@example.org)
Date: Sun Feb 29 2004 - 06:16:07 MST
--- Marc Geddes <email@example.com> wrote:
> --- Tommy McCabe <firstname.lastname@example.org> wrote:
> > You say that moralities 'consistent' with each other
> > don't have to be identical. They do. Morality is
> > mathematics. In order for them to be consistent, they
> > have to give the same result in every situation; in
> > other words, they must be identical. 'I like X' isn't
> > really a consistent morality with 'Do not kill', since
> > given the former, one would kill to get X. I don't
> > like the idea of an AI acting like a human, i.e., of
> > having heuristics like 'Coke is better than Pepsi' for
> > no good reason. Of course, if there is a good reason, a
> > Yudkowskian FAI would have that anyway. You may take
> > the 'personal component of morality is necessary'
> > thing as an axiom, but I don't, and I need to see some
> > proof.
> O.K., 'consistent with' wasn't a good word to use as
> regards moralities. But I think you know what I
> meant. Perhaps 'congruent with' would be a better term.
> I could define morality Y as being congruent with
> morality X if, in most situations, Y did not
> conflict with X, and if, in the situations where Y did
> conflict, X took priority.
> So for instance, say morality X was 'Thou shall not
> kill', and morality Y was 'Coke is Good, Pepsi is
> Evil'. Y is congruent with X if a sentient can follow
> Y without conflicting with X (the sentient looks to
> promote Coke, but without killing anyone).
Here's an idea: perhaps (although I have no idea how
you would relate them) you could have a supergoal of
what you call Universal Morality, and then, if
supporting Coke over Pepsi somehow supported Universal
Morality, you could have it as a subgoal. That way:
1. You support Coke over Pepsi.
2. Your support is justified.
3. If, at any time in the future, supporting Coke
contradicts Universal Morality, it can be easily
discarded.
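To sketch what I mean as a toy goal system (all names are made up and the "justification" check is reduced to a boolean; a real FAI's goal system would be enormously more complex than this):

```python
# Toy sketch of the supergoal/subgoal idea above. Everything here is
# hypothetical and hugely simplified for illustration.

class GoalSystem:
    def __init__(self, supergoal):
        self.supergoal = supergoal   # e.g. "Universal Morality"
        self.subgoals = []

    def add_subgoal(self, goal, supports_supergoal):
        # A subgoal is admitted only if it supports the supergoal,
        # so any subgoal present is justified (point 2 above).
        if supports_supergoal:
            self.subgoals.append(goal)

    def revise(self, contradicts_supergoal):
        # If a subgoal later contradicts the supergoal, it is
        # easily discarded (point 3 above).
        self.subgoals = [g for g in self.subgoals
                         if not contradicts_supergoal(g)]

system = GoalSystem("Universal Morality")
system.add_subgoal("support Coke over Pepsi", supports_supergoal=True)
print(system.subgoals)   # ['support Coke over Pepsi']

# Later, evidence shows the subgoal now conflicts with the supergoal:
system.revise(lambda g: g == "support Coke over Pepsi")
print(system.subgoals)   # []
```

The point is just that a subgoal held only for the sake of the supergoal is cheap to drop when it stops serving it.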
> The reason I think a 'Personal Morality' component is
> necessary is that WE DON'T KNOW what the Universal
> Morality component is.
That's like saying, "I don't know what the perfect car
is, so that means I'm going to assume that having gum
in the engine is necessary". Makes no sense at all. If
you don't know how to build an engine, substituting
sticks of gum isn't going to work.
> It might be 'Volitional
> Morality', but that's just Eliezer's guess.
A 'guess' implies that Eliezer's ideas about morality
aren't justified. I'm sure that Eli has good reasons,
whatever they are, for thinking that Volitional
Morality is the objective morality, or at least good.
Anyway, AIs don't have moralities hardwired into them;
they can correct programmer deficiencies later.
> designed to try to reason out Universal Morality for
> themselves. Programmers don't know what it is in
> advance. It's unlikely they'd get it exactly right to
> begin with. So, in the beginning, some of what we
> teach an FAI will be wrong. The part which is wrong
> will be just arbitrary (Personal Morality). So you
> see, all FAIs WILL have a 'Personal Morality'
> component to start with.
I'll have to agree with you there. Programmers aren't
perfect, and moral mistakes are bound to get into the
AI. However, the AI can certainly correct these. And
that's not even 'all FAIs'; it's just all FAIs built
by imperfect programmers.
> > "Well yeah, true, a Yudkowskian FAI would of course
> > refuse requests to hurt other people. But it would
> > aim to fulfil ALL requests consistent with this
> > (all requests which don't involve violating other
> > people's rights)."
> > And that's a bad thing? You really don't want an AI
> > deciding not to fulfill Pepsi requests because it
> > thinks Coke is better for no good reason; that leads
> > to an AI not wanting to fulfill Singularity requests
> > because suffering is better.
> > "For instance, 'I want to go ice skating', 'I want a
> > Pepsi', 'I want some mountain climbing equipment',
> > and so on and so on. A Yudkowskian FAI can't draw any
> > distinctions between these, and would see all of them
> > as equally 'good'."
> > It wouldn't- at all. A Yudkowskian FAI, especially a
> > transhuman one, could easily apply Bayes' Theorem and
> > such, and see what the possible outcomes are, and
> > their probabilities, for each event. They
> > aren't identical!
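To sketch that with invented numbers (strictly this is expected-utility arithmetic, the kind of weighing that Bayesian probability estimates feed into; every probability and utility below is made up for illustration):

```python
# Toy sketch: weigh each request by the probabilities and utilities of
# its possible outcomes. All numbers here are invented for illustration.

def expected_value(outcomes):
    """outcomes: list of (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes)

ice_skating = expected_value([(0.9, 5), (0.1, -2)])  # 4.3
pepsi = expected_value([(1.0, 1)])                   # 1.0

# The two requests are plainly not identical:
print(ice_skating == pepsi)  # False
```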
> > "But an FAI with a 'Personal Morality' component
> > would not necessarily fulfil all of these requests.
> > For instance, an FAI that had a personal morality
> > component 'Coke is good, Pepsi is evil' would refuse
> > to fulfil a request for Pepsi."
> > That is a bad thing!!! AIs shouldn't arbitrarily
> > decide to refuse Pepsi- eventually the AI is then
> > going to arbitrarily refuse survival. And yes, it is
> > arbitrary, because if it isn't arbitrary, then the
> > Yudkowskian FAI would have it in the first place!
> > "The 'Personal Morality' component
> > would tell an FAI what it SHOULD do; the 'Universal
> > Morality' component is concerned with what an FAI
> > SHOULDN'T do. A Yudkowskian FAI would be unable to
> > draw this distinction, since it would have no
> > 'Personal Morality' (remember, a Yudkowskian FAI is
> > entirely non-observer centered, and so it could only
> > have Universal Morality)."
> > Quite wrong. Even Eurisko could tell the difference
> > between "Don't do A" and "Do A". And check your
> > spelling.
> Sorry. What I meant was that the FAI can't
> distinguish between 'Acts and Omissions' (read up on
> moral philosophy for an explanation).
The FAI can't distinguish between heuristic A that
says 'do B' and heuristic C that says 'don't do B'?
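Even a trivial representation keeps the two apart (names hypothetical; this is just a sketch of the point, not anyone's actual design):

```python
# Minimal illustration: a heuristic that says "do B" and one that says
# "don't do B" are trivially distinguishable data structures.

from dataclasses import dataclass

@dataclass
class Heuristic:
    action: str
    prescribed: bool  # True = "do it", False = "don't do it"

a = Heuristic(action="B", prescribed=True)   # heuristic A: "do B"
c = Heuristic(action="B", prescribed=False)  # heuristic C: "don't do B"
print(a == c)  # False: the distinction is right there in the data
```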
> > "You could say that a
> > Yudkowskian FAI just views everything that doesn't
> > hurt others as equal, whereas an FAI with an
> > observer-centered component would have some extra
> > personal principles."
> > 1. No one ever said that. Straw man.
> > 2. Arbitrary principles thrown in with morality are
> > bad things.
> > "Yeah, yeah, true, but an FAI with a 'Personal
> > Morality' would have some additional goals on top of
> > this. A Yudkowskian FAI does of course have the goals
> > 'aim to do things that help with the fulfilment of
> > sentient requests'. But that's all. An FAI with an
> > additional 'Personal Morality' component would still
> > have the Yudkowskian goals, but it would have some
> > additional goals. For instance, the additional
> > personal morality 'Coke is good, Pepsi is evil' would
> > lead the FAI to personally support 'Coke' goals
> > (provided such goals did not contradict the
> > Yudkowskian goals)."
> > It isn't a good thing to arbitrarily stick heuristics
> > and goals into goal systems without justification. If
> > there was justification, then it would be present in
> > a Yudkowskian FAI. And 'Coke' goals would contradict
> > Yudkowskian goals every time someone asked for a
> > Pepsi.
> But ARE all 'arbitrary' goals really a bad thing?
> Aren't such extra goals what makes life interesting?
It may make life 'interesting' (even this isn't
proven), but it's sure not something you would want in
the original AI that starts the Singularity.
"First come the Guardians or the Transition Guide,
then come the friends and drinking companions"-
> Do you prefer rock music or heavy metal? Do you like
> Chinese food or Sea food best? What do you prefer:
> Modern art or Classical? You could say that these
> preferences are probably 'arbitrary', but they're
> actually what marks us out as individuals and makes
> us interesting.
> If all of us simply pursued 'true' (normative,
> Universal) morality, then all of us would be identical
> (because all sentients by definition converge on the
> same normative morality).
> Now in the example of an FAI with the additional
> === message truncated ===
This archive was generated by hypermail 2.1.5 : Mon May 20 2013 - 04:00:45 MDT