Re: Destruction of All Humanity

From: Jef Allbright (jef@jefallbright.net)
Date: Wed Dec 14 2005 - 09:01:25 MST


On 12/14/05, David Picon Alvarez <eleuteri@myrealbox.com> wrote:
> From: "micah glasser" <micahglasser@gmail.com>
> Intelligence cannot help you select for the Good. The Good must be
> programmed into the AI. Once the AI knows what the Good is, its
> intelligence will surpass any human intelligence in figuring out how to
> bring about the Good. If the Good is not programmed into the machine as
> its super-goal, then it will certainly be malevolent. Super-intelligence
> is not a god. It is merely a tool.
>
>
> Were you programmed with the good? Are you certainly malevolent? What
> distinguishes you from an AI, evolution? Evolution doesn't bring about the
> good, it brings about what works in evolutionary environments, far from the
> good. If the good is objectively existent a super AI can find it, if not
> then there's no point in talking about "the good", we'd rather talk about
> what we want instead.
>

David makes good points here, but interestingly, as we subjective
agents move through an objectively described world, we tend to ratchet
forward in the direction we see as (subjectively) good. Since we are
not alone, but share values in common with other agents (and this can
be extended to non-human agents of varying capabilities), there is a
tendency toward a progressively increasing measure of subjective
good.

Appreciating and understanding the principles that describe this
positive-sum growth would lead us to create frameworks that facilitate
(1) increasing awareness of shared values, and (2) increasing
awareness of instrumental methods for achieving our goals.

This paradigm would supersede earlier concepts of morality, politics
and government.

In my humble opinion. ;-)

- Jef



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:54 MDT