From: Jef Allbright (email@example.com)
Date: Fri Mar 23 2007 - 12:28:57 MDT
On 3/23/07, Russell Wallace <firstname.lastname@example.org> wrote:
> On 3/23/07, BillK <email@example.com> wrote:
> > Now, revealing new research shows that people with damage to a key
> > emotion-processing region of the brain also make moral decisions based
> > on the greater good of the community, unclouded by concerns over
> > harming an individual.
> There's a critical omission in the above quote. It should read "...based on
> what they _wrongly believe_ to be the greater good of the community".
> Because let's face it, such judgements _are usually wrong_. Look at history,
> look at all the people who've lived by philosophies like "the greatest good
> for the greatest number" and "the end justifies the means". The track record
> hasn't been all that great, has it? In fact, it's been mostly a string of
> bloody disasters, hasn't it? And the smarter the people involved, the worse
> the disaster.
> There are damn good reasons why we have limits on what we're allowed to do no
> matter how loudly we chant the slogan "greater good of the community" while
> we're doing it.
Yes, this is indeed why our evolutionary heritage has provided a moral
damper on actions that generally tend to be anti-adaptive. You also
rightly point out some limitations and incoherence of utilitarianism.
What has traditionally been missing, though, is recognition of our more
recent understanding of cooperative dynamics, game theory, etc. We
are now at the point where we can begin implementing systems of
collaborative social decision-making based on increasing awareness of
our shared values, promoted by increasing awareness of methods that
work.
Rather than "Thou shalt not kill" because it feels wrong, and because
it's a moral imperative encoded into our culture, we can see how this
is actually a reflection of principles of cooperative advantage that
made us what we are and will also make our future.
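The "cooperative advantage" behind rules like "Thou shalt not kill" can be made concrete with a toy model. The following is my own illustration, not anything from the original post: an iterated prisoner's dilemma in which a reciprocating strategy (tit-for-tat) sustains mutual cooperation, while unconditional defection wins one round and then stagnates.

```python
# Illustrative sketch (not from the original post): an iterated
# prisoner's dilemma showing how reciprocal cooperation outperforms
# unconditional defection over repeated interactions.

PAYOFFS = {  # (my move, their move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(opponent_history):
    """Cooperate first, then mirror the opponent's last move."""
    return opponent_history[-1] if opponent_history else "C"

def always_defect(opponent_history):
    """Defect unconditionally."""
    return "D"

def play(strategy_a, strategy_b, rounds=100):
    """Play repeated rounds; each strategy sees the other's past moves."""
    score_a = score_b = 0
    seen_by_a, seen_by_b = [], []  # each side's record of the opponent's moves
    for _ in range(rounds):
        move_a = strategy_a(seen_by_a)
        move_b = strategy_b(seen_by_b)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        seen_by_a.append(move_b)
        seen_by_b.append(move_a)
    return score_a, score_b

# Two reciprocators sustain mutual cooperation:
print(play(tit_for_tat, tit_for_tat))    # (300, 300)
# A defector gains once, then is locked into mutual defection:
print(play(always_defect, tit_for_tat))  # (104, 99)
```

The point of the sketch is that the "moral damper" need not be taken on faith: under repeated interaction, defection is self-defeating, which is the cooperative logic the post argues our inherited intuitions encode.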
Rather than fears of "Science" enforcing an oppressive convergence of
humanity into a single mold of the "ideal man", we are learning that
diversity must be one of our highest values, and that the branching
tree of knowledge leads to an increasing variety of possible futures.
Just at the moment we face novel technological threats of complexity
and speed outmatching our innate sense of moral right and wrong, our
slow system of cultural ethics, we find ourselves within grasping
distance of technologically augmented morality, truer than the moral
capabilities of any humans, based on evolving human values.
Since this is the SL4 list, I'll be very explicit that this is not a
vision of a savior machine, but rather, the human machine becoming a
more capable machine.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:57 MDT