Re: Friendly AI in "Positive Transcension"

From: Metaqualia (metaqualia@mynichi.com)
Date: Sun Feb 15 2004 - 23:34:40 MST


> > Is it such a bad compact summary?
> Yeah. For one thing, no positive proposal consists of avoiding something.

When you are explaining a new way of doing things, it makes sense to clarify
which things will no longer be necessary. If I propose to "atomically
assemble ice cream," I could say, "We no longer need to milk cows, and
will...."

> For another, I would no longer use the term "moral supergoal". The part
> about recreating cognitive architecture is an alarming prospect if you
> leave out the renormalization - you'd get the icky parts too. Etc.

I thought that by renormalization you meant that the moral architecture can
look at itself and improve itself by its own moral parameters. This is
something humans do. Could you define this second objection to my proposed
summary more precisely?

mq



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:45 MDT