RE: Ben vs. Ben

From: James Higgins (jameshiggins@earthlink.net)
Date: Fri Jun 28 2002 - 13:03:07 MDT


At 11:04 PM 6/27/2002 -0600, Ben Goertzel wrote:

>2)
>It's important to put in protections against unexpected hard takeoff, but
>the effective design of these protections is hard, and the right way to do
>it will only be determined thru experimentation with actual AGI systems
>(again, experimental science)

Does this indicate that there will be no fail-safe system in place until
after your AGI system has been conscious and running for a while?

>3)
>Yes, it is a tough decision to decide when an AGI should be allowed to
>increase its intelligence unprotectedly. A group of Singularity wizards
>should be consulted, it shouldn't be left up to one guy.

And, as someone pointed out, more than just "Singularity wizards" should be
consulted. I'm not suggesting you invite the Pope or President Bush, but a
somewhat broader group should be involved.

>MAYBE I will also replace the references to my own personal morality with
>references to some kind of generic "transhumanist morality." However, that
>would take a little research into what articulations of transhumanist
>morality already exist. I know the Extropian stuff, but for my taste, that
>generally emphasizes the virtue of compassion far too little....

The quote from your AI Morality essay (as used by Brian):

"But intuitively, I feel that an AGI with these values is going to be a
  positive force in the universe ­ where by “positive” I mean “in accordance
  with Ben Goertzel’s value system”." - Ben's idea of how an AI figures out
  what's "right"

I think this point is in fact 100% dead on. All it says is that if the AGI
successfully learns and uses the same values as Ben Goertzel, then the AGI
would be a positive force in the universe according to Ben Goertzel's
viewpoint. How could that statement be anything other than correct?

However, there is a significant difference between being technically
correct and being the most appropriate or wise solution. Ben's statement
is technically correct, yet it is quite possible that many people who are
not Ben Goertzel would not like the outcome.

This is where wisdom comes into play, and I, personally, think Ben gets it
and Eliezer does not. But I'm perfectly open to thinking that they both
get it, that neither gets it, or that I've got it backwards, if someone can
present a substantial reason.

James Higgins
