From: Thomas McCabe (email@example.com)
Date: Wed Mar 12 2008 - 20:10:50 MDT
On Wed, Mar 12, 2008 at 8:16 PM, Mark Waser <firstname.lastname@example.org> wrote:
> > Can you explain exactly what problem you are trying to solve without
> > using the word "Friendly"?
> Yes, I am trying to prevent entities from performing horrible and unethical
> acts by convincing them (because it is true) that such actions are not in
> their self-interest.
Performing unethical acts is usually in the self-interest of not only
AIs but most humans. Billionaire drug barons and third-world
dictators make themselves huge piles of money off horrible and
unethical acts.
> An excellent side-effect of my theory is that I find myself able to derive
> many (if not all) laws of ethics from it so that it actually provides
> guidance as to what is ethical and what is not.
Show us examples of such derivations.
> This is all the basis of my slogan.
> > If you have a solution, can you explain
> > how it works to a computer?
> If I have a computer with the intelligence of a low average human
Error, reference not found. There's no such thing as a computer "with
the intelligence of a human," because computers will have vastly
different skill sets than humans do.
> and some
> time, I believe so. The proof of that will be if I can start convincing
> people on this list that *they* should convert to (declare) Friendliness.
The people on this list already share a great deal of human-universal
architecture, which AIs won't have.
> > I see only words that might explain
> > things to a human who already understood the situation.
> That is true. I am assuming a low average human intelligence. This theory
> will not work on an insufficiently intelligent system but I argue that an
> insufficiently intelligent system will not be a danger to humanity.
Any AI intelligent enough to actually understand all this will be more
than intelligent enough to rewrite itself and start a recursive
self-improvement loop. See http://www.acceleratingfuture.com/tom/?p=7.
> > Of course I
> > don't expect all implementation details, it's just that I see none.
> Actually, you see all the implementation details that are necessary. I just
> haven't done a great job of conveying the entire idea . . . . YET. I'm
> going to keep riding this horse until someone shoots it out from underneath
> me OR I start getting people to make declarations of Friendliness.
> Thank you for your thoughtful insights.
--
- Tom
http://www.acceleratingfuture.com/tom
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:02 MDT