From: Thomas McCabe (email@example.com)
Date: Tue Mar 25 2008 - 13:54:29 MDT
On 3/24/08, Mark Waser <firstname.lastname@example.org> wrote:
> > Ridiculous? This happens in reality ALL THE TIME.
> I'm sorry. By ridiculous, I meant the expectation that I should be able to
> implement Friendliness that quickly on a hostile entity with other
Our expectations are set by the difficulty of the problem, not by some
human standard of reasonableness. Is it ridiculous to expect a
twelfth-century peasant to cure the Black Plague? Of course. This does
not mean that they are automagically immune to the disease.
> >> and demanding that I defend it. I decline to do so unless you can
> >> show how it is at all justified or relevant.
> > A UFAI need not listen to us long enough to be infected, either!
> > So let's give you more time. Someone kidnaps you for ransom, ties you
> > up in the basement, and for whatever reason lets you talk to them for
> > a few hours. How do you convince them to be Friendly and let you go?
> I find what he is most interested in/driven by.
> I show him my handy, dandy meme-map that logically proves that he should be
> my friend.
> He is not amused and shoots me dead, having an IQ below that necessary
> for the map to work in a reasonable length of time.
First of all, you actually need to prove that it's advantageous for an
AI to be Friendly to humans. You can't just keep assuming it.
Secondly, human cognitive architectures are extremely similar to each
other. If you have a hard time convincing someone who shares 99.9% of
your DNA, how are you going to convince an AGI *without* DNA?
--
- Tom
http://www.acceleratingfuture.com/tom
This archive was generated by hypermail 2.1.5 : Wed May 22 2013 - 04:01:25 MDT