Re: [sl4] to-do list for strong, nice AI

From: Pavitra (celestialcognition@gmail.com)
Date: Tue Oct 20 2009 - 04:43:48 MDT


Luke wrote:
> Alright, it is no wonder you guys can't get anything done. I start a single
> thread, with a single, simple purpose: to trade versions of a single
> document: the to-do list. And you all can't resist the urge to get into the
> most arcane, esoteric mathematical bullshit imaginable. "Degree of
> compressibility". "Test giver must have more information than test-taker".
> wank wank wank.

I don't think it's possible to have a coherent conversation without math.

> I've noticed that conversations on this list consistently do one thing:
> grow.
>
> The to-do list was an effort to contract the conversation. An English
> teacher of mine in high school once laid out the process of writing a paper
> like this:
>
> <><><=

You're right, of course.

But, then, this list has been working on the problem for a long time.
Don't you think the simple, high-level concepts have been worked out by now?

I'm still a newbie, but I imagine it worked something like this: people
saw an important problem, started breaking it down into its component
pieces, solved the easy parts, and started banging their heads together
over the hard parts.

Since they've broken the problem down and done away with the easy parts,
they're necessarily fussing over relatively small details; since all the
easy parts are solved, they're necessarily left with the hard parts.

So what you see when you come in and look at them in the middle of the
problem is a pack of hyenas tussling over a bit of gristle. It looks
like they're focused on random minor details, because you didn't see how
that detail got sorted out from all the rest. It looks like they're not
making any progress, because you didn't see them making progress through
all the other parts. All you see is the seemingly interminable struggle
over the last remaining scraps.

This should be true of any field that's been around for a while, I
think, so the theory should be testable in multiple situations.

> Oh, and as for "mathematical" definitions of friendliness, I'd say to hell
> with economics (which never models more than one entity and has no concept
> of allies). I'd ask the military: how do you gauge whether another entity
> is friendly? I'm sure the CIA's been working on that problem for a
> looooooong time. Maybe they won't share their findings though. Maybe
> they're not friendly to our goals. But being humans, who also stand to die
> at the hands of circular saw-wielding terminators, they might just see the
> benefit in sharing knowledge. Anyone here do military theory?

I think you're confusing the human-empathic, ordinary-language meaning
of "friendly" with the more formal, specialized term "Friendly" (with
the capital F). A Friendly AI is one that we would want to have go foom;
in particular, a well-intentioned but misguided AI (e.g., a smiley-face
maximizer) is friendly but not Friendly.

> Sorry if I offended anyone. I'm only trying to up the mutation rate;
> nothing personal.

Hopefully you'll succeed. Your post did seem to break up the usual pace
somewhat, which is probably a good thing. Aerate the water and all that.

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:05 MDT