From: Russell Wallace (email@example.com)
Date: Tue Jun 27 2006 - 00:19:57 MDT
On 6/26/06, Eliezer S. Yudkowsky <firstname.lastname@example.org> wrote:
> Just for the record, my main issue with Loosemore so far is that I'm
> still waiting for a bold, precise, falsifiable prediction - never mind a
> useful technique - out of his philosophy. So you don't know a bunch of
> stuff. Great. Personally, I don't know the position of every atom in
> Pluto. What *do* you know, and why is it useful?
Richard, to clear the tribal-allegiance stuff out of the way: my view of AI
is closer to yours (or to what I vaguely take yours to be) than it is to Eliezer's.
But that's beside the point. Eliezer makes definite statements - definite
enough that I can point to them and say why I think they're wrong.
You've made vague criticisms that have left people exasperated, because you
won't even say exactly whom you're criticizing. Leave that aside.
The reason I'm bothering to reply is that I suspect you have enough
intelligence and knowledge to have a useful contribution to make, if you'll
focus enough to make it.
Discarding the stuff about who's wrong and why: What, specifically, do you
believe to be the case? Do you have any concrete results yet? If not, fair
enough (I haven't either!), but then do you have any concrete predictions?
Any concrete prescriptions? Anything about what you're trying to do, what
you think other people should do, and what results you think would be
significant?
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:56 MDT