From: Richard Loosemore (firstname.lastname@example.org)
Date: Tue Jun 27 2006 - 11:36:28 MDT
Eliezer S. Yudkowsky wrote:
> Just for the record, my main issue with Loosemore so far is that I'm
> still waiting for a bold, precise, falsifiable prediction - never mind a
> useful technique - out of his philosophy. So you don't know a bunch of
> stuff. Great. Personally, I don't know the position of every atom in
> Pluto. What *do* you know, and why is it useful?
Falsifiable predictions are not the issue, and I think you know that: I
have said before (very clearly) that this is a question at the paradigm
level.
You have read enough that I am sure I do not need to educate you on the
difference between paradigm-level issues and normal-science issues.
If this were a debate about particular results within a science, your
request for falsifiable predictions would be justified. But because you
*know* full well that I have made my statements at the paradigm level
-- in other words, for people who might be reading this and do not know
what I mean by that, I am attacking the foundational assumptions and the
methodology of the mainstream AI approach -- your request for a bold,
precise, falsifiable prediction is specious.
[I have said this in the past, and if I recall correctly all I got in
reply was a dismissive comment that said something like "when someone
doesn't have anything concrete to say, of course they always trot out
the "paradigm" excuse". I sincerely hope this does not happen again.]
Now, we could go one of two ways at this point:
1) I could back up a little and pretend that you do not know this, and
explain to you exactly why your request for a falsifiable prediction is
without merit. I am happy to do this, but I would find it tedious
because I have always assumed that you were sophisticated enough to know
this already.
2) I could go straight on to the real issues (this is my preferred
option), but I can only do this if you explicitly accept that the
argument must take place at the paradigm level, taking a look at the
problems of the entire approach to AI that you espouse.
So: please choose. Would you put aside the "falsifiable predictions"
request and promise not to reintroduce it, so we can discuss the real
issues?
Let me know: I am happy to proceed with the next step, according to
which of the two options you choose.
An aside to Russell Wallace: I hear what you say, and for the record I
have made quite strenuous efforts to explain my position, but whenever I
start to put a lot of effort into it, I get blindsided by exactly the
kind of negative reaction that I referred to (in other words, sarcastic
dismissals). If anyone wants to discuss the matter maturely, I am
always ready to explain at great length: as you know, I am not shy.
If anyone thinks that anything I am saying is not clear they should
address *particular* points that seem to lack clarity, so that between
us we can home in on what needs clarification. What I cannot deal with is
the kind of mindless, kneejerk response that copies my entire post and
sprinkles it with "This is all just stupid, fuzzy thinking".
As I have said before, there are plenty of people out there who
understand these issues well enough that they would look at what I write
and say "This is clear as daylight". The fact that they do not belong
to this list, or they belong, but only contact me in private, seems to
convince some people on the list, who do not understand what I write,
that the fault is all mine, and it gives them license to open up and be
abusive and dismissive. It is not my job to educate anyone who can't be
bothered to check their own knowledge level before they open their mouths.
I look forward to writing about the actual issues in the next post,
according to which way Eliezer prefers to take it.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:56 MDT