Re: analytical rigor

From: Richard Loosemore (rpwl@lightlink.com)
Date: Wed Jun 28 2006 - 10:07:55 MDT


Justin

The message below was written immediately after I received the post of
yours that it analyses: a problem with my ISP prevented me from sending
it for a couple of days, and then, when I regained access to outgoing
mail, I forgot that it was still unsent.

So, in reply to your recent message in which you complain of getting no
answers: in this case you deserve an apology for the lateness.

My mistake.

Richard Loosemore

There are many people who know about the history of AI, and who also
know about the relationship between "experimental mathematics" and
nonlinear systems research, and who know about the way that
non-scientific factors shape scientific research, who would not have had
any trouble understanding exactly what I was talking about when I made
my original comments.

You are clearly not one of those people, but if you had started out your
message by expressing interest and asking for clarification, instead of
dispensing insults and taking every opportunity to be patronizing, I'd
have cheerfully taken the time to elaborate on my remarks and make them
more accessible.

I've responded to some of your specific charges below, but I can't say I
have much enthusiasm to write an extended essay.

justin corwin wrote:
> Ah, another Monday morning coming back to mischaracterizations:
>
> On 6/25/06, Richard Loosemore <rpwl@lightlink.com> wrote:
>> You may be mistaking fuzziness in your own understanding of the issues
>> for fuzziness in the issues themselves.
>
> Let's be clear here. You had a quote by Strogatz, where he quotes Ulam
> explaining that there are a lot of nonlinear systems. Then you went on
> for two paragraphs about 'vast' areas, 'tiny piles of analytically
> tractable systems', and "Outer Darkness". You characterize this
> situation as a very obvious and important divide between linear and
> nonlinear systems. You do, of course, neglect to reference anything
> more than a quote of a casual quote, to mention any specific claims of
> intractability, or to cite any "impossible" problems that have stood
> for the "centuries" you're broadly painting.
>
> That's what I mean by fuzziness. If you /mean/ something specific,
> then by all means correct me. As far as I can tell, your point thus
> far in your first two paragraphs and subsequent responses is to assert
> that there are lots and lots of nonlinear systems, most of which are
> insoluble, and that all of mathematics (and presumably other sciences)
> has resulted in very little progress in this space, on the basis of a
> quote and a lot of adjectives.

The statement about nonlinear systems is so obvious to a mathematician
that nobody in their right mind would ask for references. You might as
well ask a person who mentions that "gravity makes things fall" for
references to the experiments that substantiate the fact.

Like I say, you mistake your own lack of knowledge for something else.
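
If a concrete example helps (this is my own illustration, in Python,
not anything Strogatz wrote): the logistic map is about the simplest
nonlinear system imaginable, yet for typical parameter values it has no
known closed-form solution, and two almost identical starting states
become completely decorrelated within a few dozen iterations. That is
the kind of analytic intractability I am talking about, repeated across
almost every nonlinear system there is.

def logistic(x, r=3.9):
    # One line of nonlinearity; r = 3.9 puts the map in its chaotic regime.
    return r * x * (1.0 - x)

x, y = 0.2, 0.2000001       # two nearly identical initial states
for _ in range(60):
    x, y = logistic(x), logistic(y)

print(abs(x - y))           # the two trajectories have completely diverged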

>> If Steven Strogatz could
>> understand these points well enough to write about them in his book, and
>> if he has not yet lost his job or his status as a world-class
>> mathematician specializing in nonlinear systems, and if I repeat them
>> here, applied to AGI issues, then it begins to look like the points I
>> made actually have a great deal of depth, whereas your criticism
>> contains nothing that indicates understanding, only complaints.
>
> Yes, a well-respected author is evidence for some depth in a debate
> somewhere.
>
>> > Are you claiming that the SL4 list imagines the
>> > world is a linear place? Did you think that the statement "most claims
>> > of impossibility usually turn out to be unreliable" applied largely to
>> > mathematical claims of intractability (which it doesn't, as far as I
>> > can tell; it refers to specific technological achievement, deriving
>> > from the older quote "If a respected elder scientist says something is
>> > possible, he is very likely right; if he says something is impossible,
>> > he is very likely wrong", deriving from older embarrassing anecdotes
>> > about Lord Kelvin, who was very prone to declarative statements.)
>>
>> I can't think of anything more vague and fuzzy than the idiotic quote
>> about elder scientists. I am not operating at that level; I am talking
>> about deep methodological issues. I wouldn't dream of wasting my time
>> on debating whether or not old scientists talk garbage.
>
> This is nice: respond to the sub-note, the most important part of
> this paragraph. Do you have an answer to either question? To me, it
> seems like you're twisting the definition of 'impossibility' (and the
> applicability of the quote) to make a vague point about the big and
> scary world of Complex systems.
>
> Who are you talking about? What are you accusing them of?

Now you are asking me to interpret my post, because you are apparently
not clear about what exactly I was referring to. I am trying to point
out that you could have asked for that clarification without launching
an ad hominem attack first.

> <snip>
>
>> Oh, please: let's keep "conspiracy" talk out of this. I didn't say or
>> imply anything about conspiracies, so it would help if you didn't put
>> words into my mouth.
>
> You are accusing unknown persons of having a 'stranglehold' on 'AI
> research'. I'll use a different word if it makes you feel better.

A stranglehold can exist without there being a conspiracy of any sort
whatsoever. By trying to imply that I was talking about a conspiracy,
you were attempting to mock the claim I was making.

>> This is a ridiculously naive view of what science is actually like. Get
>> out there and talk to some real scientists about biases and funding
>> bandwagons and prejudices and power centers. Or, if you can't do that,
>> read some books by people who study how science works. Failing that, at
>> least don't say anything about it.
>
> What 'view' would you say that paragraph was espousing? Did I claim
> any mechanism or make, in fact, any claims about 'what science is
> actually like'? All I said was that scientists don't care about what
> you do, and have no motivation to interfere with your work.

>> You could try reading about the role played by the Behaviorists in the
>> psychology community. That situation is closely analogous to the
>> present situation in AI.
>
> Luckily, I've been interested in psychology for a long time, and I
> am familiar with the dominance of the Behaviorist model. It's true
> that a lot of good research didn't get done because it did not fit the
> Behaviorist paradigm. Unfortunately for your point, a lot of
> scientists continued with what they were doing anyway, on their own
> tenure, or in private research, or even outside established science.
> That's why psychology has moved on. Certainly if the majority view is
> 'wrong', and that view is being taught to students, then fewer people
> will be doing science the 'right' way. That's not an active force
> preventing anything.

But - this is just factually incorrect!

How can I discuss this with you when you can make such a distorted
statement? I mean, where do I begin?

Cognitive scientists consider that entire period an almost complete
waste of time. They speak of the Behaviorists as dogmatically and
ruthlessly suppressing any other kind of research. If you tried to get
published doing cognitive science in those days, you simply couldn't get
a job.

Sure, a few people carried on (especially in Europe), but if I went to
the nearest academic library from where I currently sit, I could take
90% of all the psychology books - about 200 linear feet of shelf space -
and put them through a shredder without any diminution in the quality
of the collection. That's how bad it was.

>> It would have been nicer if, anywhere in your message, you had addressed
>> a single, solitary grain of the issues I raised, or asked questions to
>> clarify some aspect of what I said, so you could go further and talk
>> about the issues.
>
> I objected to your argument on the grounds that you weren't making any
> concrete points, and challenged you to make some specific claims.
> That's a valid criticism, under whatever philosophy of science you
> follow. Also, I asked three questions, although the last was partly
> rhetorical:
>
> Are you claiming that the SL4 list imagines the
> world is a linear place?

I don't know why you speak of the "SL4 list". I am a member of the SL4
list as well. I was talking about a particular approach to AI research.
This list is dominated by that view, but not exclusively.

I was talking about a particular, extended meaning of the word "linear"
that is prevalent in mathematics. It has to do with the way a system is
composed from local mechanisms: in a "linear" system, in this extended
sense, the local mechanisms combine in such a way that the structured
behavior of the system as a whole IS analytically derivable from those
local mechanisms. In a nonlinear system, it is not.
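
To make that concrete with a minimal sketch (again my own, in Python):
the defining property of linearity in this extended sense is
superposition - the response of the system to combined inputs is
derivable from its responses to each input taken separately - and the
simplest nonlinear map already violates it. (The inputs below are
chosen to be exactly representable in binary floating point, so the
equality test is safe.)

def linear_step(x):
    return 0.5 * x                  # linear: obeys superposition

def logistic_step(x, r=3.9):
    return r * x * (1.0 - x)        # nonlinear: the logistic map again

a, b = 0.25, 0.5                    # dyadic values, exact as binary floats

print(linear_step(a + b) == linear_step(a) + linear_step(b))        # True
print(logistic_step(a + b) == logistic_step(a) + logistic_step(b))  # False

For the linear map you can analyse the parts separately and add up the
results; for the nonlinear map you cannot, and that is the whole point.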

There are some researchers for whom the structure and behavior of AI
systems are linear enough that analytic proofs of the behavior of those
AI systems are possible. More importantly, there is an *attitude*
towards the structure of AI systems that tries to make the system
components look as much *like* cleanly decomposable, linear systems as
possible. There are some for whom the only viable structures are ones
about which proofs can be made, and this attitude manifests itself in
many ways: the emphasis on logical reasoning, the emphasis on symbols
with objective or compositional semantics, the presumed possibility of
detaching learning mechanisms from thinking mechanisms, the neglect or
postponement of issues about grounding symbols in the real world, the
pursuit of analytic types of neural network learning algorithms, and so
on. Are any of these issues meaningful to you, and do you understand
them in depth? Do you need me to provide further clarification of what
they mean?

One of the ways it comes out is in what Alan Bundy called "theorem
envy": the desire to do something that looks, feels, or smells like
analytic mathematics, because that gives a superficial feeling of
validation.

When and if you understand the subtlety of this meaning of "linear"
(which would be familiar to people like Strogatz), you will understand
what I was referring to. My comments were directed at people already
very familiar with mathematics and AI, who might have some hope of
comprehending the implications of that difference between the
tractability of linear and nonlinear systems.

> Did you think that the statement "most claims
> of impossibility usually turn out to be unreliable" applied largely to
> mathematical claims of intractability?

In the original context, people were heard complaining that it was a
waste of time to attempt to prove this or that aspect of the behavior of
an AGI, because such analytic proofs were "impossible". The reply (in
the form of a reference to Scott Aaronson's dispute with a physicist)
was that some foolish people like to dismiss proofs as "impossible", but
in the long run these fools usually turn out to be wrong ... hence the
phrase "most claims of impossibility usually turn out to be unreliable".

MY point was that this was a silly thing to say, set against the claims
of impossibility of finding analytic solutions to the behavior of
nonlinear systems. Those particular claims of impossibility (which
directly relate to our issues regarding the structure of AGI systems)
are as robust as any, and they are vast in number: the three-body
problem, for which no generally useful closed-form solution has ever
been found, is only the most famous case.

> I'm sorry you don't like what most scientists are doing, so
> what?
>
>> The fact that you did not, but instead just complained about nebulous
>> faults that you imagined you saw, is part of the collective abdication
>> of scientific responsibility I was talking about in the first place:
>> you avoided the issue.
>
> You filled an email with unspecific adjective-laden sentences,
> accusing unnamed persons of being self-deluded, of obstructing AI
> research, of not recognizing some grand truth about the
> incomprehensibility of the world. I felt that you were sniping
> opportunistically, writing without specific claims, making dramatic
> points without support. I don't have a case to prove, you do.
>
> If you think all AI research is missing something, what is that? You
> refuse to specifically point to errors or approaches, or make any
> specific predictions. You write about what stupid, naive, blinded folk
> we are, and are aggravated when someone points out that you are just
> asserting so with big words. I don't even have a clear idea of which
> people, theories, or community you are slandering, only that they're
> holding back all progress. So, by induction, since I think I'm making
> progress, and am part of 'the establishment' by virtue of having a job
> in AI, I assume I must be part of the problem. So, where to begin
> correcting my many faults?
> What, in fact, is your issue that I'm avoiding?
>


