Re: analytical rigor

From: Charles D Hixson (charleshixsn@earthlink.net)
Date: Sat Jun 24 2006 - 13:47:26 MDT


justin corwin wrote:
> Your email is full of characterizations and rather low on specific
> claims. You say there is a vast "Outer Darkness" of "insoluble
> problems". That progress has been "negligible". That people have
> conjectured the impossibility of solving 'most' of them (for "decades",
> no less). That there is a large group of people convinced that there
> isn't a prevalence of nonlinear systems, and that these people ignore
> 'massive' evidence to the contrary.
>
> I don't like fuzzy characterizations, and I especially don't like
> anonymized attacks. Are you claiming that the SL4 list imagines the
> world is a linear place? Did you think that the statement "most claims
> of impossibility usually turn out to be unreliable" applied largely to
> mathematical claims of intractability? (It doesn't, as far as I can
> tell; it refers to specific technological achievements, deriving from
> the older quote "If a respected elder scientist says something is
> possible, he is very likely right; if he says something is impossible,
> he is very likely wrong", which in turn derives from older embarrassing
> anecdotes about Lord Kelvin, who was very prone to declarative statements.)
>
> In short, your email is very passionate, but it fails to persuade
> because it contains no facts and no specific claims.
>
> And this last:
>
> On 6/24/06, Richard Loosemore <rpwl@lightlink.com> wrote:
>> I wouldn't care so much, except that these people have a stranglehold on
>> Artificial Intelligence research. This is the real reason why AI
>> research has been dead in the water these last few decades.
>
> This is an example of a way of reasoning about science that many
> people hold, and it is absolutely wrong. There is no such thing as a
> conspiracy of scientists
> keeping new science or technology down. They don't care about you,
> what you do, or what you think. The vast majority of scientists
> believe, in an abstract way, that diversity of research is a good
> thing, and they might even applaud you, while privately thinking your
> research is doomed to failure. What they won't do is be convinced, or
> give you money.
>
> That does not constitute a stranglehold. You are still free to do
> whatever you want. In fact, the majority of interesting AI work in the
> last few years has been outside of academia anyway (with a few shining
> exceptions, like AIXI), so that particularly speaks against your ideas
> of "strangleholds" and consensus opinion.
>
> The opinion of other scientists does not affect how your experiments
> turn out. I'm sorry you don't like what most scientists are doing, so
> what?
You are right, but that doesn't mean he's wrong.

It's been said that if the only tool you have is a hammer, everything
looks like a nail. This isn't *literally* true, but still it expresses
a deep truth. So does what Richard Loosemore said. It's not a literal
stranglehold ... but it's not far from it. There's no force involved,
but the tools are what they are, and nothing that they aren't adapted to
is easy to deal with.

The problem here is that picking the easy problems first may be the only
way to make progress...but that means each increment of progress will be
progressively more difficult. Sorry. If most of the remaining problems
are NP-hard, then they probably WON'T be solved optimally, except in the
very simplest cases.

OTOH, consider the difference between an optimal way to pack a knapsack
and a "good enough" way. Frequently it's far better to search for a
"good enough" solution (for some definition of "good enough"). And, in
fact, that's what most programming is about. Very few programs are
about finding the optimal solution to a problem; most are only about
finding a "good enough" one. The tricky part is error handling. If you
can't predict exactly what something is going to decide, how do you trap
errors (for some good definition of error)? If perfection is
impractical, how do you make something "good enough", and how do you
define it? (Remember that define, at its basis, means to put a fence
around.)
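
As a concrete (and entirely made-up) sketch of the difference: a greedy
knapsack packer that sorts by value density and takes whatever fits. It
runs in n log n time instead of exponential time, and on the toy
instance below it returns 63 where the true optimum is 80. That's the
trade: instant and "good enough" versus exact and unaffordable.

def greedy_knapsack(items, capacity):
    """items: list of (weight, value) pairs.  Returns (total_value, chosen).
    A "good enough" heuristic: highest value-per-unit-weight first."""
    order = sorted(items, key=lambda wv: wv[1] / wv[0], reverse=True)
    total_value, remaining, chosen = 0, capacity, []
    for weight, value in order:
        if weight <= remaining:           # take it only if it still fits
            chosen.append((weight, value))
            remaining -= weight
            total_value += value
    return total_value, chosen

# Toy instance: greedy grabs the densest item (7, 63) and then nothing
# else fits, giving 63; the optimum is the two (5, 40) items for 80.
print(greedy_knapsack([(7, 63), (5, 40), (5, 40)], capacity=10))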

Heuristics are all well and good, but to get to intelligence your model
needs to include itself within it. (Not terribly difficult, but it
introduces self-referential cycles into the code...which does terrible
things to proofs.) And you need heuristics on the generation of new
heuristics...including meta-level heuristics. (Hermes Trismegistos
answered that one: As above, so below. The model is self-similar, if
not strictly fractal.) This allows your model to become as complex as
needed, within the limits set by your computer power...but it makes it
quite difficult to predict. But how should you control whether changes
in heuristics computed at a lower level are allowed to propagate upward
to a higher level? My answer to this is "built-in instincts".
Unfortunately, the form those instincts must take is quite abstract: it
needs to be adaptable to ALL levels of the hierarchy, including those
which have no knowledge outside of their immediate sensors.
(Well...that's true of all levels, but I meant sensor as in "photocells
or photocell emulators", though generally by the time it reaches the
program a truer name for it is "byte stream", albeit one from a source
external to the program rather than from an internal one.)
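
Here is a minimal sketch of the shape I mean, and nothing more than the
shape: the names (HeuristicLevel, meta_generate, instinct_allows,
propagate_up) are placeholders, the "instinct" is a trivial stand-in,
and nothing here is intelligent. It only shows every level built from
the same pattern (as above, so below), with an instinct gating what
propagates upward.

from dataclasses import dataclass, field
from typing import Callable, List, Optional

# Every heuristic is modeled as a transform over an external byte stream.
Heuristic = Callable[[bytes], bytes]

@dataclass
class HeuristicLevel:
    heuristics: List[Heuristic] = field(default_factory=list)
    parent: Optional["HeuristicLevel"] = None   # same class at every level: self-similar

    def meta_generate(self) -> Heuristic:
        """Meta-heuristic: build a new heuristic out of the ones already here."""
        base = list(self.heuristics)
        def combined(data: bytes) -> bytes:
            for h in base:
                data = h(data)
            return data
        return combined

    def instinct_allows(self, candidate: Heuristic) -> bool:
        """Built-in instinct: an abstract acceptance test that must make
        sense at every level.  Here it is only a trivial stand-in: accept
        whatever handles a probe without blowing up."""
        try:
            candidate(b"probe")
            return True
        except Exception:
            return False

    def propagate_up(self, candidate: Heuristic) -> None:
        """A lower-level change climbs the hierarchy only while each
        level's instinct permits it."""
        level = self
        while level is not None and level.instinct_allows(candidate):
            level.heuristics.append(candidate)
            level = level.parent

# Two levels; the lower one invents a heuristic and tries to push it upward.
top = HeuristicLevel(heuristics=[lambda d: d.upper()])
low = HeuristicLevel(heuristics=[lambda d: d[::-1]], parent=top)
low.propagate_up(low.meta_generate())
print(len(top.heuristics), len(low.heuristics))   # 2 2: the instinct let it through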

OTOH:
1) I haven't been able to define a satisfactory set of instincts.
2) I've been told by people who should know that my approach is too
simplistic to ever be general. (Well, I'm softening their comment.)
3) Even with the limitations that I've adopted, some preliminary
calculations indicate that I'm going to need a much bigger hard disk,
more RAM, and some way of splitting the program between processors (and,
of course, more processors).

As you can tell, I'm not very far along.


