Re: State of the SI's AI and FAI research

From: sam kayley (thedeepervoid@btinternet.com)
Date: Tue Feb 15 2005 - 13:56:39 MST


From: "Eliezer Yudkowsky" <sentience@pobox.com>
> Slawomir Paliwoda wrote:
> > I'm curious about the amount of theoretical progress SI has made since
> > CFAI and LOGI in the areas of FAI and AI research.
...
> My thinking has changed dramatically. I just don't know how to measure
> that. Progress is not the same as visible progress. Now that I have a
> better idea of what it means to understand something, I also understand a
> little better what it means to "explain" something, and it's clear that my
> earlier explanations failed - people did not apply or build on the
> knowledge. So now I try to explain simpler things at greater length, for
> that it is better to understand just one thing than to be confused by two
> dozen. But the flip side is that my progress past LOGI is something that
> I wouldn't try to explain until the reader had already understood basic
> things on the order of Bayesian probability, expected utility, and the
> character of mathematical logic, and of these things I have so far only
> tried to explain my thoughts about Bayesian probability. So there is
...
It may take too much time to explain your current thoughts in full, but
hinting at them, as Michael Wilson's wiki book reviews have done, might
satisfy some of the curiosity and allow people to point out other relevant
work that has been done in the same area. Perhaps you could let other
people edit your rough drafts and release the results if you find them
acceptable?



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:50 MDT