SL4 Archive: looking back over my old posts I'm embarrassed :-(

From: Marc Geddes (marc_geddes@yahoo.co.nz)
Date: Wed Feb 16 2005 - 22:32:45 MST


Looking back over some of my old posts, all I can say is: Oh shit!

Why oh why couldn't I keep my big mouth shut? :-(
When I first came to SL4 I was firing off wild
opinions and guesses all over the place. Then I
entered a 'silliness' phase, where I imagined I could
program the FAI myself on a ZX Spectrum in a few
hundred lines. Oh dear. I think everyone passes through a phase where they imagine they can code the FAI themselves in a week. But still, I had an acute case of grandiosity. I'd better apologize for all these silly posts.

Now I'm worried, because I realize that post-human historians will probably be checking the SL4 archives. Shit man :-(

After much early gibberish, I have finally managed to come up with the outlines of some ideas for FAI that at least make sense. I summarized some of these ideas coherently in my short essay here:

http://transhumanism.org/index.php/th/more/692/

On the basis of these ideas I also made 10 falsifiable predictions (informed guesses) here:

http://www.sl4.org/archive/0501/10623.html

I'd better get some of those right, man, or I'm gonna go down as a real arse. I've *got* to be right about at least half of those predictions, dammit!

My 'bottom line' idea is of course Universal Morality. I simply do not believe that evolution was driven by natural selection alone. Nor do I believe that the purpose of the human mind was simply to make babies (achieving reproductive fitness). Frankly, I do not understand how people imagine they could create an SAI at all based on these assumptions about the human mind.

My bottom line, of course, is the existence of some sort of spectacular self-similar properties running through reality at all levels of organization. So the science of complex systems is where all my bets are placed. For instance, I think that 'order for free' (Kauffman's phrase) got slipped into evolution directly from the laws of physics, and this order is somehow reflected in the human mind. So although I agree that parts of the human mind are the way they are because that was the best way to make babies, I think that evolutionary psychology is actually a red herring and the science of complex systems is where the *real* understanding of minds in general lies.
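
To make 'order for free' concrete, here is a minimal sketch (my own illustration, not something from the essay) of one of Kauffman's random Boolean networks. Even with completely random wiring and completely random rules, a network where each node reads K=2 others typically falls into a short, stable attractor cycle; that spontaneous order is the sense in which it comes 'for free':

import random

def random_boolean_network(n, k, seed=0):
    rng = random.Random(seed)
    # Each node reads k randomly chosen nodes through a random truth table.
    inputs = [rng.sample(range(n), k) for _ in range(n)]
    tables = [[rng.randint(0, 1) for _ in range(2 ** k)] for _ in range(n)]
    return inputs, tables

def step(state, inputs, tables):
    new = []
    for node in range(len(state)):
        idx = 0
        for src in inputs[node]:
            idx = (idx << 1) | state[src]
        new.append(tables[node][idx])
    return tuple(new)

def attractor_length(n=20, k=2, seed=0):
    inputs, tables = random_boolean_network(n, k, seed)
    state = tuple(random.Random(seed + 1).randint(0, 1) for _ in range(n))
    seen = {}
    t = 0
    while state not in seen:  # run until the trajectory revisits a state
        seen[state] = t
        state = step(state, inputs, tables)
        t += 1
    return t - seen[state]  # length of the cycle it fell into

print([attractor_length(seed=s) for s in range(5)])

Out of 2**20 possible states, the cycles found are typically tiny; that kind of unearned order is exactly what a pure 'random variation plus selection' picture leaves out.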

My idea is that, if I'm right, this would greatly cut down the complexity of the seed FAI code. We need only find the big self-similarity properties and then we can 'slay' multiple levels of organization in one go. The idea is that all the different levels of abstraction that appear to be necessary for general intelligence, all those different models and so on, all that complication can be 'walloped' in one go with just a few big 'fractal' (self-similar) properties, as the sketch below tries to illustrate.
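
As a toy illustration of that compression claim (mine, not part of the original argument): an L-system applies one self-similar rewrite rule at every scale, so the generative description stays a few lines long while the structure it generates grows without bound.

def rewrite(s, rules):
    # Apply the same substitution rule to every symbol, whatever the scale.
    return "".join(rules.get(c, c) for c in s)

# Koch-curve L-system: the whole 'program' is a single one-line rule.
rules = {"F": "F+F-F-F+F"}
s = "F"
for depth in range(1, 6):
    s = rewrite(s, rules)
    print(depth, s.count("F"))  # 5, 25, 125, 625, 3125 segments from one rule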

Anyway, that's my 'bottom line' idea. But I think I'll definitely keep my mouth shut about FAI from now on.
