Re: ESSAY: Program length, Omega and Friendliness

From: Phillip Huggan (cdnprodigy@yahoo.com)
Date: Thu Feb 23 2006 - 00:16:25 MST


Yeah, measuring the computational resources encapsulated by genes doesn't shed any light on the process of human brain intelligence, just on evolution. Wheat has genes. There are a whole bunch of other factors to consider, both within the intelligence-generating functions of a human brain (many of whose loci we can presently only guess at) and in the environment surrounding a developing brain.
   
  I see an AGI as two parts: a superhuman physics learner (for engineering projects) and an above-average moral actor. I don't think a learned physics fact needs to be stored in any type of RAM, but an AGI's moral actions, its influence on the world, will certainly have to remain ever-present in some sort of RAM. I see a Friendly AI as one that learns that humans prefer to be physically and mentally healthy before it learns 17th-century physics. I don't know; is it possible to limit an AGI's recursive self-improvement to a level just sufficient to develop and implement the ability to "value" humans before it learns other forms of engineering? Or does recursive improvement necessarily pick up physics along the way?
   
   
  Brian Atkins <brian@posthuman.com> wrote:
  Mikko Särelä wrote:
>
> Anything wrong with my analysis?
>

I've tried pondering this issue before (it would help if I actually knew a great
deal about biology, etc., but I don't), and I keep coming to the conclusion that
it must be inaccurate to try to relate computer bits to base-pair data. The
reason is that I think you have to take into account the tremendous amount of
"data compression" involved in the base-pair information.

Take, for example, what happens when you run a typical piece of computer
software on a PC vs. what happens when you "run" some DNA/RNA in a live cell. In
the PC, the bits on the hard drive (the DNA) aren't usually even compressed;
they are simply loaded into RAM (transfer via RNA) and run (via ribosomes etc.)
sequentially or via simple jumps. Unless the program has been compressed down
beforehand into some extremely reduced form (as, perhaps, by an insanely good
hacker in an assembly-language coding contest), what it produces when run isn't
a great deal more complex than what was sitting on the hard drive.

By contrast, the whole cellular system has evolved to take extreme advantage of
the real, physical, 3D environment inside the cell, and what the DNA data
translates into when run is an extremely complex protein that goes on to perform
yet more complex behaviors. So it seems the DNA data, when run on the
appropriate hardware, represents something very complex in extraordinarily
compressed form. Or maybe a better way to say it is that it might make more
sense to measure the _uncompressed_ amount of data generated by DNA plus its
running environment vs. the uncompressed running computer code.

This seems obvious when you compare 700MB of DNA, and what it can produce when
run over time, vs. 700MB of Windows XP OS code. Now, would it be possible to
create some sort of unique computing environment that would let you then run a
relatively short computer program that would over time lead to something very
complex? I think so. But again, to really compare things you should compare
apples to apples, "uncompressed" systems against each other.
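
Purely as an illustration of that "short program + running environment ->
complex output" point (a toy sketch, not something anyone in this thread
actually proposed; the rule number, grid size, and the use of zlib as a rough
stand-in for "uncompressed vs. compressed" are just assumptions for the
example), a one-byte cellular-automaton rule run on a simple interpreter
generates output vastly larger than the rule itself:

import zlib

RULE = 110  # elementary cellular automaton rule; the whole "genome" fits in one byte

def step(cells, rule=RULE):
    # Update every cell from its left/center/right neighbours (wrapping at the edges).
    n = len(cells)
    return [
        (rule >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def run(width=256, generations=512):
    cells = [0] * width
    cells[width // 2] = 1          # a single "seed" cell
    history = []
    for _ in range(generations):
        history.append(cells)
        cells = step(cells)
    return history

history = run()
raw = bytes(c for row in history for c in row)   # the "uncompressed" output over time
packed = zlib.compress(raw, 9)                   # rough stand-in for its information content
print("rule/program size:        ~1 byte (rule %d) plus a tiny interpreter" % RULE)
print("uncompressed output size: %d bytes" % len(raw))
print("zlib-compressed output:   %d bytes" % len(packed))

The point of the sketch is just that the interesting number isn't the one byte
sitting on "disk", it's what comes out after the rule has been run in its
environment for a while, which is the apples-to-apples comparison being
suggested here.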

At least, that's my take for now...

-- 
Brian Atkins
Singularity Institute for Artificial Intelligence
http://www.intelligence.org/
		

