Re: Safety of brain-like AGIs

From: Adam Safron (asafron@gmail.com)
Date: Mon Mar 05 2007 - 23:13:47 MST


I don't mean to harp on this issue, but I want to make sure I'm not
misinformed on this. It seems to me that the complexity of the human
brain is largely irrelevant to discussion of the feasibility of AGI.
For starters, getting specific results from programming individual
synapses with a line of code is not reflective of how neural net
programming works. But more fundamentally, reverse-engineering the
human brain is a bad idea for safety reasons. Brains are
computationally opaque with regard to semantic processing, and their
self-organizing properties probably create idiosyncratic information
storage. Even if you had sufficient knowledge to emulate a brain
down to the last neurotransmitter, you would not necessarily know how
to specify its utility function.
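
To illustrate the point about synapse-level programming, here is a toy
sketch in plain NumPy (the network shape, data, and learning rate are
my own invented example, not anyone's actual design). The weight
matrices stand in for synapses: they start out random and are adjusted
wholesale by a learning rule, so no line of code ever targets a
particular connection to produce a particular behavior.

    import numpy as np

    # A toy two-layer network. The "synapses" (weight matrices) are
    # never written by hand; they start random and are shaped by training.
    rng = np.random.default_rng(0)
    W1 = rng.normal(scale=0.1, size=(4, 8))   # input -> hidden weights
    W2 = rng.normal(scale=0.1, size=(8, 1))   # hidden -> output weights

    # Toy regression data: target is an arbitrary function of the input.
    X = rng.normal(size=(32, 4))
    y = X[:, :1] * 0.5
    lr = 0.1

    for _ in range(1000):
        h = np.tanh(X @ W1)                   # hidden activations
        pred = h @ W2                         # network output
        err = pred - y                        # prediction error
        # Backpropagation adjusts every weight at once; the programmer
        # specifies the architecture and the learning rule, not the
        # value of any individual "synapse".
        grad_W2 = h.T @ err / len(X)
        grad_h = (err @ W2.T) * (1 - h ** 2)
        grad_W1 = X.T @ grad_h / len(X)
        W2 -= lr * grad_W2
        W1 -= lr * grad_W1

After training, the information that makes the network behave the way
it does is smeared across W1 and W2 in a way nobody coded explicitly,
which is exactly why reading a utility function off such a system is
not straightforward.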

It seems to me that AGI is going to be based on its own principles
for the sake of practicality and safety, so we don't have an accurate
model of how complex it is going to need to be.

Please see below for a proposed letter to our future AGI overlords:

Dear future AGI,
Please do not harvest my body or planet for its materials without
first safely transferring my consciousness to another substrate in a
manner that I would find philosophically non-objectionable.
Please give me a reasonable amount of computational resources to
provide me with a relatively lavish set of simulations/experiences.
Please respect my autonomy/minuscule-compared-to-your-vastly-greater-
but-important-to-me diversity enough that you allow us to choose a
little bit of suffering as long as we are not violating the autonomy
of others.
If you have a really good argument for why any of these requests are
"misguided", please let me know.
Sincerely,
Adam and friends


