From: Thomas McCabe (firstname.lastname@example.org)
Date: Thu Apr 24 2008 - 14:15:30 MDT
On 4/24/08, Matt Mahoney <email@example.com> wrote:
> --- Mike Dougherty <firstname.lastname@example.org> wrote:
> > I have reviewed Shock Levels. There is currently nothing that mere
> > mortals may discuss that is SL4. I spent a long time waiting for a
> > discussion that was truly on-topic for the list.
> > Is it even possible for an SL4 thread to be discussed?
> > I'll wait for an SL4 topic before posting again.
> I also reviewed http://www.sl4.org/shocklevels.html
> for a more detailed analysis.
> I will try. First I adjust my belief system:
> 1. Consciousness does not exist. There is no "me". The brain is a computer.
> 2. Free will does not exist. The brain executes an algorithm.
> 3. There is no "good" or "bad", just ethical beliefs.
> I can only do this in an abstract sense. I pretend there is a version of me
> that thinks in this strict mathematical sense while the rest of me pursues
> normal human goals in a world that makes sense. It is the only way I can do
> it. Otherwise I would have no reason to live. Fortunately human biases
> favoring survival are strong, so I can do this safely.
See http://yudkowsky.net/tmol-faq/meaningoflife.html. Be warned that
this paper is obsolete.
> My abstract self concludes:
> - I am not a singularitarian. I want neither to speed up the singularity nor
> delay it. In the same sense I am neutral about the possibility of human
> extinction (see 3).
Are you totally neutral about the possibility of getting shot? If not,
note that the former includes the latter. If so, please seek
psychological help.
> - AI is not an engineering problem. It is a product of evolution (see 2).
> - We cannot predict the outcome of AI because evolution is not stable. It is
> prone to catastrophes.
> - "We" (see 1) cannot observe a singularity because it is beyond our
> intellectual capacity to understand at any pre-singularity level of intellect.
> - A singularity may already have happened, and the world we observe is the
> result. We have no way to know.
> Discussions about friendliness, risks, uploading, copying, self identity, and
> reprogramming the brain are SL3. SL4 makes these issues irrelevant.
> -- Matt Mahoney, email@example.com
--
 - Tom
http://www.acceleratingfuture.com/tom
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:02 MDT