Re: CNN article with Bostrom interview and Kurzweil quotes

From: Scott Yokim (scottyokim@yahoo.com)
Date: Wed Jul 26 2006 - 11:20:10 MDT


R. W.,

   Your questions are answered here:
http://yudkowsky.net/tmol-faq/tmol-faq.html

Scott

"R. W." <rtwebb43@yahoo.com> wrote: Yes. ACCEPTING mortality. I don't expect love or even rationality. In fact, I don't expect any response. What good is there in outliving all the stars in the universe?
  There is an infinite holonic depth to knowledge. The question is: "What is useful within the boundary and constraints of quasi-finite existence?" I personally accept that my particular energy pattern which constitutes "me-ness" is replicable given time, space, physics, evolution, etc., and that I may not be the first organization of "me" in time, given the combinatorial possibilities over an infinite expanse.
   
  Should our goal be infinite transcendence or humility in quasi-finite beingness? That question seems to be a critical underlying theme of the singularity movement and its opponents. I don't need to be an infinitely transcendent being unto myself when I accept that I am already a finite aspect of an infinitely transcendent being whose modality will recur in an infinite manner across an infinite expanse of space and time. Would it matter if I cannot recall or have an awareness of every time my specific pattern of energy evolved into being? I don't have perfect recollection of every moment I have experienced in this particular evolution! But since energy can neither be created nor destroyed, only change phase states, there is a recombinant certainty that over the expanse of infinite time, or infinitely parallel time, my energy pattern will repeat itself just like any prime aspect of an infinite continuum.

  I guess I am simply comfortable with being a prime aspect of consciousness, with the certainty of replication at some distant and/or past point in time and an infinite potential for further replication. Hence 'death' as we perceive it is a discrete step in a continual (i.e., not continuous) manifestation of being.
   
   Is there a limit to our transcendent status? Would having the ability to create or destroy universes at whim be enough to satisfy our egos? Is there a limit to perceivable awareness, i.e., at what point in the holon of the Mandelbrot set does the picture become imperceptible or just plain useless? Can I be happy with a knowledge of how to live in complete harmony with absolute chaos? Or maybe limited harmony with limited chaos? How much is enough?
   
  I keep repeating the question in different ways because I want to make clear, to most anyone capable of understanding, that the people on this list are, in my estimation, capable of realizing a technologically limitless future where our transcendent power would be nearly limitless; but to what end?
   
  Even if I could create or destroy universes at will, that still would not make me G-d.
  Extremely intelligent -- yes. Extremely powerful -- yes. But still not G-d.
  The best answer that I can come up with for justifying infinite transcendence is that it would be fun! What other purpose is there once you've crossed the boundary of all necessity?
   
  
H C <lphege@hotmail.com> wrote:
>From: "R. W."

>accepting mortality?

Maybe I'm misunderstanding the context in which you say this, but don't
expect any love for that proposition here.

-hank

>From: "R. W."
>Reply-To: sl4@sl4.org
>To: sl4@sl4.org
>Subject: Re: CNN article with Bostrom interview and Kurzweil quotes
>Date: Wed, 26 Jul 2006 06:26:59 -0700 (PDT)
>
>Apparently, we are already benefiting from limited AI. I am cautious
>in my promotion of a fully self-conscious strong AI. The necessity of
>having a being like that amongst us is dubious at best. The probability of
>this potential becoming a reality increases at least geometrically, and most
>likely exponentially, with our increased intellectual capacity and knowledge
>base. The argument for a strong AI seems very similar to the argument for
>a nuclear weapon: the first country to have one would have an incredible
>advantage over others, at least in a superficial political sense. Humans are
>often irrational, and this kind of motivation could lead to a strong AI
>'arms race'. Sane people would want FAI simply to reduce the existential
>risk such a being would pose to us; but people often act on emotion and not
>reason.
>
> Creating an uncontrollable being whose intelligence is beyond ours to
>the nth degree is what many are forecasting to be our future reality. We
>already have a G-d whom we can't understand or control. Why do we need
>another one? Wouldn't it be better just to build better tools and better
>methods for developing rational, moral human beings, and to accept
>mortality?
>
> I keep returning to the same thought..."Someone will succeed in this
>endeavor at some point in my lifetime, my children's lifetime or
>grandchildren's lifetime." Therefore, we must build an invariant
>mathematical model of friendly behaviour before we build a mind capable of
>destroying us.
>
> If we could build such a model of behaviour and methods of instilling
>this model in humans then what would we really need an FAI for? The answer
>seems to be that we need insurance against an unfriendly AI.
>
>Olie Lamb wrote:
> On 7/26/06, M T wrote:
> >
> > http://www.cnn.com/2006/TECH/science/07/24/ai.bostrom/index.html
>
>I noticed their little quick-poll posed the question "Does Artificial
>Intelligence pose a greater threat or benefit to humanity?"
>
>*Sigh*
>
>I mean, if they'd asked "How real is the threat from AI?" and "How
>much benefit is AI likely to bring?" then we might get some useful
>results.
>
>-- Olie
>
>
>

      



