From: Lee Corbin (email@example.com)
Date: Sun Jul 13 2008 - 00:56:01 MDT
> On Saturday 12 July 2008, Lee Corbin wrote:
>> > Yes, you illustrate this talking-past phenomena when you're more or
>> > less pairing DOI against "overnight change". One does not need
>> > overnight change for DOI. Who else is going to do it? Santa? :-)
>> It will be done by well-funded groups, by corporations, with, I
>> agree, some contributions from the merely enthusiastic.
> Is this what your magic eight balls says?
It's what I think. That is, it seems to me that the groups above are
the most likely sources of breakthroughs in anti-aging, age reversal,
advanced nanotech, and GAI.
> So, let's say it's not "my" solution for not dying, and let's just call
> it Aubrey's. I was only using it as a general reference to the fact
> that people are working on it. People are -- not magical unicorns.
> So if you really care about people not dying, go pursue that work,
> set up giant archives and impressive life support systems, deploy
> better ambulatory capacities, whatever it might take.
Like I said, I'm pretty busy with other stuff. (I know my priorities
perhaps aren't what they should be, but that's how it is.)
> But please don't confuse that with the issues of the design,
> construction and feasibility issues of intelligence. You respond
> to this same line below ..
Here is how our *current* miscommunication evidently got started:
> > > > I posit that in many cases the people simply are not capable of
> > > > self-rule or are absolutely too unprepared for it by their own
> > > > cultural history. We also can surely judge that this was the case
> > > > in many historical societies we're aware of.
> > >
> > > I'm wondering what makes you think that nations and governments
> > > somehow magically (vitalism) makes it so that these same unprepared
> > > people are able to come together and somehow make it so that they
> > > are prepared. It's simply not true.
> > Yes it is! Yes it is! :-)
> Oh, okay. How's that death thing working out for you? Kinda sucks, eh?
and we got going on the problem of "dying". The property rights
issue that started this apparently arose from me suggesting that
"in many cases the people simply are not capable of self-rule or
are absolutely too unprepared for it by their own cultural history"
and that caused you to take off on some magic/vitalism accusation.
My statement in this last paragraph inside the quotes still stands,
it seems to me.
>> >> There you go again. "Let's go engineer solutions." When, this
>> >> afternoon? Later this evening? Actually, my next two weekends
>> > Next few seconds. What's your number again? Nevermind, I have it.
>> > I'll call you.
>> That is not an answer! You're deep in trouble friend, someone christ
>> - king of the jews. How can someone in your state
>> be so cool about our fate?
> You asked when .. [and the reference is lost on me. I did my searching,
> still can't get it. Are you Jesus?]
Blasted Xian spammers! Let's not allow them to
derail our discussion. But they did supply a link:
http://lib.ru/SONGS/jesus.txt . So it evidently was
just a reference to an old rock opera song.
>> I'm saying that many people here consider it interesting or
>> fruitful to consider the logical possibilities inherent in a sudden
>> AI takeoff. Okay, so you don't. Then maybe you might
>> consider not replying to posts like that, imploring (the way
>> you do) that the subject be changed.
> Maybe I'm digging in a dry well.
I couldn't say. But it looks as though perhaps it was I who
started the thread "Property Rights" replying to Stathis.
(If someone else started the thread, I'd appreciate knowing
who and when.)
Martin then wrote "My guess is that many things become really
really cheap under the singularity, but never completely free.
Just some cheap stuff might not be worth billing [for] it."
Stathis then said: "Even if we all had Santa Claus machines we might still
covet our neighbour's unique objects, land and raw materials. But the
crucial change will come when we are able to rewrite our own source ..."
And you wrote:
> On Wednesday 09 July 2008, Lee Corbin wrote:
>> > Specifically we were talking about 'property rights' and I
>> > was wondering how it is that you want an ai to behave in
>> > some characterized manner with respect to them (hint:
>> > the characterization is actually incomplete but nobody
>> > admits this methinks).
>> What's wrong with me *wanting* or finding desirable an
>> AI that does respect property rights?
> Nothing is wrong with wanting that. I was talking about the actual task
> of figuring out what intelligence is and building that; property rights
> and such are secondary, they are not properties of being intelligent as
> far as we know.
Okay, okay. You've changed the subject line to "Shock level
confidence." What do you think is the key issue now?
This archive was generated by hypermail 2.1.5 : Sat May 18 2013 - 04:01:10 MDT