Re: An essay I just wrote on the Singularity.

From: Perry E. Metzger (perry@piermont.com)
Date: Sat Jan 03 2004 - 00:20:29 MST


Samantha Atkins <samantha@objectent.com> writes:
> On Fri, 02 Jan 2004 17:22:40 -0500
> "Perry E. Metzger" <perry@piermont.com> wrote:
>> > Dah. But the point I was attempting to explore is that a definition
>> > of Friendliness that covered the present and immediately foreseeable
>> > situation (friendliness toward humans) might be sufficient to speak
>> > of Friendly AI. A pan-sentient definition might also be possible
>> > and even natural but may not be required in the first attempt. So
>> > it is not clear to me that "absolute" or "universal" morality, or
>> > even universal definitions of friendliness are required in order to
>> > meaningfully proceed.
>>
>> I will grant that it is possible that someone will come up with a
>> working definition of "Friendly" that is good enough, and a way to
>> inculcate it into an AI they are building so deeply that it won't
>> slip. I similarly grant that it is possible some talented person will
>> come up with an algorithm that solves the traveling salesman problem in
>> polynomial time. I'm not holding my breath, though.
>
> As a Friendly AI may be necessary to our survival, it presumably has
> a higher priority than the traveling salesman problem. While
> holding one's breath is not advisable, putting energy into the
> problem is.

I'm not convinced. First, I'm unsure that there is any evidence that
our survival requires such a thing -- the existence of humans has not
eliminated the existence of ants. Second, I'm not sure other
strategies might not be much more fruitful, in that they might have a
shot at working. For example, with good enough intelligence
amplification, I could achieve a lot of things I'm interested in,
including, likely, personal survival. Third, it is far from clear to
me that I trust any of the people involved in this not to produce
something far worse than the disease they're trying to cure.

>> I put it that way because this is not a new argument. The argument
>> over the nature of "the good" goes back thousands of years. I could
>> easily hand anyone who liked 50 fine books produced over the last
>> 2500 years -- from The Republic through stuff written in the last year
>> or two -- exploring the question of how to make decisions about what
>> is and isn't "moral" or "good", and no one has made much progress to
>> the goal, though they've explored lots of interesting territory.
>>
>
> Yes indeed. A lot of the arguments fall apart and some seem
> promising. Perhaps they are all lacking. Perhaps we humans aren't
> even smart enough to ask the question cleanly enough or to fully
> support a "good-enough" answer even if we stumbled upon it. But
> this does not mean the question is fundamentally and forever
> unanswerable.

Many questions men ask are meaningless. For example, for much of our
history we've asked "what is the meaning of life" -- as though the
question has an objective answer.

The reasonable evidence from the hunt for an objective moral system is
that there isn't one, any more than there is phlogiston or the
ether. I think the evidence against objective morality is very very
strong. Better, then, to simply live as one can without it.

>> Absent a way to determine if, say, eating a cow is immoral, there will
>> be no way for The Friendly AI to determine if it should be protecting
>> cows from being eaten -- doubtless the PETA types would argue that it
>> is fundamental that they should not be, and the folks at Ruth's Chris
>> would argue otherwise, and perhaps they would both petition The
>> Friendly AI for resolution, only for none to be achieved.
>
> I would guess that the AI would point out that eating cows is not at
> all necessary post-singularity and would forbid the behavior toward a
> possibly upliftable sentient. Not sure we would even have cows.
> But I take your point.

Indeed. The cows are merely an example.

>> > I was not asking you to predict what you think would happen but to
>> > express what it is you would like to happen and believe worthwhile
>> > to work toward bringing into being.
>>
>> I would like to see strong nanotechnology and IA technologies, because
>> I could apply them to my own personal survival, but beyond that, I
>> don't know what the spectrum of things that could happen are, or how I
>> might choose among them meaningfully.
>
> So, do you have anything you care about beyond your own personal
> survival?

I'm a small 'o' objectivist -- beyond survival, I choose my own
personal pleasure and happiness, and I sincerely hope others do the
same.

> Any preferences for the type of world you live in

I have no idea what the choices available will be -- and no one else
really does, either. Hard to meaningfully choose under those
circumstances.

> or the kind of company that may or may not be around, for instance?

Well, I rather like my friends and loved ones, but I have no idea if
most of them would even want to become something beyond what they are
now. Many of them have expressed a strong disinterest in achieving any
sort of personal transformation from human acorn into posthuman oak --
indeed, a violent distaste even for the idea of extreme personal
longevity. I see no reason I'd want to impose it upon them. Perhaps
they will change their minds. Perhaps I'd want new
friends. Perhaps I'd make them among other posthuman intelligences, or
perhaps I'd make them more literally by building them, or perhaps I'd
edit out my desire for companionship. Likely the nature of my social
relationships, if any, will be utterly incomprehensible to me as I am
now. Ants speculate poorly, I suspect, upon the social relationships
of higher primates. I have no idea what things might be like -- and
neither does anyone else, really.

>> I don't pretend I have the foresight to be able to guide history into
>> a direction I would like -- I don't even pretend to be able to guide a
>> small company with any certainty and I have at least operated those
>> enough to have understanding of the problem and feel like I can do a
>> reasonable job at it. The variables involved with an entire society on
>> the scale of the one we have are beyond my comprehension. That's why
>> I'm a libertarian, not a central planning freak.
>
> You present a false dichotomy between inability and having to have
> near godlike foresight to make much difference;

Oh, one can indeed make a lot of difference -- but it is nearly
impossible to do so deliberately. Sir Tim Berners-Lee didn't set out
to change the world, and neither did Georg Cantor, and neither did
most of the folks who've made a big difference. I'm not aware of many
who really deliberately set out to change the world and succeeded --
mostly it happens through serendipity. The work I've been most known
for over the years has often been trivial stuff I've done in a couple
of hours while hung over one weekend. If I ever alter the future in a
big way, it will probably be in some similar manner.

Anyway, in a world like that, one tries above all to do as little harm
as one can, and to have a great deal of fun. If along the way
you happen to invent something nifty, well, great. One also tries to
take a reasonably conservative position with respect to personal
survival -- but not so conservative as to ruin one's fun.

> between being a libertarian and being a "central planning freak". I
> think it is in each person's range of responsibility to consider to
> the extent of their abilities what kind of world they wish to
> inhabit and to do what they can to achieve it. It doesn't take any
> pretense or super-ability to do what one can guided by one's best
> knowledge and values to the extent of one's abilities.
>
> If we don't work at least in part at the level of envisioning what
> we want then how in the hell do we expect to have any chance of
> getting there?

We don't have any chance of getting to any future we can envision
right now. The forces at work are beyond any single individual's
control -- and I'm glad for that, because most individuals would
impose terribly myopic desires, and indeed things that would end up
more dystopian than utopian.

-- 
Perry E. Metzger		perry@piermont.com
