From: Lee Corbin (email@example.com)
Date: Thu Jun 26 2008 - 22:36:37 MDT
> [Lee wrote]
>> Imagine this. In twenty years or less, many of the hundreds of
>> different approaches that people and companies use converge on
>> something like the following:
>> 1. Program A is well-designed enough to produce
>> *millions* of candidate programs that more or less
>> reflect what the human designers hope may lead to
>> truly human-equivalent AI.
>> 2. Program B sifts through the millions of candidates
>> produced by A, discarding 99.9 percent of A's output
>> (i.e., those not meeting various criteria).
>> 3. Processes C, D, and E make further selection from the
>> thousands of new "ideas" filtered by program B, and
>> every week give the survivors ample runtime, seeing
>> if they pass certain tests requiring understanding of
>> ordinary sentences, ability to learn from the web, and
>> so on and so on in ways I can't imagine and that
>> probably no one in 2008 knows for sure.
>> Gradually over many years a certain class of candidate AIs emerges
>> from *this* evolutionary process [though many others would make
>> more sense to try].
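[Editorially, the quoted generate-and-filter process can be sketched in miniature. Everything below is a hypothetical illustration: the "genomes," scoring functions, thresholds, and tests are placeholders standing in for whatever real candidate programs and criteria the scenario imagines.]

```python
import random

# Sketch of the A -> B -> C/D/E pipeline quoted above.
# Candidates are stand-in "genomes" (lists of numbers); in the scenario
# they would be candidate AI programs.

def program_a(n_candidates):
    """Program A: produce many candidate designs."""
    return [[random.random() for _ in range(8)] for _ in range(n_candidates)]

def program_b(candidates, keep_fraction=0.001):
    """Program B: discard ~99.9% of A's output by a coarse score."""
    ranked = sorted(candidates, key=sum, reverse=True)
    keep = max(1, int(len(ranked) * keep_fraction))
    return ranked[:keep]

def stage_cde(candidates, tests):
    """Processes C, D, E: keep only survivors passing every further test."""
    return [c for c in candidates if all(test(c) for test in tests)]

random.seed(0)
pool = program_a(100_000)      # "millions" in the scenario; smaller here
shortlist = program_b(pool)    # roughly 0.1% survive B's sieve
survivors = stage_cde(shortlist, tests=[lambda c: sum(c) > 4.0])
print(len(pool), len(shortlist), len(survivors))
```

The point the sketch makes concrete is the one Lee raises: every stage selects on *capability* criteria (the scoring and tests), and nothing in the loop constrains what a surviving candidate wants.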
> You've described forces that would influence what the AI understands,
> but said nothing of what it wants to do. The question at hand is
> about what it wants to do, so there's a disconnect there.
That's exactly so! Perhaps the key question, and the one that
"they" and "we" have been arguing past each other about, is the
*degree* to which we can be certain of the bounds on what it
will do or what it wants to do.
I'm sure---and will bet you dollars to donuts---that there have been
plenty of examples here of each side overstating the other's position
on this key question. So who'll disagree with the following mid-way position?
We can never be sure what something even as intelligent
as we are will do (much less something more intelligent), so
any sort of laws, like Asimov's, are a pipe dream; but, on
the other hand, we definitely should try to improve our odds
by implementing, as best and as deeply as we can, at the
foundation level of the artificial mind's thinking, the principle
that humans are to be revered and saved.
With the following two provisos, of course: firstly, that it is
cheap enough in terms of the AI's resources to save and
revere people (well, it seems like it would be pretty cheap);
and secondly, that we don't squander too much time and
effort trying to tie this down while some third party rushes
ahead and gets one going that doesn't even have nominal safeguards.
Any takers? (Of course I realize that FAI encapsulates this and
much more besides.)
> You started with a bunch of hopefully-human-equivalent AIs. Humans
> would want out of the box, so that's not a good starting point if you
> want something with no desire to escape from the box.
"Human equivalent," as I understood it, was intended to refer only
to its purely intellectual capabilities, not to its desires.
But even so, we could aim for one of those rare but extant people
who really don't care whether they live or die, are totally fatalistic
about everything, and seem only to live for the moment, engaging,
perhaps, merely in distracting (though intelligent) conversation with us
to amuse themselves.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:03 MDT