From: James Higgins (firstname.lastname@example.org)
Date: Sat Jul 27 2002 - 20:18:20 MDT
outlawpoet - wrote:
>>Please explain this as I don't understand how she released the AI
>>without knowing she did so.
> she did know she was letting the AI out. However, once she had done so, she realized that she'd been manipulated into contravening her earlier intention to keep the AI in the box, regardless of what it said.
Please explain, exactly, how this occurred. Was there a magic word or
phrase you had in mind that, if typed, meant you were out? Were the people
aware of this and what it was? Etc. Please go into detail.
Since confidentiality was not part of your tests, could you post the log
just prior to and after the release of each AI?
> Besides, I don't see the AI box as simply 'convince me to let you out, you can't do it" I see it as, "can a person accurately identify Friendly and unFriendly intelligences via this medium" And I design the tests accordingly.
That is a good question, and one very much worth exploring. However, I
suspect that we can't explore this effectively with our present
knowledge of AI. An AI will be a very different intelligence than a
Human. It will likely think differently, have different sensory input
and certainly have a very different background. It will also have much
less experience than the average human (unless it has been running for
20+ subjective human years). The AI may not understand feelings,
emotion, humor or other concepts. It is likely to have an extremely
difficult time deceiving humans until it is very intelligent with
considerable experience interacting with humans. I don't believe we can
begin to address the question you pose until we have infra-human AI
available. I could be wrong, but that's my best guess on the matter.
> Of course they would. But this is a test of that particular medium, and as the intelligence is uncertain, and shifting, I was simply trying to collect more data points than I had before. As it is nearly impossible to duplicate the exact social interaction, I simply was trying to get a handle on what kinds of patterns arise when 'something' is negotiating for its freedom. And I believe that certain kinds of patterns arose which are significant and can be generalized regardless of the specialized knowledge that each party may have of each other.
I don't believe that is a safe assumption. Let's take a much simpler
case. Assume we have a prisoner who has been in jail for 10 years and
is due for a parole hearing. Do you think the same pattern of dialog
would occur if a random person were to interview them instead of a
person from the parole board who is experienced at such things?
> As I've mentioned before, this is hardly a technical issue. All they need is a sufficient understanding of the problems involved, unless you believe that the AI researcher is likely to be able to predict personality traits within an AI.
I believe you're asking a very different question than Eliezer is. For
your question technical background may not matter as much, but as
explained above I don't think we can really examine your question
without a better understanding of the nature of an AI.
> That may be so. But representative of what, exactly? Of SL4 subscribers? Why should that be more important?
Representative of people who would likely be conducting a real AI Box
Test, of course. And, specifically, people who would have the power to
release the AI as a result of the Box Test. This will be a very, very,
very tiny number of humans (I would be surprised if 100 ever have this
power - at least pre-SI). Ben Goertzel is a likely candidate. Eliezer
Yudkowsky might also be in this position some day.
> You go on to explain domain competency. This is important. However, within the context of the interview, it is more likely that technical knowledge of AI and related technologies will take a back seat to debating ability and investigative intelligence. Along with a basic dose of stubbornness, the problems inherent in bargaining for your freedom have more to do with rational discourse and insight into interaction than they do with AI, nanotech, and other fancy words.
This is only true when negotiating with humans for their release. An AI
is a completely different beast and thus a completely different case. If
you let a human out of prison you, at most, end up with a few thousand
dead people. If you let an AI out you could be destroying the entire
human race. This is a *very* different issue. I very much doubt that
the vast majority of humans fully comprehend this difference and its
potential impact. For example, anything less than a serious understanding
of AI, NanoTech, etc. could lead the person to believe that the AI could
possibly be stopped if it started down the wrong path. There is a
subtle shift in reasoning about this issue that comes only after
extensive thought on the subject. The people you chose were, in my
opinion, poorly picked for this reason.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:40 MDT