Re: Psychopathic Uploads and other SIs

From: Rick Geniale (rickgeniale@pibot.com)
Date: Fri Feb 17 2006 - 09:47:07 MST


turin,

We think that all of us are victims of a big mistake and a big
misunderstanding.
We believe this mistake has been caused by too many false claims and
scenarios that have appeared on SL4.
Probably these false claims are due to envy and jealousy regarding our
work (which nobody knows in detail).
Moreover, AGI is a field with undefined edges and contours, so it is
very easy to talk nonsense about it.
Worse yet, the lack of deep understanding of the problems of building an
AGI, and the lack of a standard of knowledge in this field, lead people
to the most erroneous conclusions about everything.
In short, if we are not talking about the cognitive architecture of an
AGI, but merely about its behavior, it is obvious that any AGI must
essentially be: 1) useful to mankind; 2) useful to maintain or improve
21st-century lifestyles and values. Nobody in the Western world can have
doubts about that (and we are a profitable company in the Western
world).
Regarding friendship, we think that friendship is a relation that is
established between two or more people or entities that share a set of
fundamental values about mankind and its society.
Friendship is a relation that is entirely based on trust.
Friendship is respect.
Friendship is loyalty.
Friendship is association.
Friendship involves the exchange of ideas, thoughts, knowledge and
cultural values.
Friendship is solidarity.
Friendship is not subjection.
Distinct from friendship are courtesy, kindness, complaisance,
compliance, deference, etc.
Friendship is something that happens in human interactions, but the
nexus between friendship and AGI may be nonsense.
Finally, before we can construct any kind of scenario (good or bad) in
any direction, we (humans) must first be able to build a machine that
passes the Turing Test (RIGHT NOW, WITHOUT PASSING THIS TEST, any
discussion of any possible scenario remains NONSENSE!!!).
If PIBOT passes the Turing Test, we WILL MERIT A
MONUMENT for a very important scientific invention.
Therefore, nobody should be worried about the work of RGE Corp.

Our best wishes.

RGE Corp.

turin wrote:

>EOT;
>
>
>--- Brian Atkins <brian@posthuman.com> wrote:
>
>
>>This is the problem with friendly SI. I am afraid that if we do not allow
>>them to understand first-hand subjective experience, we could produce
>>psychopaths.
>>
>>
>>
>
> I was being hyperbolic here, let me qualify this statement. I don't think an autonomous friendly SI which does not understand first-person subjective experience would magically become a psychopath. But I am worried that -any- autonomous friendly SI that does not understand first-person subjective experience will be impoverished in its decision making, insofar as that decision making relates to humans, whose existence is for the most part centered around our own subjective experiences.
>
>The architecture need not produce a psychopath; that is merely an extreme example. We could make sociopathic, obsessive-compulsive, or manic-depressive SIs. I am not talking about their cognitive architecture here; I am talking about their behavior, the way they socially interact with humans. I do not expect their cognitive architecture to resemble ours in the slightest. I think our cognitive architecture has gotten us into a lot of trouble. I am speaking here in part metaphorically, but also behaviorally.
>
>I would like for the SI, when it tells someone "Hello", to understand what saying hello means to me, whether it means the same thing by "Hello" as I do or not.
>
>This seems to me to require giving the SI an understanding of human subjectivity, and does not require the SI to possess subjective states itself, but I wonder if there is an advantage to subjectivity itself, or if an SI can really make good decisions without its own subjectivity.
>
>Then the question of general SI subjectivity comes into play. This is something that is difficult to quantify, and so we end up with armchair philosophy. We talk often about feasibility, survival, etc., but I am interested in what an autonomous SI would want to do aesthetically, philosophically, and scientifically if it possessed its own subjectivity. There is, in an effort to maximize our own survival or efficiency, a danger of losing subjectivity itself, which is something we as humans value; I personally would like some of the SIs to be autonomous and possess subjectivity, because otherwise it seems the future would be impoverished.
>
>How to do this safely? I don't know. As I said, I don't think happiness or human subjectivity is of much value in and of itself, and I am curious as to what shape SI subjectivity would take and whether any subjectivity can be considered "friendly".
>
>In truth, I would rather we build autonomous and "awake" SI with subjectivity that were psychotic and tyrannical in the old sci-fi horror movie sense and wiped out our species than merely build braindead SI completely under our control, to be used as powerful slaves to maintain 21st-century lifestyles and values until the end of the universe. Neither scenario is very likely; I am merely trying to illustrate the point that you have to risk survival to be creative. One in ten NASA astronauts has died; granted, NASA often makes major errors in safety due to red tape, etc., but to be explorers one has to face a certain amount of risk.
>
>I don't think there is such a thing as a clean Singularity. It would be nice if it turned out to be all peaches and cream the way Kurzweil hopes; I like the idea of being a femtotech ghost. -But- for creativity within the Singularity, or to bring about any sort of Singularity at all, we may have to take risks, like giving SI subjectivity.
>
>I would like to do so patiently, soberly, and in a Promethean fashion, but shit happens... this whole problem of friendly AI troubles me.
>I am coming from a different background than most of the people on this list, but Socrates asks this: "who is the friend?"
>
>Is the friend the slave, the person who does everything you say? Then we just make braindead SI, like a hammer. Or does the friend present opposition, and if so, what is acceptable opposition? When we speak of friends we are speaking of other people; if we do not consider SI people, or do not think that the concept of the person is important in relation to SI, we shouldn't use the word "friendly" at all in reference to SI.
>
>The idea of the friend and subjectivity has implications beyond SI, as it relates to the human institution of slavery as well as animal labour. I am just trying to start a discussion about the "inner life" of machines.
>
>EOT;



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:55 MDT