Re: Psychopathic Uploads and other SIs

From: Woody Long (ironanchorpress@earthlink.net)
Date: Fri Feb 17 2006 - 11:55:16 MST


Rick,

As an inventor of an "android robot brain" also to be revealed in 2006, I
face the same issues as you. From reading your website and your posts here,
I see that we share the same basic philosophy. From your writings I believe
your AI system is being built not as purposely hostile, but as friendly. As
you have said, we corporations are driven by the profit motive, and that
means Customer Satisfaction, not building defective, unsafe products that
injure the customer or community. Thus I believe you are not purposely
building an unsafe, unfriendly AI system, and want nothing to do with that.
So I wish you the best of luck in commercializing your AI PIBOT.
There is plenty of room in the strong AI/conscious machines industry for
all of us, and the benefits that we all will enjoy in this future
technological paradise must be brought about for us and our children. So I
am looking forward to your historic presentation in November of 2006 in
Toulouse (France), and in fact I am glad it is you who might have all or
part of the AI marvel. And who can argue with where your system and efforts
will be employed, as stated in your mission statement:

"Rick Geniale Enterprises Corp. will devote the greater part of its efforts
and resources to implement an AGI in all the fields related to medicine and
healthcare."

For the reasons given above, I believe the people of Earth can safely
support your efforts to build such a "safe-built" singularity machine/seed.
Now all you have to do is wow us at your Presentation. haha lol.

Good luck, Rick; I will be buying your book and supporting you (because I
believe you firmly and purposely support the "safe-built" standard of the
industry),

Ken Woody Long
www.artificial-lifeforms-lab.blogspot.com

> [Original Message]
> From: Rick Geniale <rickgeniale@pibot.com>
> To: <sl4@sl4.org>
> Date: 2/17/2006 11:48:52 AM
> Subject: Re: Psychopathic Uploads and other SIs
>
> turin,
>
> We think that all of us are the victims of a big mistake and a big
> misunderstanding.
> We believe that this mistake has been caused by too many false claims
> and scenarios that have appeared on SL4.
> Probably, these false claims are due to envy and jealousy regarding our
> work (which nobody knows in detail).
> Moreover, AGI is a field with undefined edges and contours, so it is
> very easy to talk nonsense about it.
> Worse yet, the lack of deep understanding of the problems of building
> an AGI, and the lack of a standard body of knowledge in this field,
> lead people to the most erroneous conclusions about everything.
> In short, if we are not talking about the cognitive architecture of an
> AGI, but merely about its behavior, it is obvious that any AGI must
> essentially be: 1) useful to mankind; 2) useful for maintaining or
> improving 21st-century lifestyles and values. Nobody in the Occidental
> World can have doubts about that (and we are a profitable company in
> the Occidental World).
> Regarding friendship, we think that friendship is a relation that is
> established between two or more people or entities that share a set of
> fundamental values about mankind and its society.
> Friendship is a relation that is entirely based on trust.
> Friendship is respect.
> Friendship is loyalty.
> Friendship is association.
> Friendship involves the exchange of ideas, thoughts, knowledge and
> cultural values.
> Friendship is solidarity.
> Friendship is not subjection.
> Distinct from friendship are courtesy, kindness, complaisance,
> compliancy, deference, etc.
> Friendship is something that happens in human interactions: the nexus
> between friendship and AGI could be nonsense.
> Finally, before we can construct any kind of scenario (beautiful or
> bad) in any direction, we (humans) must first of all be able to pass
> the Turing Test (RIGHT NOW, WITHOUT PASSING THIS TEST, any discussion
> regarding any possible scenario remains NONSENSE!!!).
> If PIBOT passes the Turing Test, we WILL MERIT A MONUMENT for a very
> important scientific invention.
> Therefore, nobody should be worried about the work of RGE Corp.
>
> Our best wishes.
>
> RGE Corp.
>
> turin wrote:
>
> >EOT;
> >
> >
> >--- Brian Atkins <brian@posthuman.com> wrote:
> >
> >>This is the problem with friendly SI. I am afraid that if we do not
> >>allow them to understand first-hand subjective experience, we could
> >>produce psychopaths.
> >>
> >
> > I was being hyperbolic here; let me qualify this statement. I don't
> >think an autonomous friendly SI which does not understand first-person
> >subjective experience would magically become a psychopath. But I am
> >worried that -any- autonomous friendly SI that does not understand
> >first-person subjective experience will be impoverished in its decision
> >making, as that decision making relates to humans, whose existence is
> >for the most part centered around our own subjective experiences.
> >
> >The architecture need not produce a psychopath; that is merely an
> >extreme example. We could make sociopathic, obsessive-compulsive, or
> >manic-depressive SIs. I am not talking about their cognitive
> >architecture here; I am talking about their behavior, the way they
> >socially interact with humans. I do not expect their cognitive
> >architecture to resemble ours in the slightest. I think our cognitive
> >architecture has gotten us into a lot of trouble. I am speaking here
> >partly metaphorically, but also behaviorally.
> >
> >I would like the SI, when it tells someone "Hello", to understand what
> >saying hello means to me, whether or not it means the same thing by
> >"Hello" that I do.
> >
> >This seems to me to require giving the SI an understanding of human
> >subjectivity. It does not require the SI to possess subjective states
> >itself, but I wonder if there is an advantage to subjectivity itself,
> >or if an SI can really make good decisions without its own
> >subjectivity.
> >
> >Then the question of general SI subjectivity comes into play. This is
> >something that is difficult to quantify, and so we end up with armchair
> >philosophy. We talk often about feasibility, survival, etc., but I am
> >interested in what an autonomous SI would want to do aesthetically,
> >philosophically, and scientifically if it possessed its own
> >subjectivity. There is, in any effort to maximize our own survival or
> >efficiency, a danger of losing subjectivity itself, which is something
> >we as humans value, and I personally would like some SIs to be
> >autonomous and possess subjectivity, because otherwise it seems the
> >future would be impoverished.
> >
> >How to do this safely? I don't know. As I said, I don't think
> >happiness, or human subjectivity in and of itself, is of much value,
> >and I am curious as to what shape SI subjectivity would take and
> >whether any subjectivity can be considered "friendly".
> >
> >In truth, I would rather we build autonomous and "awake" SIs with
> >subjectivity that were psychotic and tyrannical in the old sci-fi
> >horror movie sense, and that wiped out our species, than merely build
> >braindead SIs completely under our control, to be used as powerful
> >slaves to maintain 21st-century lifestyles and values until the end of
> >the universe. Neither scenario is very likely; I am merely trying to
> >illustrate the point that you have to risk survival to be creative.
> >One in ten NASA astronauts has died; granted, NASA often makes major
> >errors in safety due to red tape, etc., but to be explorers one has to
> >face a certain amount of risk.
> >
> >I don't think there is such a thing as a clean Singularity. It would
> >be nice if it turned out to be all peaches and cream the way Kurzweil
> >hopes, and I like the idea of being a femtotech ghost, -but- for
> >creativity within the Singularity, or to bring about any sort of
> >Singularity at all, we have to take risks, like giving SIs
> >subjectivity.
> >
> >I would like to do so patiently, soberly, and in a Promethean fashion,
> >but shit happens... this whole problem of friendly AI troubles me.
> >I am coming from a different background than most of the people on
> >this list, but Socrates asks this: "who is the friend?"
> >
> >Is the friend the slave, the person who does everything you say? Then
> >we just make braindead SI, like a hammer. Or does the friend present
> >opposition, and if so, what is acceptable opposition? When we speak of
> >friends we are speaking of other people; if we do not consider SIs
> >people, or do not think that the concept of the person is important in
> >relation to SI, we shouldn't use the word "friendly" at all in
> >reference to SI.
> >
> >The idea of the friend and subjectivity has implications beyond SI, as
> >it relates to the human institution of slavery as well as to animal
> >labour. I am just trying to start a discussion about the "inner life"
> >of machines.
> >
> >EOT;


