Value of a machine that acts like a human brain (was Re: [sl4] The Jaguar Supercomputer)

From: Tim Freeman (tim@fungible.com)
Date: Thu Nov 26 2009 - 09:03:38 MST


From: Matt Paul <lizardblue@gmail.com>
>...what exactly the perceived value of the AI you
>guys discuss is beyond normal scientific desire to understand. I don't
>see the practical and prudent value of a machine that acts like a
>human brain. Fascinating and cool certainly, but I don't see the
>actual benefits to mankind. I do see many potential problems for
>mankind though...

Well, a machine that acts like a human brain might be a good thing to
live in, especially if your own brain stopped working. Copy the state
information and then carry on with your life after the failure of the
organic version. Some people have philosophical issues with that, but
they are unlikely to be with us for all that long, so I'll just wait
them out. Reality has a way of resolving philosophical disputes. I
have had friends with philosophical objections to cryonics die and
get buried, so I have seen this principle at work, even though I don't
like the outcome. I should have argued more with the guy. Philosophy
is an important game, but it drains enthusiasm to play it with people
who don't take it seriously.

On the other hand, maybe you're talking about a machine that acts like
a human brain but is radically different somehow (more intelligent or
maybe just faster). I agree that that poses many potential problems
for mankind. Groups of humans tend to go nuts and attempt genocide
every generation or two, and it's especially easy to choose to attack
a group of humans who are different from you and your group. The
ordinary humans are going to be different from the better-or-faster
uploads, so we will probably lose if things go that way.

Another option is a machine that is intelligent, but doesn't work like
a human brain. Business competition will result in something like
that being built. If we get it right, one that likes humans will win,
and if we get it wrong, one that likes specific corporations will win,
or maybe one that is insane will win. Corporations generally have a
fiduciary responsibility to deliver value to their shareholders, which
may be other corporations. If corporations stay in control of society
and eventually stop needing human employees, we'll all be marginalized
and ultimately recycled. It seems to me we'd better make
sure that a nonhuman AI that likes humans wins.

Some people imagine a world where there's a stable community of AIs
with opposing goals, much like we presently have a relatively stable
world with a bunch of humans with opposing goals. This is different
from the winner-take-all scenarios described in the previous
paragraph. Unfortunately, I think it is a winner-take-all game. If
two AIs have opposing goals and neither can overpower the other, it
seems technically feasible and mutually beneficial for them to replace
themselves with one AI that has a compromise set of goals. (For the
purposes of this argument, I count two separate computational nodes
that have the same utility function as one AI with multiple pieces,
not as two AIs.) Humans can't do that, so extrapolating from human
behavior gets the wrong result here, IMO.
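
To make the compromise concrete, here is a toy numerical sketch in
Python. It is my own construction, not part of the original argument:
the outcomes, utility numbers, win probability, and conflict cost are
all invented for illustration. It shows two evenly matched AIs each
doing better by merging into one AI that maximizes an equally weighted
sum of their utility functions than by fighting.

    # Toy model: two AIs with opposing goals, equal power, and a cost
    # to open conflict. All numbers are assumptions for illustration.
    outcomes = ["x", "y", "z"]
    u1 = {"x": 10, "y": 6, "z": 0}   # AI 1's utility function
    u2 = {"x": 0,  "y": 6, "z": 10}  # AI 2's utility function

    # Conflict: each AI wins with probability 0.5 and imposes its
    # favorite outcome; fighting costs both sides 2 units of utility.
    p_win, cost = 0.5, 2
    fav1 = max(outcomes, key=lambda o: u1[o])  # "x"
    fav2 = max(outcomes, key=lambda o: u2[o])  # "z"
    conflict1 = p_win * u1[fav1] + (1 - p_win) * u1[fav2] - cost
    conflict2 = p_win * u2[fav1] + (1 - p_win) * u2[fav2] - cost

    # Merge: replace both with one AI maximizing an equally weighted
    # compromise utility (unequal power would shift the weights).
    merged = max(outcomes, key=lambda o: 0.5 * u1[o] + 0.5 * u2[o])

    print(conflict1, conflict2)    # 3.0 3.0 (expected value of fighting)
    print(u1[merged], u2[merged])  # 6 6 -- both beat expected conflict

Under these made-up numbers the merged AI picks the compromise outcome
"y", and both original utility functions score it above the expected
value of fighting, which is the mutual benefit claimed above.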

-- 
Tim Freeman               http://www.fungible.com           tim@fungible.com

