Re: What are "AGI-first'ers" expecting AGI will teach us about FAI?

From: Vladimir Nesov (robotact@gmail.com)
Date: Mon Apr 28 2008 - 10:56:03 MDT


Hello Samantha, thanks for the feedback.

On Sat, Apr 26, 2008 at 12:12 PM, Samantha Atkins <sjatkins@gmail.com> wrote:
> Vladimir Nesov wrote:
>
> > I think that the right kind of investigation of Friendliness should in
> > fact help in AGI, not the other way around. It should be able to
> > formulate the problem that actually needs solving, in a form that can
> > be addressed. The main question of Friendliness theory is how to build
> > a system that significantly helps us (which results in great changes
> > to us and the world) while at the same time aiming to preserve and
> > develop the things that we care about.
> >
> >
> Since we are quite aware of how limited our intelligence is and how tangled
> and suspect the roots of our values are, I am not able to be sure that
> "things that we care about" are the best a superior mind can come up with or
> should unduly limit it. I am not at all sure that even a super efficient
> direct extrapolation from the kinds of beings we are leads to the very best
> that can be for us much less to the best state for the local universe.
>

I apparently used "things we care about" in a deeper sense than "the
results of a poll". You say "direct extrapolation" -- what do you mean
by that? Whatever path you *choose*, it's some kind of extrapolation,
and what I'm addressing in this message is a general idea of how that
extrapolation could be expressed. How do you decide which path to
take? If you ignore humans as the core of such an extrapolation, what
do you base the extrapolation on? Not that human-based Friendly
modification of the kind I talk about must stick to particular apish
properties; it might well develop into something much more "pure", who
knows... I think it most certainly will.

Friendly modification is supposed to act as a kind of extension of
*your* intelligence, even if it's implemented separately. It's not
supposed to have a will of its own, but to guess your intentions and
help to further them without contaminating them with its own biases,
developing technologies and providing advice along the way. And it
should do this at the fundamental levels of organization, not in
"genie mode". It's a direct analogy to what your own intelligence does
for you. Is your mind always Friendly to you? How do you develop a
system that is at least as Friendly to you as your own mind? Or even
more Friendly? What is the measure of how Friendly a mind is to
itself? What kind of system is Friendly in this sense to some blob of
physical matter?

> > This role looks very much like what intelligence should do in general.
> > Currently, intelligence enables simple drives built into our biology
> > to have their way in situations that they can never comprehend and
> > which were not around at the time they were programmed in by
> > evolution. Intelligence empowers these drives, allows them to deal
> > with many novel situations and solve problems which they can't on
> > their own, while carrying forward their original intention.
> >
> >
> Intelligence to some degree goes beyond those drives, sees the limits of
> their utility and what may be better. If we are to 'become as gods' then we
> must at some point somehow go beyond our evolutionary psychology. A
> psychological ape will not enjoy being an upload except in a carefully
> crafted virtual monkey house. A psychological ape will not even enjoy an
> indefinitely long life of apish pleasures in countless variations. At some
> point we are more and more not as our EP says.
>

Yes, we move further away from our initial human nature, but in which
direction? Randomly jumping to become a being of eternal, pure
suffering, for example, looks like a bad choice. There are many bad
choices out there, and a precise understanding of our current nature
should be a better guide on this road than random decrees issued by a
committee of moral philosophers.

> > This process is not perfect, so in the modern environment some purposes
> > don't play out. People eat the wrong foods and become ill, or decide not
> > to have many children. Propagation of DNA is no longer a very
> > significant goal for humans. This is an example of subtly Unfriendly
> > AI, the kind that Friendliness-blind AGI development can end up
> > supplying: it works great at first and *seems* to follow its intended
> > goals very reliably, but in the end it has all the control and starts
> > to ignore the initial purpose.
> >
> >
> Are you saying that good AGI must keep us being happy little breeders?
> Purpose evolves or it is a dead endless circle.
>

We are not as single-minded as idealized DNA replicators. Humans have
enough moral anarchy to explore all kinds of possibilities, but as we
are seemingly constrained by physical limitations, we'd have to
prioritize based on something.

> > Grasping the principles by which a modification to a system results in
> > different dynamics that can be said to preserve the intention of the
> > initial dynamics, while obviously altering the way it operates, can, I
> > think, be a key to general intelligence. If this intention-preserving
> > modification process is expressed at a low level, it doesn't need to
> > have higher-level anthropic concepts engraved on its circuits; it
> > doesn't even need to know about humans. It can be a *simple* statistical
> > creature. All it needs is to extrapolate the development of our corner
> > of the universe, where humans are the main statistical anomaly. It
> > will automatically figure out what it means to be Friendly, if
> > such is its nature.
> >
> >
> I don't for a moment believe that the wise guiding of the development and
> evolution of the human species will or can be achieved by some automated
> statistical process. To me that is a much more dangerously un-sane notion
> than simply developing actual AGI as quickly as possible because we need
> the intelligence NOW.
>

Any AGI is some kind of "automated statistical process". I'm likely
wrong about the part where it's unnecessary to even tell this AGI
about humans, but the main point of my argument is to shift the focus
from AGI that needs to be told what to do to AGI that by its nature
does what we want, without explicit Friendliness content or careless
demands. It doesn't base its Friendliness on an idea taught to it by
human programmers in the initial stages of development and later
refined, but on its own study of the physical makeup of humans, or of
civilization as a whole (performed directly at levels of organisation
much higher than quarks, of course).

I think this line of thinking can probably help in developing actual
AGI of any kind: a smarter AGI can be obtained by stacking a Friendly
modification device on top of a stupider one, so figuring out Friendly
modification should provide a specification for a scalable AGI system
(and vice versa). I'm currently further down the road on the AGI side,
but adding another view from the side of Friendliness enriches the
perspective and serves as another sanity check.

-- 
Vladimir Nesov
robotact@gmail.com

