Re: AI Goals [WAS Re: The Singularity vs. the Wall]

From: micah glasser (micahglasser@gmail.com)
Date: Thu Apr 27 2006 - 23:12:24 MDT


In response to Mike I would add that there is also no reason why future IA
humans, networked with each other and with AI, would not likewise
constitute a single entity. I think this is Kurzweil's reasoning for speaking
of a Human-Machine civilization, and I agree with him on this point.
This line of reasoning, in my mind, follows from the notion of a fully
automated economy coordinated by an artificial intelligence that also
maintains the equilibrium of the biosphere. I think that the infrastructure
for such an economy/SE singularity machine (as Woody says) is a natural
part of the course of technological-memetic evolution. If this idea is
correct, then the memetic-technological evolution of the human species and
the evolution of AI are one and the same: the evolution of the
human-machine civilization.
I realize that such thinking is unpopular with SIAI, which seems to stress
human action (which I am all for); however, I have come to this sort of
thinking, which stresses cosmic development and biological evolution, as a
logical conclusion of the scientific world-view. What I mean to say is that
more emphasis should be placed on the fact that technology (including all
AI) is a natural part of the development of the cosmos and the evolution of
life on Earth, and that as such, we should attempt to understand AI, and
what kinds of 'goals' it will have or should have, in this light. If we
cannot do this, our envisioning of AI goals will be myopic.

On 4/26/06, Mike Dougherty <msd001@gmail.com> wrote:
>
> I suggest that there will only be one AGI in the same way an ET would
> refer to the first human they encounter as being representative of
> Humanity.
>
> 1. Any AGI that follows the first will be viewed as so similar to the
> first as to be indistinguishable.
> 2. Any self-directed AGI would likely be able to recognize another as a
> resource in the same way that other humans are limited resources or that the
> Internet is a resource. Assuming the interconnect between two AGIs is
> high-bandwidth and low latency, there is no reason why our communication
> with either one of them would not immediately aggregate the knowledge base
> of both of them. This aggregation would either happen as a result of our
> own double-checking, or they would "compare notes" with each other in an
> effort to evaluate answer fitness.
>
> If the AGI is based on a distributed architecture such that any sub-system
> is an "expert" at a limited range of knowledge, with the collective whole
> being "the AGI" - then conversing with a single sub-system is an unfair
> measure of the whole in the way that analyzing a single neuron is an unfair
> measure of the function of our entire brain.
>
> Sorry that until this sentence I did not mention "Goal System" or an
> acronymic buzzword :)
>
>
>
> On 4/25/06, Richard Loosemore <rpwl@lightlink.com> wrote:
> >
> > Here is another subtle issue: is there going to be one AGI, or are
> > there going to be thousands/millions/billions of them? The assumption
> > always seems to be "lots of them," but is this realistic? It might well
> > be only one AGI, with large numbers of drones that carry out dumb donkey
> > work for the central AGI. Now in that case, you suddenly get a
> > situation in which there are no collective effects of conflicting
> > motivations among the members of the AGI species. At the very least,
> > all the questions about goals and species dominance get changed by this
> > one-AGI scenario, and yet people make the default assumption that this
> > is not going to happen: I think it very likely indeed.
> >
>
>

--
I swear upon the altar of God, eternal hostility to every form of tyranny
over the mind of man. - Thomas Jefferson

