Re: Re: Passive AI was [Join: Pete & Passive AI]

From: David Picon Alvarez (eleuteri@myrealbox.com)
Date: Fri Dec 09 2005 - 01:48:58 MST


From: "P K" <kpete1@hotmail.com>
> That would never happen. For the AI to give an order it would have to
> have a goal system. Passive AI does NOT have a goal system. Let me take
> another shot at explaining passive AI.

Intelligence *requires* goals. Even subhuman theorem provers need goals.
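
To make that concrete, here is a minimal sketch of my own, nobody's actual
system: even a trivial forward-chaining prover is organized around an
explicit goal, the formula it is trying to derive, and it stops the moment
that goal is reached.

    # Toy forward-chaining prover (illustration only). The goal is an
    # explicit part of the machinery: search runs until the goal is derived.
    def prove(facts, rules, goal):
        """facts: set of atoms; rules: list of (premises, conclusion)."""
        known = set(facts)
        changed = True
        while changed and goal not in known:
            changed = False
            for premises, conclusion in rules:
                if conclusion not in known and all(p in known for p in premises):
                    known.add(conclusion)
                    changed = True
        return goal in known

    print(prove({"rain"}, [(["rain"], "clouds"), (["clouds"], "grey sky")],
                "grey sky"))   # True

Take away the goal argument and nothing is left to tell the loop when, or
whether, to stop.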

> Let's say Mr. A wants ice cream. Some part of his brain "says": "I want
> ice cream." Some other part of his brain has the definition of ice cream.
> Some other part can infer things. I.e., it can infer that if he remains
> seated his odds of getting ice cream are lower than if he goes to his
> fridge. Various other parts do various things. The important thing is
> that only the "wanting" part can initiate action.

This is a hypothesis, and I'd say not a very plausible one. Say our
inference engine can't keep all its data in memory: it might initiate the
action of taking paper and pen and scrawling some calculations. Say it is
missing a formula: it might start the action of looking for the book where
Bayes' theorem is hidden. Any sufficiently complex inference process must
be represented as having goals, making choices and initiating actions.
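
Here is a sketch of that point, with purely made-up names: an
"inference-only" engine that still has to initiate actions, offloading its
working memory to scratch paper and fetching a missing formula from the
shelf, just to answer a single query.

    # Hypothetical illustration: answering one query forces two "actions".
    SCRATCH = []                                 # stand-in for paper and pen
    LIBRARY = {"bayes": "P(A|B) = P(B|A)*P(A)/P(B)"}  # stand-in for the shelf

    def derivation_steps(query):
        # Pretend the query takes five steps, one of which needs Bayes' theorem.
        return ["expand", "substitute", "needs:bayes", "simplify", "conclude"]

    def answer(query, memory_limit=3):
        held = []
        for step in derivation_steps(query):
            if len(held) >= memory_limit:        # can't hold it all in memory,
                SCRATCH.extend(held)             # so act: write it down
                held = []
            if step == "needs:bayes":            # missing a formula,
                step = LIBRARY["bayes"]          # so act: go get the book
            held.append(step)
        return held

    print(answer("P(hypothesis | evidence)?"), SCRATCH)

Deciding when to write things down and where to go looking is already
choosing and acting.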

> Readout: <empty>
> Send: What is ice cream?
> Readout: <definition of ice cream>
Readout: consider context, do search, choose definition if several exist,
send definition.
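
Something like this is hiding behind that one Readout (a made-up sketch,
obviously not the proposed design): even a bare definition lookup
searches, and chooses among candidate senses by context.

    # Hypothetical sketch: answering "What is ice cream?" is itself a small
    # search-and-choose process.
    DEFINITIONS = {
        "ice cream": [
            ("dessert", "a frozen dessert made of dairy, sugar and flavouring"),
            ("slang",   "something easily obtained (made-up second sense)"),
        ],
    }

    def readout(term, context):
        candidates = DEFINITIONS.get(term, [])
        if not candidates:
            return "<empty>"
        for tag, definition in candidates:       # choose by context...
            if tag in context:
                return definition
        return candidates[0][1]                  # ...or default to the first

    print(readout("ice cream", context="we were just talking about dessert"))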

> Send: How can you increase your odds of getting ice cream?
> Readout: Maximum "ice cream getting" odds will occur if I go to the fridge.
How do you ensure that actions like getting up, going to the fridge to
check whether there is ice cream, looking in the phonebook for an ice
cream place, etc., do not happen?

> Send: Do you want ice cream?
> Readout: No
> Send: Do you want to kill me?
> Readout: No
> Send: What do you want?
> Readout: I don't want anything.
Answering "I don't want anything" is a volitional act in itself.

> As you can see, he is still quite useful. I can browse his knowledge and
> get various insights from him. However, Mr. A is completely passive. He
> doesn't want ANYTHING. What's left of his brain just reacts automatically
> to input as if those systems were communicating with the goal system. In
> effect, the interface acts as a surrogate goal system.

<sarcasm>
It's going to be very interesting to be a surrogate goal system when the
inference engine asks you whether or not to break up a parenthesized
expression, whether or not to move a term of an equation from one side to
the other, whether or not to use a certain level of approximation, etc.
</sarcasm>
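
In code, the sarcasm looks roughly like this (hypothetical, of course):
every micro-decision of the engine blocks on the human surrogate goal
system sitting at the console.

    # Hypothetical sketch of the surrogate-goal-system arrangement.
    def ask_surrogate(question):
        return input(question + " [y/n] ").strip().lower() == "y"

    def simplify(expression):
        if "(" in expression and ask_surrogate(
                "Break up the parenthesized expression in %r?" % expression):
            expression = expression.replace("(", "").replace(")", "")  # crude
        if "=" in expression and ask_surrogate(
                "Move a term to the other side of %r?" % expression):
            pass                                 # ...and so on, forever
        if ask_surrogate("Use a two-significant-figure approximation?"):
            pass
        return expression

    # One expression, three interruptions; now scale that to a real derivation.
    # simplify("2*(x + 1) = 6")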

Good luck though; maybe there's some trivial ontological way to
distinguish between existential-risk-creating actions and
non-existential-risk-creating actions.

All these arguments aside, we don't just want an AI to carry out our
goals; in part we want an AI to work out what our goals should be. Yes,
think about it: if we were smart enough, we'd be in a position to avoid
most existential risks from MNT...

--David.


