From: Eliezer S. Yudkowsky (firstname.lastname@example.org)
Date: Tue Dec 11 2001 - 09:43:12 MST
Dmitriy Myshkin wrote:
> It would be better to take a little bit longer provided the
> "greater-good" friendly AI which results wouldn't be tariffed in an
> unfriendly, corporate way.
Just because a for-profit builds the AI doesn't necessarily imply that the
AI has a corporate mindset.
The CFAI architecture doesn't preclude an infrahuman AI being used for
commercial purposes. An infrahuman Friendly AI may have the same
Friendship architecture, but due to its lack of intelligence, most of the
short-term goals will tend to align with the immediate requests of the
user. In other words, an infrahuman AI may tend to act like a tool,
because most of the cognition connecting long-term goals to short-term
goals originates in the minds of programmers and users; the AI itself is
not yet intelligent enough to draw those connections without
handholding. As intelligence rises, the AI moves over to independent
planning in pursuit of its real-world goals.
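The shift described above can be caricatured in a few lines of Python. This is a toy illustration of the idea only, not CFAI's actual architecture; every name and the threshold value are invented for the example.

```python
# Toy illustration (NOT the CFAI design): the source of an AI's
# short-term goals shifting from user handholding to independent
# planning as capability rises.  All names here are hypothetical.

COMPETENCE_THRESHOLD = 0.5  # arbitrary cutoff, purely for the sketch

def derive_subgoals(long_term_goal, capability, user_plan):
    """Return the short-term goals actually pursued.

    Below the competence threshold, the AI cannot connect its
    long-term (Friendly) goals to concrete actions on its own, so
    the connecting cognition comes from programmers and users.
    """
    if capability < COMPETENCE_THRESHOLD:
        # Tool-like regime: short-term goals align with the
        # immediate requests of the user.
        return user_plan
    # Independent regime: the AI plans for itself in pursuit of
    # its real-world goals (stubbed out here).
    return plan_independently(long_term_goal)

def plan_independently(goal):
    # Placeholder for actual planning machinery.
    return ["self-derived step toward: " + goal]

# An infrahuman AI acts like a tool...
print(derive_subgoals("be Friendly", 0.1, ["answer user query"]))
# ...while a more capable one plans on its own.
print(derive_subgoals("be Friendly", 0.9, ["answer user query"]))
```

The point of the sketch is only that the same top-level goal feeds both regimes; what changes with capability is who supplies the connecting plan.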
It does take some extra effort to design an AI this way, and the
dangerous question is whether a for-profit's stockholders and Board of
Directors would allow it. Of course there are short-term benefits (not
just long-term benefits) to using a coherent set of goal system semantics.
Even if both the Board and the engineers care solely about short-term
profit, or if they've never thought about issues of transhumanity, it
doesn't mean that the AI winds up with a blind corporate mindset in place
of Friendliness. It means that the AI winds up with nothing.
-- -- -- -- --
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence