From: Phillip Huggan (email@example.com)
Date: Sat Dec 10 2005 - 13:48:12 MST
No, PAI reduces to a kind of AI Boxing that even the most pessimistic would agree might actually work. The real question is whether the required real-world restrictions on potentially dangerous actions would be so severe as to keep the PAI from actually being useful. For example, we obviously could not carry out any PAI suggestions for tinkering with computer hardware or software. I don't know if PAI would be singularity-useful, but it would still be the greatest human invention to date.
If an AGI can be designed that won't fiddle with its own source code (one of many "friendly" safeguards), it should be possible to design one that won't fiddle with the external world.
Defining PAI as AI Boxing defines all AGI models, including FAI, as exercises in AI Boxing.
Chris Capel <firstname.lastname@example.org> wrote:
<SNIP> The difference between an adversarial technique and a design-level
change is not always clear-cut. If I were to design an AI and just
sprinkle in inhibitions here and there to keep the AI from wanting to
optimize the "external world", assuming the phrase could be anchored
properly, then that's ultimately an adversarial technique. And if this
is what passive AI is, then it does reduce to AI boxing, albeit
subtly. To really implement an AI that *really* doesn't have any
non-passive goals, you have to pretty much solve Friendliness. So PAI
isn't really an alternate strategy, as I think it was originally <SNIP>
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:54 MDT