From: maru dubshinki (email@example.com)
Date: Fri Sep 08 2006 - 13:40:33 MDT
On 9/8/06, ps udoname <firstname.lastname@example.org> wrote:
> Hey everyone,
> I hear that there has been turmoil on this list recently, with people getting
> kicked off for criticizing FAI. Which makes this an interesting time for me
> to join, especially as I think FAI might be a bad way to approach things, for
> the following reasons (I realise they have probably been raised before):
> 1) I think Roger Penrose's idea that the mind relies somehow on quantum
> effects could be correct, and if it does, this makes an AI on a classical
> computer impractical.
Well, so what? If consciousness is simply impossible without quantum
effects, and consciousness really is the simplest way of producing all
the observable consequences we attribute to intelligence and
consciousness, then we can simply build a quantum computer. Progress in
that domain is steady and sure these days anyway.
> 2) Assuming AI on a classical computer is possible, do you actually know
> what ethics to give the AI? Utilitarianism would end with humanity being
> turned to computronium, not in the sense of being uploaded but in the sense
> of being replaced with something happier.
Well, that's act utilitarianism you're thinking of there, not rule
utilitarianism or any of the more exotic variants. Eliezer's latest thinking is
apparently along the lines of his
it's worth reading.
> 3) Assuming you have the ethical theory sorted, it seems to me (not that I
> know much about this) that programming seed AI might be quite easy with
> brute force, but programming FAI is incredibly hard.
Yeah. That's one of SIAI's arguments for funding it rather than
regular seed AI projects: better to get FAI first, and then regular AI
can come later, when FAI will presumably be able to ameliorate the
effects of a rogue seed AI.
> 4) Even if FAI is the best idea, why SingInst and not university AI?
> Instead, I think brain-computer interfacing might be a better idea. AI
> attempts should not give the AI direct control over anything, and the AI
> should be asked how to bootstrap humans. A big off button would also be a
> good idea.
Your second line is a suggestion for AI boxing. See
http://sl4.org/archive/0207/4935.html and http://sl4.org/wiki/AI_Jail
Long story short, not giving the AI direct control over anything would
probably only work in the early stages and so would be of minimal use.
Ditto for the big off button.
Uploading brains doesn't seem to be a popular suggestion on this list
either; the reasoning goes that humans are not the stablest and sanest
mentalities you can get, and would probably become unfriendly once
uploaded.
> I could say more about myself, but I might as well do the questionnaire
This archive was generated by hypermail 2.1.5 : Thu May 23 2013 - 04:01:30 MDT