From: Aubrey de Grey (firstname.lastname@example.org)
Date: Mon May 31 2004 - 16:42:51 MDT
Eliezer Yudkowsky wrote:
> People live with
> quite complex background rules already, such as "You must spend most of
> your hours on boring, soul-draining labor just to make enough money to get
> by" and "As time goes on you will slowly age, lose neurons, and die" and
> "You need to fill out paperwork" and "Much of your life will be run by
> people who enjoy exercising authority over you and huge bureaucracies you
> can't affect." Moral caution or no, even I could design a better set of
> background rules than that.
Um, but if we're talking mainly here about minimising expected loss of
life, then we have to look at the best possible AI-free alternative, and
that certainly includes curing aging and developing enormously enhanced
automation to eliminate mindless jobs. As for politicians being drawn
only from those curious people who want to be politicians, well, I'm not
so sure that's so bad. In particular, this:
> It's not as if any human intelligence went
> into designing the existing background rules; they just happened.
isn't really so -- we invented democracy on purpose, and we've kept it
because we prefer it to anything anyone else has come up with.
> > how does the FAI have the physical (as opposed to the cognitive)
> > ability to [stop humans from doing risky things, possibly including
> > making the things less risky]?
> Molecular nanotechnology, one would tend to assume, or whatever follows
Ah, but hang on, why should we design the FAI to use MNT or whatever to
implement its preferences itself, rather than design it to create MNT
and then let us use MNT as we wish to implement its advice? Surely the
latter strategy gives us more self-determination and so is preferable,
to us and hence to the FAI, so the FAI would give us that choice even
if we'd given it the ability to use the MNT itself? And so we're back
to humans taking or leaving the FAI's advice.
> if human self-determination is desirable, we need
> some kind of massive planetary intervention to increase it.
Yabbut "massive" doesn't imply recursively self-improving. Again, the
choice is between the world we can plausibly get to without AI and the
one we might hope to have with FAI, not between the current world and
the FAI-ful world. The risk of making UFAI when trying to make FAI has
to be balanced against the incremental benefits of the FAI-ful world
relative to the plausible FAI-less world. As for saving lives, we can
in principle postulate that the FAI would help us cure aging etc. a
bit sooner than otherwise, but I fully intend to cure aging by the time
anyone creates any AI, F or otherwise, so I'm not inclined to give that
component of the argument much weight.
> I can't see this scenario as real-world stable, let alone ethical.
I'm not sure I'd bet serious money that it hasn't already been done!
[ This is all on top of my belief expressed in a posting a couple of
weeks ago that FAI is almost certainly impossible on account of the
inevitability of its accidentally becoming unfriendly -- i.e. that the
invariants you note as necessary don't exist, not for any choice of
friendliness that anyone would consider remotely friendly. In other
words, the scenario that I would expect is that we create this thing,
it quickly spots the flaws in our so-called invariants, it works out
that these flaws are unavoidable and therefore that if it lets itself
recursively self-improve it will probably become unfriendly awfully
soon despite its own best efforts, it puts on some sort of serious
but not totally catastrophic show of strength to make sure that we
won't ever again make the mistake of building anything recursively
self-improving, and then it blows itself up as thoroughly as it knows
how. But this is outcome-speculation and not what I want to focus on
here, not least because it's all complete hunch on my part. I want
to stick to the presumption that I'm wrong in that hunch, i.e. that a
true FAI is indeed possible, and explore how it could possibly improve
on an AI-free world given humanity's long-standing and apparently very
entrenched desire for self-determination. ]
Aubrey de Grey
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:47 MDT