From: Metaqualia (firstname.lastname@example.org)
Date: Tue Jun 01 2004 - 02:45:00 MDT
> > If not why (specifically)?
> (Complete silence.)
Do you remain silent for the sake of not meddling with my future volition?
> > 2. How well does the AI need to predict the future in order for all of
> > this to work?
> See PAQ 5.
I did read that; my question still stands. It seems that unless you can
perfectly simulate the Earth and every intelligence on it, including people
and other AIs, you won't be able to predict the future with any accuracy.
Small details produce very large changes; an "almost perfect" simulation
does not predict the future, only one possible future among billions, all
different. Why is this skepticism misplaced?
One particular consideration: the 20th year in the future depends on what
the 19th year's FAI will look like. Under recursive self-improvement, a
1st-year AI cannot fathom what its grown-up version will do to satisfy its
invariant 18 years down the road. Therefore predicting the 20th year is
impossible. What is wrong with this reasoning?
> This is not an issue of computing power. Any FAI, any UFAI, that passes
> the threshold of recursive self-improvement, can easily obtain enough
> computing power to do either. It is just a question of whether an FAI or
> UFAI comes first.
Does everyone agree on this? You will admit that these are problems on
completely different scales, and it may very well be that there is a
considerable window of time between being able to destroy humanity and
being able to predict the future, even under recursive self-improvement.
How much harder is it to simulate the Earth than to rearrange carbon atoms
into smileys? 100 times? 100 billion times? I have no clue; that is why I
raise the question.
> > 4. FAI initially not programmed to count animals inside its definition
> > of the collective. However we would like to be people who value all forms
> > of life, and if we knew better we'd know that they are sentient to a
> > certain degree. THEREFORE should the FAI give a replicator to each
> > household and forbid the killing of animals, even if we were opposed to
> > it? (I think so, but just checking with you)
> This sounds like speculation about outcomes to me. Have you ever donated
> to SIAI?
I am just trying to get a feel for how the dynamic would work; I don't
think it's wasted time to find examples of what the AI would do in certain
situations, right? And how would you propose this to the world if you can't
talk about concrete examples beyond the Fred-and-the-box case?
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:47 MDT