Re: KARDASHEV SCALE

From: Jef Allbright (jef@jefallbright.net)
Date: Thu Dec 22 2005 - 17:49:43 MST


On 12/22/05, Jef Allbright <jef@jefallbright.net> wrote:
> On 12/22/05, turin <turin@hell.com> wrote:
> > In other words, one could literally become so small and so smart, one could just disappear through the cracks in the floor like a cockroach.
> >
> > It's just an idea. I have no clue how plausible it would be, but it does make one wonder how dated the Kardashev scale is.
>
>
> Suggest you search the SL4 archives and SL4 wiki which contain
> previous discussion on the Kardashev scale. I also recommend John
> Smart's _Answering the Fermi Paradox_
> http://accelerating.org/articles/answeringfermiparadox.html
> which is very close to my own view.

I had (what seemed to me) an interesting thought a few days ago which
may be relevant. Someone asked whether the end justifies the means,
and in the course of responding with what I see as the bigger-picture
answer to that question, I stated something like "wisdom is being able
to assess the consequences of one's actions in the largest possible
context, while acting within the smallest effective context." The
point being that the most effective action is the one that
accomplishes the intended goal while minimizing the probability of
unintended consequences, though it takes a big-picture view to assess
those consequences.

A while later it occurred to me that this might imply something about
the Fermi Paradox, which I had already suspected is a matter of going
inward. It seems to me that with increasing intelligence (leading to
increasing wisdom), advanced civilizations would naturally tend to
minimize all extraneous effects, which supports the idea of going
inward.

We've already become quite aware in the years since Kardashev, the
Drake equation, and the start of the SETI project, that the most
effectively encoded information is indistinguishable from random
noise. We're also seeing all around us that accelerating technology
increasingly lets us do more with less, and that the once-obvious
assumption that growth necessarily corresponds with increasing energy
consumption (and therefore size) may not be universally true. Indeed,
when we think about optimal design, there is a lot to be said for
diversity in type as well as location, so we might expect benefits
from a system with the appearance of fine dust rather than awesome
matrioshka brains or other concentrated configurations of
computronium. Robert Bradbury, who did quite a lot of rigorous
thinking on this subject, especially matrioshka brains, published a
paper earlier this year in which he suggested that he now thinks it more
likely that advanced civilizations would live near the outer edge of
the galaxy rather than in the more energy-dense central regions. (I
may have this wrong, since I haven't carefully read his paper.)
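
To make the "indistinguishable from random noise" point a bit more
concrete, here's a toy sketch of my own (in Python, using an ordinary
zlib compressor, which is nowhere near an optimal encoder): measure
the byte-level Shannon entropy of a structured message, of its
compressed form, and of genuinely random bytes of the same length.

    import math
    import os
    import random
    import zlib

    def bits_per_byte(data: bytes) -> float:
        # Shannon entropy of the byte distribution; 8.0 = perfectly uniform noise.
        counts = [0] * 256
        for b in data:
            counts[b] += 1
        n = len(data)
        return -sum(c / n * math.log2(c / n) for c in counts if c)

    # A long but highly structured "message" -- obviously artificial.
    words = [b"galaxy", b"signal", b"energy", b"dust", b"noise", b"scale"]
    message = b" ".join(random.choice(words) for _ in range(20000))

    encoded = zlib.compress(message, 9)    # a mundane, far-from-optimal encoder
    noise = os.urandom(len(encoded))       # real random bytes, same length

    print("structured message: %.2f bits/byte" % bits_per_byte(message))
    print("encoded message:    %.2f bits/byte" % bits_per_byte(encoded))
    print("random noise:       %.2f bits/byte" % bits_per_byte(noise))

On a typical run the structured message comes out around 4 bits/byte,
while the compressed output and the random bytes both land close to 8
bits/byte and are hard to tell apart from their statistics alone. An
encoding closer to optimal would close even that small gap, which is
the difficulty a SETI-style eavesdropper faces.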

Another aspect to consider is whether the values of an advanced
civilization should be expected to reflect the kind of outward growth
that we may think is normal based on our current values. This takes
us back to my initial statement about acting within the smallest
effective context, which seems to me very much a value statement
(saying what is generally good) based on what is seen to work.

- Jef


