Re: Edge.org: Jaron Lanier

From: j.Maxwell Legg (income@ihug.co.nz)
Date: Fri Nov 28 2003 - 22:42:23 MST


Yan King Yin wrote:

> From: "j.Maxwell Legg" <income@ihug.co.nz>
> ...
>>pioneers didn't get right was how to cope with the frailty
>>and fear inherent in human nature: spam, FUD, etc.
>
> [...]
>
> Hi,
>
> Yesterday I was chatting with someone in private about
> Intel and how it has been involved with the militia (as was
> IBM), and also about how they continually tried to exclude
> the Japanese from participating in chip technologies.
> This bothers me a lot because we need to understand
> that technology is not separate from politics and those
> who think that this is irrelevant are probably beneficiaries
> of such questionable dealings.
>
> Which doesn't mean that I'm the final arbiter of what is
> right or wrong, but I think more discussion of this may
> help, rather than letting innuendo trail off in random
> directions and failing to communicate effectively.
>
> Also I asked a question about fault tolerance in P4
> which was purely technical. If we were to scale down
> to molecular electronics operating at room temperature,
> fault tolerance becomes quite essential.

I recently read, via Slashdot, an interview with a Fellow at
Intel who was mostly concerned with classifying, on chip, the
different qualities of circuit. That future hasn't arrived
yet, so I suppose today's P4 will throw errors and blue
screens when it encounters a failed transistor.
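Until chips handle this on board, the classic way to mask a failed unit is redundancy plus voting. Here's a toy sketch of triple modular redundancy (TMR) — a textbook technique, not anything specific to the P4, and all the function names are my own illustrations:

```python
from collections import Counter

def majority_vote(outputs):
    """Return the value produced by at least two of three replicas."""
    value, count = Counter(outputs).most_common(1)[0]
    if count < 2:
        raise RuntimeError("no majority: more than one replica failed")
    return value

def adder(a, b):
    """A healthy functional unit."""
    return a + b

def faulty_adder(a, b):
    """Simulates a stuck-at fault that flips the low bit of the result."""
    return (a + b) ^ 1

def tmr(replicas, a, b):
    """Triple modular redundancy: run three replicas, vote on the result."""
    return majority_vote([r(a, b) for r in replicas])

# One failed replica out of three is masked by the vote:
print(tmr([adder, adder, faulty_adder], 2, 3))  # 5
```

The cost, of course, is three times the transistors for one result, which is exactly why the industry would rather design the variation tolerance in at the circuit level.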

I found it again for you here:
http://www.intel.com/labs/features/mi07031.htm

Posted by Hemos on Monday August 25, @07:06AM
from the learn-more-about-it dept.
prostoalex writes "The August issue of Intel Developer
Update has an interview with Shekhar Borkar, Intel Fellow
and Director of Circuit Research at Intel Corp. talking
about the future of microprocessor design and what goes on
inside Intel Labs. Borkar tells why we need even faster
processors and how probability will make its way into future
chip designs - "It's like the shift from Newtonian mechanics
to quantum mechanics. We will shift from the deterministic
designs of today to probabilistic and statistical designs of
the future.""

He goes on... "Today, if you look at two transistors sitting
side by side on the chip, they vary a little, but the
difference is not significant. In the future, the
transistors side by side will have a lot of variation. We
are pushing the limits again, so it's all exponential. And
there is another fundamental physical effect coming in. The
transistors are becoming so small that the atoms and
molecules that used to look like a continuum now look like
discretes.

As a result, we need to fundamentally change how we use
these transistors. We have to design the circuit in such a
way that it's tolerant of variations. It's like the shift
from Newtonian mechanics to quantum mechanics. We will shift
from the deterministic designs of today to probabilistic and
statistical designs of the future.

So we now say, "If I do this in the design, the transistors
and therefore the chip will perform in this way." In the
future, we will say, "If I design with this logic depth or
this transistor size, I will increase the probability that a
given chip will perform in this way." It's pretty wild. "
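Borkar's "probability that a given chip will perform in this way" can be made concrete with a toy Monte Carlo yield estimate. Every number below is an illustrative assumption of mine, not Intel data: sample Gaussian per-gate delay variation along a critical path and estimate the fraction of chips that meet a timing budget.

```python
import random

def estimate_timing_yield(logic_depth, nominal_delay_ps, sigma_ps,
                          clock_budget_ps, trials=100_000, seed=42):
    """Monte Carlo estimate of the fraction of chips whose critical
    path (logic_depth gates in series, each with Gaussian delay
    variation) fits within the clock budget."""
    rng = random.Random(seed)
    meets = 0
    for _ in range(trials):
        path_delay = sum(rng.gauss(nominal_delay_ps, sigma_ps)
                         for _ in range(logic_depth))
        if path_delay <= clock_budget_ps:
            meets += 1
    return meets / trials

# Deeper logic averages out per-gate variation; shallower logic is
# faster but each gate's variation matters more -- which is why
# logic depth shows up in Borkar's probabilistic framing.
y = estimate_timing_yield(logic_depth=20, nominal_delay_ps=10,
                          sigma_ps=2, clock_budget_ps=220)
print(f"estimated yield: {y:.3f}")
```

The design knob is then statistical: you pick logic depth and sizing to push that probability where you want it, rather than guaranteeing a deterministic delay.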
>
> 'Government' doesn't necessarily need to be a bad
> thing. Capitalism is a form of meritocracy which has
> been quite successful in organizing large-scale
> economic activities and division of labor. Unless we
> find better ways to replace these things they are
> unlikely to change.
>
> YKY
>
It bothers me that everybody who looks for better ways gets
ridiculed. Just look at the exquisite world view of the
slashdotters and their open source + nanotechnology utopia.

Not that I think Marshall Brain is a futurist novelist, but
he uses the guise of fiction to get across a better way.
You'll have to read all the way to chapter eight to get to
the guts of his economic manifesto, called "The Australia
Project". Here's his free online book, Manna, which I agree
with because everything in my software will converge with
his concept of the Vertebrane.

http://marshallbrain.com/manna8.htm

[[my excerpt from Chapter 8]] ... One thing I did think
about more and more was the security of this whole system.
Computers had been plagued with bugs and viruses since the
beginning, but the Australia Project seemed to suffer from
none of these problems. One day I asked Linda about it.

"What's to stop someone from taking over the system and
turning us into an army of zombies?" I asked.

"I'm no engineer," Linda said, "But here's the best
explanation I've heard. Why can't someone take over your brain?"

"What do you mean?"

"Why has no one ever been able to take over billions of
human brains and create an army of zombies that way?"

"Well, it's inside of me. How would they take it over?" I
replied.

"Why can't they just upload a program into your brain, and
that program takes over your brain and turns you into a
zombie a minute later? Why does that never happen?" she asked.

"Because there is no way to 'upload' a program into my
brain. And my brain does not execute programs anyway. It is
not a computer." I replied.

"Yes." She said. "[[I mostly agree]]Everything you learn
comes in through your eyes and ears. It passes through your
conscious mind one piece at a time, and your conscious mind
evaluates it. Then your conscious mind 'executes' the things
you learn consciously, thinking about each one. If someone
were to try to teach you to cut off your own arm, your
conscious mind would reject that as ridiculous when the
lesson came in, and your brain would certainly never cause
you to cut off your arm except in the most extreme
situations. The Vertebrane system is operating in the same
way. It is learning things, not running programs. It acts
consciously rather than being 'programmed', and it has a far
more rigid moral code than most human beings do. The
Vertebrane system never blindly 'executes' a program, so it
cannot be taken over. That's true of all of the robots here.
The Australia Project would have collapsed long ago if this
were just a bunch of computers blindly executing code that
humans had written. That is how things were in the
beginning, [[of]] course, but we advanced beyond it fairly
quickly."



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:43 MDT