From: Bryan Bishop (email@example.com)
Date: Thu Sep 25 2008 - 11:20:32 MDT
On Thu, Sep 25, 2008 at 9:28 AM, Matt Mahoney <firstname.lastname@example.org> wrote:
> --- On Wed, 9/24/08, Bryan Bishop <email@example.com> wrote:
>> Hardly. I know many people who are very unmotivated to do
>> much of anything, yet are 'intelligent' as you would call it.
> Ask your unmotivated friends whether they would rather spend 6 hours watching TV or 6 hours staring at the wall.
Heh, that's funny, because we were just discussing that the other day,
and these same friends do indeed tend to stare at the wall for a few
hours a day. Or maybe at a leaf on a tree. That sort of thing.
> If they can't modify their source code, how can they improve?
What they modify is the source code of the system producing, say, the
clones. And of course they are 'allowed' in this thought experiment to
modify their own biological systems, just as we could stab ourselves
now if we opted to, but it's not my intention to draw any conclusions
from their own stabbings and tinkering with their active
> The more general problem is that you cannot simulate your own source code. You cannot predict what you will think without thinking it first. You need 100% of your memory to model yourself, which leaves no memory to record the output of the simulation.
Right, but look: if there's a baseline "clone" with some amount of
variation yet some amount of stability in the genotypic parameters,
then there you go. That's what you keep producing in cycles as your
baseline for improvements.
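The cycle described above — a stable baseline reproduced with bounded variation, with improvements folded back into the next baseline — can be sketched as a simple generational loop. This is only an illustration; the genome representation, mutation scheme, and fitness function here are hypothetical stand-ins, not anything proposed in the thread:

```python
import random

def mutate(genome, rate=0.1, scale=0.05):
    # Bounded variation around the baseline: each parameter may
    # drift slightly, but the genotype stays near the stable core.
    return [g + random.gauss(0, scale) if random.random() < rate else g
            for g in genome]

def improve(baseline, fitness, cycles=100, brood=10):
    # Each cycle "bakes" a brood of variants from the current
    # baseline and promotes the best one as the next baseline.
    # Keeping the baseline in the comparison means fitness never
    # regresses from one cycle to the next.
    for _ in range(cycles):
        variants = [mutate(baseline) for _ in range(brood)]
        baseline = max(variants + [baseline], key=fitness)
    return baseline

# Hypothetical fitness: how close the parameters are to 1.0.
fitness = lambda genome: -sum((g - 1.0) ** 2 for g in genome)
result = improve([0.0] * 4, fitness)
```

Note that the agents produced by the loop never inspect or simulate their own parameters; selection pressure on the production system does the improving, which is the distinction being drawn against self-modification.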
> Nor can you cleanly separate your fixed goals from the rest of the code. This is what I am trying to show in my paper. When you formally define what it means for a program to have a goal, it can't improve with respect to that goal faster than O(log n). This loses to faster methods that accept external input such as learning and evolution. But those methods don't allow the improving agent to choose its goal.
No, the goals aren't embedded in the individual agents but rather in
the design of the system that allows them to modify the loop that will
"bake a new clone in the next nine months". So, if you wanted this
system to be completely locked down and to never budge, then you
wouldn't allow the clones and re-implementations to modify the genome
in use. This is an architectural constraint. An unfortunate fact of
the present situation is that there's not much source code that could
be written for the architecture I'm talking about. Maybe you could
have various CAD/CAM files, and of course source code for the
microcontroller elements in the implementation, but that's not quite
the same thing as what I suspect you're thinking about, i.e. source
code to a mind/brain with goal states intrinsically built into its
axiomatic nature. This was originally why I stood up to reply when
someone mentioned the issue of
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:03 MDT