Re: Apollo Project to get to the Singularity

From: J. Andrew Rogers (andrew@ceruleansystems.com)
Date: Fri Aug 25 2006 - 22:22:09 MDT


On Aug 24, 2006, at 10:41 AM, Richard Loosemore wrote:
> J. Andrew Rogers wrote:
>> Only if a company does not know what they are doing. AI is not
>> generally considered foundry work. $10M is a hell of a lot of
>> money for a well-run software startup.
>
> We could go into this in more detail, but I once costed the
> operations of a combination software/hardware startup similar to
> the one that would be involved here. There were about 40
> engineering people, and some of them were doing work that was
> robotics related. Remember: there would be an absolute need to
> get peripherals involved (robot manipulators, vision systems,
> auditory systems, etc), and that involves both buying and playing
> with expensive hardware.
>
> The burn rate for year one was projected between $16M and $22M.

That still seems excessive any way I can figure it, and I've built
successful hardware-centric startups before that did far more with
far less. For starters, the requirements for vision, auditory, and
similar peripherals fall below the noise floor cost-wise. There is
no justification for anything exotic here. And the interfacing work
for such hardware has already been done. The part you would actually
have to do yourself is the AI part and an entirely trivial amount of
glue.

One of the basic rules of successful startups is that you don't buy
anything (hardware, software, people) until there is an immediate,
direct, and unavoidable need. And even then, you buy the minimum you
need at the last possible moment because the discount rate is steep.
The scenario you outlined above is a *classic* failure pattern for
startup organizations: acquiring resources without a well-defined or
reasonable purpose.
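
(To put rough numbers on that last point, here is a minimal sketch;
the figures in it are assumptions rather than anything from a real
budget. With an effective cost of capital on the order of 50% per
year, which is not an unusual hurdle rate for an early-stage startup,
deferring a purchase by a year cuts its present cost by roughly a
third.)

    # Illustrative sketch only: the 50% annual discount rate and the
    # $100K purchase are assumed numbers, not figures from any project.
    def present_cost(amount, annual_rate, years_deferred):
        # Standard present-value calculation for a deferred outlay.
        return amount / (1.0 + annual_rate) ** years_deferred

    print(present_cost(100000, 0.50, 0))  # buy now:       100000.0
    print(present_cost(100000, 0.50, 1))  # buy in a year: ~66667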

> Don't forget: I am not talking about some kind of shoestring
> internet startup. I am addressing the issue of really building an
> AGI.

It sounds a bit more like you are addressing the issue of building a
nice research lab, which is only indirectly related to the issue of
really building an AGI. A fancy research lab would be *nice*, but
not remotely necessary, and possibly detrimental to your goal. Such
a well-structured organization will tend to find a lot of its
resources being spent on perpetuating itself.

> There are ways to get results that do not involve making gross
> assumptions at the outset which cripple the project later on.
> While it would not be possible to be completely agnostic about the
> approach, it is quite feasible to be less ideologically committed
> and more receptive to what the system data tell you to do.

The problem with blue-sky research is that everyone can do it (not
really, but to the people who actually have money to spend on such
things it will seem that way).

>> This statement seems a bit baseless and arbitrary. It might be
>> true, but it is not obvious.
>
> Let's not get into politics. It is stupidly obvious to some who do
> a lot of reading and who have a lot of experience. Let's just
> leave it at that.

It was not about politics, though the assumption that it might be
certainly goes some way toward explaining your assertion. Granted, my
experience does not go all the way back to the 1960s, but I have not
noticed a substantive change since, say, the early 1990s. It is a
little leaner now, I think, but in a *good* way; I saw more pork
barrel in the '90s. It is not in evidence in any of the projects I am
currently involved in, which is why I still doubt your assertion. But
maybe I am in the rarefied company of the non-pork R&D projects. Most
of the government people I work with are pleasantly clueful.

> I made the comment because I have enough specific goals and
> detailed designs to put a gang of programmers to work within days.
> I don't need a community to tell me exactly what I should be doing,
> I need a community to DO exactly what I tell them to do, and no
> messing.
...
> The problems would be: (a) you would need some kind of central
> organization because the hardware would be serious, and could not
> be completely distributed, and (b) the personal attitudes in the
> community would be critical, and perhaps not appropriate for the
> work to be done: they couldn't be a collection of mavericks who
> hated the idea of being given chunks of work to do.
...
> On balance, I think a community approach probably would not work.
> I think it would need a company, and real funding.

In other words, an AI startup but with blackjack and hookers. You
could have just said that. :-)

J. Andrew Rogers


