Wednesday, November 26, 2008

Democratic market socialism redux

All in all, the conclusion I'm coming to lately ... as reflected in my last two blog posts, as well as in some other thinking ... is that government is going to need to do some rather careful and specific things to guide society toward a positive Singularity.

Yes, if someone triggers a hard takeoff with a superhuman AGI, then the government and other human institutions may be largely irrelevant.

But if there is a soft takeoff lasting say 1-3 decades ... before the hard takeoff comes along ... then, my view is increasingly that the market system is going to screw things up, and lead to a situation where there are a lot of unhappy and disaffected people ... which increases the odds of some kind of nasty terrorist act intervening and preventing a positive Singularity from occurring.

It seems we may need to review the general line of thinking (though not many of the specific proposals) from old democratic-socialism-style texts like Economics and the Public Purpose ...

Perhaps one positive consequence of the current economic crisis is that it may cause the US public to understand the value of well-directed government spending....

And yes, I'm well aware that most of my colleagues in the futurist community tend to be libertarian politically. I think they're just wrong. I am all in favor of getting rid of victimless crimes ... legalizing drugs and prostitution and so forth ... but, given the realities of the next century and the risks of a negative Singularity, I don't think we can afford to leave things up to the unpredictable self-organizing dynamics of the market economy ... I think we as a society will need to reflect on what we want the path to Singularity to be, and take specific concerted governmental actions to push ourselves along that path...

This is not the political dialogue for 2008 ... different issues are occupying people's minds right now ... but it may be the political dialogue for 2012 or 2015 ... or at latest, I'd guess, 2020 ...

5 comments:

Matt said...

I'm curious as to why you don't think we can afford the unpredictable dynamics of the market economy, but we can afford the all too predictable dynamics of government intervention. I humbly suggest that being able to imagine things that governments might do that would help is a very different thing from either those things actually helping, or the government actually doing those things.

You may find an expansion of your sources for explanations of the current mess helps. You might start here and here.

Ben Goertzel said...

M@, I'm familiar with the conservative/libertarian arguments that government regulation is significantly causally related to the financial crisis. Actually, it's obvious this is true, but it's not that interesting an observation, because in a situation as complex as the current one, nearly every aspect is significantly causally related to nearly every other aspect. No one has any way to calculate "what would have happened" in the financial world if everything had been exactly the same except that there had been less regulation.

Contrary to what you say, I don't think the dynamics of government economic intervention are particularly easy to predict ... but I do think one can try to create government interventions whose impacts would be relatively less difficult to predict (as compared to other interventions).

After all there is not even any consensus on the impact of historical government economic interventions, in hindsight. For instance, as Wikipedia notes at

http://en.wikipedia.org/wiki/New_Deal#Conflicting_interpretation_of_the_New_Deal_economic_policies

"While most historians believe that the New Deal helped resolve the Great Depression, economists are less certain, with a substantial minority believing that it actually worsened the depression"

Just to be clear: I absolutely do not advocate a planned society or economy ... I'm very well aware of the deficiencies of explicit planning algorithms in AI as well as in economics.

However, the "free market" is a fictional abstraction just like the "rational actor" ... and my contention is that as we approach Singularity, the advent of technology is going to make these abstractions even worse approximations to reality than they have been.

As an aside, I have fairly little faith in government agencies to spend money effectively or get useful things done -- but I have more faith in government funding programs to direct money to useful private for-profit or non-profit enterprises that the market fails to adequately fund.

Getting back to my own field of AI ... I suppose a libertarian would argue that if there were a real value for advanced AI then in a free economy the market would fund it ... and if the current market isn't funding advanced AI, in spite of its real value, then this is because the government is somehow preventing the market from acting naturally. But I don't buy this. I think the reason the market is not funding advanced AI adequately is because of a classic "tragedy of the commons" type dynamic: few individual market actors see a high enough odds of getting enough individual benefit from this R&D to justify investing in it. In cases like this ... which will only multiply as Singularity approaches ... explicit government funding initiatives are extremely valuable.

For example, what do you think are the "predictable dynamics" of the NSF or the NIH? I submit that if the NSF and NIH had their funding increased by a factor of 10, this would be to the good of humanity, and increase the odds of a positive Singularity. A lot of great and important R&D would get done, which the market is not funding due to its own particular self-organizing dynamics.

Believe me, I know that government R&D funding has its problems. But without it, in the current socioeconomic order, we'd be left only with industry R&D funding which, except in rare cases, is far more shortsighted and less exploratory in nature. I don't know how to make wealthy companies fund more forward-looking R&D ... by the nature of the markets, companies are going to seek short-term profits, and forward-looking R&D is often not going to be the best way to find these ... but in order to get the NSF and NIH to fund more forward-looking R&D would be comparatively straightforward.

Ben Goertzel said...
This comment has been removed by the author.
Matt said...

G'day Ben,

I make two main points below. As it's fairly off-topic for your blog, I try to avoid the rest of the obvious libertarian / interventionist debate to be had here. The two points are summarized as:
1. It's much easier to imagine a helpful intervention than it is to make our democratic institutions actually give us what we imagine. When proposing policy, I'd encourage you to take the extra step of imagining the regulations that are likely to extrude from the other end of the sausage machine, rather than the intentions you fed in.
2. Government funding of prizes rather than direct funding of research is likely to be a much more efficient way to correct the market's neglect of positive externalities.

Ben said (slightly more than)…
>
> Unlike what you say, I don't think the dynamics of government economic
> intervention are particularly easy to predict ... but I do think one can try to
> create government interventions whose impacts would be relatively less
> difficult to predict (as compared to other interventions).

I agree completely: one can *try* to create helpful government interventions. The interventions occasionally even are helpful, just… not as often as we hope.

> As an aside, I have fairly little faith in government agencies to spend money
> effectively or get useful things done -- but I have more faith in government
> funding programs to direct money to useful private for-profit or non-profit
> enterprises that the market fails to adequately fund.

Government funding of prizes incentivises the private sector. If you're able to make your prize setting body (somehow) size the prizes at close to the size of the positive externality achieving the prize will have, the market is likely to approach economic efficiency more closely (neglecting the non-transfer costs of taxation).

> Getting back to my own field of AI ... I suppose a libertarian would argue that
> if there were a real value for advanced AI then in a free economy the market
> would fund it ... and if the current market isn't funding advanced AI, in spite
> of its real value, then this is because the government is somehow preventing
> the market from acting naturally.

That looks like a straw man argument to me.
There are market failures, and few of libertarianism's strongest advocates deny that. A free market will gather very good information about how profitable any available venture will be, but it will neglect profits that cannot be captured (positive externalities).
Government funding boards are allowed to consider positive externalities, but they're strongly incentivised to consider many other things that bear more directly on getting re-elected, getting more funding, increasing the size of their departments, etc.

> But I don't buy this. I think the reason the
> market is not funding advanced AI adequately is because of a classic "tragedy
> of the commons" type dynamic: few individual market actors see a high enough
> odds of getting enough individual benefit from this R&D to justify investing in
> it.

The "tragedy of the commons" more usually refers to overuse of extant resources, rather than underdevelopment of potential resources. I'm pretty sure the economic concept you're looking for is "positive externality".

> In cases like this ... which will only multiply as Singularity approaches
> ... explicit government funding initiatives are extremely valuable.

Explicit funding initiatives *could* be extremely valuable. I'm interested in whether we can reasonably expect that they *will be*, and whether there isn't a much better way of achieving the same end.

> For example, what do you think are the "predictable dynamics" of the NSF or the
> NIH? I submit that if the NSF and NIH had their funding increased by a factor
> of 10, this would be to the good of humanity, and increase the odds of a
> positive Singularity. A lot of great and important R&D would get done, which
> the market is not funding due to its own particular self-organizing dynamics.

I'd prefer to see the money the government's going to spend in this area directed towards prizes, rather than the huge bureaucracies of the NSF or the NIH. Robin Hanson and others have written much on this topic (for example here, which links here).
A minor point: I'd also expect a larger positive impact from reducing the brake on research and development achieved by the FDA (we'll probably have different priors on the trustworthiness of this research, but fdareview.org summarizes most of it).

> Believe me, I know that government R&D funding has its problems. But without
> it, in the current socioeconomic order, we'd be left only with industry R&D
> funding which, except in rare cases, is far more shortsighted and less
> exploratory in nature. I don't know how to make wealthy companies fund more
> forward-looking R&D

Can you suggest why govt funded prizes are not a better solution?

> ... by the nature of the markets, companies are going to
> seek short-term profits, and forward-looking R&D is often not going to be the
> best way to find these ... but in order to get the NSF and NIH to fund more
> forward-looking R&D would be comparatively straightforward.

Much political pork resides in those two words "comparatively straightforward" :)

Ben Goertzel said...

About prizes for research ... I have thought about this a lot as many folks have asked me about what would be a good way to formulate an "AGI Prize" ... but I really don't see how to do it in a good way, and haven't heard any good ideas from anyone else either...

The Loebner Prize of course is a total joke... of zero value for motivating AGI research... because it doesn't reward meaningful incremental progress toward AGI ... the Loebner winners are not serious AGI researchers, and essentially no serious AGI researchers compete in the Loebner contest...

I don't think this is an obvious matter, and I can't prove I'm right without making some equations or coding some simulations or something, but I have a fairly strong intuition that prizes are not the right path for AGI.

More generally: the bottom line, I suspect, is that prizes are useful to incent people to do already-achieved things more cheaply and better ... but I'm pretty dubious that they're useful for incenting people to do new things that have never been done before

If someone created an AGI Prize, how would that help me? I'm already working on AGI as hard as I can, consistent with the need to care for my family ... the existence of a prize wouldn't help motivate me personally at all.

After all, the creation of an AGI is worth far more than any financial prize possibly could be worth... (which is not true e.g. with the X prize for going into space...)

The argument would need to be that an AGI prize would motivate some rich person or organization to fund me (and other AI theorists) to build an AGI aimed at winning the prize

Maybe this would work if the prize were one billion dollars.

But if the prize were, say, ten million dollars, I doubt it would.

Why would someone want to put in, say, 3-5 million dollars investment for the chance of getting a 10 million dollar prize if their horse beats the others? They'd have to be VERY confident of the plausibility of success ... but, I fear that for something whose possibility has never been clearly established, it will be too hard to convince potential funders of the plausibility of success.
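To make the arithmetic behind this explicit, here is a minimal expected-value sketch, using only the illustrative figures from the paragraph above (a $10M prize and a $5M development cost; these are hypothetical numbers, not a real model of R&D funding):

```python
def prize_ev(prob_win, prize, cost):
    """Expected profit for a funder backing a team chasing a winner-take-all prize."""
    return prob_win * prize - cost

# With a $10M prize and $5M of development cost, the funder only breaks
# even if they believe their team's chance of winning exceeds 50%:
breakeven_prob = 5_000_000 / 10_000_000  # = 0.5

print(prize_ev(0.5, 10_000_000, 5_000_000))  # 0.0 -- exactly breakeven
print(prize_ev(0.3, 10_000_000, 5_000_000))  # negative expected value
```

The point being that for an unprecedented goal like AGI, where no one can credibly claim a better-than-even chance of winning, the expected value of entering is negative unless the prize dwarfs the development cost.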

And if there were a billion dollars to invest in AI research, it would be far more usefully invested in research directly than in prize money, I suspect.

Also, posing research as a RACE really really motivates people to find tricky ways to fulfill the letter of prize conditions without necessarily fulfilling the spirit. This means the prize conditions need to be formulated incredibly carefully, which is harder for AGI than for something like flying to the moon...

As another concern, note the ease of stealing software and repurposing others' software ideas. If one had multiple teams competing for some big AGI prize, what happens when they start doing industrial espionage, hiring each others' programmers and so forth? You'd then have teams hiding from each other and doing research in secret. Very very very bad for progress, as compared to a more cooperative model. This is not quite the same as with space flight, a situation where different teams taking different hardware design approaches arguably have less to learn from each other.

Anyway, I do think the prize idea merits more thought ... and I realize the above thoughts have been a bit disorganized (I don't have time to write a well-formulated essay on the topic today, though maybe I will later), but please note that there are ZERO examples of prizes motivating fundamental research, and a huge abundance of examples of direct government funding motivating fundamental research.

I hate government funding agencies ... but, I'm reminded of Churchill's statement that "democracy is the worst possible system of government, except for all the others."

p.s. thx for the correction regarding "positive externalities", you're right

p.p.s. i agree of course that government spending seems to involve excessive amounts of "pork" that is wasteful. but this seems an ok price to pay in many cases, as the alternative is not getting some important stuff done at all, because the market doesn't value it sufficiently