Monday, November 05, 2007

Future Imperfect: High Points

Saturday I spoke at a Foresight Institute Unconference, using material from my next book, Future Imperfect. Since the audience at their events is already familiar with a lot of odd ideas about the future, I decided to focus on a few things that I thought were interesting and might be unfamiliar. Because I suspect many readers of this blog have similar backgrounds, I thought they might be interested in a very brief précis. For details, see the webbed manuscript of the book.

1. Privacy.

Public key encryption has the potential to give us a level of privacy in cyberspace greater than anything we have ever experienced in realspace. Not only would it be possible to communicate with reasonable confidence that only the intended recipient could read your messages, it would be possible, using digital signatures, to combine anonymity and reputation--have an online persona with provable online identity, but control the link between that and your realspace persona.
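To make the reputation point concrete, here is a minimal sketch of a pseudonymous identity with provable continuity, assuming the third-party Python cryptography package; the persona and message below are invented for illustration. The keypair is the online persona: anyone can verify that successive messages come from the same persona, while nothing in the keys points back to a realspace identity.

```python
# Minimal sketch: a pseudonymous online persona backed by a signing key.
# Assumes the third-party "cryptography" package (pip install cryptography).
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Generated once and kept secret. Reputation accumulates on the
# corresponding public key, not on any realspace name.
persona_key = Ed25519PrivateKey.generate()
persona_pub = persona_key.public_key()

message = b"Contract accepted; delivery by Friday."
signature = persona_key.sign(message)

# A counterparty holding only the public key can verify authorship,
# i.e. that this is the same persona whose reputation they know.
try:
    persona_pub.verify(signature, message)
    print("Valid: same persona as before.")
except InvalidSignature:
    print("Invalid: impostor or altered message.")
```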

Surveillance technology, the combination of video cameras on poles, face recognition software, and databases, has the potential to give us a level of privacy in realspace lower than anything we have ever experienced--everything you do in public places not merely recorded but findable. Wait a few years until we can produce video cameras with the size and aerodynamic characteristics of mosquitoes, and "public places" become more or less everywhere.
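A toy sketch of why "recorded" becomes "findable", assuming the face-recognition step has already reduced each captured face to a numeric feature vector (the vectors, places, and times below are invented): locating a person across all recorded footage is then just a similarity query against the database.

```python
# Toy sketch of the cameras + face recognition + database pipeline.
# Assumes recognition software has already turned each captured face
# into a feature vector; all vectors and sightings here are invented.
import numpy as np

db_vectors = np.array([
    [0.90, 0.10, 0.40],   # face seen on Main St, 09:14
    [0.20, 0.80, 0.50],   # face seen in the park, 11:02
    [0.88, 0.15, 0.42],   # face seen on Main St, 17:40
])
db_sightings = ["Main St, 09:14", "Park, 11:02", "Main St, 17:40"]

def cosine(u, v):
    return (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))

def find_person(query_face, threshold=0.98):
    """Return every recorded sighting whose face vector is close enough
    (by cosine similarity) to the query face."""
    hits = [cosine(v, query_face) >= threshold for v in db_vectors]
    return [s for s, hit in zip(db_sightings, hits) if hit]

# One photograph suffices to pull up every place the person was seen.
print(find_person(np.array([0.89, 0.12, 0.41])))
# -> ['Main St, 09:14', 'Main St, 17:40']
```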

What if we get both? The net result depends on two questions. Can you control the interface between realspace and cyberspace? Strong encryption does you no good if a video mosquito is watching you type. And how important is realspace anyway? The latter question depends on a third technology--virtual reality. In the limit, nothing much of importance is happening in realspace, just bodies in storage lockers being fed nutritious glop which VR turns into sushi and chocolate, while all the real action is in (encrypted) cyberspace.

2. Should we regulate nanotech?

Some of the Foresight people, despite generally libertarian biases, think we should, given the specter of a high school kid in his basement lab destroying the world. I think we need to consider the balance between offensive and defensive technologies. If, in nanotech, offense has a huge advantage, then we're probably done for. If not, it's worth remembering that there will be lots of private demand for defense, but the only people who spend really large sums on finding better ways to kill people and smash stuff are governments. So putting governments in charge of regulating nanotech has a strong feel of setting the fox to guard the henhouse.

3. Can technological progress make us worse off?

Yes. Making human society work depends on a very intricate coordination--someone has to make the inputs to make the inputs to make the inputs to what I am producing. The centralized solution to that problem works only on a small scale. The decentralized solution--markets and trade, or something similar--depends on being able to break the world up into pieces (my stuff and your stuff) such that what I do mostly affects my piece (except with your permission) and what you do mostly affects yours. Technological progress can, among other things, increase the size and scale of what individual humans can do, which might result in each person's actions having effects most of which are divided among a very large number of other people. If so, the number of solutions to the coordination problem might be reduced from one to zero.
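A back-of-the-envelope sketch of that failure mode (all numbers invented): if each action's total cost is spread evenly across N people, the actor bears only a 1/N share of the harm, so as N grows, actions whose costs dwarf their benefits still look profitable to the person taking them.

```python
# Back-of-the-envelope sketch of the coordination argument; all numbers
# are invented. An action gives its actor a private benefit, and its
# total cost falls evenly on N people (actor included), so the actor
# bears only a 1/N share of the harm his own action causes.
def looks_profitable(private_benefit, total_cost, n_affected):
    """True if the action pays from the actor's own point of view."""
    return private_benefit > total_cost / n_affected

benefit, cost = 10.0, 1000.0   # socially ruinous: cost is 100x benefit

for n in (2, 10, 1000):
    share = cost / n
    print(f"N={n:4d}: actor's share of cost = {share:6.1f}, "
          f"acts anyway: {looks_profitable(benefit, cost, n)}")
# N=   2: actor's share of cost =  500.0, acts anyway: False
# N=  10: actor's share of cost =  100.0, acts anyway: False
# N=1000: actor's share of cost =    1.0, acts anyway: True
```

With property rights that concentrate consequences on the actor (small N), the harmful action is self-deterring; spread the same consequences over thousands of people and it is not.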

Comments welcome. Anyone who wants to criticize the above for being only a sketch is invited to first read the longer version.

10 comments:

Anonymous said...

Regarding point #1, the interface wouldn't have to be monitored by a physical camera; you could just have the government assume control of the internet. ISPs, already embroiled in traffic shaping and spying scandals, could eventually band into a cartel as well.

Regarding point #3, are you suggesting that a society of superheroes and Olympian gods can't have market-based solutions? Does that mean socialism, state-centered or otherwise, is the society of choice for superbeings?

Michael Vassar said...

As far as I can tell, MNT (molecular nanotechnology) does have a huge offensive advantage, but we are obviously not done for if MNT can be prevented from proliferating. This isn't as easy as preventing nukes from proliferating, but whoever wants to do it will have MNT tools to do it with, which is a big advantage.

Anonymous said...

One of the standard objections to libertarianism is that "it might have worked in the 18th century, but it can't cope with the complex modern world." The grain of truth in this is Dr. Friedman's point #3, that modern technology makes it easier for individual actions to affect large numbers of other people.

Point 2 relates to the Fermi Paradox (if aliens exist, why have none of them visited us?). At some point, something will be invented which makes it possible for a single person or a small group to destroy an entire planet. Maybe it will be nanotech. Whatever it turns out to be, it's only a matter of time until somebody uses it. Unless the human race has spread out to other planets by that time, that's the end of us.

I believe that's the answer to the Fermi Paradox. The technology needed for interstellar travel is so far ahead of the tech needed to destroy the world that any intelligent race is bound to wipe itself out before it can get out of its home system.

Anonymous said...

About nanotech, I think you mean "self-reproducing nanotech". In science fiction, it could come in two forms: genetically engineered micro-organisms, or self-reproducing machines. In the real world, self-reproducing machines of any size aren't even a laboratory curiosity yet, and I think it unlikely that small useful ones are possible at all. Things get much harder to do when you get down to the size of a few molecules.

To build one nearly-microscopic microcircuit currently requires several material refiners and factories running many different processes - and then even with piezo-electric actuators, etc., there are only a few things a bare chip can move in the real world.

Add a few more factories to add lenses for camera circuits, "legs", "arms", etc., and I can see factory-produced nanomachines being useful, but not building themselves, and obviously not destroying the world unless we're too dumb to keep the military models under tight control. I can see networked automated factories capable of making more automated factories by 2050, but microscopic things doing the same thing, never. And none of these give a teenager world-destroying capability.

OTOH, genetic engineering is already practical on a limited scale, and it does include tiny self-reproducing organisms. There is some "destroy the world" potential there, although designing an organism that outcompetes the natural ones is going to be difficult. We're presently at a stage where we can make one change at a time, with hope but no certainty as to how it's going to turn out. I think it's going to take far more than that to create a super-bug that can kill many people, rather than just killing a few and provoking strict quarantine measures.

And I wouldn't worry about the teenager with a home genetic engineering lab nearly as much as I'd worry about government-funded military projects... Suppressing the first isn't going to do anything about the second, and may even make it easier to keep such projects secret.

markm

Anonymous said...

Regarding point #3, are you suggesting that a society of superheroes and Olympian gods can't have market-based solutions? Does that mean socialism, state-centered or otherwise, is the society of choice for superbeings?

Very tangentially, at the moment I'm playing with a kindred concept for an upcoming role-playing campaign: A world where there are superbeings who are powerful enough to fight national armed forces, and the international legal system has recognized this by declaring them to be legally sovereign entities. So Superman's apartment in Metropolis becomes the Kryptonian embassy, as it were, and when Superman goes to London or Beijing it's treated as a state visit. That is largely because it's easier to formally recognize that he's too powerful for conventional law enforcement to deal with, and to apply the rules for ambassadors, for invading armed forces, or for spies, depending on the situation.

The campaign should be very interesting. . . .

Anonymous said...

and the international legal system has recognized this by declaring them to be legally sovereign entities.

Hmm. So given the fact that Superman can go anywhere he wants anytime, doesn't that mean he'll be violating international law every time he saves a kitten in Somebackwaterstan? Or that he would have the right to extradite crooks from any country, something only a certain country can do at the moment? Or finally, that his arms and feet would be declared WMD and a UN invasion of his apartment approved? :)

Anonymous said...

"A world where there are superbeings who are powerful enough to fight national armed forces"
And inevitably win? Can the superbeings suffocate, be poisoned, be killed by nuclear weapons (at what range?), etc.? Are the superbeings sufficiently numerous that it's hard for them to form cartels, and that markets for their 'protection' allow states to defend themselves against marauders?

Anonymous said...

And inevitably win? Can the superbeings suffocate, be poisoned, be killed by nuclear weapons (at what range?), etc.? Are the superbeings sufficiently numerous that it's hard for them to form cartels, and that markets for their 'protection' allow states to defend themselves against marauders?

Good questions, and some of them are the kind of things I'm starting this campaign to explore. However, here are some premises I've worked out:

* The superbeings in question are not certain to defeat a nation-state in a war. What's at issue, rather, is that they could inflict ruinous damage on a nation-state that went to war with them. To quote a classic epigram, "One more victory like that and we are done for."

* There are about sixty sovereign individuals in the world, ranging from highly ethical types like Superman to "outlaw sovereigns."

* The player characters are going to be less powerful, non-sovereign superheroes belonging to a force whose primary mission is to restrain other supers who act as outlaws, especially sovereigns.

Mike Huben said...

Markm is on the money: self-reproducing nanotech is the worry. And of course we already have it in biology.

It's already possible to produce biological weapons that could kill most of humanity: the Soviets had quite a lot of that capability.

And as biotechnology gets cheaper and more advanced, it will be easier and easier to do in your home, as a hobby. The smallpox genome has been published: the virus could be synthesized.

Thirty years ago, I considered how to do it the low-tech way: dig up some smallpox victims from under the permafrost. That's well within the capabilities of people without much wealth. Anybody could do it on a summer holiday, then disperse it in the New York City subways.

What we really need is the equivalent of an immune system that can actively deal with such threats. That requires government action.

I envision this problem growing until such an immune system is created for wealthy nations. Poorer nations will suffer greatly until the price comes down to where they can afford it too.

There would be a similar trend if we ever get non-biological self-reproducing machines, at macro or nano scale.

William Newman said...

Rex Little wrote, "The technology needed for interstellar travel is so far ahead of the tech needed to destroy the world that any intelligent race is bound to wipe itself out before it can get out of its home system."

I don't think that's the right window. "Destroying" "the world" only works in the window before we have the tech for economically self-sufficient operations in the rest of the solar system. That is a considerably smaller window (in terms of technological progress, and presumably, time) than before we have the tech for colonizing Alpha Centauri.

It is of course still a pretty big window, because the window-closing sustainable economies off Earth come along much later than the window-opening nuclear weapons and selective breeding of war pathogens, and nukes and bugs seem to make it feasible to knock back the population by 99.99% or so. (I'm more skeptical about the last 0.01%--0.01% of six billion people is still 600,000 survivors--hence my quotes on "destroying" above; and in this context that's not just a trivial quibble, because to explain the Fermi paradox, you want to absolutely exterminate the species, not just knock civilization back a few centuries.)

But by the time a civilization has the kinds of hyperadvanced technology that people are worrying about, not just nukes and bugs, it should tend to have the technology it needs to colonize the solar system. In particular, if you have the technology for self-replicating virulent grey goo which can chew through all competing systems on Earth (including, one would ordinarily expect, systems defended by technology of comparable sophistication), you should already have self-replicating systems which flourish on previously uninhabited planets and comets. Outliers of the civilization might even be percolating out of the solar system: how many comets and brown dwarfs and such are there in interstellar space?

My best guess is that Fermi's paradox is either because it is really really hard for life to get started (which seems a little unlikely, and would make it surprising that on Earth life seems to have gotten started about as early as physically possible) or, more likely, because typical planets get sterilized every twenty million years or so and our history (with nothing worse than the dinosaur-killer asteroid) is an extreme outlier.