March 25th, 2009, 05:44 PM
Damocles

Re: Top Ten Things Wrong with Star Trek: TNG

Star Trek: TNG as BAD SCIENCE.

Quote:
In Star Trek’s universe:

1. Aliens exist. They’re common enough that in just one corner of the galaxy, dozens of intelligent species interact, sort of like our nation-states. But in reality, as we’ve known ever since Fermi asked his famous question “Where are they?”, intelligent life outside Earth must be somewhere between very rare and nonexistent. Yes, that’s a controversial view which deserves its own post one day, but it’s hard to believe anything else once you’ve built up an intuition for large numbers. Habitable planets other than Earth have existed for billions of years; the average Earthlike planet is older than Earth. That means some other civilization should have popped up in the neighborhood hundreds of millions of years ago — enough time to invent a bicycle and ride it to Alpha Centauri. Galactic and even intergalactic distances are small compared to the distances attainable on these time scales by light or by optimized interstellar craft. In far less time still, exponential growth lets a civilization blossom into something encompassing many star systems. And yet we see no sign of any alien intelligence, anywhere. They haven’t taken apart the stars for raw materials, or switched them off to conserve energy. They haven’t sent any humanitarian missions, or unleashed berserker robots to suppress potential competitors. The simplest explanation is that they’re not out there — apparently setting up a technological civilization from nothing requires one or more hard steps, perhaps starting with abiogenesis.
2. Almost everyone lives on planets. Star Trek, like most of our thinking about the future, is shamelessly planetocentric. This is just a habit we will eventually get over. Nothing makes other planets particularly good places to colonize compared to environments we could construct ourselves if we had anything close to Trek-level technology. In the long run, civilizations will find it more efficient to expand into outer space, where there is a lot more room. Our asteroid belt, if made into space settlements, could house thousands of times the population of Earth’s surface. These settlements could rotate for artificial gravity, would have unobstructed solar energy 24 hours a day, would make launches into space cheap, and would be less vulnerable to various disasters.
3. Humans are unenhanced. It looks like the next few centuries, probably even decades, will bring advanced genetic engineering, life extension, brain-computer interfaces, mind uploading, and other forms of augmentation. Some of these technologies exist in Star Trek, but they’re used to any significant extent only by the Borg, a race portrayed as evil, scary, numerous, and fairly stupid. However, these technologies certainly aren’t intrinsically horrible; they can be used to great effect for good as well as evil ends. Extending lifespans is a good idea simply for humanitarian reasons. The same is true of technologies meant to limit or even abolish suffering, as well as those meant to expand opportunities for positive well-being. Even qualities like wisdom, kindness, and dignity are not above being tweaked and emulated. The capability to scan brains and transfer them to other substrates would, among other consequences, allow arbitrary copying of human capital constrained only by available hardware. If we could amplify intelligence beyond human defaults, that would have enormous ramifications; for just one thing, it would greatly speed up scientific progress. It’s hard to see how governments everywhere would not only fail to capitalize on all this but also manage to prevent most dissenting groups and individuals from doing so.
4. Economic growth is underemphasized. We’re always finding better ways to do things and adding to our stocks of capital, so economies nowadays tend to grow by a few percent each year, a trend that has been accelerating on long timescales — growth was much slower before the industrial revolution, and especially before agriculture. As so often, enlightenment comes from taking out your pocket calculator. At a constant 3% per year, the economy grows by a factor of 7,000 in 300 years and a factor of 140,000 in 400 years. At 5% per year, the factors are 2.3 million and 300 million, respectively. Lesson: it’s easy to underestimate exponential growth. Artificial general intelligence or molecular nanotechnology, both of which I’ll return to later, would push the growth rate far, far beyond those figures. Now, it’s true that you can’t blindly assume trends will continue. Trek’s universe, though, certainly doesn’t seem to lack technological toys that could be put to good economic use, and these people have advanced aliens around to trade with and get ideas from. And yet, although I can’t back this up with anything rigorous, the world in the newer series doesn’t seem to me that much richer than the world in the older series; the older civilizations don’t seem that much richer than the younger ones; and the Earth of Trek doesn’t seem richer than the Earth of 2007 by nearly a large enough factor.
5. Artificial general intelligence exists, but has not revolutionized society. This is the biggest of all these points. To me, it makes many of the others irrelevant; I hold the opinion that human-level AI, if invented, will blow everything else out of the water. An artificial mind will have a different set of strengths and weaknesses than a human, so if one of them can perform human tasks on a starship, it’s already going to be superhuman in many fields — not just at the genius end of the human scale, but with capabilities far surpassing those we could imagine in any human. Star Trek probably has some silly excuse why Data can’t make copies of himself, but a real AI could create new transhuman capital at a rate limited only by the cost of computer hardware. (Come to think of it, I recall them copying the hologram doctor guy, but I don’t think much came of it.) Superhuman AI would also accelerate scientific and technological progress; imagine the fruits of a century of research compressed into a year, or a day. But although those consequences are relatively easy to think about, they aren’t even the main point of what futurists are calling the technological singularity — they aren’t necessarily what makes the event of superhuman AI entering the stage so sudden and discontinuous. A key insight is that one field of research an AI would accelerate is the field of AI itself. It could design smarter versions of itself that could in turn design even smarter versions. An AI that started out close to human intelligence would thus catapult itself quickly to whatever extremes of cognitive ability could be achieved given the hardware it had access to. That matters, because something that’s much smarter than you can pick out a state of the universe that very closely optimizes its goals, and who knows what those goals and the smartest ways to reach them will look like? This gives the singularity a much-discussed element of unpredictability. With some exceptions, it’s impossible to predict how something much smarter than you will behave, leaving the honest science fiction writer just plain screwed.
6. Advanced nanotechnology exists, but has not revolutionized society. Star Trek’s world has nanorobots, or “nanites” as they call them, suggesting self-replicating assemblers of the kind theorized by Eric Drexler, not just the technology with nanoscale features that the word “nanotechnology” has come to mean. There’s controversy about whether Drexler’s proposals would work, but if they did, they would amount to far more than just a toy you could pull out at a time that’s convenient and then go back to ignoring. Nowadays, it’s supposed that molecular nanotechnology would be based not on roaming assemblers but on desktop nanofactories that could produce a range of products from simple feedstock, including but not limited to more nanofactories (with a doubling time perhaps less than a day!), computers cheap and powerful enough that you could start considering brute-forcing AI, large amounts of consumer goods, large amounts of conventional weapons, large amounts of nuclear weapons, and weapons with entirely novel properties. War would look different and probably much less stable. Humanity would gain not just one but a whole new menu of ways to screw itself over. But there’s an upside, too, in that we could end ancient curses like poverty — some have predicted a post-scarcity economy — and, through nanomedicine, various diseases and other limitations of the human body.
7. Posthumans behave unreasonably. If you could do just about anything you wanted to, if you had all the time in the world to think about who and what you wanted to be, would you really end up just sitting around being enigmatic and half-heartedly harassing mortals, like the Q Continuum? Probably not — nor would most other people, of whatever species. If posthuman civilizations were here, and were indifferent to our plight, and had any preferences at all about the physical universe’s makeup, no matter how weak, they’d have turned those preferences into reality long ago, probably wiping out humanity in the process. If they believed in anything we’d recognize as good ethics, they could do a lot better than letting history run its natural, cruel, risky course. One can imagine goal systems that fall into neither category. It’s hard, however, to think of any that would lead to behavior stable over subjective eons, meddlesome enough to be noticeable, nonsensical enough to give off an air of mystery, and yet restrained enough to leave the basic setting intact.
8. AI is anthropomorphic. The human mind occupies a very specific corner in the space of intelligent programs. It got that way because it was designed incrementally by natural selection in a very specific environment, under various biological constraints like the slow serial speed of neurons. Designing an AI is hard; designing an AI that resembles the human mind is much harder still. Yet Data does resemble a human — just a human whose emotional state is always set to “neutral”. He tends to misunderstand human emotion and social interaction, in the way that humans lacking experience with these things might misunderstand them, even though there’s no reason why, to a general intelligence, they should be mysterious. Going overboard on significant digits is stupid computer behavior, and stereotypical human nerd behavior, but there is no reason why an AI should make mistakes typical of either stupid computers or stereotypical human nerds. And real AIs would not come neatly packaged as androids so we can interact with them the way we would with a human individual without upsetting our intuitions too much. They would be massively copyable, mergeable, tweakable, expandable, decomposable, upgradable, and most probably not confined to any single robot body.
9. Ideas have changed too little. In Star Trek’s society, as far as I know, no taboo of ours has become universally accepted. Yes, the mores of Star Trek’s society are such that we consider them progressive, but progressives as little as 100 years ago would be shocked to see what sort of things we now consider normal. It would be surprising if there were nothing in future customs to shock us. There don’t seem to be any genuinely new ideas on how to make society work, either — I’m thinking along the lines of prediction markets, or even just blogs. As with many other points, I don’t blame the writers for this; it is in predicting the future of ideas that futurism runs into its hardest limits. But a future with no weird ideas is still deeply unrealistic, and that’s worth keeping in mind.
10. The world remains balanced too precariously between utopia and disaster. If the world needs saving every couple of decades, why is it still around in the 24th century? In the real universe, there won’t be any Picards to miraculously save the day, with everyone knowing that, though dramatic tension requires difficulties, the ending will be happy. We will have to defeat existential disasters at a more systematic, institutional level, and by more comfortable margins. Behind every story of extraordinary heroism lies a less exciting and more interesting story about the larger failures that made heroism necessary in the first place.
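The compound-growth figures in point 4 are easy to check for yourself. A minimal sketch in plain Python (the function name is mine, not from the quoted post):

```python
# Compound growth: a constant annual rate r sustained for n years
# multiplies the economy by (1 + r) ** n.
def growth_factor(rate: float, years: int) -> float:
    """Total multiplier after compounding `rate` annually for `years` years."""
    return (1 + rate) ** years

for rate in (0.03, 0.05):
    for years in (300, 400):
        print(f"{rate:.0%} for {years} years -> x{growth_factor(rate, years):,.0f}")
```

The printed multipliers come out to roughly 7,000 and 140,000 at 3%, and roughly 2.3 million and 300 million at 5%, matching the numbers quoted above.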
Here is the link:

http://www.acceleratingfuture.com/steven/?p=3
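The timescale argument in point 1 also survives a back-of-the-envelope check. A quick sketch, using round numbers of my own choosing rather than anything from the article:

```python
# Sanity check of point 1: even at a crawl of 1% of light speed, a
# civilization with a head start of "hundreds of millions of years"
# could cross the galaxy dozens of times over.
GALAXY_DIAMETER_LY = 100_000      # Milky Way disc diameter, rough figure
HEAD_START_YEARS = 300_000_000    # assumed head start of an older civilization
SPEED_FRACTION_OF_C = 0.01        # deliberately slow interstellar craft

crossing_years = GALAXY_DIAMETER_LY / SPEED_FRACTION_OF_C
print(f"One galactic crossing: {crossing_years:,.0f} years")
print(f"Crossings possible within the head start: {HEAD_START_YEARS / crossing_years:.0f}")
```

One crossing takes 10 million years at these assumptions, so the head start allows about 30 of them — which is the point: the silence is not explained by distance.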

I'll have something to say about the stupid technology in another post.