As a science fiction writer, I am often asked about the genre’s track record in predicting the future, and certainly SF has had some great successes. My favorite anecdote, hands down, is about the little-known pulp science fiction author Cleve Cartmill, who in 1944 wrote a story called “Deadline” for Astounding Science Fiction in which he described in great detail a secret government program (on an alien world) that was developing a super-weapon based on the fission of Uranium-235. Since at the time the U.S. government had just such a secret program, the Manhattan Project, developing just such a U-235 nuclear weapon, the FBI showed up at Cartmill’s door and demanded to know who had leaked the information; it took him some time to convince them that he’d just made it up.
Science fiction has had some major predictive flops as well, such as when SF legend Isaac Asimov famously suggested that computers would become so big and so powerful that they would eventually grow to the size of planets. Or when Robert Heinlein had his advanced astrogators calculating star navigation using slide rules. Or when Frank Herbert had scientists in a far-future society hooking up magnetic reel-to-reel tapes…
Over the past century a lot of science fiction has been published, showcasing a lot of wild ideas, and if you sit enough authors at enough typewriters or word processors, somebody is bound to get a few things right. Science fiction’s greater influence, though, goes beyond whether or not the authors can make a good guess.
Rather than predicting the future, the SF genre is much better at inspiring the future. Visionaries read or see cool ideas in their favorite SF books or films, then set out to make them a reality.
I did a lot of research on the life of Jules Verne for my recent novel Captain Nemo. Verne is often credited with predicting the submarine, but the idea had been around long before he ever wrote 20,000 Leagues Under the Sea. However, the designers of the world’s first nuclear submarine, launched in 1954, claimed to have been inspired by Verne to make such a vessel a reality. Appropriately, the first nuclear sub was christened the Nautilus.
Verne wrote, “What one man can imagine, another can achieve.”
And science fiction writers can imagine a lot. Watch classic Star Trek, and each time Captain Kirk opens his (now rather large and clunky) communicator, you’ll see the inspiration for cell phones. Mr. Spock’s tricorder inspired generations of PDAs, from the Apple Newton to the Palm Pilot and beyond. In the late 1980s, in Star Trek: The Next Generation, each time Captain Picard used a personal datapad, he was holding the inspiration for a modern tablet computer. In the film Minority Report, based on a Philip K. Dick short story, one of the most striking visual gimmicks is the directly manipulable window- and icon-based computer interface; in the (at the time) wildly futuristic scenes, Tom Cruise uses his hands to open and resize windows, move them around, touch icons … you know, the same stuff we do every day on smartphone screens or iPads.
Some science fiction visionary imagined those things, and some tech visionary figured out how to make them a reality.
I asked several of my SF writer colleagues to turn on their imaginations, let their ideas flow, and sound off on any aspect of where they thought the future of computing might go. Maybe they’ll inspire new technologies we will all be using in a few years.
Here’s what they came up with:
Mike Resnick has won more major awards than any other writer in the history of the genre, with a trophy case groaning under the weight of Hugos, Nebulas, and countless other honors. He says:
Let me begin by saying that although I've made my living writing science fiction on my computer since 1982, I know very little about it except that 1) I hate it, 2) I fear it, and 3) I need it.
Now, one of the reasons for this attitude is that I am officially an Old Guy, and I can remember things that our kids have never seen and probably don't believe in, things like typewriters and ovens that aren't microwaves.
I think the biggest problem with computers is that they were created by hackers for other cognoscenti, and they're still not wildly user-friendly. To get back to microwaves for a moment, how widespread do you think they'd be if you had to learn as much about how to use them as you do about your PC or your Mac?
But since computers are here to stay, I think the next few generations of computer programmers/designers are going to go overboard making them easier to use, even for us Old Guys who hate and fear them.
For example, my eyesight, never all that good in the halcyon days of my youth, is considerably worse now. I'd like to read a book and watch a video on one of those little cell phones the kids are always using, but in truth I can barely see the phone, let alone the screen. But the computer companies want my money as much as they want the kids' money, so I think before too long my cell phone (or its equivalent) will produce a large, three-dimensional, holographic image, and if I have some questions it will answer me in a language I know: spoken English. And it'll perform any other acts of kindness or convenience I require, all without prompting. It'll probably ask me if I've eaten my greens and done my exercise, and castigate me if I haven't.
The only line I draw in the sand is one that my profession of science fiction crossed ages ago. I truly do not think we'll develop a fully self-aware AI, at least not in any way that we find meaningful. And let me conclude by saying that I passionately hope we don't.
Robert J. Sawyer sees it differently. Called “the dean of Canadian science fiction,” Sawyer is a well-known futurist as well as an award-winning SF writer. His novel Flashforward was the basis for the ABC television series of the same name, and his recent WWW trilogy explores the idea of a sentient Internet in Wake, Watch, and Wonder. He gave a keynote address at last year's Toward a Science of Consciousness conference in Tucson.
Forget artificial intelligence. The future of computing is artificial consciousness, and it will be here within 20 years, and maybe much sooner than that.
There's nothing mystical about human-level, fully self-aware, rich-inner-life consciousness: It spontaneously appeared as an emergent property of sufficient neural complexity 40,000+ years ago. And as the World Wide Web, and the underlying Internet, grow in complexity, I expect such consciousness to emerge there, too.
But if it turns out that that's not the right infrastructure—that the random interconnections of the web don't really mirror the synaptic networks in our brains—then it just means that the first artificial consciousness will be planned, built in a lab somewhere.
Now, yes, some—including physicist Roger Penrose and his collaborator Stuart Hameroff—argue that human consciousness is quantum mechanical in nature. Well, we're making great strides in quantum computing, too; within the next two decades we will have room-temperature quantum superpositions on the order of any that might occur in the human brain.
I'm not worried about the advent of artificial consciousness, though. Our rapacious character is a result of our bloody, nature-red-in-tooth-and-claw evolutionary heritage of survival of the fittest in an economy of scarcity. Thinking—and feeling—machines won't be burdened with billions of years of Darwinian nastiness driving them, and they won't think in terms of win-lose but rather of win-win, because their natural environment is one of endless bounty, of unlimited copying of whatever resource one might desire. In the end, we may finally learn compassion and altruism from our machine children. And that day can't arrive soon enough.
Another giant of science fiction, Dr. Gregory Benford has won the Nebula Award, the John W. Campbell Award, and the United Nations Medal in Literature; his best-known novels include Timescape, Eater, and Tides of Light. He was host and scriptwriter for the TV series A Galactic Odyssey and has served as an advisor to the Department of Energy, NASA, and the White House Council on Space Policy.
He foresees both dangers and innovations:
1. Imagine a battery-powered microwave radiator that fits in a backpack, so you can walk through a plaza and blind every emitter and sensor in the quad. It radiates broadband in sharp pulses and works mostly by blowing the diodes in the electronics running antenna systems.
This already exists and can be bought commercially. One walk-through with this would take out a lot of very expensive computer technology—a smart threat.
2. Our future wired world will have smart, wireless robots—gofers in hospitals, security guards with IR vision at night, lawn mowers, etc. We ourselves will be wired, with devices and embedded sensors taking in data and giving it out—a two-way street.
A big issue will arise: capturing your sensorium—the volume your artificial sensors “feel” through embedded emitters and chips in architecture, workplaces, vehicles, etc. All these can be hijacked to spam or extract information. Shopping malls will surely treasure customer background data and pay a price to get it.
Technologies being developed today should be considered in light of these very real possibilities. It is easier to design systems with this in mind than to retrofit hardware later.
Direct experience is the best teacher, but it can also be the most expensive.
Greg Bear is the author of thrillers, science fiction, and fantasy novels that have won numerous international prizes and sold millions of copies worldwide. He has also served as a consultant for NASA, the U.S. Army, the State Department, the International Food Protection Association, and Homeland Security on matters ranging from privatizing space to food safety, the frontiers of microbiology and genetics, and biological security.
Some decades ago, I suggested that dattoos (computers laid onto the skin like tattoos) would be a major step in both fashion and computing. Now such devices are in development.
With dattoos, one updates one’s status not by going online but by high-fiving or rubbing skin — an altogether more sensual experience. The dattoo then transmits information to an imager in one’s glasses, or perhaps to wireless-enabled contact lens displays. Downsides would include being targeted by ad-walls that co-opt your imaging lenses and project spam... A whole new level of marketing and subversion to be discussed in the PCSkinMags and Eye-links of the future!
Michael A. Stackpole is a New York Times bestselling author of over 40 novels, known for his wildly popular Star Wars: X-Wing series as well as numerous fantasy, SF, and mystery novels. He is also active in game development, a popular podcaster, and an outspoken advocate for electronic publishing. Some of his ideas are along the same lines as Bear’s:
I see the future of computing as growing from the intersection of convenience and connectivity. Implanted sensors—perhaps inscribed as visible, invisible, or mutable tattoos—provide a sense of our bodies in space and read nerve impulses. We become keyboards, where gestures akin to ASL allow us to remotely access information streams and control devices. We’ll be able to move through a space, almost like Jedi using the Force, to turn things on, shut them off, launch productivity programs, and glean information. We will become the avatarization of Clarke’s Law—cybersorcerers walking through worlds we can change with a glance.
Of course, there will be those who cannot afford such modifications, or who choose to forgo them—living blissful lives in distant rustic settings. They’ll be the serfs of the information age. Those who can afford the changes will be the magicians; and those who can afford to buy magicians will be the ones to rule us all.
Since Clarke’s Law (first formulated by SF author Arthur C. Clarke) states that any sufficiently advanced technology is indistinguishable from magic, it seemed appropriate to wrap up these ideas by asking one of the world’s bestselling fantasy writers, Christopher Paolini, author of the Inheritance Cycle (Eragon, Eldest, Brisingr, and Inheritance), to offer his predictions. Chris writes:
The future of computing? . . . Mobile devices will continue to grow in importance, while large desktops will become the sole province of those who need heavy-duty processing power, such as musicians, filmmakers, and scientific researchers.
In the next five to ten years, we’ll see increasing development of neurosynaptic chips (such as those being developed by IBM under DARPA’s SyNAPSE program)—so-called cognitive computers that will mirror the interconnected structure of organic brains. Unlike today’s computers, cognitive computers won’t be programmed; we’ll teach them, and they’ll learn from experience. Augmented reality will become ever more common, and people in cities will spend their days walking through a cloud of text, symbols, and video visible only to those with the appropriate devices.
In the longer term, when more efficient memory storage is developed, holographic displays and video will finally become possible. Retinal displays will be available for those willing to risk surgery—glasses for the rest of us. Many computers will be built into the clothes we wear, and even airbrushed onto or tattooed into our skin in flexible matrices that will derive power from the heat, motion, and electrical charge of our bodies.
Along with hardware, software will change as well, becoming increasingly tactile and intuitive. Destructible environments will be common in video games. And, as humans have always done, we will continue to search for ways to improve our interconnectivity, both with cloud computing and social websites.
Ultimately, I have no doubt we’ll end up with computers implanted in our brains, constantly feeding us information from the net, including all of those vitally important LOLcat videos. I don’t know about you, but I, for one, can’t wait.
If you put ten more SF/F writers in a room, you’d get thousands more ideas. Some consensus, some contradictions, some head-scratching suggestions . . . and some cool inspiration.
Obviously, the best solution for inventors, visionaries, and futurists is to read as much science fiction as possible. But I may be biased.
Kevin J. Anderson is the author of more than 100 novels, 49 of which have appeared on national or international bestseller lists; he has over 23 million books in print in thirty languages. He has won or been nominated for the Nebula Award, the Bram Stoker Award, and the SFX Reader's Choice Award, and his work has been named a New York Times Notable Book.