Three more ways
Issue #155 • September/October, 2015
“What are you doing?” a voice asked.
I looked up and saw O.E. MacDougal, Dave’s poker-playing friend from Southern California, who is now my friend, too. Accompanying him was Dave. Dave, of course, is Dave Duffy, the publisher of Backwoods Home Magazine.
“You!” I said. I hadn’t seen Mac in a few years. “What are you doing here?”
“Came up for the fishing; gonna stay a little longer because the weather looks so good.”
I nodded and jokingly asked, “Hey, are there any new ways the world can end since the last times we talked about it?” Twice in the past, Mac talked about various ways civilization could be seriously impaired, or could even end. What I didn’t expect was his response.
“New ways? Yeah.”
I paused for a second. He smiled.
“Are you joking?” I asked. “How can there be new ways?”
“There are things we didn’t talk about before. Things that I wasn’t sure could be serious, but I’m having second thoughts. They’re technological things.”
“Didn’t we talk about technological things the other times?”
“We talked about a few: nuclear war, manufactured pandemics …” He looked up, as if in thought, like he was trying to recall other things we’d talked about.
“New ones,” I said, more to myself than to him because I was trying to guess in my mind what the new ones could be.
“Two are new technologies we’re hoping will benefit humanity, but they may have a dark side, too,” he said.
“Benefits with a dark side? That sounds like a contradiction.”
“No. Nuclear power is like that. It held the promise of a new source of energy, but it brought with it the danger of annihilation from the bomb, along with power plant meltdowns that can be hazardous to our health. It also has the possibility of being used as a weapon of terror.”
“Well, I guess it’s always nice to have new reasons to hide behind the couch,” I said somewhat exaggeratedly and he and Dave laughed.
I looked to Dave and asked, “Is this something you’d like me to write about?”
He thought a minute. “Depends on what the scenarios are.”
He turned to Mac and said, “You’ve already talked about eruptions of supervolcanoes, flood basalt volcanoes, asteroids and comets hitting the earth, pandemics, exploding stars, nuclear war, and a horde of other horrors that not only keep John awake at night, but may even drive humanity into extinction. I’ve had him write about them twice (“The chances of global disaster,” Issue #57, May/June, 1999 and “Zombie apocalypse,” Issue #134, March/April, 2012). Now you’re saying there’s something new … ?”
“… that I should worry about?” I added.
“Yeah,” Mac said.
Dave looked skeptical but asked, “What are they?”
“There are three: nanotechnology, artificial intelligence, and encountering another species from outer space.”
Dave and I glanced at each other. I think we both thought this was a prelude to a joke.
But Mac continued, “Let’s take them in order and we’ll start with nanotechnology,” and he was, as the saying goes, off to the races.
Nanobots the size of blood cells will be injected into patients to treat various diseases.
“Nanotechnology was conceived only in the last half of the 20th century. One of the first people to give it a lot of thought was the Nobel laureate and physicist Richard Feynman, who talked about the concept in 1959. But it would be another 27 years before an engineer named Eric Drexler used the term ‘nanotechnology’ in a book titled Engines of Creation: The Coming Era of Nanotechnology.”
“I’ve heard of nanotechnology,” I said, “but I don’t know what it is.”
“It’s the manipulation of matter at the atomic, molecular, or even supramolecular level, meaning we’re manipulating individual atoms, molecules, or groups of molecules.”
“Is that what supramolecular means?” Dave asked. “A group of molecules?”
“Yeah, and ‘nano’ is a prefix meaning ‘one billionth,’ so a nanometer is one billionth of a meter. Roughly speaking, if you placed ten hydrogen atoms in a row, they’d form a line about one nanometer long. And the now pretty much universally accepted convention is to say that something is nanoscale if at least one of its dimensions is between one and 100 nanometers. For example, a sheet of carbon atoms might be several feet long and several feet wide, but only one atom thick. Because that third dimension is on a nanoscale, the sheet would be considered a product of nanotechnology.”
“So what’s the technology going to be used for?” I asked.
“It already has applications but it holds promise for a lot more. It’s expected to have medical uses including use as a surgical tool that will replace invasive surgery.”
“Really?” I asked.
“Yeah. Imagine making little nanoscale robots— or nanobots, as they’re called— that can be injected into a patient to go after and either excise or deliver drugs to a tumor, and only the tumor. Or nanobots that would work to clear blood vessels of plaque and obviate the need for bypass surgery or stents.
“The technology also has applications in electronics for making incredibly small electronic parts, and in optics, chemistry, and industrial processes. The list is endless. It’s already being used to make super-strong, lightweight fibers that go into everything from tennis rackets to airplane wings. But there are those who worry there’s a dark side to it, and they’re not professional doomsayers or Luddites; they’re legitimate scientists.”
“What are the problems?” Dave asked.
“As far as health goes, nanofibers and nanoscale powders already used in a variety of commercial products are so small, and some so nearly indestructible, that they readily become airborne, get into the environment, linger for years, and present health risks that may rival asbestos. Laboratory studies have shown lung and brain damage in mice that breathe the nanoparticles. But those are just health problems, and when I say ‘just,’ I don’t mean to diminish them, because they’re important. There may be an even bigger problem, though, and it came to light from one of the early engineering pioneers of nanotechnology, that engineer I just mentioned, Eric Drexler. In his book he imagines the possibility of creating nanoscale robots that have but one purpose: to build nanoscale duplicates of themselves.”
He paused and I said, “That’s it?”
“That’s it. Each nanobot would build another nanobot just like itself and then the two would each make one more and the four of them would make four more …”
“So the number of nanobots would keep doubling,” I said. “What would stop them? And what’s the danger?”
“I see where this is going,” Dave said. “If their numbers can grow geometrically, they’d consume the world’s resources making copies of themselves. And it would happen faster than you can imagine.”
When Mac nodded, I asked, “Could that really happen?”
“No one knows for sure,” Mac said. “But this is the stuff nightmares are made of. Some genius might decide to make self-replicating nanobots, just to see if it can be done. Or they might be created by mistake. Either way, one or more could escape from the laboratory setting and set the disaster in motion. Someone even came up with the scenario where nanobots are created to clean up an oil spill but, because of a programming error, they start consuming the carbon in everything they find so that they can make duplicates of themselves.”
“Why carbon?” I asked.
“A lot of nanotechnology is based on carbon because it’s so versatile and you can make super-strong items with it. Diamonds come to mind.”
“So, if they get out,” Dave said, “every blade of grass, lump of coal, sardine, bird, and human would become a host to and a carrier of the nanobots until each was consumed to make more nanobots.”
Mac added, “Yes, and their spread would probably be unstoppable. In days, maybe weeks, but certainly no more than a few months, they could conceivably devour every living thing from the size of blue whales down to the size of viruses. The last living beings would be on the space station, and they’d eventually starve to death because everything they need to sustain them has to be brought up from the planet.”
“To add to the nightmare,” he continued, “terrorists or some psychopath might get hold of the plans to make them and deliberately turn them loose on the world. Drexler even gives a scenario in which nanobots go awry.”
He looked at Dave, who had just noted how fast these self-replicating nanobots could spread, and said, “Drexler pointed out that if a nanobot could duplicate itself in 1,000 seconds, then in 1,000 seconds there would be two. In another 1,000 seconds there’d be four. In less than an hour there’d be eight. But, as you said, the growth is geometric, and in 5½ hours there’d be more than a million and in 10 hours there’d be more than 68 billion. If they could sustain this kind of growth, in less than two days they’d outweigh the earth.”
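Drexler’s doubling arithmetic is easy to check with a few lines of Python. This is only a sketch: the 1,000-second doubling time comes from the scenario above, while the one-picogram mass per nanobot is an assumed illustrative figure, not something from Drexler’s book.

```python
import math

# Drexler's scenario: the nanobot population doubles every 1,000 seconds.
def nanobots_after(seconds, doubling_time=1000):
    """Population after `seconds`, starting from a single nanobot."""
    return 2 ** (seconds // doubling_time)

print(nanobots_after(3_000))    # 8 -- in under an hour
print(nanobots_after(20_000))   # 1,048,576 -- more than a million in ~5.5 hours
print(nanobots_after(36_000))   # 68,719,476,736 -- more than 68 billion in 10 hours

# How long until the swarm outweighs the earth?
# (One picogram per nanobot is an assumption for illustration.)
EARTH_KG = 5.97e24
BOT_KG = 1e-15
doublings = math.ceil(math.log2(EARTH_KG / BOT_KG))
print(doublings * 1000 / 3600)  # about 37 hours -- well under two days
```

Even with a much heavier or lighter nanobot, the answer barely moves: each factor of a thousand in mass only adds or removes about ten doublings, a few hours either way.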
“Do you think it could really happen?” Dave asked.
“Maybe. But of course, it wouldn’t happen quite as fast as that.”
“Why not?” I asked.
“They’d begin getting in each other’s way. They may even start consuming each other. And as they ran out of material to consume to make more nanobots, their growth rate would slow down. But they’d spread across the landscape like mold going across a slice of bread. And conceivably they’d be so small they’d also get picked up by wind, the way dust is swept up off a road; they’d be carried by water whether it was rivulets or ocean currents. We wouldn’t be able to contain them.”
“What would be the end?” Dave asked.
“Ever hear the phrase ‘grey goo’? It’s used to describe the earth after self-replicating nanobots have devoured everything they were programmed to use to make duplicates of themselves and all that’s left is … well, not literally ‘goo,’ more likely dust, but a barren landscape nonetheless.”
“But couldn’t we find a way to control them?” I asked.
“We can’t even stop the spread of the flu, and we know how that spreads.”
“Are you making this up?” I asked. Then I said, “We could put controls on making them so this couldn’t happen.”
He shook his head. “Someone suggested that another scary scenario is someone getting a 3D printer that will create things on a nanoscale, and they might get the blueprints for making self-replicating nanobots off the Internet …”
“And we’re back to where they either accidentally get out of the lab or they’re intentionally turned loose,” Dave said.
“You know,” I said, “I lie in bed at night worrying about being caught in a tsunami when the Cascadia Subduction Zone off the coast here cuts loose because where I live there’s no way for me to get away from the coast if it happens in the middle of the night (“Subduction zone tsunami: what residents of the Pacific Northwest have to fear,” Issue #94, July/August, 2004). But I can do something about that: I’m going to move somewhere safer in the next few months.
“But now I’m going to worry about nanobots that’ll be too small for me to see, devouring me alive.”
“What are the odds of this happening?” Dave asked.
“I don’t know,” Mac said.
“Do you think it could happen in our lifetime?” I asked.
“Technology is changing so fast … it’s conceivable.”
“You seem genuinely worried about it.”
“I am. I try to think of what can be done to prevent it. Then I try to think of what can be done to stop it if it starts, and I reach roadblocks. I also worry that if it’s possible for the world to end like this, that given enough time, it may also be inevitable.”
“Did you have to bring this stuff up?” I asked. “I’m going to have nightmares.”
“Well, you’re the one who asked for scenarios that could end the world as we know it.”
“And I suppose you have another remarkably pessimistic end-of-the-world scenario for us, too,” I said.
“I do. The second one is just the opposite of a bunch of mindless nanobots taking over the world.” He looked at my laptop. “It’s computers.”
Artificial intelligence disaster
“Computers are getting faster and faster and, if you’ll allow me to use the term ‘smart,’ they seem to be getting smarter and smarter, and the idea of making a computer that could think like us has been with us at least since the computer’s earliest days.”
“When you say ‘think,’ do you mean like ‘artificial intelligence,’ AI as it’s often called?” Dave asked.
“That’s it. The concept of AI as a genuine science came about in the mid-1950s when some scientists with backgrounds in math, engineering, psychology, and other fields met to discuss the possibility of making an artificial brain. Since then, the science has taken off. And now the question is, if we can make a computer that can actually think, will it be a boon to humanity or a disaster?”
“Do you think they’ll ever make a computer that can actually think?” I asked.
“Better informed people than I think so. That doesn’t mean it will happen, but Dr. Stuart Armstrong, of the Future of Humanity Institute at Oxford University, is warning us that we are going to make machines that are smarter and ‘think’ faster than we do, and it could be a problem.”
“When do you think AI will arrive?” I asked.
“We already use it when we use GPSs that navigate driving routes, digital cameras that recognize faces and provide perfect focus, speech recognition software, language translators, the software on smart phones that offers predictive text when it tries to anticipate the next word you’re going to use, and those are just some of the everyday stuff it does now.
“Future benefits of AI are unknown and incalculable. But we expect them to be amazing. We’re hoping AI systems will solve many of our social and economic problems, take the drudgery out of our lives, make our lives safer, and give us more free time. But, as with other technologies, we have to ask if there will be unexpected and unintended consequences. There are practical dangers that equipment running on AI may have programming bugs or that because they are no more than software, they’ll be susceptible to virus attacks, just as the computers on your desk are. But unlike a computer crash when you’re in the middle of writing a letter or playing a game, if they’re being used to control a car, a plane, a battlefield weapon, or a nuclear reactor, the results could be deadly. So, along with the benefits come risks.
“But the biggest risk is this: what if they become so smart and so fast that they become sentient, and they either don’t like us or decide we’re a threat they have to eliminate?”
“Isn’t that sort of being paranoid?” I asked.
“Stephen Hawking, one of the greatest scientific minds of our time, said the development of full AI could spell the end of the human race. And there are many other scientists and nonscientists who feel the same as Hawking. So it may be well-placed paranoia.
“Does the name Elon Musk ring a bell with either of you?”
“He’s … umm …” Dave was thinking. “… the guy’s an inventor, engineer, and business investor who founded SpaceX and cofounded PayPal, Tesla Motors, and maybe some other stuff. The guy’s a billionaire.”
“Musk said we should be very careful with artificial intelligence, and he added that if he had to guess what the biggest existential threat to the human race is, it’s probably AI. So he’s pledged $10 million of his own money to support the Future of Life Institute, which is grappling with these possible threats from AI that may equal and then exceed human intelligence. That’s how important he thinks the problem is.”
“On the other hand …” he said and paused for emphasis, “… many of the experts in the field feel that AI becoming a threat to society is so far off in the future that it’s not worth worrying about today.”
“But all this presumes we can make computers that are actually able to think or, more to the point, to be conscious. Is that really realistic?” I asked.
“Let me phrase my answer like this,” Mac replied. “There are those, meaning engineers, computer experts, and other scientists, who feel that all we have between our ears is a ‘meat computer’ or, as it’s sometimes referred to, a ‘wet computer.’ Either way, they regard our brains as simply machines made of cells, and that a similar machine can be made from silicon chips and wiring. They not only think it can be done, they think it’s inevitable. Furthermore, they think it’s possible that, eventually, we’ll make computers that don’t just match the human brain, but will be more complex and thousands of times faster than our own brains. After that, an intelligent computer is likely to start altering itself to become ever smarter and smarter, something many of us would like to do, but we can’t because of our biology. But a computer wouldn’t have the same impediments. Then the sky’s the limit on their intelligence. Computers could improve themselves to a point where, in comparison, Albert Einstein or Isaac Newton seem like idiots.”
“But they’re not going to be conscious,” I said.
“Who knows what consciousness is?”
“What do you mean?” I asked.
“Consciousness is like time: we all experience both, yet no one can explain either.
“To the religious person, the difference between men and machines is that we have a soul and a machine doesn’t, and that, somehow, that soul makes us conscious. But to many computer and software specialists, the brain is, as I said, just another computer, a very complex one made of meat, and they believe that, as complicated as our brains are, we’ll eventually make computers that are more complicated and better than our brains. They believe these computers will be able to ‘think,’ and when they do, they’ll think thousands of times faster than we can. And who’s to say what makes consciousness? What if the machines become conscious and, as we turn more and more over to them, they decide they either don’t need us or don’t like us?”
“But that would be like saying they’re alive,” I said.
“Not ‘alive,’ conscious,” Dave corrected. “I think there’s a difference here.”
“Broadly speaking,” Mac continued, “there are two different views on consciousness, and then there’s a whole bunch of theories that are variations on one or the other or even a melding of the two. One of those views is called physicalism and the other is called dualism.
“Physicalism says that consciousness arises out of the physical; put things together in just the right way, whether it’s living cells or silicon chips, and consciousness will arise. And yet we still don’t know what consciousness is.
“On the other hand, dualism is the concept that consciousness is a state apart from the physical. It’s a nonphysical substance that sort of floats within the brain.”
“Sort of like the concept of souls,” Dave said.
“That’s a good analogy,” Mac responded. “For the dualists, they think of it as being like an electric charge or a magnetic field; that is, it’s not made of ordinary matter but becomes a property of it.
“And there are variations on both of these theories that you can pick and choose among.”
“So, what do you think?” I asked Mac.
“I think that if they make a conscious computer in my lifetime, I’m not going to be totally surprised.”
“Can you give us a timetable as to when we might expect this to happen?” Dave asked.
“An inventor/futurist, Ray Kurzweil, who has had a lot of success over the last 2½ decades making astoundingly accurate predictions about computers, the Internet, and more, is now predicting machine intelligence will surpass human intelligence by the year 2045.
“But, despite his success with past predictions, it’s worth pointing out that many predictions never come true. In 1970 some scientists predicted alternative energy would overtake conventional power sources in 20 or 30 years. The same was predicted about superintelligent computers surpassing human intelligence. But here we are, 45 years later, and both are still 20 or 30 years down the road.”
“So you think we’re safe, for now,” I said.
“The future is uncertain.”
Dave said, “So, the big question is whether superintelligent computers prove to be among the greatest of humanity’s inventions, or if they’ll be the cause of our extinction.”
“Yes, and it’s the latter concern that worries Hawking, Musk, and others.”
“Including you?” Dave asked.
Before Mac could answer, I asked, “But how would they do it?”
“Given that computers control so much today, and will control much more in the future, and given that they’re getting smarter and smarter and may eventually possess intelligence that will dwarf our own, if they want to get rid of us, they’ll find a way.”
Extinction from space
“But,” Mac continued, “considering the two disasters we’ve just discussed, along with other technological mishaps you’ve previously written about, including nuclear war and engineered pandemics, all of which could drive us to extinction, I’m led to ask why we haven’t yet detected extraterrestrials. I know there are people who claim to have been abducted by or visited by ETs, but I’m talking about scientific and verifiable evidence that extraterrestrials exist.”
“Are you about to suggest that the reason we haven’t detected any alien civilizations may be that part of the evolution of a technological species is that they will ultimately wipe themselves out?” Dave asked.
Mac nodded. “That’s what I’m afraid of. It could be that given enough time, all technological species get blindsided by their own inventions that drive them into extinction.”
“Including us?” Dave asked.
“We have to consider that possibility and be very careful as we go into the future.”
“But you said the third thing that could lead to the extinction of the human race would be aliens— extraterrestrials,” I said. “Now it seems you’re saying they might not be out there.”
“Until we know otherwise, their existence is a possibility we should take into consideration.”
“Do you think there are alien civilizations out there?” Dave asked.
Mac paused a second. “The organization called SETI, which is an acronym for the ‘Search for ExtraTerrestrial Intelligence,’ has been looking for signals from alien civilizations for decades and found nothing. Critics ask: If there’s intelligent life out there, why haven’t we heard from them? Even the great Italian-American physicist Enrico Fermi asked: given the billions of years in which extraterrestrial civilizations could have arisen, if they exist, where are they? He concluded there are none.”
“And I’ll bet you have a reason why we haven’t heard from them,” I said.
“I have several. One may be that we’re not listening the right way. Radio is barely more than a century old and, though we may think of ourselves as pretty advanced, it could be that we’re still too primitive to imagine the communication technologies that might be available to advanced civilizations.”
“What do you mean?” I asked.
“Imagine some primitive tribesmen living in a community of villages in the middle of the Amazonian rainforest. We don’t know about them, and they don’t really know about us except that, now and then, a plane flies overhead. The way the villages communicate with each other is with drums. The drumming can be heard from village to village. And keep in mind, these people are every bit as intelligent as we are, but they just have no technology— yet.
“When a plane crosses the skies above them, they don’t know if it’s gods, some kinds of animals, weird stars, or other humans. So some of the tribe’s members are going to try to communicate with them. How do they do it?”
“I see where this is going,” Dave said. “They’d go to a clearing or the top of the highest hill they could find and listen for drum messages because that’s the type of communication they understand, and maybe even send some drum messages themselves. But nothing would work because the people in the civilized world wouldn’t be using drums. So, given the silence, these tribesmen might conclude no one existed outside of themselves. Yet, they’d be sitting in a sea of radio and television transmissions and never be aware of them.”
“And we may be sitting in a sea of messages that are being beamed across space, but we’re unaware of them because we’re really only a little bit more advanced than that primitive tribe,” I said.
“Yes,” Mac said. “We’ve only had basic radio since around the turn of the 20th century. But a civilization that’s tens of thousands or even millions of years older may have ways of communicating that we not only can’t imagine, but we may not discover for centuries to come.”
“You said that’s one reason we may not have heard from anyone,” Dave said. “What’s another reason?”
“Fermi may be right. It may be that we’re the only intelligent life in the universe. Or it may be that any technological race that has ever existed or will ever exist, our own included, reaches a certain point and is extinguished by its own nanobots, intelligent machines, or other technologies we haven’t even thought of yet.”
“So, you don’t think alien civilizations exist,” I said.
“I didn’t say that,” Mac replied.
“What if they do exist? What are the chances of some coming here?” Dave asked.
“I can’t put a probability on it. In the near future— our lifetimes and those of our children and grandchildren— probably zero. But the further into the future you go, the murkier it gets.”
“Why?” I asked.
“If they’re much more than 100 light years from us, they wouldn’t know we’re here, yet.”
“But we’ve been broadcasting radio signals for more than a century,” I said.
“Yes, and if the civilization is 100 light years away, they’re just getting them now— if they’re looking for them.”
“If they’re 1,000 light years away, they won’t know about us for another nine centuries.”
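Mac’s figures follow from simple light-travel arithmetic: a civilization a given number of light years away is only now receiving the signals we sent that many years ago. A minimal Python sketch, assuming as a round figure that our earliest broadcasts began about 115 years before this 2015 conversation:

```python
def years_until_they_hear_us(distance_ly, years_broadcasting=115):
    """Years from now until a civilization `distance_ly` light years away
    receives our earliest radio signals (0 if those signals have already arrived)."""
    return max(0, distance_ly - years_broadcasting)

print(years_until_they_hear_us(100))    # 0 -- our first signals are arriving about now
print(years_until_they_hear_us(1000))   # 885 -- roughly nine more centuries
```

Any reply would take just as long to come back, so a round trip to a civilization 1,000 light years away spans about two millennia.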
“What would you think if SETI discovered something tomorrow?” Dave asked.
“It would be good news and bad news,” Mac replied.
“What’s the good news?” I asked.
“Assuming that it’s not a machine civilization, it would mean that maybe civilizations can advance without destroying themselves with their own technology.”
“What’s the bad news?” Dave asked.
Mac smiled sardonically. “I may be wrong about the time it would take them to get here; they may visit us and the visit may not go well.”
“This sounds like the premise for a grade-B science fiction movie,” I said.
“Stephen Hawking, who by now you might regard as a pessimist, not only worries about AI, he’s said that contact with an alien race may be disastrous for humanity. And he isn’t the only scientist to feel that way, he’s just one of the more prominent ones. He said an advanced civilization may view us as little more than a nuisance to be disposed of while they plunder the planet— or they may want to colonize it. He suggests we remember what happened when Europeans arrived in the New World; it didn’t go well for the native populations. Anyway, you should keep in mind that scientists say the average person carries three to five pounds of microbes in their bodies. There’s no reason to think aliens would be any different in that way.”
“Why is that important?” I asked.
“The same way Europeans inadvertently brought microbes that wiped out as much as 90 percent of the native populations of North America alone, an alien encounter may result in a microbial invasion, microbes that they’re carrying in and on their bodies, that would be disastrous for many if not all life forms on this planet, including ourselves.
“Another concern is that they’re not going to want to get infected by life forms here, either, so one of the things an alien civilization might have to do if they decide to colonize the earth would be to sanitize it. Wipe out all life.”
“How would they do it?” I asked.
“You should already be able to think of one way. Let’s go back to the nanobot disaster. If they wanted to do it in a hurry, they could hose the planet with nanobots to extinguish all life-forms. And each nanobot would be programmed to destroy itself after a certain length of time so there would be no danger to themselves when they landed. Now the planet, though conducive to life, has no life on it, so they can safely colonize and seed with flora and fauna from their own world to promote their own existence. And that’s just one way to ‘cleanse’ our planet. I’m sure if they can traverse interstellar space they have the means and imagination to come up with other ways, too.”
“One more thing to worry about,” I mumbled.
“Nah. Not in our lifetimes. It would be centuries before they got here. Going from one star to another is likely to be a herculean task, even for an advanced civilization. Distances across space are immense, and any voyage, even to the nearest star outside our solar system, Proxima Centauri, a mere four and a quarter light years away, would be an enormous undertaking. And there’s no evidence that either that star or any other within 100 light years of us has a civilization capable of space travel— though we don’t know that with certainty.”
And as I was about to suggest there may be ways for them to get here faster, he added, “I know that science fiction stories, both in novels and in the movies, abound with sufficiently technologically advanced civilizations, including future human civilizations, that hop, skip, and jump around the universe in no time at all, and even have fanciful explanations for how this all happens, from warp speeds to stargates to wormholes. But there are good reasons to believe that Einstein was right and the speed of light is the speed limit of the universe.”
“So you’re not worried about an immediate alien invasion,” Dave said.
“Not at the moment. Someone might come up with some ideas or information that could change my mind, but for now, I’m more worried about what we might inadvertently do to ourselves. The fact that we haven’t yet detected an alien civilization should be a warning that we should proceed cautiously into the future and make sure we don’t bring on our own extinction.”
“What if they’re already out there, and have been watching us for years?” I asked.
“That I’d like. If they’re here and they haven’t done anything to us yet, I wouldn’t worry about them now. We may be a great wildlife special on videos beamed back to their home planet.”
Dave said, “So, in summary, a visit from an alien civilization could be disastrous, but we should be more concerned about what we might do to ourselves.”
Looking at me, Mac replied, “… and the supervolcanoes, comets, asteroids, exploding stars, and all the other things John wrote about before.”
It boggles my mind how many things could go wrong.
Suddenly, Dave said, “Let’s go to lunch,” and he and Mac stood up.
“You coming?” Dave asked me.
“Yeah, but I don’t have much of an appetite, now.”
I don’t know why they found that funny.