The Robot Nonpocalypse

By seanachai · May 12, 2015

Why you shouldn’t be afraid of the “Robot Apocalypse.”

So a lot of rich and/or smart folks think the robot apocalypse is upon us. Bill Gates, Elon Musk, Stephen Hawking, Steve Wozniak, and Ray, “You too will be assimilated” Kurzweil.

And I think they are wrong. As arrogant and foolish as this may seem, I have an argument. This week on the Seanachai, I’m going to explain why I think Skynet is the last thing you should be afraid of.


Okay, so in a nutshell, I have two points. Everyone who thinks that the machines will rise up and displace us is a) confused about economics and b) anthropomorphizing technology. Not just ascribing agency where none currently exists, but assuming that all forms of consciousness will be flawed in the same ways that we are.

And there’s no evidence for it.

It’s also interesting to note that all these people live, work, and breathe technology. But they are afraid that what they do could be the end of the world. You can easily make the case that Bill Gates is one of the most powerful and important men in the world — but the idea that his field is the difference between life and death for the entire species? That’s a little too convenient and arrogant, even for a billionaire. “I’m the most important man in the world” is an idea riddled with hubris.

Since it is logically impossible to prove a negative — a principle that all apocalypse hoaxes rely on — we have to ask ourselves, is it likely that machines will become self-aware, rise up and wipe us out?

Let’s look at this from the standpoint of the sun rising, or not rising, tomorrow. Just because it has always risen doesn’t mean that it always will. I can’t prove the sun won’t rise tomorrow, but if we know something about celestial mechanics — if we can agree that the sun is constant, and only appears to rise and set because the world is round and spinning — we go a long way towards dispelling mysticism.

Now we can ask a whole bunch of other useful questions. Can the earth stop spinning in an evening? Could the sun just wink out? Could it explode? And based on our understanding of physics and observations made in the real world, we could develop a sense of how likely those things are. And hopefully sleep better at night as a result.

As a starting point, let’s take a recent statement of Steve Wozniak’s.

“If we build these devices to take care of everything for us, eventually they’ll think faster than us and they’ll get rid of the slow humans to run companies more efficiently.”

Is this likely? Sure, why not? But is it, as stated, a problem?

The answer, I think, is no. And to understand why, we need to understand the fundamental wellspring of human economic activity. Why do companies exist and what do they do? A company produces a product or service for people. If it is a good company, it produces a good product or service that people want at a price they are willing to pay. If it is a bad company, it either goes out of business or exploits regulatory protection to the detriment of consumers. For example, the cable company.

Defer, for a moment, the idea of companies of robots producing goods and services for robot consumers. I promise, I will dispatch that fallacy in due course, but to stake that vampire we need to look a little deeper into how an economy works.

People have wants and needs. The world is so complicated that they are pretty much indistinguishable from each other. Things that we see as basic needs in the United States don’t exist much in sub-Saharan Africa. And a lot of them probably didn’t exist before 1850 or so. But the thing with people is no matter how much we have, we always, always want more. Sure, one car is good, but why not two? A convertible to drive on sunny days. If it was cheap enough, or you were rich enough, you’d buy one. Or any one of a thousand other damnfool things that your grandparents were perfectly happy doing without.

No matter how much we have, a little bit more will always make us happier. You could argue that a person only needs one pair of shoes. But the availability of shoes, coupled with the inherent greed and vanity of humanity, is how we wind up with Air Jordans and Manolo Blahniks, et al.

Now you might be more enlightened than this — I certainly hope that Saints and Bodhisattvas are among my listening audience, but odds are, you are not. And the entire history of mankind is a pretty damning chronicle of appetite and avarice. Deep inside everyone there is a voice that screams for MOAR!

I believe that coming to grips with this inner appetite is key to true happiness. And I am in good company on this. Lao Tzu wrote, “He who knows when he has enough is truly rich.”

So, the most fundamental problem with the robot apocalypse scenario is that economic production is arranged for the satisfaction of people. We already live in a world of abundant capital. And our problem is not having enough stuff to make something, it’s knowing which things to make (and how to distribute and market them). Robots taking over all production is just MORE abundance.

And that is unequivocally good. People don’t spin yarn into thread by hand anymore. Machines do it. And there are more people working in the textile industry now than before machines took it over. They’re just not doing the boring parts.

And we can already see that in a world of abundant production, the scarce resource is ideas. Specifically, ideas about what people want and how to make people’s lives better.

I don’t mean this in a Mother Teresa way. I mean it in a practical way. In a wheels-on-luggage kind of way.

Wheels on Luggage

Depending on how you want to pick the start, civilization is some six to nine thousand years old. The use of the wheel for transportation purposes is dated to 3500 BC – so, we’ve had the wheel for 5500 years. And it wasn’t until 1972 that anybody really added wheels to luggage.

There’s no algorithm to come up with insights like that, or the insights that create profound art. It takes a deeper insight into what it means to be human than most humans have. Shakespeare (pen name or not) had a profound understanding of what it meant to be human. In fact, Harold Bloom argues that Shakespeare invented modern human consciousness. It’s a little crazy, but after you read his book Shakespeare: The Invention of the Human, it’s not as crazy as it first sounds.

And that is more disturbing to me than the Robot Apocalypse.

But let us say, for a moment, that robots learn how to put wheels on luggage. All they are is better at satisfying our wants and needs.

For me, the problem with that is pretty obvious. It’s the WALL-E problem. We become a race of fat, useless, stupid, incapable slobs. In animation this is cute. In the real world, this is very sad and ugly.

Isaac Asimov wrote about this very problem, with great eloquence and sensitivity, in his Robot novels and his Foundation series. Humans who colonized the first planets outside Earth relied on robots for everything and became not only incapable, but estranged from each other. And they stopped reproducing.

This is, I think, a real and terrifying threat of ascendant technology. But that’s not a robot apocalypse; it’s something we would choose to do to ourselves. Like drug addiction.

But what about greedy, self-aware Machines?

Accept, for a moment, that machines will become self-aware. But as you do, recognize that this is a gigantic leap. We literally have no idea what consciousness is. Or, for that matter, what dark matter is — and dark matter makes up about 85% of the matter in the universe, so we literally don’t know what the vast majority of reality is composed of, either physically or phenomenologically.

But cast the glaring defects in our knowledge aside and say that machines develop consciousness — consciousness being, again, a thing which we can’t define. Why would they be greedy? And why would they be greedy for the things we need?

My argument is two-pronged here.

(1) We are assuming that all consciousness is, or would be, like our consciousness. And that the flaws in our makeup are somehow inherent in all consciousness. It’s base anthropomorphism. And I don’t think there’s any reason to believe that’s the case. Dolphins are conscious. They are very smart. Are they greedy? Are they even, in consciousness, inferior to us? Why? Because we make computers and scare ourselves with them?

The consciousness of machines may well be far higher than ours. Which would be humbling, wouldn’t it?

And we already have a glimpse of something like this in one answer to the Fermi Paradox.

The Fermi Paradox

The Fermi paradox goes like this:

The apparent size and age of the universe suggest that many technologically advanced extraterrestrial civilizations ought to exist. However, this hypothesis seems inconsistent with the lack of observational evidence to support it.

In other words, where the hell is everybody?

One of the answers offered to this paradox is that civilizations grow to a point and then annihilate themselves with their technology. Nuclear war, biological warfare, nanotech, robot apocalypse — all the fun stuff.

But that also leaves the possibility that some civilizations would evolve past this point of self-destruction. Developing a higher consciousness, if you will. And here’s the kicker: developing to that state of higher consciousness means that they don’t need or want to go anywhere or conquer anybody.

What about immortality? The reason people have kids is so something of themselves and their DNA goes on. What if you know you aren’t going to die? What if we evolved to the point where we never faced death?

For a human, this is probably an impossible question to fathom. But for a consciousness that lives on a chip, what does death look like, if it looks like anything at all? Why reproduce? Just because we have a very strong drive to do so doesn’t mean that a robot does or will.

And for those who say that we will create our digital children in our own image — I say, you fool yourself about both children and consciousness. If a thing has free will, it will make choices you don’t like, want or expect. Such is parenthood.

(2) I think part of the reason people are afraid of robots is that in nature, a fitter species will displace an inferior species at an exponential rate. But this statement is incomplete. This will happen only if the species are competing for the same resource base. And would we be, really? Maybe electricity? The faster and more powerful computers become, the smaller they get. Would the most powerful computer ever be the smallest and most efficient?

And we desire computers to be faster for our purposes. For crunching large amounts of data and rendering 3d graphics. But what would be the purposes of self-aware computers?

We don’t know. And not knowing is scary. And scary is great for the purposes of writing thrillers and controlling people who are easily frightened and not so swift in the critical thinking department.

What are my qualifications to talk about any of this? None, really. I have an understanding of economics, but the people I mentioned at the beginning of this essay are all waaaaay smarter than me. But ultimately, the veracity of an idea isn’t about qualifications. Nobody cared about Einstein’s qualifications; they cared about the quality of his thought.

I’m no Einstein, but I think I just knocked a couple of holes in the theory of the Robot Apocalypse. And until somebody fills them in, I’m going to go back to worrying about the Zombie Apocalypse. It’s probably just as unlikely, but it’s waaaay more entertaining.


Author Patrick

I help organizations figure out what they stand for and how to communicate what they believe. I also write fiction.

More posts by Patrick
  • Nobilis

    This is a strong argument against the standard model of the robot apocalypse.

    And generally speaking, I’m in agreement. For the most part, humans will remain in control of computers and robots and networks, right up until the very, very end. But the humans who are in control of those computers and networks will be a small minority, and they will shrink over time, both as a percentage of humanity as a whole, and in number.

    This is because of the total dominance of capitalism.

    Capital has defeated labor, and is in the process of grinding it out of existence. This is because a robot, a computer, or a network, falls under the category of “capital.” And those robots, computers, and networks are slowly replacing all of the jobs that labor used to do. The industrial revolution, followed by the information revolution, have handed all the economic power to the owners, and taken it from the makers and the doers. Capital, by means of computers, robots, and networks, can now do more and more of the making and doing.

    “Ah!” you say, “But those robots, computers, and networks will never replace having ideas.”

    That’s where intellectual property comes in. The modern doctrine of intellectual property, embodied primarily in the arcane field of patents but also in the world of copyright, has allowed even this labor to be converted into capital. From the moment a new idea is created, it becomes capital. And because of the thicket that patent law has become, true innovation is more and more the purview of large multinational corporations. And even if they don’t own it from the beginning, they’ll own it before too long.

    So if things continue as they are, then there will be less and less for ordinary folks to do. And if we continue to insist that a person must “contribute” to the economy in order to benefit from it, then those people will continue to be discarded by it. And the more left behind those folks get, the less relevant any “ideas” they have will be to the folks who are still part of the economy.

    The best symptom of this process is the dominance of the finance industry (I use the word ‘industry’ only because that’s how it’s commonly known; it doesn’t actually MAKE anything, so it’s hard to agree with the wording). It makes a good deal of its money on high-frequency trading: computerized trades that run generally unsupervised. Evolutionary development performs refinements on these programs without human intervention as well.

    And finally, there’s nothing for people to do but own. And when that happens, what do people do? If there’s anything history teaches us, it’s that there’s nobody more ambitious than someone who has already achieved. The more we own, the less satisfied we are with what we have. Which is why the powerful have such an unending appetite for power.

    And that’s what leads us to the robot takeover. It doesn’t require that robots and computers be greedy. Humans will provide the greed for them; it’ll be the last thing that only humans can do.

    There are ways to avert this. Given the incredible abundance that our modern economy provides, we could let go of the notion that those who don’t own things are lazy and deserve to be hungry and cold. We could implement something like a BIG – Basic Income Guarantee. It would be more effective than our current welfare system and less expensive.

    But it won’t happen. It won’t happen because the elites that have captured the great industrial powers (different means in the East and the West, same result) will never give up the hold that the illusion of the ‘steady job’ has over the rest of us.

    • PatrickEMcLean

      I appreciate your thoughtful and impassioned response. But (almost predictably, it seems) I must disagree. Here are a couple of quick thoughts:

      Capital can’t defeat labor. Capital only makes labor more productive. What you are suggesting is that a shovel might somehow defeat someone who digs ditches.

      All of the problems you cite aren’t what I would call Capitalism — strong property rights and free, voluntary exchange — but rather features of highly constrained and ‘planned’ markets. Intellectual property rights are the worst feature of this. The majority of the greatest creators in history worked without copyright protection. For a great exploration of how this fosters innovation and progress, you could try “Against Intellectual Monopoly” by Boldrin and Levine. But a simple example might make the point just as well — do you think the personal computer would have evolved so quickly if Xerox had patented the mouse? Or Apple?

      As for the rest of it: you’ve run far afield, and deep into pessimistic territory. Be of good cheer, Nobilis; just because the current regime is failing doesn’t mean that the next one won’t be better. I think we’ve reached peak centralization.

      • Nobilis

        Imagine a world where the productivity of one worker is so magnified by technology (i.e. capital) that one percent of the population is productive enough to give themselves and everyone else the optimal lifestyle level for happiness, which turns out to be the equivalent of a US income of about $160,000, depending on which source you look at.

        (A refutation, by the way, of your assumption that if one car makes you happier, then two will make you even happier, and three happier still, etcetera)

        In our current economy, what happens is that that fraction gets a highly unsatisfying average income in the millions, and everyone else is given just enough to keep them from rising up and destroying civilization.

        And as technology progresses, that percentage of productivity gets smaller and smaller. Eventually, virtually nobody is required; all the jobs (including, by the way, the job of keeping the means of production from being destroyed by angry mobs) are no longer being done by humans.

        That’s the road we’re on. You’re right that intellectual monopoly is one of the things stifling our culture… and given that the people who benefit from that monopoly are the ones running things, I see no means to stop it. The current regime isn’t failing. It’s succeeding admirably at protecting the interests of the people it benefits.

        It’s not Capitalism that will save us. It has no mechanisms for it. The power of the marketplace is only meaningful for people who can participate in that marketplace, and they will generally only use it to benefit themselves.