Elon Musk, the billionaire inventor and amateur futurologist, has recently taken to the idea that we may all be living in a simulation akin to Second Life. He has been influenced in his thinking by the philosopher Nick Bostrom, though something of the latter's rigour has been lost as the argument is translated into a version suited to capturing the imagination of a global 'thought leader', who, in turn, is positioned to get the rest of us talking about it. Of course some of us can remember talking about it before either of these men forced it into the zeitgeist, perhaps in an informal setting where the exploratory mood was enhanced by a joint and we found ourselves starting our sentences with, "Whoah, what if, like..." But now the adventure of ideas, of which any stoner is capable, and indeed of which our ancestors millennia before the invention of video games were capable, has been given weight by the interest of an Oxford philosopher, and cachet by the derivative interest of a rich person. And now when people talk about it they will not say, "Whoah, what if, like..." and they will probably not have a joint in hand. They will soberly, straight-facedly say to their coworkers, "I read this one expert who..." or, more succinctly, "They say that..."
You do not need to be a Heideggerian to be wary of 'the they'.
It is certainly possible that we are living in a simulation, if by this we mean that things are not as they appear, that reality is not just brute stuff sitting there on its own. This is a possibility that has been contemplated in various ways by great minds for quite some time now, and that has provided fuel for the wild speculations of not-so-great minds for just as long. What is new is the way in which one manoeuvres into the appearance of expertise by doing nothing more than being very wealthy and deciding to take up the social role of a visionary. What Musk has done is to update an ancient possibility, to cause it to appear as something never-before-thought when in truth it is only a repackaging and a re-enchantment.
The particular form the new version takes offers a vivid case study in the consequences of historical and anthropological ignorance. How self-congratulatory and parochial does a member of a given culture, at a given moment, have to be, to suppose that reality itself takes the form of a particular technology developed within that very culture in the course of one's own lifetime? Consider the familiar claim that 'the brain is like a computer', or, switching the comparative 'like' for the stoner one, that 'the brain is, like, a computer'. Is this not effectively to say that this thing that has been around in nature for hundreds of millions of years turns out in fact to have been, all along, this other thing that we ourselves came up with in the past few decades?
Wouldn't it, I mean, be a remarkable coincidence to find ourselves alive at just the moment when technology finally shows itself to be adequate to reveal to us the true nature of reality? And how are we supposed to interpret the equally certain claims of people in other times and places, who believed that reality in fact reflected some device or artifice of central importance to their own culture (e.g., horologia, mirrors, puppets, tjurungas...)? Are we really to believe that it was not the light-and-shadow theatres of the ancients or the hydraulic automata of the early moderns that revealed the true nature of things, but that instead humanity would have to await the eventual advent of... Pong? And might the key cosmic-historical significance of this technological moment have something to do with the fact that it is simultaneous with the formative early experiences of the man-child Elon Musk?
If you are like Musk, or Bostrom, then you will probably consider these historical and culture-comparative considerations irrelevant to the question at hand. Fine, then. Let's talk about the argument. One notes, first, that it relies on a crucial but unexamined premise, that the simulated characters of video games, if they keep developing in the way they have been developing since the 1970s, will eventually become conscious. But there is just one small problem: we don't know what consciousness is yet. We don't know how it is grounded in brain activity, nor whether it is an emergent capacity of the evolution of organisms at all, so we can't possibly know whether it is bound to emerge from the evolution of other physical systems.
Some people are strongly committed to the view that consciousness is just the result of the way brains are structured, and that there is nothing categorically special in the physical world about how brains are structured. But they cannot give an account, at least not yet, of how this works, how we get thoughts and feelings and memories from the firing of neurons, let alone positively establish that it works in the same way as our computers work. And if we do not know that brains are computers, then we definitely don't know that computer programs, or indeed the special parts of programs responsible for the production of simulations of characters that seem to bear some analogy to us (Ms. Pac-Man, The Sims, etc.), are on their way to becoming conscious.
But let's suppose for the sake of argument that our brains are computers, and that our consciousness is the result of the fact that we are 'running a program'. It does not follow from this that wherever in the universe there is natural computational activity, given enough time this activity will in turn result in the production of artificial systems that simulate what had already emerged naturally. In other words, there is no reason to think that wherever there are naturally evolving brains there are likely to be, given enough time, artificial ones too.
The presumption of the high probability of such an outcome is perhaps what is most new about the newly repackaged version of the argument. It appears to be borrowed from some recent speculations in xenobiology, triggered by the recalculation, by several orders of magnitude, of the likely number of habitable planets in the universe. But this speculation is based on a misunderstanding of evolutionary biology, and pumped up on a fairly large dose of smuggled teleology. There is no reason why biological evolution should move from lower to higher, from dumb fish and worms to ingenious tool-making, abstract-thinking beings. This is for the simple reason that there can be no lower or higher at all in evolution. I am worse than a fish if we're having a contest in underwater breathing, but better if it is typing that interests us. And this is all evolution does: it yields up organisms that are fitted to their environments; it does not yield up absolutely ever-better organisms, nor are tool-making and abstract thinking any better, absolutely, than breathing through gills.
Even given the astoundingly large number of habitable planets in the universe and the likely passage on at least some of them from inorganic molecules to living systems, there is no compelling reason to think that a large number of these systems, or even more than one of them, must ever have resulted in a species such as ours that builds tools we would recognise as products of technology. There could for example be a species of electric eel-like creatures that develop a flourishing culture of abstract self-expression, in which some become legendary, like eel Mozarts, for their ability to control the currents coming out of them. Such a thing could evolve without giving off any technological traces. Such a thing, indeed, may even be going on right now among some terrestrial non-human species. But not only do we not detect it, we are not even interested in it, as we are certain, without argument, that intelligence is coextensive with making stuff.
If there is no necessity or high probability that the passage to what we would recognise as technology should have occurred more than once, then a fortiori there cannot be a high probability that one or many other living systems in the universe ever came up with a technology similar to Second Life (in which the little avatars eventually become conscious, mistake their simulation for reality, etc.).
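The a fortiori step here can be put formally, as a simple monotonicity fact about probability (a sketch; the event labels T and S are my own, not terms of the original argument):

```latex
% Let T = "a living system develops recognisable technology"
% and S = "that technology develops into a Second Life-like simulation
%          whose avatars become conscious".
% S can occur only together with T, so the conjunction can be no more
% probable than T alone:
P(T \cap S) \le P(T)
% Hence if P(T) is not high, then P(T \cap S) cannot be high either.
```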
There are two instances of one and the same error in the argument that we might be living in a video game simulation. It is supposed that given enough time any living system will become like us in that it will begin using abstract thought and building tools, which tools will eventually become the loci of abstract thought themselves. It is supposed, further, that these thought-tools will eventually take a form that looks recognisably like the thought-tools we have started to develop over the past half-century or so. The second inflection of the error only looks more absurd in view of its greater specificity. Both, again, are based on the ungrounded claim that thought-tools not only help us conscious beings to think, but also, as they become more complex, begin thinking themselves.
This speculation has become 'a thing' recently not because it has finally been grounded in a compelling argument, but because Elon Musk occupies a social role in which he need only dream out loud in order for his 'he' to become a 'they'. The argument is a matter of interest, like the horoscopes once so lucidly studied by Adorno, mostly because of what it says about the sociology of authority, not because of what it, well, says. And yet one fears that in the Internet era, though we are now offered infinite space to say things, there is somehow nonetheless vanishing space for critical analysis of the declarations of the powerful. The powerful maintain a pretence of reasonableness by speaking in terms of what is 'probable'. But speaking in this way translates just as easily into Google hits as does speaking of what is 'true'. And Google hits are more interesting than truth anyway, so why not just dream out loud?