In the late twentieth century, the Wachowski brothers offered mankind two pills. Take the blue pill and everything remains as it is. Take the red pill and you find out how deep the rabbit hole goes. You gain the ability to fly, dodge bullets, and do anything at all. The catch is that all of this applies to a world that does not really exist: it is a clever trap, a detailed simulation running in the depths of an incredible supercomputer that uses real people as a cheap power source.
Where is the truth? What are we surrounded by: a real world or a virtual one? In 2003, the Swedish transhumanist Nick Bostrom developed this idea in his famous article “Are You Living in a Computer Simulation?”. In it, the philosopher argues that if humanity does not destroy itself, it may sooner or later evolve into a civilization powerful enough to simulate reality on a large scale, and that it is then quite likely that our own reality is itself the product of such a simulation.
Bostrom’s ideas have been criticized repeatedly; a detailed rebuttal of his arguments can be found in Danila Medvedev’s article “Are We Living in Nick Bostrom’s Speculation?”. Still, is it possible to approach the problem from a practical angle and find tangible experimental evidence that our world is real? As it turns out, it is.
The first attempts were made even before Bostrom. In 2001, the MIT researcher Seth Lloyd tried to estimate the computational resources required to simulate the universe on the scales of space and time available to our observation, and concluded that it is impossible in principle.
Lloyd estimated the number of operations a computer would have to perform to simulate the entire universe from the time of the Big Bang: every event that has happened to every elementary particle over nearly 14 billion years. The exact number matters less than its order of magnitude, because the energy required to perform these calculations would exceed the energy of the entire simulated universe itself. “This computer would have to be more powerful than the entire universe, and it would take it longer than the age of the world to complete its task,” Lloyd concluded. “And who would want to do this anyway?”
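Lloyd's order-of-magnitude argument can be sketched numerically. The sketch below uses the Margolus–Levitin bound (a system with energy E can perform at most 2E/πħ elementary operations per second) together with a rough figure for the mass of ordinary matter in the observable universe; both inputs are assumptions of this illustration, not numbers from the article, and the result is only meant to show that the count lands around the famous ~10^120 operations.

```python
import math

HBAR = 1.0545718e-34      # reduced Planck constant, J*s
C = 2.99792458e8          # speed of light, m/s
MASS_UNIVERSE = 1e53      # rough mass of ordinary matter in the observable universe, kg (assumption)
AGE_UNIVERSE_S = 13.8e9 * 365.25 * 24 * 3600  # ~14 billion years, in seconds

# Margolus-Levitin bound: max operations per second available to energy E is 2E/(pi*hbar)
energy = MASS_UNIVERSE * C**2
ops_per_second = 2 * energy / (math.pi * HBAR)
total_ops = ops_per_second * AGE_UNIVERSE_S

# Order of magnitude of all operations the universe could have performed since the Big Bang
print(f"max operations since the Big Bang: ~10^{math.log10(total_ops):.0f}")
```

With these inputs the count comes out in the vicinity of 10^120–10^121, which is why a simulator would need, in effect, more computational resources than the universe it simulates.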
That would seem to be the end of the idea, but only at first glance. After all, future generations, an artificial matrix, alien civilizations with superpowers, or anyone else could simulate our current reality without having to make the model perfect. Take, for example, the three-dimensional worlds in today’s computer games. They look quite natural, although they are only approximate models computed by the processors of ordinary desktop computers. Could a supercomputer create our reality, with us in it, while discarding excessive detail at levels so low that we are simply not yet capable of “observing” them?
Suppose our planet and solar system are virtualized at the highest resolution, distant stars are computed in lower detail, and the most distant galaxies are “updated” only when the need arises. Likewise, reality at the macro level can be rendered with very high quality, while the micro level can be quite rough (no wonder the quantum world surprises us with its approximate, probabilistic causal relationships).
In fact, modern supercomputers and the mathematical methods of quantum chromodynamics are quite capable of modeling the behavior of the universe on small scales of space and time. This is, of course, only a tiny fraction, but what happens within its boundaries is not significantly different from the reality that exists independently of us, or from what we think the material world around us should be.
If computing power keeps improving as it has so far (after all, a modern smartphone outperforms the combined capabilities of the NASA computers that existed during the Apollo moon program), it is easy to imagine that the technological capabilities for modeling reality will eventually become sufficient for a reasonable simulation of our entire civilization. Simple extrapolation shows that, even at the current pace of progress, modeling a region one meter across could become possible in about 140 years.
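One way to reproduce a figure of that order is a simple exponential extrapolation. The sketch below assumes that today's lattice simulations cover a box a few femtometers on a side, that the cost of a simulation scales with its volume, and that simulation capacity doubles roughly once a year; all three are illustrative assumptions, chosen only to show how a ~140-year horizon can fall out of the arithmetic.

```python
import math

BOX_TODAY_M = 5.8e-15   # side of a present-day lattice box, ~5.8 fm (assumption)
BOX_TARGET_M = 1.0      # target: a cube one meter across
DOUBLING_YEARS = 1.0    # assumed doubling time of simulation capacity (assumption)

# Growth factor needed in simulated volume, taking cost proportional to volume
factor = (BOX_TARGET_M / BOX_TODAY_M) ** 3

# Years of doublings required to reach that factor
years = DOUBLING_YEARS * math.log2(factor)
print(f"~{years:.0f} years")
```

Slower doubling or a different starting box size shifts the answer considerably, which is why such extrapolations should be read as illustrations rather than forecasts.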
The American physicist Silas Beane, who models the interactions between protons and neutrons in the early universe, believes that within the next century even more refined models will become available and may incorporate intelligent beings. The intelligent beings inhabiting such a model might then ask: “Is our world real, or are we only the product of some kind of simulation?”
If this is true, the “Matrix” may reveal itself if we happen to find flaws in the model, i.e. places where it begins to “malfunction”. In 2007, this idea was put forward by the Cambridge professor of mathematics John Barrow. In his view, such failures might show up as small drifts in the values of the fundamental constants, which should otherwise remain unchanged: for example, the speed of light in a vacuum or the fine-structure constant.
Interestingly, in 1999, the very year the Wachowskis’ “The Matrix” was released, the first observational evidence surfaced suggesting that the fundamental constants are not so unchangeable. Astronomers reported that 10 billion years ago the fine-structure constant was slightly greater than it is today, by about a thousandth of a percent. Follow-up experiments have not confirmed this definitively, but the very possibility is somewhat frightening. Perhaps the programmers of the Matrix managed to fix this “bug”?
In 2012, Beane and his colleagues proposed a more comprehensive way to check the integrity of the Matrix. Any computer model of reality implies splitting that reality into discrete segments, carrying out the computations, and stitching the segments back together, much as a picture on a computer screen is composed of pixels that become visible on close inspection. The problem is how to detect these “pixels of reality”.
According to the researchers’ calculations, the motion of particles modeled within a lattice is constrained by the size of the cells it consists of: the smaller the cells, the higher the maximum energy the particles can have. Remarkably, astronomical observations show that the energy spectrum of cosmic rays coming to us from the most distant galaxies does indeed cut off at a certain level. This boundary is known as the Greisen–Zatsepin–Kuzmin (GZK) limit, and if it really points to the existence of the Matrix, its resolution is extremely high: the cells of such a model would be about 10^11 times smaller than the “pixels” used by today’s physicists in their own lattice simulations. A Matrix this fine-grained is too perfect to detect by this approach.
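The arithmetic behind that comparison can be sketched as follows. On a lattice with spacing b, the maximum momentum is of order π/b, so the maximum particle energy is roughly πħc/b; inverting this, the GZK cutoff implies a lattice spacing. The cutoff energy (~5×10^19 eV) is a standard figure, but the "present-day lattice spacing" used for comparison is an assumption of this sketch, and nudging it between 0.1 fm and 1 fm moves the final ratio between roughly 10^10 and 10^11.

```python
import math

HBARC_EV_M = 197.327e6 * 1e-15   # hbar*c ~ 197.327 MeV*fm, expressed in eV*m
E_GZK_EV = 5e19                  # GZK cutoff energy, ~5*10^19 eV
A_TODAY_M = 1e-16                # typical lattice spacing in today's simulations, ~0.1 fm (assumption)

# If E_max ~ pi*hbar*c/b, the lattice spacing implied by the GZK cutoff is:
b = math.pi * HBARC_EV_M / E_GZK_EV
print(f"implied lattice spacing: {b:.1e} m")

# How much finer that is than the lattices physicists use today
print(f"finer than today's lattices by ~10^{math.log10(A_TODAY_M / b):.0f}")
```

The implied spacing comes out around 10^-26 m, some ten to eleven orders of magnitude below current simulations, which is the sense in which such a Matrix would be "too perfect" to catch this way.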
There is another potentially observable consequence of a lattice serving as the medium of a simulated reality. If the fabric of space-time is smooth and has no “seams”, then moving particles, including the cosmic rays mentioned above, should travel equally well in every direction. If the Matrix lattice does exist, calculations show that simulated particles would scatter off its structure and their trajectories would exhibit the symmetry of the underlying grid. If, for instance, the lattice is composed of cubes, a cubic symmetry would show up in the data.
However, it is still unclear how to test these hypotheses. More importantly, it is uncertain whether they need to be tested at all. Perhaps it is better to take the blue pill and let things remain in their usual places?