The Simulation Hypothesis (simulation argument or simulism) proposes that reality is a simulation and that those affected are generally unaware of this. The concept is reminiscent of René Descartes' Evil Genius but posits a more futuristic simulated reality. The same fictional technology plays a part, in whole or in part, in the science fiction films Star Trek, Dark City, The Thirteenth Floor, The Matrix, Open Your Eyes, Vanilla Sky, Total Recall, and Inception.  I think in recent times the idea took quite a swing in popularity thanks to The Matrix trilogy, though the public took it in a somewhat wrong direction (mostly thanks to the storyline).  I usually dismiss ideas hyped by movies automatically, but sometime in 2008 a friend sent me a link to the simulation argument site, which made some sense to me, though I saw it (and still do) more as a philosophical approach to reality.  In 2011, I accepted that there would be nothing strange or unexpected about our running within a simulation.  I will try to explain why this is so and how it relates to the multiverse concepts I have been focusing on so far.

In its current form, the Simulation Argument began in 2003 with the publication of a paper by Nick Bostrom.  David J. Chalmers took it a bit further in his The Matrix as Metaphysics analysis, where he identified three separate hypotheses which, when combined, give what he terms the Matrix Hypothesis: the notion that reality is but a computer simulation:

  • The Creation Hypothesis states "Physical space-time and its contents were created by beings outside physical space-time"
  • The Computational Hypothesis states "Microphysical processes throughout space-time are constituted by underlying computational processes"
  • The Mind-Body Hypothesis states "mind is constituted by processes outside physical space-time, and receives its perceptual inputs from and sends its outputs to processes in physical space-time"

 

For the sake of the argument and discussion, I will just mention there is also the dream argument, which contends that futuristic technology is not required to create a simulated reality; all that is needed is a human brain. More specifically, the mind's ability to create simulated realities during REM sleep affects the statistical likelihood of our own reality being simulated.  I remember thinking about this idea as a teenager, without even knowing that someone might have taken it much further than I could possibly imagine back then.  Nevertheless, I plan to focus mostly on the computational process in this blog (more precisely, on virtual world simulation or, in Nick Bostrom's words, ancestor simulations).

 

So far, when talking about multiple and parallel universes, we have mostly relied on mathematics, its laws, and what they tell us.  These models of parallel worlds simply came out of it, much like many other hard-to-believe theoretical predictions which would eventually be confirmed at some later stage.  Per se, this doesn't mean that every prediction is right, but some of those ideas are logical, and discoveries in the past 100 years have laid the groundwork in such a manner that a large number of serious physicists today stand behind these ideas (or at least one of them).  Nevertheless, let's forget the math for a moment and change roles.  Can we create a universe?  We are pretty sure that the processes involved during the big bang were such that we can't recreate them, and even if we could, we would have a hard time following what is going on (think of inflation).  What do we do then?  We create models.  Computer models.  Simulations.  Playing god would simply prove irresistible, wouldn't it?  (Something Michio Kaku likes to point out as our future anyway.)

To make it clear, we are now not talking about real universes from our point of view, but rather virtual ones.  You have probably had, more than once (or at least once), a dream which at the moment felt completely real.  You might have had a high temperature and hallucinations as well.  The bottom line is, if we modify normal brain function just a bit, though the outside world remains stable, our perception of it does not. This raises a classic philosophical question: since all of our experiences are filtered and analyzed by our respective brains, how sure are we that our experiences reflect what's real?  How do you know you're reading this sentence, and not floating in a vat on a distant planet, with alien scientists stimulating your brain to produce the thoughts and experiences you deem real?  The branch of philosophy that deals with this is called epistemology (a term introduced by James Frederick Ferrier).  It addresses the following questions:

  • What is knowledge?
  • How is knowledge acquired?
  • How do we know what we know?

 

The bottom line is that you can’t know for sure! We see our world through our senses, which stimulate our brain in ways our neural circuitry has evolved to interpret. If someone artificially stimulates our brain so as to elicit electrical crackles exactly like those produced by eating pizza, reading this sentence, or skydiving, the experience will be indistinguishable from the real thing. Experience is dictated by brain processes, not by what activates those processes.

 

OK, so let's take this a step further.  We know the brain can be stimulated, and to be able to do so we should at least be able to match what we estimate as the brain's current processing power.  Next, we should have enough processing power to stimulate the brains of all other beings.  On further thought, we need enough processing power to simulate and stimulate all processes happening within at least the active region of every object in the simulation, each object here being a fundamental ingredient.  Of course, not every single particle in the universe would need to be addressed (think of the role of the observer discussed in Many Worlds).  But wait a minute.  If we are part of such a simulation, why should we believe anything we read in neurobiology texts?  The texts would be simulations too, written by simulated biologists, whose findings would be dictated by the software running the simulation and thus could easily be irrelevant to the workings of "real" brains. Well, this is a valid point (philosophy always likes to leave open questions), but let's assume whoever simulates reality wishes to simulate it as real as it is.  While I'm agnostic and tend to leave God out of any discussion, it is hard not to quote here the famous line "So God created man in his own image, in the image of God he created him; male and female he created them."

 


Now, try to forget the above lines and imagine you are real - which most likely won't be a problem, as that is what you have thought so far anyway.  What is the processing speed of the human brain, and how does it compare with the capacity of computers?  This is a difficult question.  Our brain is still pretty much unknown territory, and it is only recently that some serious efforts in that direction have been made.  I first believed we might be onto something when I listened to a Henry Markram lecture I found on YouTube back in 2009.  Henry leads the Blue Brain Project, whose goal is reconstructing the brain piece by piece and building a virtual brain in a supercomputer. The virtual brain would be a tool giving neuroscientists a new understanding of the brain and a better understanding of neurological diseases.  The Blue Brain project began in 2005 with an agreement between the EPFL and IBM, which supplied the BlueGene/L supercomputer acquired by EPFL to build the virtual brain.

 

Now, the computing power needed is considerable. Each simulated neuron requires the equivalent of a laptop. A model of the whole brain would need billions of such laptops.  Nevertheless, supercomputing technology is rapidly approaching a level where simulating the whole brain becomes a concrete possibility.  As a first step, the project succeeded in simulating a rat cortical column.  This neuronal network, the size of a pinhead, recurs repeatedly in the cortex. A rat's brain has about 100,000 columns, with on the order of 10,000 neurons each. In humans, the numbers are dizzying - a human cortex may have as many as two million columns, each with on the order of 100,000 neurons.
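As a quick sanity check on those counts, here is the arithmetic in a few lines of Python (a back-of-the-envelope sketch; the one-laptop-per-neuron equivalence is just the rough figure quoted above):

```python
# Back-of-the-envelope check of the neuron counts quoted above.
rat_columns, rat_neurons_per_column = 100_000, 10_000
human_columns, human_neurons_per_column = 2_000_000, 100_000

rat_cortex_neurons = rat_columns * rat_neurons_per_column          # 1e9
human_cortex_neurons = human_columns * human_neurons_per_column    # 2e11

# At roughly one laptop per simulated neuron, a whole-brain model
# indeed calls for billions of "laptops".
print(f"rat cortex:   {rat_cortex_neurons:.0e} neurons")
print(f"human cortex: {human_cortex_neurons:.0e} neurons")
```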

 

The human retina, a light-sensitive tissue lining the inner surface of the eye, has 100 million neurons (it is smaller than a dime and about as thick as a few sheets of paper), and it is one of the best-studied neuronal clusters. The robotics researcher Hans Moravec has estimated that for a computer-based retinal system to be on a par with that of humans, it would need to execute about a billion operations each second. To scale up from the retina's volume to that of the entire brain requires a factor of roughly 100,000. Moravec suggests that effectively simulating a brain would require a comparable increase in processing power, for a total of about 100 million million (10^14) operations per second. Independent estimates based on the number of synapses in the brain and their typical firing rates yield processing speeds within a few orders of magnitude of this result, about 10^17 operations per second. Although it's difficult to be more precise, this gives a sense of the numbers that come into play.  Currently (2H 2011), Japan's K computer, built by Fujitsu, is the fastest in the world; it achieves a speed of 10.51 petaflops (a petaflop being 10^15 operations per second).  This statistic will most likely change in the near future.  If we use the faster estimate for brain speed, we find that a hundred million laptops, or a hundred supercomputers, approach the processing power of a human brain.
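To see how these estimates hang together, here is a minimal sketch of the arithmetic (using only the rough figures quoted above; nothing here is more precise than an order of magnitude):

```python
# Moravec-style scaling from retina to whole brain.
retina_ops = 1e9            # ~a billion operations/second for a retina-level system
retina_to_brain = 1e5       # volume scale factor, retina -> whole brain
brain_ops_moravec = retina_ops * retina_to_brain     # ~1e14 ops/s

brain_ops_synaptic = 1e17   # independent synapse-based estimate quoted above
laptop_ops = 1e9            # an ordinary laptop
supercomputer_ops = 1e15    # a petaflop-class machine

print(f"Moravec estimate: {brain_ops_moravec:.0e} ops/s")
print(f"laptops per brain (fast estimate):        {brain_ops_synaptic / laptop_ops:.0e}")
print(f"supercomputers per brain (fast estimate): {brain_ops_synaptic / supercomputer_ops:.0f}")
```

The last two lines reproduce the "hundred million laptops or a hundred supercomputers" figure.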

 

Now, such comparisons are likely naïve; the mysteries of the brain are manifold, and speed is only one gross measure of function. But most everyone agrees that one day we will have raw computing capacity equal to, and likely far in excess of, what biology has provided. The obvious unknown is whether we will ever leverage such power into a radical fusion of mind and machine.  Dualist theories, of which there are many varieties, maintain that there's an essential nonphysical component vital to mind. Physicalist theories of mind, of which there are also many varieties, deny this, emphasizing instead that underlying each unique subjective experience is a unique brain state. Functionalist theories go further in this direction, suggesting that what really matters to making a mind are the processes and functions - the circuits, their interconnections, their relationships - and not the particulars of the physical medium within which these processes take place.  Physicalists would agree that were you to faithfully replicate your brain by whatever means - molecule by molecule, atom by atom - the end product would indeed think and feel as you do. Functionalists would agree that were you to focus on higher-level structures - replicating all your brain connections, preserving all brain processes while changing only the physical substrate through which they occur - the same conclusion would hold. Dualists would disagree on both counts.  The possibility of artificial sentience clearly relies on a functionalist viewpoint. The earlier-mentioned Henry Markram anticipates that before 2020 the Blue Brain Project, leveraging processing speeds projected to increase by a factor of more than a million, will achieve a full simulated model of the human brain. It needs to be said that Blue Brain's goal is not to produce artificial sentience, but rather to provide a new investigative tool for developing treatments for various forms of mental illness; still, Markram has gone out on a limb to speculate that, when completed, Blue Brain may very well have the capacity to speak and to feel.  What if we apply this to a virtual universe model?


The history of technological innovation suggests that iteration by iteration, the simulations would gain verisimilitude, allowing the physical and experiential characteristics of the artificial worlds to reach convincing levels of nuance and realism. Whoever was running a given simulation would decide whether the simulated beings knew that they existed within a computer; simulated humans who surmised that their world was an elaborate computer program might find themselves taken away by simulated technicians in white coats and confined to simulated locked wards. But probably the vast majority of simulated beings would consider the possibility that they're in a computer simulation too silly to warrant attention.  Even if you accept the possibility of artificial sentience, you may be persuaded that the overwhelming complexity of simulating an entire civilization, or just a smaller community, renders such feats beyond computational reach.  One may usefully distinguish between two types of simulation: in an extrinsic simulation, the consciousness is external to the simulation, whereas in an intrinsic simulation the consciousness is entirely contained within it and has no presence in the external reality.  It's time to play with numbers.

 


Scientists have estimated that a present-day high-speed computer the size of the earth could perform anywhere from 10^33 to 10^42 operations per second. If we take the earlier Moravec-style estimate of 10^14 operations per second for a human brain, then an average brain performs about 10^24 total operations in a single hundred-year life span (about 3 × 10^9 seconds). Multiply that by the roughly 100 billion people who have ever walked the planet, and the total number of operations performed by every human brain since Ardi is about 10^35. Using the conservative estimate of 10^33 operations per second, we see that the collective computational capacity of the human species could be achieved with a run of less than two minutes on an earth-sized computer with today's technology.
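The two-minute figure is easy to verify (a sketch, again using only the order-of-magnitude estimates above):

```python
# Collective computational capacity of humanity vs. an earth-sized computer.
brain_ops_per_sec = 1e14                  # Moravec-style per-brain estimate
seconds_per_century = 3e9                 # ~100 years in seconds
ops_per_lifetime = brain_ops_per_sec * seconds_per_century   # ~3e23, call it 1e24
humans_ever = 1e11                        # ~100 billion people
total_human_ops = 1e24 * humans_ever      # ~1e35

earth_computer_ops = 1e33                 # conservative estimate above
print(f"run time: ~{total_human_ops / earth_computer_ops:.0f} seconds")  # ~100 s < 2 min
```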

Quantum computing has the capacity to increase processing speeds by spectacular factors (although we are still very far from mastering this application of quantum mechanics).  Researchers have estimated that a quantum computer no bigger than a laptop has the potential to perform the equivalent of all human thought since the dawn of our species in a tiny fraction of a second.  Again, this is just simulating brain operations; to simulate not just individual minds but also their interactions among themselves and with an evolving environment, the computational load would grow orders of magnitude larger. On the other hand, a sophisticated simulation could be optimized with minimal impact on quality. For example, simulated humans on a simulated Earth won't be bothered if the computer simulates only things lying within the cosmic horizon (we can't see beyond that range anyway, so why simulate it).  Further, the simulation might simulate stars beyond the sun only during simulated nights, and then only when the simulated local weather resulted in clear skies (imposing some load balancing too). When no one is looking, the computer's celestial simulator routines could take a break from working out the appropriate stimulus to provide each and every person who could look skyward.  Remember the discussion between Bohr and Einstein described in Many Worlds? It's exactly that!  A well-structured program would keep track of the mental states and intentions of its simulated inhabitants, and so would anticipate, and appropriately respond to, any impending stargazing. The same goes for simulating cells, molecules, and atoms.
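In programming terms, this optimization is just lazy evaluation with caching: compute a piece of the world only when some observer can actually see it, and remember the result. A toy sketch (the function names are invented purely for illustration):

```python
import functools

@functools.lru_cache(maxsize=None)
def render_sky_patch(patch_id: int, night: int) -> str:
    """Stand-in for an expensive celestial-simulator routine; runs once
    per (patch, night) and is served from cache afterwards."""
    return f"stars for patch {patch_id} on night {night}"

def observe(looking_up: bool, sky_clear: bool, patch_id: int, night: int):
    # The simulator skips the work entirely when no one could see the result.
    if looking_up and sky_clear:
        return render_sky_patch(patch_id, night)
    return None

observe(looking_up=False, sky_clear=True, patch_id=42, night=1)  # no computation
observe(looking_up=True, sky_clear=True, patch_id=42, night=1)   # computed once
observe(looking_up=True, sky_clear=True, patch_id=42, night=1)   # cache hit
```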

 

The march toward increasingly powerful computers, running ever more sophisticated programs, is inexorable. Even with today's rudimentary technology, the fascination of creating simulated environments is strong; with more capability, it's hard to imagine anything but more intense interest. The question is not whether our descendants will create simulated computer worlds - we're already doing it. The unknown is how realistic the worlds will become.  At this point Nick Bostrom makes a simple but powerful observation.  Our descendants are bound to create an immense number of simulated universes, filled with a great many self-aware, conscious inhabitants. If someone can come home at night, kick back, and fire up the create-a-universe software, it's easy to envision that they'll not only do so, but do so often.  One future day, a cosmic census that takes account of all sentient beings might find that the number of flesh-and-blood humans pales in comparison with those made of chips and bytes, or their future equivalents. If the ratio of simulated humans to real humans were colossal, then brute statistics suggests that we are not in a real universe. The odds would overwhelmingly favor the conclusion that you and everyone else are living within a simulation.  That's a bit of a shocking, and seemingly unavoidable, observation.
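Bostrom's "brute statistics" is an indifference argument: if you cannot tell which kind of observer you are, your odds of being simulated equal the simulated fraction of all observers. In a line of code:

```python
# Indifference over observers: chance of being one of the simulated ones.
def p_simulated(n_simulated: float, n_real: float) -> float:
    return n_simulated / (n_simulated + n_real)

print(p_simulated(n_simulated=1e6, n_real=1.0))  # ~0.999999 for a colossal ratio
```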

 

Once we conclude that there's a high likelihood that we're living in a computer simulation, how do we trust anything (including the very reasoning that led to the conclusion)?  Will the sun rise tomorrow?  Maybe, as long as whoever is running the simulation doesn't pull the plug or get a BSOD. Are all our memories trustworthy? They seem so, but whoever is at the keyboard may have a penchant for adjusting them from time to time.  Logic alone can't ensure that we're not in a computer simulation.

Maybe sentience can't be simulated - full stop. Or maybe, as Bostrom also suggests, civilizations en route to the technological mastery necessary to create sentient simulations will inevitably turn that technology inward and destroy themselves.  Or maybe when our distant descendants gain the capacity to create simulated universes they choose not to do so, perhaps for moral reasons or simply because other currently inconceivable pursuits prove so much more interesting that, much as we noted with universe creation, universe simulation falls by the wayside.  There are numerous loopholes, but are they large enough?  And if you were living in a simulation, could you figure that out?  Let's examine some possibilities.


The Simulator might choose to let you in on the secret. Or maybe this revelation would happen on a worldwide scale, with giant windows and a booming voice surrounding the planet, announcing that there is in fact an All Powerful Programmer up in the heavens. But even if your Simulator shied away from exhibitionism, less obvious clues might turn up. Simulations allowing for sentient beings would certainly have reached a minimum fidelity threshold, but, as with designer clothes and cheap knock-offs, quality and consistency would likely vary. For example, one approach to programming simulations (the "emergent strategy") would draw on the accumulated mass of human knowledge, judiciously invoking relevant perspectives as dictated by context. Collisions between protons in particle accelerators would be simulated using quantum field theory. The trajectory of a batted ball would be simulated using Newton's laws. The reactions of a mother watching her child's first steps would be simulated by melding insights from biochemistry, physiology, and psychology. The actions of governmental leaders would fold in political theory, history, and economics. Being a patchwork of approaches focused on different aspects of simulated reality, the emergent strategy would need to maintain internal consistency as processes nominally construed to lie in one realm spilled over into another. A psychiatrist needn't fully grasp the cellular, chemical, molecular, atomic, and subatomic processes underlying brain function.  But in simulating a person, the challenge for the emergent strategy would be to consistently meld coarse and fine levels of information, ensuring for example that emotional and cognitive functions interface sensibly with physiochemical data. Simulators employing emergent strategies would have to iron out mismatches arising from the disparate methods, and they'd need to ensure that the meshing was smooth. This would require fiddles and tweaks which, to an inhabitant, might appear as sudden, baffling changes to the environment with no apparent cause or explanation. And the meshing might fail to be fully effective; the resulting inconsistencies could build over time, perhaps becoming so severe that the world became incoherent, and the simulation crashed.
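In software terms, the emergent strategy is a dispatcher over domain-specific models, and its weak point is precisely the seams between them. A hypothetical sketch (every name here is invented purely for illustration):

```python
# Hypothetical "emergent strategy" dispatcher: each realm of the simulated
# world is handled by its own specialized model.
DOMAIN_MODELS = {
    "particle_collision": lambda state: f"quantum field theory update of {state}",
    "batted_ball":        lambda state: f"Newtonian mechanics update of {state}",
    "human_emotion":      lambda state: f"biochemistry/psychology update of {state}",
}

def step(realm: str, state: str) -> str:
    # The dispatch itself is easy; the hard, failure-prone part is the
    # meshing: when a process spills from one realm into another, the
    # coarse and fine descriptions must be reconciled. Those reconciliations
    # are the "fiddles and tweaks" an inhabitant might notice as sudden,
    # unexplained changes to the environment.
    return DOMAIN_MODELS[realm](state)

print(step("batted_ball", "fly ball to left field"))
```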

 

A possible way to obviate such challenges would be to use a different approach, the "ultra-reductionist strategy".  Here, the simulation would proceed by a single set of fundamental equations, much as physicists imagine is the case for the real universe. Such simulations would take as input a mathematical theory of matter and the fundamental forces and a choice of "initial conditions" (how things were at the starting point of the simulation); the computer would then evolve everything forward in time, thereby avoiding the meshing issues of the emergent approach. These simulations have their own set of problems. If the equations our descendants have in their possession are similar to those we work with today - involving numbers that can vary continuously - then the simulations would necessarily invoke approximations. To exactly follow a number as it varies continuously, we would need to track its value to an infinite number of decimal places (say, a variation from .9 to 1 passing through numbers like .9, .97, .971, .9713, .97131, .971312, and so on, with an arbitrarily large number of digits required for full accuracy). That's something a computer with finite resources can't manage: it will run out of time and memory. So, even if the deepest equations were used, computer-based calculations would inevitably be approximate, allowing errors to build up over time.  Round-off errors, accumulated over a great many computations, can yield inconsistencies. Of course, a Simulator might wish to conceal herself too.  As inconsistencies started to build, she might reset the program and erase the inhabitants' memory of the anomalies. So it would seem a stretch to claim that a simulated reality would reveal its true nature through glitches and irregularities.
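The round-off problem is easy to demonstrate on today's machines: a value as simple as 0.1 has no exact binary representation, so the error compounds over many operations. A minimal demonstration:

```python
# Accumulated round-off error: 0.1 cannot be stored exactly in binary.
total = 0.0
for _ in range(1_000_000):
    total += 0.1
print(total)               # prints 100000.00000133288, not 100000.0
print(total == 100000.0)   # False: a million tiny errors have built up
```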

 

If and when we do generate simulated worlds, with apparently sentient inhabitants, an essential question will arise: Is it reasonable to believe that we have become the very first creators of sentient simulations? Perhaps yes, but if we're keen to go with the odds, we must consider alternative explanations that, in the grand scheme of things, don't require us to be so extraordinary. Once we accept that idea, we're led to consider that we too may be in a simulation, since that's the status of the vast majority of sentient beings in a Simulated Multiverse. Evidence for artificial sentience and for simulated worlds is grounds for rethinking the nature of your own reality.  So, it is just a matter of time before we come to that point.
Why stop there?  There's a philosophical perspective, coming from the structural realist school of thought, suggesting physicists may have fallen prey to a false dichotomy between mathematics and physics. For example, it is common for theoretical physicists to speak of mathematics providing a quantitative language for describing physical reality. But maybe, this perspective suggests, math is more than just a description of reality - maybe math is reality. The computer simulation is nothing but a chain of mathematical manipulations that take the computer's state at one moment and, according to specified mathematical rules, evolve those bits through subsequent arrangements.
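A cellular automaton makes the point concrete: the whole "world" below is a row of bits, and its entire physics is one fixed mathematical map from each arrangement to the next, whether or not any computer runs it. A sketch using Wolfram's Rule 110 (notable because it is Turing-complete):

```python
# Wolfram's Rule 110: the universe is a row of bits; evolution is pure math.
RULE_110 = {(1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
            (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0}

def step(cells):
    n = len(cells)
    # The new state of each cell is a fixed function of its neighborhood.
    return [RULE_110[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
            for i in range(n)]

world = [0] * 30 + [1] + [0] * 30
for _ in range(8):
    print("".join("#" if c else "." for c in world))
    world = step(world)
```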

The deeper point of this perspective is that the computer simulation is an inessential intermediate step, a mere mental stepping-stone between the experience of a tangible world and the abstraction of mathematical equations.  The mathematics itself (through the relationships it creates, the connections it establishes, and the transformations it embodies) contains you, your actions, and your thoughts. You don't need the computer - you are in the mathematics.  In this way of thinking, everything you're aware of is the experience of mathematics. Reality is how math feels.

 

Max Tegmark calls this the Mathematical Universe Hypothesis (MUH, also known as the Ultimate Ensemble) and says that the deepest description of the universe should not require concepts whose meaning relies on human experience or interpretation. Reality transcends our existence and so shouldn't, in any fundamental way, depend on ideas of our making. Tegmark's view is that mathematics is precisely the language for expressing statements that shed human contagion.  As per Tegmark, nothing can possibly distinguish a body of mathematics from the universe it depicts.  Were there some feature that did distinguish math from the universe, it would have to be non-mathematical.  But, according to this line of thought, if the feature were non-mathematical, it must bear a human imprint, and so can't be fundamental. Thus, there's no distinguishing what we conventionally call the mathematical description of reality from its physical embodiment - they are the same.

 

Originally, there was a bit of an inconsistency between the original model and Gödel's incompleteness theorems.  Gödel's incompleteness theorems are two theorems of mathematical logic that establish inherent limitations of all but the most trivial axiomatic systems capable of doing arithmetic. The theorems, proven by Kurt Gödel in 1931, are important both in mathematical logic and in the philosophy of mathematics. The first incompleteness theorem states that no consistent system of axioms whose theorems can be listed by an "effective procedure" (essentially, a computer program) is capable of proving all truths about the relations of the natural numbers (arithmetic). For any such system, there will always be statements about the natural numbers that are true but unprovable within the system. The second incompleteness theorem, a corollary of the first, shows that such a system cannot demonstrate its own consistency.  Tegmark's response is to offer a new hypothesis "that only Godel-complete (fully decidable) mathematical structures have physical existence. This drastically shrinks the Level IV multiverse, essentially placing an upper limit on complexity, and may have the attractive side effect of explaining the relative simplicity of our universe." Tegmark goes on to note that although conventional theories in physics are Godel-undecidable, the actual mathematical structure describing our world could still be Godel-complete, and "could in principle contain observers capable of thinking about Godel-incomplete mathematics, just as finite-state digital computers can prove certain theorems about Godel-incomplete formal systems like Peano arithmetic." Later on, Tegmark gives a more detailed response, proposing as an alternative to the MUH the more restricted "Computable Universe Hypothesis" (CUH), which only includes mathematical structures that are simple enough that Gödel's theorem does not require them to contain any undecidable/uncomputable theorems. Tegmark admits that this approach faces "serious challenges", including (a) it excludes much of the mathematical landscape; (b) the measure on the space of allowed theories may itself be uncomputable; and (c) "virtually all historically successful theories of physics violate the CUH".  His approach is also known as "shut up and calculate", and it introduces Level IV of the multiverse - where everything is mathematical structure.

 

This is closely related to another question you may have heard before - did we discover mathematics, or did we invent it?  For centuries people have debated whether - like scientific truths - mathematics is discoverable, or whether it is simply invented by the minds of our great mathematicians. But two questions arise, one for each side of the coin. For those who believe these mathematical truths are purely discoverable: where, exactly, are you looking? And for those on the other side of the court: why can't a mathematician simply announce to the world that he has invented 2 + 2 to equal 5?  The Classical Greek philosopher Plato was of the view that math was discoverable, and that it is what underlies the very structure of our universe. He believed that by following the intransient inbuilt logic of math, a person would discover truths independent of human observation and free of the transient nature of physical reality.  Obviously, if you accept the mathematical universe then you accept the Platonic view too.

Albert Einstein said: "The most incomprehensible thing about the universe is that it is comprehensible." Physicist Eugene Wigner wrote of "the unreasonable effectiveness of mathematics" in science. So is mathematics invented by humans, like cars and computers, music and art? Or is mathematics discovered, always out there, somewhere, like mysterious islands waiting to be found?  The question probes the deepest secrets of existence.  Roger Penrose, one of the world's most distinguished mathematicians, says that "people often find it puzzling that something abstract like mathematics could really describe reality." But you cannot understand atomic particles and structures, such as gluons and electrons, he says, except with mathematics.  Penrose, Mark Balaguer, and others tend to be aware of the other side too.  So, is mathematics invented or discovered? Here's what we know. Mathematics describes the physical world with remarkable precision. Why? There are two possibilities.  First, math somehow underlies the physical world, generates it. Or second, math is a human description of how we describe certain regularities in nature, and because there is so much possible mathematics, some equations are bound to fit.  As for the essence of mathematics, there are four possibilities. Only one is really true. Math could be: physical, in the real world, actually existing; mental, in the mind, only a human construct; Platonic, nonphysical, nonmental abstract objects; or fictional, anti-realist, utterly made up. Math is physical or mental or Platonic or fictional. Choose only one.

 

As to the question of whether we are living in a simulated reality or a "real" one, the answer may be "indistinguishable". Physicist Bin-Guang Ma proposed the theory of "Relativity of reality", though this notion has been suggested in other contexts, like ancient philosophy (Zhuangzi's 'Butterfly Dream') and psychological analysis. The idea generalizes the relativity principle in physics, which is mainly about the relativity of motion: motion has no absolute meaning, since to say whether something is in motion or at rest one must adopt some reference frame, and without a reference frame one cannot tell rest from uniform motion. A similar property has been suggested for reality: without a reference world, one cannot tell whether the world one is living in is real or simulated. Therefore, there is no absolute meaning to reality. Similar to the situation in Einstein's relativity, there are two fundamental principles for the theory 'Relativity of reality':

  • All worlds are equally real.
  • Simulated events and simulating events coexist.

 

The first principle (equally real) says that all worlds are equal in reality; even in partially simulated worlds, if there are living beings, they feel the same level of reality we feel. In this theory, the question of whether we are living in a simulated reality or a "real" one is meaningless, because the two are indistinguishable in principle. The "equally real" principle doesn't mean that we cannot differentiate a concrete computer simulation from our own world, since when we are talking about a computer simulation, we already have a reference world (the world we are in).  Coupled with the second principle ("coexistence"), the theory supposes a space-time transformation between two across-reality objects (one in the real world and the other in the virtual world), which is an example of an interreality (mixed reality) system. The first "interreality physics" experiment may be the one conducted by V. Gintautas and A. W. Hubler, where a mixed-reality correlation between two pendula (one real and the other virtual) was indeed observed.
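The Gintautas-Hubler setup can be caricatured in a few lines: a virtual pendulum whose equation of motion includes a coupling term fed by measurements of a real pendulum. A toy sketch under the assumption of simple linear coupling (the actual experiment's details differ, and the "measurement" below is faked as a sine signal standing in for sensor data):

```python
import math

def measured_real_angle(t: float) -> float:
    # Stand-in for sensor readings from a physical pendulum.
    return 0.2 * math.sin(2.0 * t)

g_over_l, coupling, dt = 9.81, 0.5, 0.001   # assumed toy parameters
theta, omega, t = 0.1, 0.0, 0.0             # virtual pendulum state
for _ in range(10_000):
    # Virtual dynamics plus a term pulling it toward the real pendulum:
    # this cross-reality term is what makes the system "interreality".
    accel = -g_over_l * math.sin(theta) + coupling * (measured_real_angle(t) - theta)
    omega += accel * dt
    theta += omega * dt
    t += dt
print(f"virtual pendulum angle after {t:.0f}s: {theta:.4f} rad")
```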

 

Going back to our simulation, there are several classes of computation a computer can do:

  • computable functions, which are functions that can be evaluated by a computer running through a finite set of discrete instructions
  • noncomputable functions, which are well-defined problems that cannot be solved by any computational procedure

 

A computer trying to calculate a noncomputable function will churn away indefinitely without coming to an answer, regardless of its speed or memory capacity. Imagine a simulated universe in which a computer is programmed to provide a wonderfully efficient simulated chef who provides meals for all those simulated inhabitants - and only those simulated inhabitants - who don't cook for themselves. The question is: whom does the computer charge with feeding the chef? Think about it, and it makes your head hurt. The chef can't cook for himself, as he only cooks for those who don't cook for themselves; but if the chef doesn't cook for himself, he is among those for whom he is meant to cook.  The successful universes constituting the Simulated Multiverse would therefore have to be based on computable functions.  Then the simplest explanation of our universe is the simplest program that computes it. In 1997, Jürgen Schmidhuber pointed out that the simplest such program actually computes all possible universes with all types of physical constants and laws, not just ours. His essay also talks about universes simulated within parent universes in nested fashion, and about universal complexity-based measures on possible universes.  It is hard not to see parallels here with what is called the Ultimate universe and the anthropic principle discussed earlier in previous multiverse blog entries.  Here is a clip where Schmidhuber talks about all computable universes at the World Science Festival 2011.
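The chef is Russell's barber in disguise, and the same self-reference is what makes the halting problem noncomputable. A sketch of the classic diagonal argument, with the impossible oracle left as an explicit, labeled assumption:

```python
# Diagonalization sketch: assume (falsely) a computable halting oracle.
def halts(program, argument) -> bool:
    """Hypothetical oracle: True iff program(argument) would halt.
    No computable version can exist; it is assumed only to be refuted."""
    raise NotImplementedError("noncomputable by the argument below")

def trouble(program):
    if halts(program, program):  # if the oracle says "halts"...
        while True:              # ...loop forever;
            pass
    return "halted"              # otherwise, halt immediately.

# trouble(trouble) halts if and only if it doesn't - a contradiction.
# Hence no halts() exists, and a simulated universe can only be built
# from computable functions.
```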

 

Our reality is far from what it seems, but we should be open-minded.  Mathematics has so far been embraced as our framework, one which not only explains what we know but also gives us directions, with some new strange paths to explore and discover.  It would be a mistake not to say that this has happened before and will surely happen again.  We like to believe in testable experiments, but sometimes mathematics is all there is.  Until we find something else... And so the story continues, and with technological advances we have some exciting times ahead.  It is exactly these kinds of theories and their testing which make me wish I could live forever...


Credits: Brian Greene, Michio Kaku, Nick Bostrom, David J. Chalmers, Blue Brain Project, Max Tegmark, Josh Hill, Jürgen Schmidhuber

 

Related posts:

Deja vu Universe

Inflation

Braneworld

Landscape Multiverse

Many worlds

Holographic Principle to Multiverse Reality