
It is not often that we think of events as isolated incidents separated by a vast divide in both physical and virtual distance. In our day-to-day existence, with near-instantaneous methods of communication and a pervasively global information network, significant events are noticed almost as soon as they occur. But when the distance separating the event from the recipient exceeds our Earthly bounds, an interesting phenomenon occurs. Even on the scale of the solar system, light from the Sun takes approximately 8 minutes to reach our sunny skies here on Earth. If the Sun happened to go supernova, we would have no knowledge of the fact until some 8 minutes after the event actually occurred. While not completely revolutionary, this concept has deeper ramifications if the distances are again increased to a Universal scale.
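
To make the eight-minute delay concrete, here is a back-of-the-envelope calculation: a minimal Python sketch using rounded textbook values for the Earth-Sun distance and the speed of light.

```python
# The eight-minute figure follows from simple division: distance from the Sun
# to the Earth over the speed of light.
AU = 1.496e11              # mean Earth-Sun distance, metres
c  = 299_792_458           # speed of light, m/s

delay = AU / c
print(f"Light travel time, Sun to Earth: {delay:.0f} s (~{delay / 60:.1f} minutes)")
# ~499 s, or roughly 8.3 minutes: anything that happens on the Sun is old news
# by the time we see it.
```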

While we are accustomed to thinking of light as travelling at a fixed speed limit, it is not often that one thinks of gravity as a force that requires time to cross intergalactic distances. But indeed it does. Gravitational waves propagate at the speed of light; they are generated by the motion of incredibly massive objects (eg perturbed neutron stars or orbiting binary star systems). Gravitational waves pass through matter essentially unimpeded, warping the nature of spacetime as they go, contracting and expanding distances between objects as the wave passes through that particular locality.

Here on Earth, information is similarly transferred quickly along the Internet and other communication pathways, at speeds approaching that of light. Delays arise only when something interferes: heavy traffic, severed pathways, technical faults. As the distances involved are small relative to the speed of transfer, communication between two points is practically instantaneous. But what if we slow the speed of travel? Imagine an event occurring in an isolated region of desert, where the message can only be transmitted via a physical carrier, thus mimicking the vast distances involved in an interstellar environment. Observer B, waiting to receive the message, has no knowledge of what has happened until that message arrives.

Revisiting the scenario of the Sun exploding, it seems strange that mammoth events in the Universe could occur without our immediate knowledge. It is strangely reminiscent of the old philosophical riddle: does a falling tree make a sound if no one is around to listen? Cosmic events are particularly relevant in this respect, as they most certainly do have immense ramifications (‘making a noise’). If the Universe suddenly collapsed at the periphery (unlikely, but considered for the purposes of this exercise), our tiny speck of a planet would not know about it until (possibly) many, many millions of years later. It is even possible that parts of the distant Universe have already ‘ceased to exist’, the fabric of time and space from the epicentre of this great event expanding like a tidal wave of doom. What does this mean for a concept of Universal time? Surely it must not be dependent upon physical reality, for if it were, such a catastrophic event would signal the cessation of time across the entire cosmos. Rather, it would be a gradual process that rushes forth and eliminates regions of both space and time sequentially. The final remaining island of ‘reality’ would thus act as a steadily diminishing safe haven for the remaining inhabitants of the cosmos. Such an event would certainly make an interesting science-fiction story!

Einstein became intimately aware of this universal fact of locality, making it a central tenet of his grand Theory of Relativity. He even offered comments regarding this ‘principle of locality’ (which became a recognised physical principle):

“The following idea characterises the relative independence of objects far apart in space (A and B): external influence on A has no direct influence on B; this is known as the Principle of Local Action, which is used consistently only in field theory.”

A horribly simplified description of relativity states that what I experience is not necessarily the same as what you will experience. Depending on how fast you are travelling and in what direction relative to me (taking into account the speed and direction at which I am travelling), our experiences of time and space will differ; quite markedly if we approach the speed of light. Even the flow of time is affected: observers aboard objects travelling at high velocities experience a slower passage of time compared to their stationary colleagues. It would be intriguing to experience this phenomenon first hand in order to determine whether the difference is psychologically detectable. Perhaps it would be experienced as an exaggerated and inverted version of the overly clichéd ‘time flies when you’re having fun’.

Locality in Einstein’s sense is more about the immediate space surrounding objects than about causes and their effects (although the two are undoubtedly interrelated). Planetary bodies, for instance, are thought to affect their immediate surroundings (locality) by warping the fabric of space. While the metaphor here is mainly for the benefit of visualisation rather than a description of the actual physical process, orbiting bodies are described as locked into a perpetual spin, similar to the way in which a ball bearing revolves around a funnel. Reimagining Einstein’s notion of relativity and locality as causality (and the transmission of information between two points), the speed of light and gravity form the main policing forces managing events in the Universe. Information can travel between points at no more than some 300,000 km/s, and the presence of gravity can modify how that information is received (large masses can warp transmissions, as in gravitational lensing, and also influence how physical structures interact).

Quantum theory adds to the fray by further complicating matters of locality. Quantum entanglement, a phenomenon whereby a measurement at Point A appears to instantaneously influence Point B, seems to circumvent the principle of locality. Two points in space dance to the same tune, irrespective of the distances involved. Another quantum phenomenon that appears to exist independently of local space is the collapse of the wave function. While it is currently impossible to affirm whether this ‘wave’ actually exists, or what it means for the nature of reality (eg many worlds vs the Copenhagen interpretation), if it is taken as a part of our reality then the act of collapse is surely a non-local phenomenon. There is no detectable delay in producing observable action. A kicked football does not pause while the wave function calculates probabilities and decides upon an appropriate trajectory. Likewise, individual photons seem to just ‘know’ where to go, instantly forming the familiar interference pattern behind a double-slit grating. The Universe at large simply arranges its particles in anticipation of these future events instantaneously, temptingly inviting notions of omniscience on its behalf.
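
As an aside, the ‘familiar interference pattern’ can be written down with nothing more than school-level geometry. The sketch below uses illustrative values for the slit spacing, wavelength and screen distance (none of these numbers come from any particular experiment) and computes where the bright fringes land:

```python
# Two-slit interference: bright fringes appear on the screen at
# y_m = m * wavelength * L / d for integer m (small-angle approximation).
wavelength = 500e-9        # 500 nm light (assumed)
d = 1e-4                   # slit separation, 0.1 mm (assumed)
L = 1.0                    # slit-to-screen distance, metres (assumed)

for m in range(4):
    y = m * wavelength * L / d
    print(f"Bright fringe {m}: {y * 1000:.1f} mm from the centre")
# Each individual photon lands somewhere at random, yet the accumulated hits
# trace out exactly this pattern, as if each one 'knew' about both slits.
```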

Fortunately, our old-fashioned notions of cause and effect are preserved by quantum uncertainties. To commit the atrocious act of personifying the inanimate, it is as though Nature, through the laws of physics, protects our fragile Universe and our conceptions of it by limiting the amount of useful information we can extract from such a system. The Uncertainty Principle acts as the ubiquitous guardian of information transfer, preventing instantaneous communication between two points in space. This ‘safety barrier’ prevents us from extracting useful observations from entangled particles without a traditional message system (the measurements taken at Point A must still be sent to Point B at light speed before the entangled particle can be made sense of). When we observe particles at a quantum level (spin, charge etc), we disturb the quantum system irrevocably. Therefore the mere act of observing prevents us from using this system as a means of instantaneous communication.

Causality is still a feature of the Universe that needs in-depth explanation. At a higher level sits the tireless battle between determinism and uncertainty (free will). If every event is predetermined based on the collisions of atoms at the instant of the Big Bang, causality (and locality) is a moot point. Good news for reductionists who hope to uncover a fundamental ‘theory of everything’ with equations to predict any outcome. If, on the other hand, the future really is uncertain, we certainly have a long way to go before an adequate explanation of how causality operates is proposed. Whichever camp one pledges allegiance to, local events are still isolated events whose effects travel at a fixed speed. One wonders which is the more frustrating result: not having knowledge about an important albeit distant event, or realising that whatever happens is inevitable. The Universe may already have ended; but should we really care?

Teleportation is no longer banished to the realm of science fiction. It is widely accepted that what was once considered a physical impossibility is now directly achievable through quantum manipulations of individual particles. While the methods involved are still in their infancy (only the states of single particles have been teleported so far), we can at least begin to appreciate and think about the possibilities on the basis of plausibility. Specifically, what are the implications for personal identity if this method of transportation becomes possible on a human scale? Atomically destroying and reconstructing an individual at an alternate location could introduce problems with consciousness. Is this the same person, or simply an identical twin with its own thoughts, feelings and desires? These are the questions I would like to discuss in this article.

Biologically, we lose much of our bodies several times over during one human lifetime. Many tissues are steadily replaced, with little thought given to the implications for self-identity. It is a phenomenon that is often overlooked, especially in relation to recent empirical developments in quantum teleportation. If we are biologically replaced with regularity, does this imply that our sense of self is, likewise, dynamic in nature and constantly evolving? There are reasonable arguments for both sides of this debate; maturity and daily experience do result in a varied mental environment. However, one wonders if this has more to do with innate processes such as information transfer, recollection and modification rather than purely the biological characteristics of individual cells (in relation to cell division and rejuvenation processes).

Thus it could be argued that identity is a largely conscious (in terms of seeking out information and creating internal schemas of identity) and directed process. This does not totally rule out the potential for identity based upon changes to biological structure. Perhaps the effects are more subtle, modifying our identities in such a way as to facilitate maturity or even mental illness (if the duplication process is disturbed). Cell mutation (neurological tumour growth) is one example whereby a malfunctioning biological process can result in direct and often drastic changes to identity.

However, I believe it is safe to assume that “normal” tissue regenerative processes do not result in any measurable changes to identity. What makes teleportation so different? Quantum teleportation has been used to teleport the states of photons from one location to another and, more recently, of particles with mass. The process is decidedly less romantic than science-fiction authors would have us believe; classical transmission of information is still required, and a receiving station must still be established at the desired destination. What this means is that matter transportation, à la ‘Star Trek’ transporters, is still very much an unforeseeable fiction. In addition, something as complex as the human body would require incredible computing power to scan in sufficient detail, another limiting factor in its practicality. Fortunately, there are nearer-term uses for this technology, such as in the fledgling field of quantum computing.

The process works around the limitations of the quantum Uncertainty Principle (which states that the properties of a quantum system can never be known in exact detail) through a process known as the “Einstein-Podolsky-Rosen” (EPR) effect. Einstein had real issues with quantum mechanics; he didn’t like it at all (to quote the cliché, ‘spooky action at a distance’). The EPR paper aimed to show that quantum mechanics must be incomplete, using entangled pairs of quantum particles to expose an apparent paradox. John Stewart Bell turned the Einstein proposition on its head when he demonstrated that entangled particles do in fact exhibit correlations stronger than any local, pre-arranged explanation allows (that is, the measurement outcomes on the two particles correlate too strongly to be explained by chance or shared local properties alone). The fact that entanglement does not violate the no-communication theorem is good news for our assumptions regarding reality, but more bad news for teleportation fans: information regarding the quantum state of the teleportee must still be transmitted via conventional methods for reassembly at the other end.
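
For the statistically minded, the flavour of Bell’s argument can be captured in a few lines. The sketch below uses the quantum-mechanical prediction E(a, b) = -cos(a - b) for spin correlations on an entangled singlet pair and evaluates the CHSH combination, which any local ‘pre-arranged’ explanation must keep at or below 2 (the angles and names are standard textbook choices, not anything taken from the original experiments):

```python
import numpy as np

# CHSH test sketch: quantum mechanics predicts E(a, b) = -cos(a - b) for
# spin measurements along directions a and b on a singlet (entangled) pair.
# Any local hidden-variable theory must satisfy |S| <= 2 (the Bell/CHSH bound).

def E(a, b):
    """Quantum correlation between measurement angles a and b (radians)."""
    return -np.cos(a - b)

# Standard angle choices that maximise the quantum violation.
a, a_prime = 0.0, np.pi / 2
b, b_prime = np.pi / 4, 3 * np.pi / 4

S = E(a, b) - E(a, b_prime) + E(a_prime, b) + E(a_prime, b_prime)

print(f"CHSH value |S| = {abs(S):.3f}")   # ~2.828 = 2*sqrt(2)
print("Local hidden-variable bound = 2")
# The quantum prediction exceeds the classical bound, so the correlations are
# real; but no usable message is carried (no-communication theorem).
```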

Quantum teleportation begins with a pair of entangled particles, one dispatched to station A and the other to station B. At A, the particle to be teleported is then jointly measured together with entangled particle 1, with care taken over what is extracted (measurement disturbs the original; the harder you look, the more is destroyed). The classical results of this partial scan are transmitted at light speed to the receiver at B. Entanglement ensures that the remaining, unmeasured part of the original state is instantaneously reflected at B (via entangled particle 2). Utilising the principles of the EPR effect and Bell’s correlations, it is then possible to reconstruct the state of the original particle A at the distant location, B. While the exact mechanism is beyond the technical capacity of philosophy, it is prudent to say that the process works by combining the information carried by EP2 with the classically transmitted information that was scanned out of the original particle at A.
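
For readers who like to see the bookkeeping, here is a minimal state-vector sketch of the standard textbook protocol in Python/NumPy. The particular amplitudes, qubit ordering and helper functions are purely illustrative:

```python
import numpy as np

# Minimal state-vector sketch of the textbook quantum teleportation protocol.
# Qubit 0: Alice's unknown state |psi>.  Qubits 1 & 2: a shared entangled pair
# (qubit 1 held by Alice, qubit 2 by Bob).

I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def kron(*ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

def cnot(control, target, n=3):
    """CNOT on an n-qubit register (qubit 0 is the leftmost tensor factor)."""
    P0, P1 = np.array([[1, 0], [0, 0]]), np.array([[0, 0], [0, 1]])
    ops0 = [I] * n; ops0[control] = P0
    ops1 = [I] * n; ops1[control] = P1; ops1[target] = X
    return kron(*ops0) + kron(*ops1)

# 1. The (arbitrary) state Alice wants to teleport.
psi = np.array([0.6, 0.8])                      # 0.6|0> + 0.8|1>

# 2. Alice and Bob share the Bell pair (|00> + |11>)/sqrt(2) on qubits 1, 2.
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
state = np.kron(psi, bell)                      # full 3-qubit state

# 3. Alice's operations: CNOT(0 -> 1), then Hadamard on qubit 0.
state = cnot(0, 1) @ state
state = kron(H, I, I) @ state

# 4. Alice measures qubits 0 and 1 (simulated projective measurement).
probs = np.abs(state.reshape(4, 2)) ** 2        # rows index the (q0, q1) outcome
outcome = np.random.choice(4, p=probs.sum(axis=1))
m0, m1 = outcome >> 1, outcome & 1
bob = state.reshape(4, 2)[outcome]
bob = bob / np.linalg.norm(bob)                 # Bob's qubit after the collapse

# 5. Alice sends (m0, m1) classically; Bob applies the corrections.
if m1: bob = X @ bob
if m0: bob = Z @ bob

print("Measurement bits sent classically:", m0, m1)
print("Bob's reconstructed state:", np.round(bob, 3), " original:", psi)
```

Whatever the random measurement outcome, Bob’s corrected qubit ends up in the original state; the two classical bits he must wait for are what keep the whole procedure slower than light.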

Casting practicality aside for the sake of philosophical discussion, if such a process became possible for a being as complex as a human, what would be the implications for consciousness and identity? Common sense tells us that if an exact replica can be produced, how is it in any way different from the original? One would simply ‘wake up’ at the new location within the same body and mind one left with. Those who subscribe to a Cartesian view of separated body and mind would look upon teleportation with abhorrent revulsion: surely along the way we are losing a part of what makes us uniquely human, some sort of intangible soul or essence of mind which cannot be reproduced? This leads to similar thought experiments. What if another being somewhere in the Universe were born with exactly the same mental characteristics as yourself? Would this predispose the two of you to some sort of underlying phenomenological connection? Perhaps this is supported by anecdotal reports from studies of identical twins. Such individuals are thought to share a common bond, demonstrating almost telepathic abilities at times; although it could be argued that the mechanism is probably no more mystical than a familiar acquaintance predicting how you would react in a given situation, or similarities in brain structure predisposing twins to ‘higher than average’ mental convergence.

Quantum teleportation of conscious beings also raises serious moral implications. Is it murder to deconstruct the individual at point A, or is this initial crime nullified once the reassembly is completed? Is it still immoral if someone else appears at the receiver due to error or quantum fluctuation? Others may argue that it is no different from conventional modes of transport: human error should be dealt with as such (a necessary condition for the label of crime or immorality), and naturally occurring disasters interpreted as nothing more than random events.

While it is doubtful that we will ever see teleportation on a macro scale, we should remain mindful of the philosophical and practical implications of emerging technologies. The scientific community is occasionally blind to these factors when such innovations are announced to the general public. While it is important for society that such research is allowed to continue, the rate at which new technologies are appearing can be cause for alarm if they impinge upon our human rights and the preservation of individuality. There has never been a more pressing time for philosophers to think about these issues and offer their wisdom to the world.

In the first part of this article, I outlined a possible definition of time and (keeping in touch with the article’s title) offered a brief historical account of time measurement. This outline demonstrated humanity’s changing perception of the nature of time, and how an increase in the accuracy with which it is measured can affect not only our understanding of this phenomenon, but also how we perceive reality. In this article I will begin with the very latest physical theories explaining the potential nature of time, followed by a discussion of several interesting observations concerning the fluctuations that seem to characterise humanity’s chronological experience. Finally, I hope to promote a hypothesis (even though it may simply be stating the blatantly obvious) that the flow and experience of time is uniquely variable, in that the concept of ‘absolute time’ is as dead as the ‘ether’, the absolute reference frame of 19th-century physics.

Classical physics dominated the concept of time up until the beginning of the 20th century. In this view, time (in the same vein as motion) was treated as having an ‘absolute’ reference point. That is, time was constant and consistent across the universe and for all observers, regardless of velocity or local gravitational effects. Of course, Einstein turned all this on its head with his theories of general and special relativity. Time dilation was a new and exciting addition to the physical description of this phenomenon. Both the speed of the observer (special relativity) and the presence of a gravitational field (general relativity) were predicted to have an effect on the passage of time. The main point to consider alongside these predictions is that, by the very nature of the theory, relativity insists that all events are relative, changing with perspective with respect to some external observer.

Consider two clocks (A and B), separated by distance x. According to special relativity, if clock B is accelerated to a very high speed (a significant fraction of the speed of light before the effect becomes appreciable, although atomic clocks can detect it at far lower speeds), time dilation effects come into play. In effect, relative to clock A (which is running on ‘normal’ Earth time), clock B will be seen to run slower. An observer travelling with clock B would not notice these effects – time would continue to pass normally within their frame of reference. It is only when the clocks are reunited and directly compared that the discrepancy becomes apparent. Empirically, this effect is well established, and offers an explanation as to why muons (extremely short-lived particles) are able to make it to the Earth’s surface before decaying. Cosmic rays slam into the Earth’s atmosphere at high speed, producing sufficient energy when they collide with molecules to generate muons and neutrinos. These muons, which would otherwise travel only some 0.6 km in one average lifetime even at near-light speed, are moving so fast that time dilation slows their decay as seen from the ground. Thus, these particles survive much longer (penetrating some 700 m underground) than they otherwise would.
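
The muon example can be checked with one line of algebra: the Lorentz factor γ = 1/√(1 − v²/c²) stretches the particle’s lifetime as seen from the ground. A small sketch, assuming a typical cosmic-ray muon speed of 0.998c:

```python
import math

# Worked example of special-relativistic time dilation using the muons
# mentioned above (illustrative textbook figures).
c = 299_792_458            # speed of light, m/s
tau = 2.2e-6               # mean muon lifetime at rest, s
v = 0.998 * c              # typical cosmic-ray muon speed (assumed)

gamma = 1 / math.sqrt(1 - (v / c) ** 2)      # Lorentz factor

print(f"Lorentz factor gamma   : {gamma:.1f}")
print(f"Range without dilation : {v * tau / 1000:.2f} km")
print(f"Range with dilation    : {v * gamma * tau / 1000:.1f} km")
# ~0.66 km without dilation vs ~10 km with it, which is why muons created
# high in the atmosphere can still reach the ground.
```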

General relativity also predicts an effect on our perception of time. Objects with large mass produce gravitational fields, which in turn are predicted to slow the passage of time for clocks sitting deeper within them. Suppose Clock A is on the Earth’s surface, while Clock B is attached to an orbiting satellite. As Clock B is further from the centre of the Earth, it sits at a higher gravitational potential; the field there is weaker and exerts less of an effect. Consequently, the elapsed time at B (relative to Clock A) will be greater (ie, Clock B runs faster). Again, this effect has been tested empirically, with clocks on board GPS satellites forced to undergo regular adjustments to keep them in line with Earth-bound instrumentation (thus enabling accuracy in pinpointing locations). Interestingly, the two types of dilation combine; the stronger effect wins out, resulting in either a net gain or loss of time. Objects moving fast within a gravitational field experience both a slowing down and a speeding up of time relative to an external observer (this was in fact recorded in an experiment involving atomic clocks on board commercial airliners).
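
The GPS adjustment mentioned above can be estimated from first principles. The sketch below combines the two effects at textbook level (circular orbit assumed, Earth’s rotation ignored), giving the familiar figure of a few tens of microseconds per day:

```python
import math

# Rough estimate of the two time-dilation effects acting on a GPS satellite
# clock relative to a clock on the ground (approximate; eccentricity and
# Earth's rotation are ignored).
G  = 6.674e-11             # gravitational constant, m^3 kg^-1 s^-2
M  = 5.972e24              # mass of Earth, kg
c  = 299_792_458           # speed of light, m/s
R  = 6.371e6               # Earth's surface radius, m
r  = 2.656e7               # GPS orbital radius (~26,560 km), m

day = 86_400               # seconds per day
v   = math.sqrt(G * M / r) # circular orbital speed, ~3.9 km/s

# General relativity: higher potential -> satellite clock runs faster.
grav_gain = G * M * (1 / R - 1 / r) / c**2 * day
# Special relativity: orbital speed -> satellite clock runs slower.
vel_loss  = (v**2 / (2 * c**2)) * day

print(f"Gravitational gain : +{grav_gain * 1e6:.1f} microseconds/day")
print(f"Velocity loss      : -{vel_loss  * 1e6:.1f} microseconds/day")
print(f"Net drift          : +{(grav_gain - vel_loss) * 1e6:.1f} microseconds/day")
# Roughly +45.7 - 7.2 = +38 microseconds per day, which is why GPS clocks
# need regular correction.
```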

Frustratingly, the physical basis for such dilation seems to be enmeshed in complicated mathematics and technical jargon. Why exactly does this dilation occur? Descriptions of the phenomenon seem to lack any real insight into this question, and instead proffer statements to the effect of ‘this is simply what relativity predicts’. It is an important question to ask, I think, as philosophically the question of ‘why’ is just as important as the empirical ‘how’, and should follow as a natural consequence. By probing the metaphysical aspects of time we can aim to better understand how it influences the human sensory experience and adapt this new-found knowledge to practical applications.

Based on relativity’s notion of a non-absolute framework of time, and incorporating the predictions of time dilation, it seems plausible that time could be reducible to a particulate origin. The field of quantum physics has already made great headway in proposing that all matter exhibits wave-particle duality; in the form of waves, photons and matter travel along all possible routes between two points, with the crests and troughs interfering with, or reinforcing, each other. As in the double-slit experiment (with its light and dark interference pattern), only the reinforced paths contribute appreciably, and the wave collapses (quantum decoherence) into a particle that we can directly observe and measure. This approach is known as the ‘sum over histories’ formulation, proposed by Richard Feynman (related interpretations also open up the possibility of ‘many worlds’: alternative universes that branch off at each event in time).

With respect to time, perhaps its re-imagining as a particle could explain the effects of gravity and velocity, in the form of dilation. One attempt is the envisaged ‘chronon’, a quantised unit of time which disrupts the commonly held interpretation of a continuous experience. This idea draws support from the natural unit of Planck time, some 5.39 x 10^-44 seconds. Below this limit, intervals of time are thought to be indistinguishable and the notion of separate events undefinable. Of course, we are taking a leap of faith here in assuming that time is a separate, definable entity. Perhaps the reality is entirely different.
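
The Planck time quoted above is not an arbitrary number; it falls straight out of three fundamental constants via t_P = √(ħG/c⁵):

```python
import math

# Planck time from first principles: t_P = sqrt(hbar * G / c^5).
hbar = 1.054571817e-34     # reduced Planck constant, J*s
G    = 6.67430e-11         # gravitational constant, m^3 kg^-1 s^-2
c    = 299_792_458         # speed of light, m/s

t_planck = math.sqrt(hbar * G / c**5)
print(f"Planck time: {t_planck:.5e} s")   # ~5.39e-44 s
```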

Modern philosophy seems to stumble when attempting to interpret the implications of theoretical physics. Perhaps the subject matter is becoming increasingly complex, requiring dedicated study in order to grasp even the simplest concepts. Whatever the reason, the work of philosophers has moved away from the pursuits of science and towards topics such as language. What science needs is an army of evaluators, ready to test its theories with practical concerns in mind. Time has not escaped this fate either. Scientists seem content, even ‘trigger happy’, in their usage of the anthropic principle when explaining the origins of their theories and deflecting any practical inquiry as to why things are the way they are. Basically, any question of why evokes a response along the lines of ‘well, if it were any different, conditions in the universe would not be sufficient for the evolution of intelligent beings such as ourselves, who are capable of asking the very question of why!’. Personally, I find this approach does make some sense, but it has the distinct features of a cop-out and of circularity; a lot of the underlying reasoning is missing, which prohibits deeper inquiry. It also allows theologians to promote arguments for the existence of a creator: ‘god created the universe in such a way as to ensure our existence’.

What has this got to do with time? Well, put simply, the anthropicists propose that if time were to flow in a direction contrary to that which is experienced, the laws of science would not hold, thus excluding the possibility of our existence as well as violating the principles of CPT symmetry (C = particle/antiparticle replacement, P = taking the mirror image and T = the direction of time). Even Stephen Hawking weighs in on the debate and, in A Brief History of Time, combines the CPT model with the second law of thermodynamics (entropy, or disorder, always increases). The arrow of time, then, must correspond to and align with these cosmological tendencies (the universe inflates in the same direction as increasing entropy, which is the same as our psychological perception of time).

So, after millennia of study in the topic of chronology, we seem to be a long way off from a concrete definition and explanation of time. With the introduction of relativity, some insights into the nature of time have been extracted; however, philosophers still have a long way to go before practical implications are drawn from the very latest theories (quantum physics, string theory etc). Indeed, some scientists believe that if a grand unified theory is to be discovered, we need to further refine our definitions of time and work backwards towards the very instant of the big bang (at which, it is proposed, all causality breaks down).

Biologically, is time perceived equally not only among humans but also among other species? Do days when time seems to ‘stand still’ share some common feature that could support the notion of time as a definable physical property of the universe (eg the chronon particle)? On such days are we passing through a region of warped spacetime (thus a collective, shared experience), or do we carry an internal psychological timepiece that ticks to its own tock, regardless of how others are experiencing it? When we die, is the final moment stretched to a relative infinity (relative to the deceased) as neurons lose their potential to carry signals (à la falling into a black hole, where the perception of time slows to an imperceptible halt), or does the blackness take us in an instant? Maybe time will never fully be understood, but it is an intriguing topic that warrants further discussion and, judging by the surplus of questions, it is in no hurry to reveal its mysteries anytime soon.

The essence of mathematics cannot be easily discerned. This intellectual pursuit lurks behind a murky haze of complexity. Those fortunate enough to have natural ability in this field are able to manipulate algebraic equations as easily as the spoken word. However, for the vast majority of the population, mathematical expertise is elusive, receding at each desperate grasp and attempt at comprehension. What exactly is this strange language of numerical shapes, with its logical rule-sets and quirky laws of commutativity? It seems as though the more intensely this concept is scrutinised, the faster its superfluous layers of complexity are peeled away. But what of these hidden foundations? Are mathematical formulations the key to understanding the nature of reality? Can all this complexity around which we eke out a meagre existence really condense into a single set of equations? If not, what are the implications for, and likelihood of, a purely mathematical and unified ‘theory of everything’? These are the questions I would like to explore in this article.

The history of mathematics dates back to the dawn of civilisation. The earliest known examples of mathematical reasoning are believed to be some 70,000 years old. Geometric patterns and shapes on cave walls shed light on how our ancestors may have thought about abstract concepts. These primitive examples also include rudimentary attempts at measuring the passage of time through measured, systematic notches and depictions of celestial cycles. Humankind’s abilities progressed fairly steadily from this point, with the next major revolution in mathematics occurring around 3000-4000 BC.

Neolithic religious sites (such as Stonehenge, UK and Ġgantija, Malta) are thought to have made use of the growing body of mathematical knowledge and an increased awareness and appreciation of standardised observation. In a sense, these structures spawned appreciation of mathematical representation by encouraging measurement standardisation. For example, a static structure allows for patterns in constellations and deviations from the usual to stand out in prominence. Orion’s belt rises over stone X in January, progressing towards stone Y; what position will the heavens be in tomorrow?

Such observational practices allowed mathematics, through the medium of astronomy, to flourish and grow. Humanity began to recognise the cyclical rhythms of nature and use this standardised base to extrapolate and predict future events. It was not until around 2000 BC that mathematics grew into some semblance of the formalised language we use today. Spurred on by the great ancient civilisations of Greece and Egypt, mathematical knowledge advanced at a rapid pace. Formalised branches of maths emerged around this time, with construction projects inspiring minds to realise the underlying patterns and regularities in nature. Pythagoras’ theorem is but one prominent result of the inquiries of this era, as is Euclid’s work on geometry and number theory. Mathematics grew steadily, although hampered by the ‘Dark Ages’ (and the entrenched Ptolemaic model of the universe) and a subsequent waning interest in the scientific method.

Arabic scholars picked up the slack, contributing greatly to geometry, astronomy and number theory (the base-ten numeral system we use today reached the West through Arabic scholarship). Newton’s Principia was perhaps the first widespread instance of formalised applied mathematics (in the form of generalised equations; geometry had previously been employed for centuries in explaining planetary orbits) in the context of explaining and predicting physical events.

However, this brings us no closer to the true properties of mathematics. An examination of the historical developments in this field simply demonstrates that human ability began with rudimentary representations and has since progressed to a standardised, formalised institution. What, essentially, are its defining features? Building upon ideas proposed by Frayn (2006), our gift for maths arises from prehistoric attempts at grouping and classifying external objects. Humans (and lower forms of life) began with the primitive notion of ‘big’ versus ‘small’, that is, the comparison of groupings (threats, friends or food sources). Mathematics comprises our ability to make analogies, recognise patterns and predict future events; a specialised language with which to conduct this act of mental juggling. Perhaps due to increasing encephalic volume and neuronal connectivity (spurred on by genetic mutation and social evolution), humankind progressed beyond the simple comparison of size and required a way of mentally manipulating objects in the physical world. Counting a small herd of sheep is easy; there is a finger, toe or stick notch with which to capture the property of small and large. But what happens when the herd becomes unmanageably large, or you wish to compare groups of herds (or even different animals)? Here, the power of maths really comes into its own.

Referring back to the idea of social evolution acting as a catalyst for encephalic development, perhaps emerging social patterns also acted to improve mathematical ability. More specifically, the disparities in power that arose as individuals became elevated above their compatriots would have precipitated a need to keep track of assets and levy taxation. Here we observe the leap from the simple comparison of external group sizes (leaning heavily on primal instincts of fight/flight and satiation) to a more abstract, representative use of mathematics. Social elevation brings wealth and resources. Power over others necessitates some way of keeping track of these possessions (as the size of the wealth outgrows the managerial abilities of one person). Therefore we see not only a cognitive, but also a social, aspect to mathematical evolution and development.

It is this move away from the direct and superficial towards abstract universality that heralded a new destiny for mathematics. Philosophers and scientists alike wondered (and still wonder today) whether the patterns and descriptions of reality offered by maths are really getting to the crux of the matter. Can mathematics be the one tool with which a unified theory of everything is erected? Mathematical investigations are primarily concerned with underlying regularities in nature: patterns. However, it is the patterns themselves that are the fundamental essence of the universe; mathematics simply describes them and allows for their manipulation. The use of numerals is arbitrary; interchange them with letters or even squiggles in the dirt and the only thing that changes is the rule-set used to combine and manipulate them. Just as words convey meaning and grammatical laws are employed with conjunctions to connect (addition?) premises, numerals stand as labels and the symbols between them convey the operation to be performed. Put this way, mathematics is synonymous with language; it is just highly standardised and ‘to the point’.

However, this feature is a double-edged sword. The sterile nature of numerals (lacking such properties as metaphor, analogy and other semantic parlour tricks) leaves their interpretation open. A purely mathematical theory is only as good as its interpreter. Human thought processes descend upon formulae, picking them apart and extracting meaning like vultures squabbling over a carcass. Thus the question moves from one of validating mathematics as an objective tool to a more fundamental evaluation of human perception and interpretation. Are the patterns we observe in nature really some sort of objective reality, or are they simply figments of our over-active imagination: coincidences or ‘brain puns’ that just happen to align our thoughts with external phenomena?

If previous scientific progress is anything to go by, humanity is definitely onto something. As time progresses, our theories come closer and closer to unearthing the ‘true’ formulation of what underpins reality. Quantum physics may have dashed our hopes of ever knowing with complete certainty what a particle will do when poked and prodded, but at least we have a fairly good idea. Mathematics also seems to be the tool with which this lofty goal will be accomplished. Its ability to allow manipulation of the intangible is immense. The only concern is whether the increasing abstraction of physical theories is outpacing our ability to interpret and comprehend them. One only has to look at the plethora of alternative quantum interpretations to see evidence of this effect.

Recent developments in mathematics include the mapping of E8. From what can be discerned by a layman, E8 is a multi-dimensional geometric structure, the exact specifications of which had eluded mathematicians since the 19th century. It was only through a concerted effort involving hundreds of computers operating in parallel that its secrets were revealed. Even more exciting is the recent announcement of a potential ‘theory of everything’. The man behind this effort is not what could be called stereotypical; this ‘surfing scientist’ (Garrett Lisi) claims to have utilised the new-found knowledge of E8 to unite the four fundamental forces of nature under one banner. Whether his theory holds water is something that remains to be seen. The full paper can be obtained here.

This theory is not the easiest to understand; elegant but inherently complex. Intuitively, these are two very fitting characteristics of a potential theory of everything. The following explanation from Slashdot.org is perhaps the most easily grasped by the non-mathematically inclined.

“The 248-dimensions that he is talking about are not like the time-space dimensions, which particles move through. They describe the state of the particle itself – things like spin, charge, etc. The standard model has 6(?) properties. Some of the combinations of these properties are allowed, some are not. E8 is a very generalized mathematical model that has 248-properties, where only some of the combinations are allowed. What Garrett Lisi showed is that the rules that describe the allowed combinations of the 6 properties of the standard model show up in E8, and furthermore, the symmetries of gravity can be described with it as well.” Slashdot.org, (2007).

Therefore, E8 is a description of particle properties, not the ‘shape’ of some omnipresent, underlying pervasive force. The geometric characteristics of the shape outline the numbers of particles, their properties and the constraints over these properties (possible states, such as spin, charge etc). In effect, the geometric representation is an illustration of underlying patterns and relationships amongst elementary particles. The biggest strength of this theory is that it offers testable elements, and predictions of as yet undiscovered physical constituents of the universe.

It is surely an exciting time to live as these developments unfurl. At first glance, mathematics can be an incredibly complex undertaking, in terms of both comprehension and performance. Once the external layers of complexity are peeled away, we are left with its raw fundamental feature: a description of underlying universals. As with every human endeavour, the conclusions are open to interpretation; however, with practice and an open mind free from prejudicial tendencies, humanity may eventually crack the mysteries of the physical universe. After all, we are a component of this universe, so it makes intuitive (if not empirical) sense that our minds should be relatively objective and capable of unearthing a comprehensive ‘theory of everything’.

Quantum physics is a fascinating branch of modern science that has grown in popularity. Terms such as “the uncertainty principle”, “quantum entanglement” and “probability waves” have all become commonly used phrases in the scientific community. In the same way that Newtonian mechanics explains the world of the very big (the orbits of planets, falling apples), quantum physics aims to improve our understanding of the very small (sub-atomic scales). Once objects start interacting at these small scales, quantum mechanics takes over and produces some weird and wacky results. What Newton’s laws and (to an extent) Einstein’s special and general theories of relativity possess in common sense and comprehensibility, quantum physics more than makes up for in plain weirdness.

In the wacky world of the quanta, particles appear out of nothing and vanish again in an instant. Particles separated by vast distances show the characteristics of ‘entanglement’; that is, measurements taken on one particle instantaneously affect the state of its partner (seemingly violating the faster-than-light limitations of special relativity). Similarly, quantum particles exhibit tunnelling behaviour. Being probabilistic in nature, the wave function for any given particle spreads out as a function of time. Occasionally, this wave (or probability of existing at a particular position) penetrates classically insurmountable obstacles (that is, barriers where the energy required to cross is greater than the particle’s kinetic energy). In effect, the particle has ‘tunnelled’ through thin air.
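
To give a feel for the numbers, the standard order-of-magnitude estimate for tunnelling through a rectangular barrier is T ≈ exp(-2κL), with κ = √(2m(V − E))/ħ. The sketch below plugs in illustrative values (an electron facing a 1 eV, 1 nm barrier; neither figure comes from the text):

```python
import math

# Order-of-magnitude sketch of quantum tunnelling through a rectangular
# barrier, using the standard estimate T ~ exp(-2 * kappa * L),
# where kappa = sqrt(2 * m * (V - E)) / hbar.  Numbers are illustrative.
hbar = 1.054571817e-34     # J*s
m_e  = 9.109e-31           # electron mass, kg
eV   = 1.602e-19           # joules per electron-volt

energy_deficit = 1.0 * eV  # V - E, assumed 1 eV
L = 1e-9                   # barrier width, 1 nm (assumed)

kappa = math.sqrt(2 * m_e * energy_deficit) / hbar
T = math.exp(-2 * kappa * L)

print(f"Decay constant kappa     : {kappa:.2e} per metre")
print(f"Transmission probability : {T:.1e}")
# A few chances in 100,000 per attempt: tiny, but with enough attempts the
# electron does occasionally appear on the far side of the barrier.
```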

The probabilistic nature of quantum physics introduces some worrying implications for the nature of reality. In particular, the Copenhagen interpretation (one leading view on what the quantum calculations translate into in the macroscopic world) posits that an observation is needed to collapse the wave function, creating what we see as real. Taken literally, this means that nothing exists if we aren’t watching. The falling tree in a deserted forest really does make no sound, answering the old riddle succinctly. Erwin Schrödinger, one of the pioneers of quantum theory and the man behind the wave equation, disagreed with this interpretation most vehemently. Schrödinger’s cat was the fruit of his protest: a thought experiment highlighting the paradox that this interpretation brings.

Schrödinger’s thought experiment goes a little something like this. A cat, sealed off totally from the outside world and attached to a death device, will exist in a superposition of quantum states. Its probability wave will spread out over time, with the cat existing as both dead and alive at the same time. The hypothetical death device consists of a decaying radioactive source emitting particles that are detected via a Geiger counter. The probability wave spreads in this manner due to the underlying quantum randomness that governs radioactive decay (in alpha decay, for instance, tunnelling allows the particle to escape the barrier confining it to the nucleus). Thus, once a sufficient period of time has passed and the probability of the radioactive substance having emitted a particle (or not) is exactly 1/2, the cat is said to be both alive and dead.
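
The ‘exactly 1/2’ moment is simply one half-life of the source. A tiny sketch, assuming an illustrative half-life of 60 minutes:

```python
# The '50/50' moment in the cat experiment is just the half-life of the
# source: for exponential decay, the probability that at least one decay has
# occurred by time t is P(t) = 1 - 2**(-t / t_half).
def decay_probability(t, t_half):
    """Probability that the source has triggered the detector by time t."""
    return 1 - 2 ** (-t / t_half)

t_half = 60.0   # assumed half-life of the sample, in minutes (illustrative)

for t in (15, 30, 60, 120):
    print(f"t = {t:4d} min  ->  P(trigger) = {decay_probability(t, t_half):.3f}")
# At exactly one half-life the probability is 0.5 -- the point at which the
# Copenhagen reading says the cat is an equal mixture of 'alive' and 'dead'.
```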

Schrödinger was not advocating the truth of this experiment; rather, he used it to draw attention to the paradox and ‘can of worms’ that the Copenhagen interpretation had brought about. While the experiment may indeed be possible in the realm of quantum uncertainty, it certainly requires a definite leap of faith away from the common-sense interpretation of everyday occurrences. The major premises that this argument requires us to accept are that a) probability waves exist (that is, quantum particles exist in a superposition of possible states), b) an observer is necessary to collapse the wave function and bring about reality, and c) the observer must be intelligent (namely, that there is something inherently unique about conscious beings and their quantum-collapsing ability).

Firstly, I will take a minor detour and actually lend a snippet of support to the thought experiment. The old saying ‘a watched pot never boils’ seems to make no practical sense; however, a simple rephrasing to ‘a watched quantum pot never boils’ is closer to the truth. Researchers imitated the physical process of boiling on a quantum scale by bombarding a collection of beryllium atoms with microwaves. These incoming microwaves were absorbed by the atoms, booting them up from a low to a high energy level. The researchers knew that the time period for all the atoms to become excited was around 250 ms; therefore, by beaming a burst of laser light into their atomic midst, the number of atoms still in the lower ground state could be counted (excited atoms cannot absorb the incoming photons, so only the atoms in the lower, less excited state are affected). Initially they looked only at 125 ms, when around half the atoms should be excited. And they were! They then increased the number of observations, looking four times in 250 ms, and found something unexpected. With each successive observation, the atoms would ‘reset’ their energy levels; in effect, by increasing the number of observations, the atoms would never reach the higher state. The watched pot never boiled! (For further reading, search for “the quantum Zeno effect”.)
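
A toy model reproduces the flavour of this result. The sketch below assumes resonant driving that would fully excite an undisturbed atom in 250 ms, and treats each observation as a collapse onto ‘ground’ or ‘excited’; the numbers follow the standard textbook analysis of this kind of experiment rather than the original paper’s data:

```python
import numpy as np

# Toy model of the quantum Zeno experiment described above.  Left alone for
# T = 250 ms, the driving rotates each atom smoothly from ground to excited.
# Each observation collapses an atom onto 'ground' or 'excited', wiping out
# the in-between superposition and resetting the evolution.
def excited_fraction(n_checks):
    """Expected excited fraction after n equally spaced observations in time T."""
    theta = np.pi / n_checks          # rotation angle accumulated per interval
    p = 0.0                           # probability an atom is currently excited
    for _ in range(n_checks):
        # Between checks a ground-state atom is excited (and vice versa) with
        # probability sin^2(theta/2); the check then collapses the state.
        flip = np.sin(theta / 2) ** 2
        p = p * (1 - flip) + (1 - p) * flip
    return p

for n in (1, 2, 4, 8, 16, 64):
    print(f"{n:3d} checks in 250 ms -> excited fraction ~ {excited_fraction(n):.3f}")
# 1 check: 1.000 (the 'pot' boils); 4 checks: 0.375; 64 checks: ~0.037.
# The more often you look, the fewer atoms ever become excited.
```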

The explanation here directly supports one of the thought experiment’s main requirements: quantum probability waves exist. What the researchers believe happens is that the probability wave of each atom is artificially collapsed by the act of observing. When the atoms are free from observation, the probability wave is free to spread out, increasing the likelihood of observing all the atoms similarly excited. By looking multiple times, the wave is collapsed prematurely, preventing it from spreading out to its potential equilibrium state. In effect, the intent of the observer controls the outcome. If you want half the atoms to become excited, no problem, look at time t/2. You want the pot to never boil? OK, just keep watching continuously.

However, the second and third requirements, denoting the features of the observer doing the collapsing, are not so easy to support. Why are humans so arrogant as to believe that there is something inherently special about us, such that we are required for the universe to exist? It simply makes no sense that, outside of our measly existence, nothing is actually real until we look. The quantum constituents may be probabilistic, but the seething mass of particles that zip around and interact with each other must surely provide the means to collapse wave functions. A conscious observer is not needed for reality to have any objective meaning (what about prior to the evolution of conscious beings – are we all being observed by an omnipotent being which makes us all real?). The universe itself must surely be doing the observing and the collapsing, through its myriad of interacting particles.

I believe the main problem people suffer from when discussing quantum mechanics is that they try to relate it to pre-existing notions of reality. They also place the importance of human consciousness above the fact that the universe will continue to exist regardless of whether we are around to watch. This deluded geocentrism has long plagued humanity, stalling scientific progress throughout the ages (Aristotle et al). The implications of quantum mechanics for reality still hold many mysteries. If watching a quantum pot causes it to freeze in its initial state, what does this mean for reality and intent (and also free will)? If quantum processes control the operation of minds, perhaps they will also prove to be the mysterious bridge that spans the Cartesian mind/body divide. Perhaps the secret to consciousness is the uncertainty introduced by the quantum reality that underlies every physical process.