
It is not often that we think of events as isolated incidents separated by a vast divide in both physical and virtual distance. In our day-to-day existence, with near-instantaneous methods of communication and a pervasively global information network, significant events are easily taken note of. But when the distance separating the event from the recipient exceeds our Earthly bounds, an interesting phenomenon occurs. Even on the scale of the solar system, light from the Sun takes approximately eight minutes to reach our sunny skies here on Earth. If the Sun were somehow to explode (a true supernova is beyond a star of its mass, but suppose it for the sake of argument), we would have no knowledge of the event until some eight minutes after it actually occurred. While not completely revolutionary, this concept has deeper ramifications if the distances are increased again to a universal scale.
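
As a quick sanity check on that figure, the delay is just the mean Earth-Sun distance divided by the speed of light; the short sketch below (plain Python, with the constants written out, purely by way of illustration) reproduces the roughly eight-minute lag.

```python
# Back-of-the-envelope check of the "eight minutes" figure.
AU_IN_METRES = 1.495978707e11        # mean Earth-Sun distance (one astronomical unit)
SPEED_OF_LIGHT = 2.99792458e8        # metres per second, in vacuum

delay_seconds = AU_IN_METRES / SPEED_OF_LIGHT
print(f"Sun-to-Earth light travel time: {delay_seconds / 60:.1f} minutes")  # ~8.3 minutes
```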

While we are accustomed to thinking of light as travelling at a fixed speed limit, it is not often that one thinks of gravity as an influence that requires time to cross intergalactic distances. But indeed it does. Gravitational waves propagate at the speed of light; slight asymmetries in, or the acceleration of, incredibly massive objects (e.g. spinning neutron stars or binary star systems) act as the catalyst for these disturbances. Gravitational waves pass through matter essentially unimpeded, yet they warp the fabric of spacetime as they go, contracting and expanding the distances between objects as the wave passes through a particular locality.

Here on Earth, information is similarly transferred along the Internet and other communication pathways at close to the speed of light. Delays arise only when something interferes: heavy traffic, severed pathways, technical faults. As the distances involved are trivially small compared with the speed of transfer, communication between two points is practically instantaneous. But what if we slow the speed of travel down? Imagine an event occurring in an isolated region of desert, where the message can only be conveyed by a physical carrier, thus mimicking the vast distances involved in an interstellar environment. Observer B, waiting to receive the message, has no knowledge of what has happened until that message arrives.

Revisiting the scenario of the Sun exploding, it seems strange that mammoth events in the Universe could occur without our immediate knowledge. It is strangely reminiscent of the old philosophical riddle: does a falling tree make a sound if no one is around to hear it? Cosmic events are particularly relevant in this respect, as they most certainly do have immense ramifications (‘making a noise’). If the Universe suddenly collapsed at its periphery (unlikely, but consider it for the purposes of this exercise), our tiny speck of a planet would not know about it for (possibly) many millions of years. It is even possible that parts of the distant Universe have already ‘ceased to exist’, the fabric of time and space from the epicentre of this great event unravelling like a tidal wave of doom. What does this mean for a concept of universal time? Surely it cannot be dependent upon physical reality, for if it were, such a catastrophic event would signal the cessation of time across the entire cosmos at once. Rather, it would be a gradual process that rushes forth and eliminates regions of both space and time sequentially. The final remaining island of ‘reality’ would thus act as a steadily diminishing safe haven for the remaining inhabitants of the cosmos. Such an event would certainly make an interesting science-fiction story!

Einstein became intimately aware of this universal fact of locality, making it a central tenet of his grand Theory of Relativity. He even offered comments regarding this ‘principle of locality’ (which became a recognised physical principle):

“The following idea characterises the relative independence of objects far apart in space (A and B): external influence on A has no direct influence on B; this is known as the Principle of Local Action, which is used consistently only in field theory.”

A horribly simplified description of relativity states that what I experience is not necessarily the same as what you will experience. Depending on how fast you are travelling and in what direction relative to me (taking into account my own speed and direction), our experiences of time and space will differ; quite markedly so if we approach the speed of light. Even the flow of time is affected: observers aboard objects travelling at high velocities age more slowly than their stay-at-home colleagues. It would be intriguing to experience this phenomenon first hand to see whether the difference is psychologically detectable, although relativity itself suggests it would not be, since every process on board, thought included, slows by the same factor. Perhaps, viewed from outside, it would appear as an exaggerated and inverted version of the overly clichéd ‘time flies when you’re having fun’.

Locality in Einstein’s sense is more about the immediate space surrounding objects than about causes and their effects (although the two are undoubtedly interrelated). Planetary bodies, for instance, affect their immediate surroundings (their locality) by warping the fabric of space. While the metaphor is mainly for the benefit of visualisation rather than a description of the actual physical process, orbiting bodies are often described as locked into a perpetual circuit, similar to the way a ball bearing circles the mouth of a funnel. Reimagining Einstein’s notions of relativity and locality in terms of causality (and the transmission of information between two points), the speed of light and gravity form the main policing forces managing events in the Universe. Information can travel at only some 300,000 km/s between points, and the presence of gravity can modify how that information is received (large masses can warp transmissions, as in gravitational lensing, and also influence how physical structures interact).

Quantum theory adds to the fray by further complicating matters of locality. Quantum entanglement, a phenomenon whereby a measurement at Point A appears to instantaneously influence Point B, seems to circumvent the principle of locality. Two points in space dance to the same tune, irrespective of the distances involved. Another quantum phenomenon that appears to ignore local space is the collapse of the wave function. While it is currently impossible to affirm whether this ‘wave’ actually exists, or what it means for the nature of reality (e.g. the many-worlds versus Copenhagen interpretations), if it is taken as part of our reality then the act of collapse is surely a non-local phenomenon. There is no detectable delay in producing observable action. A kicked football does not pause while the wave function calculates probabilities and decides upon an appropriate trajectory. Likewise, individual photons seem to just ‘know’ where to go, instantly forming the familiar interference pattern behind a double-slit screen. The Universe at large simply arranges its particles in anticipation of these future events instantaneously, temptingly inviting notions of omniscience on its behalf.

Fortunately, our old-fashioned notions of cause and effect are preserved by quantum uncertainties. To commit the atrocious act of personifying the inanimate, it is as though Nature, through the laws of physics, protects our fragile Universe and our conceptions of it by limiting the amount of useful information we can extract from such a system. Measurement uncertainty acts as the ubiquitous guardian of information transfer, preventing instantaneous communication between two points in space. This ‘safety barrier’ stops us extracting useful messages from entangled particles without a traditional channel alongside (the measurement results obtained at Point A must still be sent to Point B at light speed or slower before the correlations can be put to use). And when we observe particles at the quantum level (spin, charge and so on), we disturb the quantum system irrevocably. The mere act of observing therefore prevents us from using such a system as a means of instantaneous communication.
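
To make the point concrete, here is a minimal sketch (my own illustration in Python/NumPy, using the standard density-matrix formalism rather than anything from the original post) showing that Bob's local statistics on his half of an entangled pair are identical whether or not Alice has measured hers; without her classically transmitted results, he sees only noise.

```python
import numpy as np

# Bell state |Phi+> = (|00> + |11>)/sqrt(2), as a 4-component vector
phi_plus = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
rho = np.outer(phi_plus, phi_plus.conj())            # joint density matrix

def bobs_state(rho_joint):
    """Partial trace over Alice's qubit: everything Bob alone can measure."""
    r = rho_joint.reshape(2, 2, 2, 2)                 # indices: a, b, a', b'
    return np.einsum('abad->bd', r)                   # sum over a = a'

# Bob's statistics before Alice does anything
print(np.round(bobs_state(rho), 3))                   # -> I/2, maximally mixed

# Alice now measures her qubit; averaging over her outcomes (which Bob does
# not know) gives the state Bob must use to describe his own qubit
P0 = np.diag([1, 0]).astype(complex)
P1 = np.diag([0, 1]).astype(complex)
I2 = np.eye(2, dtype=complex)
rho_after = sum(np.kron(P, I2) @ rho @ np.kron(P, I2) for P in (P0, P1))

print(np.round(bobs_state(rho_after), 3))              # -> still I/2: no signal sent
```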

Causality is still a feature of the Universe that needs in-depth explanation. At a higher level sits the tireless battle between determinism and uncertainty (free will). If every event is predetermined, fixed by the collisions of atoms at the instant of the Big Bang, causality (and locality) is a moot point; good news for reductionists who hope to uncover a fundamental ‘theory of everything’ with equations to predict any outcome. If, on the other hand, the future really is uncertain, we certainly have a long way to go before an adequate explanation of how causality operates is proposed. Whichever camp one pledges allegiance to, local events are still isolated events whose effects travel at a fixed speed. One wonders which is the more frustrating prospect: not having knowledge of an important albeit distant event, or realising that whatever happens is inevitable. The Universe may already have ended; but should we really care?

The supreme scale and vast expanse of the Universe is awe-inspiring. Contemplation of its grandeur has been described as a kind of scientific spiritualism, broadening the mind’s horizons in a vain attempt to grasp our place amongst such awesome magnitude. Containing some 200 billion stars (or 400 billion, depending on whom you ask), our relatively humble home in the Milky Way is but one of billions of such galaxies, each home to countless billions more stars. Likewise, our small blue dot of a planet is but one of possibly billions of similar planets spread throughout the Universe.

To conclude that we are alone in such a vast expanse of space seems not only unlikely, but irrational. For eons, human egocentrism has blinkered ideology and spirituality. Our belief systems place humanity upon a pedestal, implying that we are alone and incredibly unique. The most salient example is Ptolemy’s Almagest, with its Earth-centred view of the Universe.

While we may be unique, the tendency of belief systems to invest our continued existence with cosmic meaning leaves no place for humility. The result of this human-focused Universe is one where our race arrogantly fosters its own importance. Consequently, the majority of the populace has little or no interest in cosmic contemplation, nor any appreciation of the truly objective realisation that Earth and our intelligent civilisation do not by themselves define the cosmos. The Universe will continue to exist as it always has, whether we are around or not.

Yet to behave otherwise might well spell doom for our civilisation, and it is easy to see why humans have placed so much importance upon themselves in the grand scheme of things. The Earth is home to just one intelligent species, namely us. If the Neanderthals had survived, it would surely have been a different story (in terms of the composition of social groups): groups tend to unite against common foes, so a planet with two or more intelligent species would draw its distinctions less within each group and more between them. Given the situation we find ourselves in as the undisputed lords of this planet, it is no wonder we attach such special significance to ourselves as a species (and to discrediting the idea that we are not alone in the Universe).

It seems as if humanity needs its self-esteem bolstered when faced with the harsh reality that our existence is trivial compared with the likelihood of other forms of life and the grandeur of the Universe at large. Terror Management Theory (TMT) is but one psychological hypothesis as to why this may be the case. Its main postulate is that our mortality is the most salient factor throughout life. A tension is created because, on the one hand, death is inevitable, while on the other we are intimately aware of its approach and desperately try to minimise its effects on our lives. Thus it is proposed that humanity attempts to minimise the terror associated with impending death through cultural and spiritual beliefs (an afterlife, the notion of mind/body duality in which the soul continues on after death). TMT puts an additional spin on the situation by suggesting that cultural world-views arise partly to manage this fear, and that people will protect those values at all costs (reaffirming cultural beliefs by persecuting the views of others reduces the tension produced by death).

While the empirical validity of TMT is contested, human belief systems do express an arrogance that prevents a more holistic world-view from emerging. The Ptolemaic model dominated scientific inquiry during the Middle Ages, most likely owing to its adoption by the Church. Having the Earth at the centre of the Universe coincided nicely with the theological belief that humanity is the sole creation of God, and it may also have improved the ‘scientific’ standing of theology, which was now apparently supported by theory. What the scholars of this period failed to apply was Occam’s Razor: prefer the simpler theory when it explains the same observations. The overly complicated Ptolemaic system could account for the motions of the planets, but only at the expense of simplicity, with epicycle piled upon epicycle to explain their apparently anomalous paths.

Modern cosmology has thankfully overthrown such models; the ideology, however, remains. Perhaps hampered and weighed down by daily activities, people simply do not have the time to consider an existence beyond their own immediate experience. From an evolutionary perspective, an individual would have risked death if thought processes were spent on external contemplation rather than on the immediate, selfish satisfaction of biological needs. Now that society has progressed to a point where time can be spent on intellectual pursuits, it makes sense that outmoded beliefs regarding our standing in the Universe should be rectified.

But just how likely is the possibility of life elsewhere? Science fiction has long been an inspiration in this regard, its tales of Martian invaders striking terror into generations of children. The first directed empirical venture in this area came with the SETI conference at Green Bank, West Virginia in 1961. At this conference, not only were the efforts of radio astronomers to detect foreign signals discussed in detail, but one particular formulation was also put forward. Known as the Drake Equation, it aimed to quantify, and humanise, the very large numbers that are thrown about when discussing the probability of life in the galaxy.

Basically, the equation takes a series of factors thought to contribute to the likelihood of intelligent life evolving, multiplies them together and outputs a single number: the projected number of intelligent civilisations in the galaxy. Of course, the majority of the values used are little more than educated guesses. Even so, with moderately conservative inputs the result comes out above 1. Promising stuff.
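
For readers who want to play with the numbers themselves, here is a minimal sketch of the calculation in Python; the factor names follow the usual presentation of the equation, and the sample values are purely illustrative guesses on my part, not measurements.

```python
def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """N = R* x fp x ne x fl x fi x fc x L: the estimated number of
    detectable civilisations in the galaxy."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

# Illustrative placeholder values only (every one of these is debated):
N = drake(R_star=1.0,   # new stars formed per year in the galaxy
          f_p=0.5,      # fraction of stars with planetary systems
          n_e=2.0,      # potentially habitable planets per such system
          f_l=1.0,      # fraction of those on which life appears
          f_i=0.1,      # fraction of those that evolve intelligence
          f_c=0.1,      # fraction that produce detectable signals
          L=1000)       # years a civilisation keeps transmitting
print(N)                # -> 10.0 with these guesses; tweak and see
```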

Fortunately, with each astronomical advance these numbers are further refined, giving a (hopefully) more accurate picture of reality. The SETI effort may even have found its first extra-terrestrial signal in 1977. Dubbed the ‘Wow!’ signal (after the researcher’s margin comment on the printout sheet), this burst of activity bore all the hallmarks of artificial origin. Sadly, the result has never been replicated despite numerous attempts.

All hope is not lost. SETI has received a revitalising injection of funds from none other than Microsoft co-founder Paul Allen, and benefits from the immensely popular SETI@home initiative, which uses distributed computing to sort through the copious data generated. Opponents of SETI fall into two main camps: those who believe it is a waste of funds better spent on more Earthly concerns (a valid point) and those who perceive SETI as a danger to our continued existence. The latter concern is conceivable, albeit unlikely. The counter-claim is that if such a civilisation did exist and were sufficiently advanced to travel intergalactic distances, the last thing on its mind would be the annihilation of our insignificant species.

The notion of Star Trek’s ‘Prime Directive’ seems the most likely situation to have unfolded thus far. Extra-terrestrial civilisations would most likely adopt a policy of non-interference with our meagre planet, perhaps actively disguising their transmissions in an attempt to hide their activity and prevent ‘cultural contamination’.

Now all we need is for the faster-than-light barrier to be crossed and the Vulcans will welcome us into the galactic society.

Teleportation is no longer banished to the realm of science fiction. What was once considered a physical impossibility is now achievable, at least for the quantum states of individual particles. While the methods involved are still in their infancy (only the states of single particles have so far been teleported), we can at least begin to think about the possibilities on the basis of plausibility. Specifically, what are the implications for personal identity if this method of transportation ever becomes possible on a human scale? Destroying an individual atom by atom and reconstructing them at an alternate location could introduce problems for consciousness. Is this the same person, or simply an identical twin with its own thoughts, feelings and desires? These are the questions I would like to discuss in this article.

Biologically, we lose much of our bodies several times over during one human lifetime. Cells and tissues are replaced continually, with little thought given to the implications for self-identity. It is a phenomenon that is often overlooked, especially in relation to recent empirical developments in quantum teleportation. If we are biologically replaced with such regularity, does this imply that our sense of self is likewise dynamic in nature and constantly evolving? There are reasonable arguments for both sides of this debate: maturity and daily experience certainly produce a changing mental environment. However, one wonders whether this has more to do with innate processes such as the transfer, recollection and modification of information than with the purely biological characteristics of individual cells (their division and rejuvenation).

Thus it could be argued that identity is largely a conscious and directed process (in terms of seeking out information and creating internal schemas of identity). This does not totally rule out a contribution from changes to biological structure. Perhaps the effects are more subtle, modifying our identities in ways that facilitate maturity, or even mental illness if the duplication process is disturbed. Cell mutation (the growth of a neurological tumour, say) is one example whereby a malfunctioning biological process can result in direct and often drastic changes to identity.

However, I believe it is safe to assume that “normal” tissue-regenerative processes do not result in any measurable changes to identity. What makes teleportation so different? Quantum teleportation has been used to teleport the states of photons from one location to another and, more recently, of particles with mass. The process is decidedly less romantic than science-fiction authors would have us believe: classical transmission of information is still required, and a receiving station must already be established at the desired destination. What this means is that matter transportation, à la the ‘Star Trek’ transporter, is still very much an unforeseeable fiction. In addition, something as complex as the human body would require incredible computing power to scan in sufficient detail, another limiting factor in its practicality. Fortunately, there are nearer-term uses for the technology, such as in the fledgling field of quantum computing.

The process works around the limitations of the Uncertainty Principle (which states that the properties of a quantum system can never all be known in exact detail) by exploiting the “Einstein-Podolsky-Rosen” (EPR) effect. Einstein had real issues with quantum mechanics; he did not like it at all, famously dismissing entanglement as ‘spooky action at a distance’. The EPR paper was aimed at proving that entangled pairs of quantum particles rendered the theory incomplete, if not implausible. John Stewart Bell turned the Einstein proposition on its head when he showed that entangled particles really do behave this way: the measurement outcomes of the two particles are correlated too strongly to be explained by chance, or by any local hidden influence. The fact that entanglement does not violate the no-communication theorem is good news for our assumptions about reality, but more bad news for teleportation fans: information about the quantum state of the teleportee must still be transmitted by conventional means for reassembly at the other end.

Quantum teleportation begins with a pair of entangled particles, one dispatched to the sending station A and one to the receiving station B. The particle to be teleported is then scanned jointly with entangled particle 1 at A; care is taken not to extract too much, since measurement distorts the original (the harder you look, the more you disturb). This partial, ‘scanned-out’ information is transmitted to B at no faster than light speed. Entanglement ensures that the remaining, unscanned portion of the state is reflected in entangled particle 2 at B. By combining the classically transmitted scan results with entangled particle 2, the receiver can reconstruct the state of the original particle at the distant location B. While the exact mechanism is beyond the scope of a philosophical discussion, the essence is that the information carried by the entangled pair and the information sent by conventional channels together suffice to rebuild the original, which is necessarily destroyed at A in the process.
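
For the technically curious, the standard single-qubit protocol is small enough to simulate directly. The sketch below is my own illustration in Python/NumPy (not drawn from the original post): it teleports an arbitrary qubit state from Alice to Bob using one entangled pair plus two classical bits; the chosen state and random seed are arbitrary demonstration values.

```python
import numpy as np

rng = np.random.default_rng(1)

# Single-qubit building blocks
zero = np.array([1, 0], dtype=complex)
I = np.eye(2, dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
P = [np.diag([1, 0]).astype(complex), np.diag([0, 1]).astype(complex)]  # |0><0|, |1><1|

def op(a, b, c):
    """Three-qubit operator a (x) b (x) c."""
    return np.kron(np.kron(a, b), c)

def cnot(control, target):
    """Controlled-NOT on the 3-qubit register (qubit 0 is leftmost)."""
    ops0 = [I, I, I]; ops0[control] = P[0]
    ops1 = [I, I, I]; ops1[control] = P[1]; ops1[target] = X
    return op(*ops0) + op(*ops1)

# The (arbitrary, illustrative) state Alice wants to send: 0.6|0> + 0.8|1>
psi = np.array([0.6, 0.8], dtype=complex)

# Register: qubit 0 = Alice's unknown state, qubit 1 = Alice's half of the
# entangled pair, qubit 2 = Bob's half
state = np.kron(np.kron(psi, zero), zero)

# Step 1: create the shared EPR (Bell) pair between qubits 1 and 2
state = cnot(1, 2) @ (op(I, H, I) @ state)

# Step 2: Alice's joint (Bell-basis) measurement on qubits 0 and 1
state = op(H, I, I) @ (cnot(0, 1) @ state)
outcomes = [(a, b) for a in (0, 1) for b in (0, 1)]
probs = [np.linalg.norm(op(P[a], P[b], I) @ state) ** 2 for a, b in outcomes]
m0, m1 = outcomes[rng.choice(4, p=probs)]

# Project onto the observed outcome; (m0, m1) are the two classical bits that
# must travel to Bob by ordinary, slower-than-light means
state = op(P[m0], P[m1], I) @ state
state /= np.linalg.norm(state)

# Step 3: Bob's correction, conditioned on the classical message
fix = np.linalg.matrix_power(Z, m0) @ np.linalg.matrix_power(X, m1)
state = op(I, I, fix) @ state

# Bob's qubit now carries the original amplitudes
bob = state.reshape(2, 2, 2)[m0, m1, :]
print("teleported:", np.round(bob, 3), " original:", psi)
```

Whatever measurement outcome the random draw produces, the printed amplitudes match the original state, and at no point does the unknown state travel without the accompanying classical message.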

Casting practicality aside for the sake of philosophical discussion, if such a process became possible for a being as complex as a human, what would the implications be for consciousness and identity? Common sense asks: if an exact replica can be produced, how is it in any way different from the original? One would simply ‘wake up’ at the new location in the same body and mind one left behind. Those who subscribe to a Cartesian view of separated body and mind would look upon teleportation with abhorrent revulsion: surely along the way we are losing a part of what makes us uniquely human, some intangible soul or essence of mind that cannot be reproduced? This leads to similar thought experiments. What if another being somewhere in the Universe were born with exactly the same mental characteristics as yourself? Would this predispose the two of you to some underlying phenomenological connection? Perhaps this is supported by anecdotal reports about identical twins, who are thought to share a common bond and at times demonstrate almost telepathic abilities, although the mechanism is probably no more mystical than a familiar acquaintance predicting how you would react in a given situation, or similar brain structures predisposing twins to ‘higher than average’ mental convergence.

Quantum teleportation of conscious beings also raises serious moral questions. Is it murder to deconstruct the individual at point A, or is this initial crime nullified once the reassembly is completed? Is it still immoral if someone else appears at the receiver owing to error or quantum fluctuation? Others may argue that it is no different from conventional modes of transport: human error should be dealt with as such (a necessary condition for the label of crime or immorality), and naturally occurring failures interpreted as nothing more than random events.

While it is doubtful that we will ever see teleportation on a macro scale, we should remain mindful of the philosophical and practical implications of emerging technologies. The scientific enterprise is occasionally blind to these factors when innovations are announced to the general public. While it is important that such research is allowed to continue, the rate at which new technologies appear is cause for alarm if they impinge upon human rights and the preservation of individuality. There has never been a more pressing time for philosophers to think through the issues and offer their wisdom to the world.

The essence of mathematics cannot be easily discerned. This intellectual pursuit lurks behind a murky haze of complexity. Those fortunate enough to have natural ability in this field can manipulate algebraic equations as easily as the spoken word. For the vast majority of the population, however, mathematical expertise is elusive, receding at each desperate grasp at comprehension. What exactly is this strange language of numerical shapes, with its logical rule-sets and quirky laws of commutativity? It seems as though the more intensely the concept is scrutinised, the faster its superfluous layers of complexity peel away. But what of the hidden foundations beneath? Are mathematical formulations the key to understanding the nature of reality? Can all this complexity, around which we eke out a meagre existence, really condense into a single set of equations? If not, what are the implications for, and the likelihood of, a purely mathematical and unified ‘theory of everything’? These are the questions I would like to explore in this article.

The history of mathematics dates back to the dawn of humanity. The earliest known examples of mathematical reasoning are believed to be roughly 70,000 years old: geometric patterns and shapes on cave walls shed light on how our ancestors may have thought about abstract concepts. These primitive examples also include rudimentary attempts at measuring the passage of time through systematic notches and depictions of celestial cycles. Humankind’s abilities progressed fairly steadily from this point, with the next major revolution in mathematics occurring around 3000-4000 BC.

Neolithic religious sites (such as Stonehenge, UK and Ġgantija, Malta) are thought to have made use of the growing body of mathematical knowledge and an increased awareness and appreciation of standardised observation. In a sense, these structures spawned appreciation of mathematical representation by encouraging measurement standardisation. For example, a static structure allows for patterns in constellations and deviations from the usual to stand out in prominence. Orion’s belt rises over stone X in January, progressing towards stone Y; what position will the heavens be in tomorrow?

Such observational practices allowed mathematics, through the medium of astronomy, to foster and grow. Humanity began to recognise the cyclical rhythms of nature and use this standardised base to extrapolate and predict future events. It was not until around 2000 BC that mathematics grew into some semblance of the formalised language we use today. Spurred on by the great civilisations of Egypt and Mesopotamia, and later Greece, mathematical knowledge advanced at a rapid pace. Formalised branches of maths emerged, with construction projects inspiring minds to recognise the underlying patterns and regularities in nature. Pythagoras’ Theorem is one prominent result of these inquiries, as is Euclid’s work on geometry and number theory. Mathematics grew steadily thereafter, although hampered by the ‘Dark Ages’ (the era of the Ptolemaic model of the universe) and a subsequent waning interest in the scientific method.

Arabic scholars picked up this slack, contributing greatly to geometry, astronomy and number theory (the base-ten numeral system we use today reached the West through Arabic sources). Newton’s Principia was perhaps the first widespread instance of formalised applied mathematics, in the form of generalised laws, being used to explain and predict physical events; geometry had, of course, been employed for centuries in describing planetary orbits.

However, this brings us no closer to the true properties of mathematics. An examination of the historical developments in this field simply demonstrates that human ability began with rudimentary representations and has since progressed to a standardised, formalised institution. What, essentially, are its defining features? Building upon ideas proposed by Frayn (2006), one can argue that our gift for maths arises from prehistoric attempts at grouping and classifying external objects. Humans (and simpler forms of life) began with the primitive notion of ‘big’ versus ‘small’, that is, the comparison of groupings (threats, friends or food sources). Mathematics comprises our ability to make analogies, recognise patterns and predict future events: a specialised language with which to conduct this act of mental juggling. Perhaps owing to increasing brain volume and neuronal connectivity (spurred on by genetic mutation and social evolution), humankind progressed beyond the simple comparison of size and required a way of mentally manipulating objects in the physical world. Counting a small herd of sheep is easy; there is a finger, toe or stick-notch with which to tally each animal. But what happens when the herd becomes unmanageably large, or you wish to compare groups of herds (or even different animals)? Here the power of maths really comes into its own.

Referring back to the idea of social evolution acting as a catalyst for brain development, perhaps emerging social structures also acted to improve mathematical ability. More specifically, the disparities in power that arise as individuals become elevated above their compatriots would have precipitated a need to keep track of assets and to levy taxation. Here we observe the leap from the simple comparison of external group sizes (leaning heavily on primal instincts of fight-or-flight and satiation) to a more abstract, representational use of mathematics. Social elevation brings wealth and resources, and power over others necessitates some way of keeping track of these possessions (as the size of the wealth outgrows the managerial abilities of one person). We therefore see not only a cognitive but also a social aspect to the evolution and development of mathematics.

It is this move away from the direct and superficial towards abstract universality that heralded a new destiny for mathematics. Philosophers and scientists alike wondered (and still wonder today) whether the patterns and descriptions of reality offered by maths really get to the crux of the matter. Can mathematics be the one tool with which a unified theory of everything is erected? Mathematical investigations are primarily concerned with the underlying regularities in nature: patterns. But it is the patterns themselves that are the fundamental essence of the universe; mathematics simply describes them and allows for their manipulation. The use of numerals is arbitrary; interchange them with letters or even squiggles in the dirt and the only thing that changes is the rule-set for combining and manipulating them. Just as words convey meaning and grammatical laws employ conjunctions to connect (add?) premises, numerals stand as labels and the symbols between them convey the operations to be performed. Put this way, mathematics is synonymous with language; it is just highly standardised and ‘to the point’.

However, this feature is a double-edged sword. The sterile nature of numerals (lacking such properties as metaphor, analogy and other semantic parlour tricks) leaves their interpretation open. A purely mathematical theory is only as good as its interpreter. Human thought descends upon formulae, picking them apart like vultures squabbling over a carcass. Thus the question shifts from validating mathematics as an objective tool to a more fundamental evaluation of human perception and interpretation. Are the patterns we observe in nature part of some objective reality, or are they simply figments of our over-active imaginations: coincidences, or ‘brain puns’ that just happen to align our thoughts with external phenomena?

If previous scientific progress is anything to go by, humanity is definitely onto something. As time progresses, our theories come closer and closer to unearthing the ‘true’ formulation of what underpins reality. Quantum physics may have dashed our hopes of ever knowing with complete certainty what a particle will do when poked and prodded, but at least we have a fairly good idea. Mathematics also seems to be the tool with which this lofty goal will be accomplished; its ability to allow manipulation of the intangible is immense. The only concern is whether the increasing abstraction of physical theories is outpacing our ability to interpret and comprehend them. One only has to look at the plethora of competing quantum interpretations to see evidence of this effect.

Recent developments in mathematics include the mapping of E8. So far as a layman can discern, E8 is a multi-dimensional geometric structure whose exact specification had eluded mathematicians since the 19th century; it was only through a concerted effort involving hundreds of computers operating in parallel that its secrets were revealed. Even more exciting is the recent announcement of a potential ‘theory of everything’. The mind behind this effort is not what you would call a stereotypical academic; this ‘surfing physicist’ claims to have used the new-found knowledge of E8 to unite the four fundamental forces of nature under one banner. Whether the idea holds water remains to be seen. The full paper can be obtained here.

This theory is not the easiest to understand; elegant but inherently complex. Intuitively, two very fitting characteristics of a potential theory of everything. The following explanation from Slashdot.org is perhaps the most easily grasped for the non-mathematically inclined.

“The 248-dimensions that he is talking about are not like the time-space dimensions, which particles move through. They describe the state of the particle itself – things like spin, charge, etc. The standard model has 6(?) properties. Some of the combinations of these properties are allowed, some are not. E8 is a very generalized mathematical model that has 248-properties, where only some of the combinations are allowed. What Garrett Lisi showed is that the rules that describe the allowed combinations of the 6 properties of the standard model show up in E8, and furthermore, the symmetries of gravity can be described with it as well.” Slashdot.org, (2007).

Therefore, E8 is a description of particle properties, not the ‘shape’ of some omnipresent, underlying pervasive force. The geometric characteristics of the shape outline the numbers of particles, their properties and the constraints over these properties (possible states, such as spin, charge etc). In effect, the geometric representation is an illustration of underlying patterns and relationships amongst elementary particles. The biggest strength of this theory is that it offers testable elements, and predictions of as yet undiscovered physical constituents of the universe.

It is surely an exciting time to be alive as these developments unfold. At first glance, mathematics can be an incredibly complex undertaking, in terms of both comprehension and performance. Once the external layers of complexity are peeled away, however, we are left with its raw fundamental feature: a description of underlying universals. As with every human endeavour, the conclusions are open to interpretation; yet with practice, and an open mind free from prejudice, humanity may eventually crack the mysteries of the physical universe. After all, we are a component of this universe, so it makes intuitive (if not empirical) sense that our minds should be relatively objective and capable of unearthing a comprehensive ‘theory of everything’.