The period from roughly 470 to 1000 AD encompasses what is now popularly referred to as the medieval ‘dark age’. During this time, human civilisation in the West saw a stagnation not only of culture but of society itself. It was a time of great persecution, societal uncertainty and religious fanaticism. It is hard not to notice similarities between this tumultuous period and the one we experience today. Some have even proposed that we stand on the brink of a new era, one set to repeat the stagnation of the medieval dark ages, albeit with a more modern flavour. Current world events seem to support such a conclusion. If we are at such a point in the history of modern civilisation, what form would a ‘new dark age’ take? What factors are conspiring against humanity to usher in a period of uncertainty and danger? Do dark ages occur in predictable cycles, and if so, should we embrace rather than fear this possible development? These are the questions I would like to discuss in this article.

Historically, the dark ages were only labelled so in retrospect, by scholars reflecting upon the past through humanistic principles. It is with such observations in mind that we now pass judgement on the society of today. Even so, humanity struggles to form an objective opinion, for it can be argued that every great civilisation wishes to live within a defining period of history. Keeping this caveat in mind, it is nevertheless tempting to suggest that we are heading towards a defining societal moment. A great tension seems to be brewing: on the one hand, a widening rift between religion and science, with sharply drawn battle lines and unflinching ideologies; on the other, mounting evidence that the planet is on the verge of environmental collapse. It may only be a matter of time before these factors push the dynamic system that is modern society past its limits.

Modern society seems to have an unhealthy obsession with routine and predictability. The uncertainty that these potential disasters foster acts to challenge this obsession, to the point that we seek reassurance. Problems arise when this reassurance takes the form of fanatical (and untenable) religious, philosophical or empirical belief structures. Such beliefs stand out like a signalling lighthouse, the search beam symbolising stability and certainty in stark contrast to the murky, dangerous waters of modern society. But just as the lighthouse warns of the danger of rocks, so too should the pillar of belief warn us of corruption. For it is, sadly, intrinsic to human nature to take advantage of every situation (to guarantee one’s own survival through power and influence), and in combination with personality (a propensity towards exploiting others), beliefs can be twisted to ensure personal gain or the elimination of opposition. Such a phenomenon could well be acting today. Religion provides a convenient system for relieving mental anguish and distress at the state of the world (the reassurance that a higher power is in control). So too does science, which decries the fallacies of spiritual belief while encouraging a similarly blind ‘faith’ in a technological solution to humanity’s problems. In that respect, empiricism and religion are quite similar (much to their mutual chagrin).

In such a system, destabilisation is inevitable: a handful of belief structures emerge from the chaos as dominant and compete for control. Progressively extreme positions are adopted (spurred on by manipulators exploiting the situation for personal gain), which in turn sets up the participants for escalating levels of conflict. Our loyalty to the group that aims to secure its survival ultimately (and ironically) leads to the demise of all involved. It is our lack of tolerance and our subservience to evolutionary mechanisms, coupled with a lack of insight into both our internal nature and our social interactions, that precipitate such a conclusion.

This brings the article to its midpoint, and the suggestion that three main factors are responsible for the development of a new dark age.

Human belief systems

As argued above, humans have an intrinsic desire to subscribe to certain world views and spiritual beliefs. Whether due to a more fundamental need for explanation in the face of the unknown (being prepared for the unexpected) or simply the attraction of social groupings and initiation into a new hierarchy, the fact remains that humans like to organise their beliefs according to a certain structure. When other groups are encountered whose beliefs differ in some respect, the inevitable outcome is either submission (of the weaker group) or conflict. Perhaps an appropriate maxim summing up this phenomenon is ‘if you can’t convert them, kill them’. Thus we see, at one level, our beliefs acting as a catalyst for conflict with other groups of people. At a higher level, such beliefs are then modified or interpreted in varying ways so as to justify the acts committed, reassuring the group of its moral standing (the enemy is subhuman, ‘infidels’, wartime propaganda, etc.). Belief is also a tool used to create a sense of identity, another feature that conscious beings seem to require. Those lacking individuality and guidance take to belief systems, perhaps to gain stability in their lives. Without identity we operate at a reduced capacity, as little more than automatons responding to stimuli, so in this respect belief can provide motivation and structure to an individual. Problems arise when beliefs become so corrupted, or conflict so great, that any act can be justified without thought for long-term consequences; only the complete destruction of the enemy is a viable outcome. The conflict spirals out of control and precipitates major change; another risk factor making a New Dark Age a plausible reality.

Economic/Political Collapse

Numerous socio-economic experiments have been conducted over the few millennia that organised civilisation has existed on this planet, with varying degrees of success. Democracy seems to be the greatest boon to modern politics, ushering in a new era of liberation and equity. But has its time come to an end? Some would argue that the masses need control if certain standards are to be maintained. While a small proportion of society would be capable of living without such controls, the reality that a large swath of the population cannot co-exist without social management and punitive methods calls into question the ultimate success of our political system. Communism failed spectacularly, most notably through its potential for abuse via corruption and dictatorship. Here we have the unfortunate state of affairs that those who come into power are also those who lack the qualities one would expect of a ruler. Islamic states don’t even enter the picture; the main aim of such societal systems is the establishment of a theocratic state that is perhaps even more susceptible to abuse (corrupted beliefs that justify atrocities, and the unification of church and state causing conflict with populations whose beliefs differ).

Are democracy and capitalism running our planet into the ground? Some would point to the recent stock market collapse and record inflation as signs that, yes, perhaps human greed is allowed too much leeway. Others merely shake their heads and point to the cyclical nature of the economy; “it’s just a small downturn that will soon be corrected”, they proclaim. Mounting evidence seems to counter such a proposition, as rising interest rates, property prices and living costs force the population to work more and own less. Is our present system of political control and economic growth sustainable? Judging by recent world events, perhaps not; thus we have another factor that could lead to the establishment of a new dark age.

Ecological Destruction

Tied closely to the policies of modern politics and economics is the phenomenon of ‘global warming’ or, more broadly, a lack of respect for our biosphere. It seems almost unbelievable that humanity has turned a blind eye to the mounting problems occurring within our planet. While arguments exist both for and against global warming, I doubt that any respectable empiricist, or indeed responsible citizen, could deny that humanity has implemented some questionable environmental practices in the name of progress. Some may argue that the things we take for granted (even the laptop upon which I type this article) would not have been possible without such practices. But when the fate of the human race hangs in the balance, surely this is a high price to pay in such a high-stakes game. Human nature surely plays a part in this oversight; our brains are wired to consider the now, as opposed to the might or could. By focusing on the present in such a way, the immediate survival of the individual (and the group) is ensured. Long-term thought is not useful in the context of a tribal society where life is a daily struggle. Again we are hampered by primitive mechanisms that have outlived their usefulness. In short, humanity has advanced at a pace that has overtaken the ability of our faculties to adapt. Stuck playing a game of catch-up (whose value and importance most neglect to see), society is falling short of the skills it needs to deal with the challenges that lie ahead. The degradation of this planet, coupled with our inability to reliably plan for future events, could (in combination with factors such as deliberate political and economic neglect of the problem) precipitate a new dark age in society.

But is a new dark age all doom and gloom? Certainly it would be a time of mass change and potential for great catastrophe, but an emergence out the other side could herald a new civilisation well equipped to manage the challenges of an uncertain future. Looking towards that future, one can’t help but feel a sense of trepidation. Overpopulation, dwindling resources and an increasing schism between religion and science are all contributing towards a great change in the structure of society. While it would be immoral to condone or encourage such a period in light of the monumental loss of order, perhaps it is ‘part of the grand plan’, so to speak, keeping humanity in check and ensuring that the Earth maintains its capacity for life. In effect, humanity is a parasite that has so thoroughly infected its host that its life-giving organs are collapsing. Perhaps a new dark age will provide the cleansing of mind and spirit that humanity needs to refocus its efforts on the things that really matter: every individual attaining personal perfection and living as well as they possibly can.

Teleportation is no longer banished to the realm of science fiction. It is widely accepted that what was once considered a physical impossibility is now directly achievable through quantum manipulation of individual particles. While the methods involved are still in their infancy (single electrons are the heaviest particles yet teleported), we can at least begin to appreciate and think about the possibilities on the basis of plausibility. Specifically, what are the implications for personal identity if this method of transportation becomes possible on a human scale? Atomically deconstructing and reconstructing an individual at an alternate location could introduce problems for consciousness. Is this the same person, or simply an identical twin with its own thoughts, feelings and desires? These are the questions I would like to discuss in this article.

Biologically, we lose our bodies several times over during one human lifetime. Many of the body’s cells are continually replaced, with little thought given to the implications for self-identity. It is a phenomenon that is often overlooked, especially in relation to recent empirical developments in quantum teleportation. If we are biologically replaced with regularity, does this imply that our sense of self is likewise dynamic in nature and constantly evolving? There are reasonable arguments on both sides of this debate; maturity and daily experience do result in a varied mental environment. However, one wonders if this has more to do with innate processes such as information transfer, recollection and modification than with the purely biological characteristics of individual cells (in relation to cell division and rejuvenation processes).

Thus it could be argued that identity is largely a conscious and directed process (in terms of seeking out information and creating internal schemas of identity). This does not totally rule out identity being shaped by changes to biological structure. Perhaps the effects are more subtle, modifying our identities in ways that facilitate maturity, or even mental illness (if the duplication process is disturbed). Cell mutation (neurological tumour growth) is one example whereby a malfunctioning biological process can result in direct, and often drastic, changes to identity.

However, I believe it is safe to assume that “normal” tissue-regenerative processes do not result in any measurable changes to identity. What makes teleportation so different? Quantum teleportation has been used to teleport photons from one location to another and, more recently, particles with mass (electrons). The process is decidedly less romantic than science-fiction authors would have us believe; classical transmission of information is still required, and a receiving station must be established at the desired destination. This means that matter transportation, à la the ‘Star Trek’ transporter, remains very much an unforeseeable fiction. In addition, something as complex as the human body would require incredible computing power to scan in sufficient detail, another limit on its practicality. Fortunately, there are potential uses for this technology, such as in the fledgling industry of quantum computing.

The process works around the limitations of the uncertainty principle (which states that the exact properties of a quantum system can never be known in complete detail) through what is known as the Einstein-Podolsky-Rosen (EPR) effect. Einstein had real issues with quantum mechanics; he didn’t like it at all (to quote the cliché, ‘spooky action at a distance’). The EPR paper was aimed at irrefutably proving the implausibility of entangled pairs of quantum particles. John Stewart Bell turned the Einstein proposition on its head when he demonstrated that measurements on entangled particles really are correlated: the outcomes for the two particles agree far too closely to be explained by chance (or by any local ‘hidden variable’) alone. The fact that entanglement does not violate the no-communication theorem is good news for our assumptions regarding reality, but more bad news for teleportation fans: information about the quantum state of the teleportee must still be transmitted via conventional methods for reassembly at the other end.

Quantum teleportation begins with a pair of entangled particles, one dispatched to each teleportation station. At A, the particle to be teleported is made to interact with entangled particle 1 and a joint measurement is performed; care must be taken, since measurement disturbs the original (the harder you look, the more uncertain the result). The classical results of this partial scan are then transmitted, at no faster than light speed, to the receiver at B. Entanglement ensures that the unmeasured remainder of the state is reflected at B (via entangled particle 2). Utilising the principles of the EPR effect and Bell’s statistical correlations, it is then possible to reconstruct the state of the original particle at the distant location B. While the exact mechanism is beyond the technical capacity of philosophy, it is prudent to say that the process works by combining the entangled information carried by particle 2 with the classically transmitted information scanned out of the original particle at A.
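The steps above can be checked numerically for the simplest case: teleporting a single qubit. The Python sketch below is my own illustrative simulation (the amplitudes `a` and `b` are arbitrary test values, not from any experiment discussed here). Alice's joint measurement is implemented as the standard CNOT-plus-Hadamard circuit, and for each of her four possible classical outcomes, Bob's correction recovers the original state exactly:

```python
import numpy as np
from functools import reduce

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
I = np.eye(2)

def apply_single(state, U, q, n=3):
    """Apply a 2x2 gate U to qubit q of an n-qubit state vector."""
    ops = [I] * n
    ops[q] = U
    return reduce(np.kron, ops) @ state

def apply_cnot(state, control, target, n=3):
    """Apply CNOT (control -> target) by permuting basis amplitudes."""
    new = np.zeros_like(state)
    for i in range(2 ** n):
        bits = [(i >> (n - 1 - q)) & 1 for q in range(n)]
        if bits[control]:
            bits[target] ^= 1
        j = sum(b << (n - 1 - q) for q, b in enumerate(bits))
        new[j] = state[i]
    return new

# The unknown state Alice wants to teleport: a|0> + b|1>
a, b = 0.6, 0.8j
psi = np.array([a, b])

# Qubit 0 = Alice's unknown qubit, qubit 1 = Alice's half of the
# entangled pair, qubit 2 = Bob's half. Pair starts in (|00>+|11>)/sqrt(2).
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
state = np.kron(psi, bell)

# Alice's Bell-basis measurement = CNOT then Hadamard, then reading
# qubits 0 and 1 in the computational basis.
state = apply_cnot(state, control=0, target=1)
state = apply_single(state, H, 0)

# Check every possible measurement outcome (each occurs with p = 1/4).
for m0 in (0, 1):
    for m1 in (0, 1):
        # Bob's qubit, conditioned on Alice seeing (m0, m1)
        bob = np.array([state[4 * m0 + 2 * m1 + 0],
                        state[4 * m0 + 2 * m1 + 1]])
        bob = bob / np.linalg.norm(bob)
        # Classical correction: X if m1 == 1, then Z if m0 == 1
        if m1:
            bob = X @ bob
        if m0:
            bob = Z @ bob
        assert np.allclose(bob, psi), (m0, m1)

print("teleported state recovered for all four outcomes")
```

Note that without Alice's two classical bits, Bob cannot apply the right correction; this is why the protocol cannot transmit information faster than light.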

Casting practicality aside for the sake of philosophical discussion, if such a process became possible for a being as complex as a human, what would be the implications for consciousness and identity? Common sense asks: if an exact replica can be created, how is it in any way different from the original? One would simply ‘wake up’ at the new location within the same body and mind one left. Those who subscribe to a Cartesian view of separated body and mind would look upon teleportation with revulsion: surely along the way we are losing a part of what makes us uniquely human, some intangible soul or essence of mind that cannot be reproduced? This leads to similar thought experiments. What if another being somewhere in the Universe were born with exactly the same mental characteristics as your own? Would this predispose them to some sort of underlying phenomenological connection? Perhaps this is supported by anecdotal evidence from empirical studies of identical twins, who are thought to share a common bond, demonstrating almost telepathic abilities at times. Then again, the mechanism is probably no more mystical than a familiar acquaintance predicting how you would react in a given situation, or similarities in brain structure predisposing twins to ‘higher than average’ mental convergence.

Quantum teleportation of conscious beings also raises serious moral questions. Is it murder to deconstruct the individual at point A, or is this initial crime nullified once reassembly is completed? Is it still immoral if someone else appears at the receiver due to error or quantum fluctuation? Others may argue that it is no different from conventional modes of transport: human error should be dealt with as such (a necessary condition for the label of crime or immorality), and naturally occurring disasters interpreted as nothing more than random events.

While it is doubtful that we will ever see teleportation on a macro scale, we should remain mindful of the philosophical and practical implications of emerging technologies. Empirical forces are occasionally blind to these factors when such innovations are announced to the general public. While it is important for society that such work is allowed to continue, the rate at which innovations appear can be cause for alarm if they impinge upon our human rights and the preservation of individuality. There has never been a more pressing time for philosophers to think about the issues and offer their wisdom to the world.

In a previous article, I discussed the possibility of a naturally occurring morality; one that emerges from interacting biological systems and is characterised by cooperative, selfless behaviours. Nature is replete with examples of such morality, in the form of in-group favouritism, cooperation between species (symbiotic relationships) and the delicate interrelations between lower forms of life (cellular interaction). But we humans seem to have taken morality to a higher plane of existence, classifying behaviours and thoughts into a menagerie of distinct categories depending on the perceived level of good or bad done to external agents. Is morality a concept that is constant throughout the universe? If so, how could morality be defined in a philosophically ‘universal’ way, and how does it fit in with other universals? In addition, how can humans make the distinction between what is morally ‘good’ and ‘bad’? These are the questions I would like to explore in this article.

When people speak about morality, they are usually referring to concepts of good and evil: things that help and things that hinder, a simplistic dichotomy into which behaviours and thoughts can be assigned. Humans have a long history with this kind of morality. It is closely intertwined with religion, with early scriptures and the resulting beliefs providing the means by which populations could be taught the virtues of acting in positive ways. The defining feature of religious morality finds its footing in a lack of faith in the human capacity to act for the good of the many. Religions are laced with prejudicial put-downs that seek to undermine our moral integrity. But they do touch on a twinge of truth; evolution has created a (primarily) self-centred organism. Taking the cynical view, it can be argued that all human behaviour reduces to purely egotistical foundations.

Thus the problem becomes not one of definition but of plausibility (in relation to humanity’s intrinsic capacity for acting in morally acceptable ways). Is religion correct in its assumptions regarding our moral ability? Are we born into a world of deterministic sin? Theistically, it seems that any conclusion can be supported by means of unthinking faith. However, before this religiosity is dismissed out of hand, it might be prudent to consider the underlying insight on offer.

Evolution has shown that organisms are primarily interested in survival of the self (propagation of genetic material). This fits with the religious view that humanity is fundamentally concerned with first-order, self-oriented consequences, and raises the question of whether selfish behaviour should be considered immoral. But what of moral events such as altruism, cooperation and in-group behavioural patterns? These too can be reduced to self-centred egoism, with the superficial layer of supposed generosity stripped away to reveal more meagre foundations.

Morality then becomes a means to an end, that end being the fulfilment of some personal requirement. Self-initiated sacrifice (altruism) elevates one’s social standing, and provides the source of that ‘warm, fuzzy feeling’ we all know and love. Here we have dual modes of satiation: one external to the agent (increasing power and status) and one internal (an evolutionary mechanism for rewarding cooperation). Religious cynicism is again supported, in that humans seem to have great difficulty performing authentic moral acts. Perhaps the problem lies not with the theistic stalker, laughing gleefully at our attempts to grasp at some intrinsic human goodness, but rather with our use of the word ‘authentic’. If one concedes that humans may simply lack the faculties for connotation-free morality, and instead proposes that moral behaviours be measured by their main direction of action (directed inwards, selfishly, or outwards, altruistically), we can arrive at a usable conceptualisation.

Reconvening, we now have a new operational definition of morality. Moral action is characterised by the focus of its attention (inward versus outward) as opposed to a polarised ‘good versus evil’, which manages to evade the controversy introduced by theism and evolutionary biology (two unlikely allies!). The consequence is a kind of morality that is not defined by its degree of ‘correctness’, which from any perspective is entirely relative. However, if we are to arrive at a meaningful and usable moral universal applicable to human society, we need to at least consider this problem of good and evil.

How can an act be defined as morally right or wrong? Considering this question alone conjures up a large degree of uncertainty and subjectivity. In the context of the golden rule (do unto others as you would have done unto yourself), we arrive at even murkier waters: what of the psychotic or the sadist who prefers what society would consider abnormal treatment? In such a situation, could ‘normally’ unacceptable behaviour be construed as morally correct? If this confusion is to be avoided, it is prudent to discuss whether morality can plausibly be defined in terms of universals that do not depend upon subjective interpretation.

Once again we have returned to the issue of objectively assessing an act for its moral content. Intuitively, evil acts cause harm to others and good acts result in benefits. But again we fall far short of the region encapsulated by morality; specifically, acts can seem superficially evil yet arise from fundamentally good intentions. And thus we find a useful identifier (intention) by which to assess the moral worth of actions.

Unfortunately we are held back by the impervious nature of the assessing medium. Intention can only be ascertained through introspection and, to a lesser degree, psychometric testing. Intention can even be elusive to the individual, if their judgement is clouded by mental illness, biological abnormality or an unconscious repression of internal causality (deferring responsibility away from the individual). With such a slippery method of assessing the authenticity and nature of a moral act, it seems doubtful that morality could ever be construed as a universal.

Universals are exactly what their name connotes: properties of the world we inhabit that are experienced across reality. That is to say, morality could be classed as a universal due to its generality among our species and its quality of superseding characterising and distinguishing features (in terms of mundane, everyday experience). If one is to class morality as a universal, one should modify its definition to incorporate features that are non-specific and objective. Herein lies the problem with morality: it is a variable phenomenon, with large fluctuations in individual perspective. From this point there are two main options available given current knowledge on the subject. Democratically, the qualities of a universal morality could be determined through majority vote. Alternatively, a select group of individuals, or one definitive authority, could propose and define a universal concept of morality. Beyond these, one is left with few options on how to proceed.

If a universal conceptualisation of morality is to be proposed, an individual perspective is the only avenue left with the tools at our disposal. We have already discussed the possibility of internal versus external morality (bowing to pressures that dictate human morality is irreducibly selfish, and removing the focus from good-versus-evil considerations). This, combined with a weighted system that emphasises not the degree of goodness but the consideration of the self versus others, results in a usable measure of morality (for example, there will always be a small percentage of internal focus). But what are we using as the basis for our measurement? Intention has already proved elusive, as has objective observation of acts (moral behaviours can rely on internal reasoning to determine their moral worth, and some behaviours go unobserved or are ambiguous to an external agent). Discounting the possibility of a technological breakthrough enabling direct thought observation (and the ethical considerations such an invasion of privacy would bring), it is difficult to see how we can proceed.

Perhaps it is best to simply take a leap of faith, believing in humanity’s ability to make judgements regarding moral behaviour. Instead of cynically throwing away our intrinsic abilities (which surely vary in effectiveness across the population), we should trust that at least some of us have the insight to make the call. With morality, the buck definitely stops with the individual, a fact that most people have a hard time swallowing. Moral responsibility rests with the persons involved and, in combination with a universally expansive definition, makes for some interesting assertions of blame, not to mention a pressing case for educating the populace on the virtues of fostering introspective skills.

In the first part of this article, I outlined a possible definition of time and (keeping in touch with the article’s title) offered a brief historical account of time measurement. This outline demonstrated humanity’s changing perception of the nature of time, and how an increase in the accuracy with which it is measured can affect not only our understanding of this phenomenon, but also how we perceive reality. In this article I will begin with the very latest physical theory on the potential nature of time, followed by a discussion of several interesting observations concerning the fluctuations that seem to characterise humanity’s chronological experience. Finally, I hope to promote a hypothesis (even though it may simply be stating the blatantly obvious): that the flow and experience of time is uniquely variable, and that the concept of ‘absolute time’ is as dead as the ‘ether’, the absolute reference point of 19th-century physics.

Classical physics dominated the concept of time up until the beginning of the 20th century. In this view, time (in the same vein as motion) was regarded as having an ‘absolute’ reference point. That is, time was constant and consistent across the universe and for all observers, regardless of velocity or local gravitational effects. Of course, Einstein turned all this on its head with his theories of special and general relativity. Time dilation was a new and exciting concept in the physical measurement of this phenomenon. Both the speed of the observer (special relativity) and the presence of a gravitational field (general relativity) were predicted to affect the passage of time. The main point to consider is that, by its very nature, relativity insists that all events are relative, changing with perspective with respect to some external observer.

Consider two clocks (A and B), separated by distance x. According to special relativity, if clock B is accelerated to a very high speed (roughly 30,000 km/s, a tenth of light speed, before the effects become appreciable), time dilation comes into play. Relative to clock A (which runs on ‘normal’ Earth time), clock B will be seen to run slower. An observer travelling with clock B would not notice these effects; time would continue to pass normally within their frame of reference. It is only when the clocks are reunited and directly compared that the discrepancy becomes apparent. Empirically, this effect is well established, and offers an explanation as to why muons (extremely short-lived particles) are able to make it to the Earth’s surface before decaying. Cosmic rays slam into the Earth’s atmosphere at high speed, and their collisions with molecules produce sufficient energy to generate muons and neutrinos. These muons, which would normally decay after travelling about 0.6 km (if stationary or moving slowly), are travelling so fast that time dilation slows their decay process. Thus, these particles survive far longer than expected, some even penetrating 700 m underground.
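The muon example can be worked through with a few lines of Python. This is a rough, back-of-the-envelope sketch: the speed of 0.9997c is an assumed illustrative value (real cosmic-ray muons span a range of energies), and the mean lifetime at rest is the textbook figure of about 2.2 microseconds:

```python
import math

C = 299_792_458.0            # speed of light, m/s
MUON_LIFETIME = 2.197e-6     # mean muon lifetime at rest, s

def gamma(v):
    """Lorentz factor for speed v (m/s)."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

# Without dilation, a muon travels roughly c * tau before decaying:
naive_range = C * MUON_LIFETIME          # about 660 m

# An assumed cosmic-ray muon speed for illustration:
v = 0.9997 * C
dilated_range = gamma(v) * v * MUON_LIFETIME

print(f"Lorentz factor:         {gamma(v):.1f}")
print(f"range without dilation: {naive_range:.0f} m")
print(f"range with dilation:    {dilated_range / 1000:.1f} km")
```

At this speed the Lorentz factor is around 40, stretching the muon's reach from hundreds of metres to tens of kilometres, which is ample to cross the atmosphere.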

General relativity also predicts an effect on our perception of time. Objects with large mass produce gravitational fields which, in turn, slow the passage of time in proportion to the observer's depth within the field. Suppose Clock A is on the Earth’s surface, while Clock B is attached to an orbiting satellite. As Clock B is further from the centre of the Earth, it sits at a higher gravitational potential; the field there is weaker and exerts less of an effect. Consequently, the elapsed time at B (relative to Clock A) will be greater (i.e., Clock B runs faster). Again, this effect has been confirmed empirically: clocks aboard GPS satellites must be adjusted to keep them in line with Earth-bound instruments (thus enabling accurate pinpointing of locations). Interestingly, the two types of dilation are additive; the stronger effect wins out, resulting in either a net gain or loss of time. Objects moving fast within a gravitational field thus experience both a slowing down and a speeding up of time relative to an external observer (this was in fact recorded in an experiment involving atomic clocks on board commercial airliners).
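The size of the GPS correction can be estimated to first order. The sketch below is a simplified calculation using standard textbook constants; it ignores orbital eccentricity and the Earth's rotation, yet still lands close to the widely quoted net figure of roughly +38 microseconds per day for a GPS satellite clock:

```python
import math

G_M = 3.986004418e14      # Earth's gravitational parameter, m^3/s^2
C = 299_792_458.0         # speed of light, m/s
R_EARTH = 6.371e6         # mean Earth radius, m
R_GPS = 2.6561e7          # GPS orbit radius, m (~20,200 km altitude)
SECONDS_PER_DAY = 86_400

# Orbital speed of the satellite (circular-orbit approximation)
v = math.sqrt(G_M / R_GPS)

# General relativity: weaker potential aloft -> satellite clock runs fast
grav = (G_M / C**2) * (1 / R_EARTH - 1 / R_GPS)

# Special relativity: orbital speed -> satellite clock runs slow
vel = -v**2 / (2 * C**2)

net_us_per_day = (grav + vel) * SECONDS_PER_DAY * 1e6
print(f"gravitational gain: {grav * SECONDS_PER_DAY * 1e6:+.1f} us/day")
print(f"velocity loss:      {vel * SECONDS_PER_DAY * 1e6:+.1f} us/day")
print(f"net:                {net_us_per_day:+.1f} us/day")
```

The two effects pull in opposite directions, but the gravitational gain (around +46 us/day) dominates the velocity loss (around -7 us/day), which is why GPS clocks are deliberately set to tick slightly slow before launch.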

Frustratingly, the physical basis for such dilation seems to be enmeshed within complicated mathematics and technical jargon. Why exactly does this dilation occur? Descriptions of the phenomenon seem to lack any real insight into this question, and instead proffer statements to the effect of ‘this is simply what relativity predicts’. It is an important question to ask, I think; philosophically, the question of ‘why’ is just as important as the empirical ‘how’, and should follow as a natural consequence. By probing the metaphysical aspects of time we can aim to better understand how it influences the human sensory experience, and adapt this new-found knowledge to practical applications.

Based on relativity’s notion of a non-absolute framework of time, and incorporating the predictions of time dilation, it seems plausible that time could be reducible to a particulate origin. The field of quantum physics has already made great headway in proposing that all matter exhibits wave-particle duality; in the form of waves, photons and matter travel along all possible routes between two points, with the crests and troughs interfering with, or reinforcing, each other. As in the double slit experiment (with its light and dark interference pattern), only the path that is reinforced remains, and the wave collapses (quantum decoherence) into a particle that we can directly observe and measure. This approach is known as the ‘sum over histories’ hypothesis, proposed by Richard Feynman (which also opens up the possibility of ‘many worlds’; alternative universes that branch off at each event in time).

With respect to time, perhaps its re-imagining as a particle could explain the dilating effects of gravity and velocity. One attempt is the envisaged ‘Chronon’, a quantised unit of time which disrupts the commonly held interpretation of time as a continuous experience. This idea is supported by the natural unit of Planck Time, some 5.39121 x 10ˆ-44 seconds. Beyond this limit, time is thought to be indistinguishable and the notion of separate events undefinable. Of course, we are taking a leap of faith here in assuming that time is a separate, definable entity. Perhaps the reality is entirely different.
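The Planck time quoted above is not an arbitrary figure; it falls out of combining three fundamental constants. A minimal check (in Python, with assumed CODATA-style values for the constants):

```python
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J*s (assumed value)
G    = 6.67430e-11       # gravitational constant, m^3 kg^-1 s^-2 (assumed value)
C    = 299_792_458.0     # speed of light, m/s

# Planck time: t_P = sqrt(hbar * G / c^5), the natural unit below
# which the notion of separate events is thought to be undefinable.
planck_time = math.sqrt(HBAR * G / C**5)
print(planck_time)   # ~5.39e-44 seconds
```

That the value is fixed entirely by ħ, G and c is part of its appeal as a candidate ‘grain size’ for a quantised time.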

Modern philosophy seems to stumble when attempting to interpret the implications of theoretical physics. Perhaps the subject matter is becoming so complex that dedicated study is required to grasp even the simplest concepts. Whatever the reason, the work of philosophers has moved away from the pursuits of science and towards topics such as language. What science needs is an army of evaluators, ready to test its theories with practical concerns in mind. Time has not escaped this fate either. Scientists seem content, even ‘trigger happy’, in their usage of the anthropic principle to explain the etiology of their theories and deflect any practical inquiry as to why things are the way they are. Basically, any question of why evokes a response along the lines of ‘well, if it were any different, conditions of the universe would not be sufficient for the evolution of intelligent beings such as ourselves, who are capable of asking the very question of why!’. Personally, I find this approach makes a certain sense, but it has the distinct features of a ‘cop-out’ and of circularity; a lot of the underlying reasoning is missing, which prohibits deeper inquiry. It also allows theologians to promote arguments for the existence of a creator; ‘god created the universe in such a way as to ensure our existence’.

What has this got to do with time? Well, put simply, the anthropicists propose that if time were to flow in a direction contrary to that which we experience, the laws of science would not hold, thus excluding the possibility of our existence as well as violating the principles of CPT symmetry (C = particle/antiparticle replacement, P = taking the mirror image and T = the direction of time). Even Stephen Hawking weighs in on the debate; in his A Brief History of Time, he proposes the CPT model in combination with the second law of thermodynamics (entropy, or disorder, always increases). The arrow of time, thus, must correspond to and align with the directions of these cosmological tendencies (the universe inflates in the same direction as increasing entropy, which is the same as our psychological perception of time).

So, after millennia of studying chronology, we still seem a long way off from a concrete definition and explanation of time. With the introduction of relativity, some insights into the nature of time have been extracted; however, philosophers still have a long way to go before practical implications are expounded from the very latest theories (quantum physics, string theory etc). Indeed, some scientists believe that if a grand unified theory is to be discovered, we need to further refine our definitions of time and work backwards towards the very instant of the big bang (at which, it is proposed, all causality breaks down).

Biologically, is time perceived equally not only among humans but also among other species? Do days where time seems to ‘stand still’ share some common feature that could support the notion of time as a definable physical property of the universe (eg the Chronon particle)? On such days are we passing through a region of warped spacetime (thus a collective, shared experience), or do we each carry an internal psychological timepiece that ticks to its own tock, regardless of how others are experiencing it? When we die, is the final moment stretched to a relative infinity (relative to the deceased) as neurons lose their potential to carry signals (as when falling into a black hole, where the perception of time slows to an imperceptible halt), or does the blackness take us in an instant? Maybe time will never fully be understood, but it is an intriguing topic that warrants further discussion; judging by the surplus of questions, it is in no hurry to reveal its mysteries.

After returning from a year-long sojourn in the United Kingdom and continental Europe, I thought it would be prudent to share my experiences. Having caught the travel bug several years ago when visiting the UK for the first time, a year-long overseas working holiday seemed like a dream come true. What I didn’t envisage were the effects of this experience on my cognition; specifically, feelings of displacement, disorientation and dissatisfaction. In this article I aim to examine the effects of a changing environment on the human perceptual experience as it relates to overseas, out-group exposure, and the psychological mechanisms underlying these cognitive fluctuations.

It seems that the human need to belong runs deeper than most would care to admit. Having discounted any possibility of ‘homesickness’ prior to arrival in the UK, I was surprised to find myself unwittingly (or perhaps conforming to unconscious social expectation – but we aren’t psychoanalysts here!) experiencing the characteristic symptomatology of depression, including sub-signs of negative affect, longing for a return home and feelings concurrent with social ostracism. This struck me as odd, in that if one is aware of an impending event, surely this awareness predisposes one to a lesser effect simply through mental preparation and conscious deflection of the expected symptoms. The fact that negative feelings were still experienced despite such awareness suggests an alternative etiology for the phenomenon of homesickness. Indeed, it offers a unique insight into the human condition; at a superficial level, our dependency on consistency and familiarity, and at a deeper, more fundamental level, a possible interpretation of the underlying cognitive processes involved in making sense of the world and responding to stimuli.

Taken at face value, a change in an individual’s usual physical and social environment displays the human reliance on group stability. From an evolutionary perspective, the prospect of travel to new and unfamiliar territories (and potential groups of other humans) is an altogether risky affair. On the one hand, the individual (or group) could face death or injury through anthropogenic means or from the physical environment. On the other hand, a lack of change reduces stimulation genetically (through interbreeding with biologically related group members), cognitively (reduced problem solving, mental stagnation once the initial challenges relating to the environment are overcome) and socially (exposure to familiar sights and sounds reduces the capacity for growth in language and, ipso facto, culture). In addition, the reduction of physical resources through consumption and degradation of the land via over-farming (or over-hunting) is another reason for moving beyond the confines of what is safe and comfortable. As the need for biological sustenance outranks all other human requirements (according to Maslow’s hierarchy), it seems plausible that this could be the main motivating factor behind human groups migrating and risking everything for the sake of exploring terra incognita.

The mere fact that we do, and always have (as shown throughout history), uprooted our familiar ties and trundled off in search of a better existence seems to make the aforementioned argument a moot point. It is not something to be debated; it is simply something that humans do. Evolution favours travel, with the potential benefits far outweighing the risks. The promise of greener pastures on the other side is almost enough to guarantee success. The cognitive stimulation such travel brings may also improve the future chances of success through learnt experiences and the conquering of challenges, as facilitated by human ingenuity.

But what of the social considerations when travelling? Are our out-group prejudices so intense that the very notion of travel to uncharted waters causes waves of anxiety? Do we fear the unknown, our ability to adapt and integrate, or the possibility that we may not make it out alive and survive to propagate our genes? Is personality a factor in predicting an individual’s performance (in terms of adaptation to the new environment, integration with a new group and success at forging new relationships)? From personal experience, it is perhaps a combination of all these factors and more.

We can begin to piece together a rough working model of travel and its effects on an individual’s social and emotional stability and wellbeing. A change in social and physical environment seems to predict the activation of certain evolutionary survival mechanisms, mediated by several conditions of the travel undertaken. Such conditions could include: similarity of the target country to the country of origin (in terms of culture, language, ethnic diversity, political values etc), social support available to the individual (group size when travelling, facilities for making contact with group members left behind), personality characteristics of the individual (impulsivity, extroversion vs introversion, attachment style, confidence) and cognitive ability to integrate and adapt (language skills, intelligence, social ability). Thus we have a (predicted) relationship whereby an increase in the degree of change from the original environment to the target environment (measured on a multitude of variables such as physical characteristics, social aspects and perceptual similarities) produces a change in the psychological distress of the individual (increased or decreased depending upon the mediating variables).
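The rough working model above can be caricatured as a toy calculation (a Python sketch; every variable, scale and weight here is a hypothetical illustration invented for this example, not an empirical estimate):

```python
# Toy model of the predicted relationship: distress rises with the
# degree of environmental change and is buffered by the mediating
# variables. All inputs are scored 0..1; the weights are invented
# purely for illustration.
def predicted_distress(environment_change: float,
                       cultural_similarity: float,
                       social_support: float,
                       adaptability: float) -> float:
    """Return a 0..1 distress score under the hypothetical model."""
    buffering = (0.4 * cultural_similarity
                 + 0.3 * social_support
                 + 0.3 * adaptability)
    return max(0.0, environment_change * (1.0 - buffering))

# A large change with little buffering predicts high distress:
print(predicted_distress(0.9, 0.1, 0.2, 0.2))
# The same change, well buffered, predicts much less:
print(predicted_distress(0.9, 0.8, 0.7, 0.8))
```

The point of the sketch is only the shape of the prediction: the same objective change can produce very different distress depending on the mediators, which is exactly what a testable version of the model would need to quantify.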

Perceptually, travel also seems to affect the salience and character of experience. Here, deeper cognitive processes are activated that influence the human sensory experience at a fundamental level. The model employed is one of stimulus-response, handed down through evolutionary means from a distant ancestor. Perceptual distortion while travelling is directly observable when visiting a unique location for the first time. Personally, I would describe the experience as an increase in arousal to the point of hyper-vigilance. Compared to subsequent visits to the same location, the original seems somehow different in a perceptual sense. Colours, smells, sounds and tastes are all vividly unique. Details are stored in memory that are ignored and discounted after the first event. In essence, the second visit to a place seems to change the initial memory. It almost seems like a different place.

While I am unsure as to whether this is experienced by anyone apart from myself, evolutionarily it makes intuitive sense. The automation of a hyper-vigilant mental state would prove invaluable when placed in a new environment. Details spring forth and are accentuated without conscious effort, thus improving the organism’s chances of survival. When applied to modern situations, however, it is not only disorientating, but also very disconcerting (at least in my experience).

Moving back to the social aspects of travel, I have found it to be simultaneously a gift and a curse. Travel has enabled an increased understanding and appreciation of different cultures, ways of life and alternative methods for getting things done. In the same vein, however, it has instilled a distinct feeling of unease and dissatisfaction with things I once held dear. Some things you simply take for granted, or fail to notice and challenge. In this sense, exposure to other cultures is liberating; especially in Europe, where individuality is encouraged (mainly in the UK) and people expect more (resulting in a greater number of opportunities for those who work hard to gain rewards and recognition). The Australian way of life, unfortunately, is one that is intolerant of success and uniqueness. Stereotypical attitudes are abundant, and it is frustrating to know that there is a better way of living out there.

Perhaps this is one of the social benefits of travel; the more group members who travel, the greater the chance of shifting ways of life towards more tolerant and efficient methods. Are we headed towards a world-culture where diversity is replaced with (cultural) conformity? Is this ethically viable or warranted? Could it do more harm than good? It seems to me that there would be some positive aspects to a global conglomerate of culture. Then again, the main attraction of travel lies in the experience of the foreign and unknown. To remove that would be to remove part of the human longing for exploration, and a source of cognitive, social and physical stimulation. Perhaps instead we should encourage travel in society’s younger generations, exposing them to such experiences and encouraging internal change based on better ways of doing things. After all, we are the ones who will be running the country someday.

The essence of mathematics cannot be easily discerned. This intellectual pursuit lurks behind a murky haze of complexity. Those fortunate enough to have natural ability in this field can manipulate algebraic equations as easily as the spoken word. For the vast majority of the population, however, mathematical expertise is elusive, receding at each desperate grasp and attempt at comprehension. What exactly is this strange language of numerical shapes, with its logical rule-sets and quirky laws of commutativity? It seems that the more intensely the concept is scrutinised, the faster its superfluous layers of complexity peel away. But what of the hidden foundations beneath? Are mathematical formulations the key to understanding the nature of reality? Can all the complexity around which we eke out a meagre existence really condense into a single set of equations? If not, what are the implications for, and the likelihood of, a purely mathematical and unified ‘theory of everything’? These are the questions I would like to explore in this article.

The history of mathematics dates back to the dawn of civilisation. The earliest known examples of mathematical reasoning are believed to date from around 70,000 BC. Geometric patterns and shapes on cave walls shed light onto how our ancestors may have thought about abstract concepts. These primitive examples also include rudimentary attempts at measuring the passage of time through systematic notches and depictions of celestial cycles. Humankind’s abilities progressed fairly steadily from this point, with the next major revolution in mathematics occurring around 3000–4000 BC.

Neolithic religious sites (such as Stonehenge, UK and Ġgantija, Malta) are thought to have made use of the growing body of mathematical knowledge and an increased awareness and appreciation of standardised observation. In a sense, these structures spawned appreciation of mathematical representation by encouraging measurement standardisation. For example, a static structure allows for patterns in constellations and deviations from the usual to stand out in prominence. Orion’s belt rises over stone X in January, progressing towards stone Y; what position will the heavens be in tomorrow?

Such observational practices allowed mathematics, through the medium of astronomy, to flourish and grow. Humanity began to recognise the cyclical rhythms of nature and use this standardised base to extrapolate and predict future events. It was not until around 2000 BC that mathematics grew into some semblance of the formalised language we use today. Spurred on by the great ancient civilisations of Greece and Egypt, mathematical knowledge advanced at a rapid pace. Formalised branches of maths emerged around this period, with construction projects inspiring minds to realise the underlying patterns and regularities in nature. Pythagoras’ Theorem is but one prominent result of the inquiries of this time, as is Euclid’s work on geometry and number theory. Mathematics grew steadily thereafter, although hampered by the ‘Dark Ages’ (the Ptolemaic model of the universe) and a subsequent waning of interest in scientific method.

Arabic scholars picked up this slack, contributing greatly to geometry, astronomy and number theory (the base-ten numerical system we use today is of Arabic descent). Newton’s Principia was perhaps the first widespread instance of formalised applied mathematics in the context of explaining and predicting physical events (in the form of generalised equations; geometry had previously been employed for centuries in explaining planetary orbits).

However, this brings us no closer to the true properties of mathematics. An examination of the historical developments in this field simply demonstrates that human ability began with rudimentary representations and has since progressed to a standardised, formalised institution. What, essentially, are its defining features? Building upon ideas proposed by Frayn (2006), I would suggest that our gift for maths arises from prehistoric attempts at grouping and classifying external objects. Humans (and lower forms of life) began with the primitive notion of ‘big’ versus ‘small’; that is, the comparison of groupings (threats, friends or food sources). Mathematics comprises our ability to make analogies, recognise patterns and predict future events; a specialised language with which to conduct this act of mental juggling. Perhaps due to increasing encephalic volume and neuronal connectivity (spurred on by genetic mutation and social evolution), humankind progressed beyond the simple comparison of size and required a way of mentally manipulating objects in the physical world. Counting a small herd of sheep is easy; there is a finger, toe or stick notch with which to capture the property of small and large. But what happens when the herd becomes unmanageably large, or you wish to compare groups of herds (or even different animals)? Here, the power of maths really comes into its own.

Referring back to the idea of social evolution acting as a catalyst for encephalic development, perhaps emerging social patterns also acted to improve mathematical ability. More specifically, the disparities in power as some individuals became elevated above their compatriots would have precipitated a need to keep track of assets and levy taxation. Here we observe the leap from singular comparison of external group sizes (leaning heavily on primal instincts of fight/flight and satiation) to a more abstract, representative use of mathematics. Social elevation brings wealth and resources. Power over others necessitates some way of keeping track of these possessions (as the size of the wealth outgrows the managerial abilities of one person). Therefore, we see not only a cognitive but also a social aspect to mathematical evolution and development.

It is this move away from the direct and superficial towards abstract universality that heralded a new destiny for mathematics. Philosophers and scientists alike wondered (and still wonder today) whether the patterns and descriptions of reality offered by maths are really getting to the crux of the matter. Can mathematics be the one tool with which a unified theory of everything can be erected? Mathematical investigations are primarily concerned with underlying regularities in nature; patterns. However, it is the patterns themselves that are the fundamental essence of the universe; mathematics simply describes them and allows for their manipulation. The use of numerals is arbitrary; interchange them with letters or even squiggles in the dirt and the only thing that changes is the rule-set to combine and manipulate them. Just as words convey meaning and grammatical laws are employed with conjunctions to connect (addition?) premises, numerals stand as labels and the symbols between them convey the operation to be performed. When put this way, mathematics is synonymous with language; it is just highly standardised and ‘to the point’.

However, this feature is a double-edged sword. The sterile nature of numerals (lacking such properties as metaphor, analogy and other semantic parlour tricks) leaves their interpretation open. A purely mathematical theory is only as good as its interpreter. Human thought processes descend upon formulae, picking apart and extracting meaning, like vultures battling haphazardly over a carcass. Thus the question moves from one of validating mathematics as an objective tool to a more fundamental evaluation of human perception and interpretation. Are the patterns we observe in nature some sort of objective reality, or are they simply figments of our over-active imagination; coincidences or ‘brain puns’ that just happen to align our thoughts with external phenomena?

If previous scientific progress is anything to go by, humanity is definitely onto something. As time progresses, our theories come closer and closer to unearthing the ‘true’ formulation of what underpins reality. Quantum physics may have dashed our hopes of ever knowing with complete certainty what a particle will do when poked and prodded, but at least we have a fairly good idea. Mathematics also seems to be the tool with which this lofty goal will be accomplished. Its ability to allow manipulation of the intangible is immense. The only concern is whether the increasing abstraction of physical theories is outpacing our ability to interpret and comprehend them. One only has to look at the plethora of alternative quantum interpretations to see evidence of this effect.

Recent developments in mathematics include the mapping of E8. As far as a layman can discern, E8 is a multi-dimensional geometric figure, the exact specifications of which had eluded mathematicians since the 19th century. It was only through a concerted effort involving hundreds of computers operating in parallel that its secrets were revealed. Even more exciting is the recent announcement of a potential ‘theory of everything’. The brainchild behind this effort is not what could be called stereotypical; this ‘surfing scientist’ claims to have utilised the new-found knowledge of E8 to unite the four fundamental forces of nature under one banner. Whether his ship turns out to hold water remains to be seen. The full paper is available online.

This theory is not the easiest to understand; it is elegant but inherently complex. Intuitively, these are two very fitting characteristics for a potential theory of everything. The following explanation from Slashdot.org is perhaps the easiest to grasp for the non-mathematically inclined.

“The 248-dimensions that he is talking about are not like the time-space dimensions, which particles move through. They describe the state of the particle itself – things like spin, charge, etc. The standard model has 6(?) properties. Some of the combinations of these properties are allowed, some are not. E8 is a very generalized mathematical model that has 248-properties, where only some of the combinations are allowed. What Garrett Lisi showed is that the rules that describe the allowed combinations of the 6 properties of the standard model show up in E8, and furthermore, the symmetries of gravity can be described with it as well.” (Slashdot.org, 2007).

Therefore, E8 is a description of particle properties, not the ‘shape’ of some omnipresent, underlying pervasive force. The geometric characteristics of the shape outline the numbers of particles, their properties and the constraints over these properties (possible states, such as spin, charge etc). In effect, the geometric representation is an illustration of underlying patterns and relationships amongst elementary particles. The biggest strength of this theory is that it offers testable elements, and predictions of as yet undiscovered physical constituents of the universe.

It is surely an exciting time to live, as these developments unfurl. At first glance, mathematics can be an incredibly complex undertaking, in terms of both comprehension and performance. Once the external layers of complexity are peeled away, we are left with its raw fundamental feature: a description of underlying universals. As with every human endeavour, the conclusions are open to interpretation; however, with practice and an open mind free from prejudicial tendencies, humanity may eventually crack the mysteries of the physical universe. After all, we are a component of this universe, so it makes intuitive (if not empirical) sense that our minds should be relatively objective and capable of unearthing a comprehensive ‘theory of everything’.

Many of us take the capacity to sense the world for granted. Sight, smell, touch, taste and hearing combine to paint an uninterrupted picture of the technicolour apparition we call reality. Such lucid representations are what we use to define objects in space, plan actions and manipulate our environment. However, reality isn’t all that it’s cracked up to be. Namely, our role in defining the universe in which we live is much greater than we think. Humanity, through the use of sensory organs and the resulting interpretation of physical events, succeeds in weaving a scientific tapestry of theory and experimentation. This textile masterpiece may be large enough to ‘cover all bases’ (in terms of explaining the underlying etiology of observations), however it might not be made of the right material. With what certainty do scientific observations carry a sufficient portion of objectivity? What role does the human mind and its modulation of sensory input have in creating reality? What constitutes objective fact, and how can we be sure that science is ‘on the right track’ with its model of empirical experimentation? Most importantly, is science on the cusp of an empirical ‘dark age’ where the limitations of perception fundamentally hamper the steady march of theoretical progress? These are the questions I would like to explore in this article.

The main assumption underlying scientific methodology is that the five sensory modalities employed by the human body are, by and large, uniform. That is, despite small individual fluctuations in fidelity, the performance of the human senses is mostly equal. Visual acuity and auditory perception are sources of potential variance; however, the advent of certain medical technologies has circumvented and nullified most of these disadvantages (glasses and hearing aids, respectively). In some instances, such interventions may even improve the individual’s sensory experience, superseding ‘normal’ ranges through the use of further refined instruments. Such is the case with modern science, as the realm of classical observation becomes subverted by the need for new, revolutionary methods designed to observe both the very big and the very small. Satellites loaded with all manner of detection equipment have become our eyes for the ultra-macro; NASA’s COBE orbiter gave us the first view of early universal structure via detection of the cosmic microwave background radiation (CMB). Likewise, scanning probe microscopy (SPM) enables scientists to observe at the atomic scale, below the threshold of visible light. In effect, we have extended and supplemented our ability to perceive reality.

But are these innovations also improving the objective quality of observations, or are we being led into a false sense of security? Are we becoming comfortable with the idea that what we see constitutes what is really ‘out there’? Human senses are notoriously prone to error. In addition, machines are only as good as their creator. Put another way, artificial intelligence has not yet superseded the human ‘home grown’ alternative. Therefore, can we rely on a human-made, artificial extension of perception with which to make observations? Surely we are compounding the innate inaccuracies, introducing a successive error rate with each additional sensory enhancement. Not to mention the interpretation of such observations and the role of theory in whittling down alternatives.
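The compounding worry can be put in toy numerical form (a Python sketch; the per-stage error rates and the assumption that errors occur independently are entirely hypothetical, chosen only to show the shape of the effect):

```python
def chain_reliability(error_rates):
    """Probability that every stage in an observation chain succeeds,
    assuming independent per-stage error rates (an idealisation)."""
    p_ok = 1.0
    for e in error_rates:
        p_ok *= (1.0 - e)
    return p_ok

# Hypothetical 2% error at each of four stages: sensor, transmission,
# calibration and human interpretation.
stages = [0.02, 0.02, 0.02, 0.02]
print(chain_reliability(stages))        # ~0.922
print(1 - chain_reliability(stages))    # ~8% chance at least one stage corrupts the observation
```

Even small per-stage error rates accumulate: each additional layer of instrumentation between the phenomenon and the observer multiplies in another factor of fallibility.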

Consensus cannot be reached on whether what I perceive is anything like what you perceive. Is my perception of the colour green the same as yours? Empirically and philosophically, we are not yet in a position to answer this question with any objectivity. We can examine brain structure and compare regions of functional activity, but the ability to directly extract and record aspects of meaning or consciousness is still firmly in the realms of science fiction. The best we can do is compare and contrast our experiences through the medium of language (which introduces its own set of limitations). As mentioned earlier, the human sensory experience can, at times, become lost in translation.

Specifically, the ability of our minds to disentangle the information overload that unrelentingly flows through mental channels can wane due to a variety of influences. Internally, the quality of sensory inputs is governed at a fundamental level by biological constraints. Millions of years of evolution have resulted in a vast toolkit of sensory automation. Vision, for example, has developed in such a way as to become a totally unconscious and reflexive phenomenon. The biological structure of individual retinal cells predisposes them to respond to certain types of movement, shapes and colours. Likewise, the organisation of neurons within regions of the brain, such as the primary visual cortex in the occipital lobe, processes information in pre-defined ways. In the case of vision, the vast majority of processing is done automatically, reducing the overall level of awareness and direct control the conscious mind has over the sensory system. The conclusion here is that we are limited by physical structure rather than by differences in conscious discrimination.

The retina acts as both the primary source of input and a first-order processor of visual information. In brief, photons striking the back wall of the eye are absorbed by special receptor proteins (rods responding to light intensity, cones to colour), triggering action potentials in attached neurons. Low-level processing is accomplished by the lateral organisation of retinal cells; ganglionic neurons are able to communicate with their neighbours and influence the likelihood of each other’s signal transmission. Cells communicating in this manner facilitate basic feature recognition (specifically, edges – light and dark discrepancies) and motion detection.
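This lateral mechanism can be loosely illustrated in code (a Python sketch; the one-dimensional signal, the inhibition weight and the edge handling are invented for demonstration and are not a physiological model):

```python
def lateral_inhibition(intensities, inhibition=0.5):
    """Each 'cell' responds to its own input minus a fraction of its
    neighbours' inputs, so uniform regions are suppressed and
    light/dark discrepancies (edges) are accentuated."""
    out = []
    n = len(intensities)
    for i, x in enumerate(intensities):
        # At the ends, treat the missing neighbour as the cell's own input.
        left = intensities[i - 1] if i > 0 else x
        right = intensities[i + 1] if i < n - 1 else x
        out.append(x - inhibition * (left + right) / 2)
    return out

# A step from dark (1) to light (5): the response dips just before the
# boundary and peaks just after it, exaggerating the edge.
signal = [1, 1, 1, 5, 5, 5]
print(lateral_inhibition(signal))
```

The exaggerated trough-and-peak pair at the boundary is the same effect behind the Mach-band illusion, and a hint of why retinal circuitry is described as a first-order processor rather than a passive sensor.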

As with all the sensory modalities, information is then transmitted to the thalamus, a primitive brain structure that acts as a communications ‘hub’; its proximity to the brain stem (mid and hind brains) ensures that reflexes are privy to visual input prior to conscious awareness. The lateral geniculate nucleus is the region of the thalamus which splits incoming visual input into three main signals (M, P and K). Interestingly, these channels separate inputs into signals with unique properties (e.g. exclusively colour, motion, etc.). In addition, the cross-lateralisation of visual input is a common feature of human brains. Left and right fields of view are diverted at the optic chiasm and processed on opposite hemispheres (the left field of view from both eyes is processed on the right side of the brain). One theory as to why this system develops is that it minimises the impact of unilateral hemispheric damage – the ‘dual brain’ hypothesis (each hemisphere can act as an independent agent, reconciling and supplementing reductions in function due to damage).

We seem to fall back on these automated subsystems all too readily, never fully appreciating and flexing the full capabilities of our sensory appendages. Michael Frayn, in his book ‘The Human Touch’, demonstrates this point aptly:

“Slowly, as you force yourself to observe and not to take for granted what seems so familiar, everything becomes much more complicated…That simple blueness that you imagined yourself seeing turns out to have been interpreted, like everything else, from the shifting, uncertain material on offer” (Frayn, 2006, p. 26)

Of course, we are all blissfully ignorant of these finer details when it comes to interpreting the sensory input gathered by our bodies. The consciousness acts ‘with what it’s got’, without a care as to the authenticity or objectivity of the observations. We can observe this first-hand in a myriad of ways in which the unreal is treated as if it were real. Hallucinations are just one mechanism by which the brain is fooled. While we may know such visions are false (to a degree, depending upon the etiology, e.g. schizophrenia), they are nonetheless able to provoke physiological and emotional reactions. In summary, the biological (and automated) component of perception very much determines how we react to, and observe, the external world. In combination with the human mind (consciousness), which introduces a whole new menagerie of cognitive baggage, a large amount of uncertainty is injected into our perceptual experience.

Expanding outwards from this biological launchpad, it seems plausible that the qualities which make up the human sensory experience should have an effect on how we define the world empirically. Scientific endeavour labours to quantify reality and strip away the superfluous extras, leaving only constitutive and fundamental elements. In order to accomplish this task, humanity employs empirical observation. The segue between the biological foundations of perception and the paradigm of scientific observation involves a shared sensory limitation. Classical observation was limited by ‘naked’ human senses. As the bulk of human knowledge grew, so too did the need to extend and improve methods of observation. Consequently, science is now perhaps realising the limits of the human mind to digest an overwhelming plethora of information.

Currently, science is restricted by the development of technology. Progress is maintained only through the ingenuity of the human mind in solving the biological disadvantages of observation. Finely tuned microscopes tap into quantum effects in order to measure individual atoms. Large radio-telescope arrays link together for a bird’s-eye view of the heavens. But as our methods and tools for observing grow in complexity, so too does the degree of abstract reasoning that is required to grasp the implications of their findings. Quantum theory is one such warning indicator.

Just as a lighthouse sweeps the night sky and signals impending danger, quantum physics, or more precisely, humanity’s inability to agree on any one model that accurately describes reality, could be telling us something. Perhaps we are becoming too reliant on our tools of observation, using them as a crutch in a vain attempt to avoid our biological limitations. Is this a hallmark of our detachment from observation? Quantum ‘spookiness’ could simply be the result of a fundamental limitation of the human mind to internally represent and perceive increasingly abstract observations. Desperately trying to consume the reams of information that result from rapid progress and intense observation, scientific paradigms become increasingly specialised and divergent, increasing the degree of inter-departmental bureaucracy. It now takes a lifetime of training to even grasp the basics of current physical theory, let alone the time taken to dissect observations and truly grasp their essence.

In a sense, science is at a crossroads. One pathway leads to an empirical dead end; humanity has exhausted every possible route of explanation. The other involves either artificial augmentation (in essence, AI that can do the thinking for us) or a fundamental restructuring of how science conducts its business. Science is in danger of information overload; the limitations introduced by a generation of unrelenting technical advancement and increasingly complex tools of observation have taken their toll. Empirical progress is stalling, possibly due to a lack of understanding by those doing the observing. Science is detaching from its observations at an alarming rate and, if we aren’t careful, is in danger of losing sight of what the game is all about: the quest for knowledge and understanding of the universe in which we live.

Morality is a phenomenon that permeates both society as a whole and, individually, the consciousness of independent entities. It is a force that regularly influences our behaviour and is experienced (in some form or another) universally, species-wide. Intuitively, morality seems to be, at the very least, a sufficient condition for the creation of human groups. Without it, co-operation between individuals would be non-existent. But does morality run deeper? Is it, in fact, a necessary condition of group formation and a naturally emergent phenomenon that stems from the interaction of replicating systems? Or can morality only be experienced by organisms operating on a higher plane of existence – those that have the required faculties with which to weigh up pros and cons, engage in moral decision-making and other empathic endeavours (related to theory of mind)?

The resolution to this question depends entirely on how one defines the term. If we take morality to encompass the act of mentally engaging in self-reflective thought as a means with which to guide observable behaviours (acting in either selfish or selfless interests), then the answer to our question is yes, morality seems to be inescapably and exclusively linked to humanity. However, if we tweak this definition and look at the etiology of morality – where the term draws its roots and how it developed over time – one finds that even the co-operative behaviours of primitive organisms could be said to constitute some sort of basic morality. If we delve even deeper and ask how such behaviours came to be, we find that the answer is not quite so obvious. Can a basic version of morality (observable through cooperative behaviours) result as a natural consequence of interactions beyond the singular?

When viewed from this perspective, cooperation and altruism seem highly unlikely; a system of individually competing organisms would, logically, evolve to favour the individual rather than the group. This question is especially pertinent when studying cooperative behaviours in bacteria or more complex, multicellular forms of life, as they lack a consciousness capable of considering delayed rewards or benefits from selfless acts.

In relation to humanity, why are some individuals born altruistic while others take advantage without cause for guilt? How can ‘goodness’ evolve in biological systems when it runs counter to the benefit of the individual? These are the questions I would like to explore in this article.

Morality, in the traditional, philosophical sense is often constructed in a way that describes the meta-cognitions humans experience in creating rules for appropriate (or inappropriate) behaviour (inclusive of mental activity). Morality can take on a vast array of flavours: evil at one extreme, goodness at the other. We use our sense of morality to plan and justify our thoughts and actions, incorporating it into our mental representations of how the world functions and conveys meaning. Morality is dynamic; it changes with the flow of time, the composition of society and the maturity of the individual. We use it not only to evaluate our own intentions and behaviours, but also those of others. In this sense, morality is an overarching egoistic ‘book of rules’ which the consciousness consults in order to determine whether harm or good is being done. Thus, it seeps into many of our mental sub-compartments: decision making, behavioural modification, information processing, emotional response/interpretation and mental planning (‘future thought’), to name a few.

As morality enjoys such a privileged omnipresence, humanity has, understandably, long sought not only to provide standardised ‘rules of engagement’ regarding moral conduct but also to explain the underlying psychological processes and development of our moral capabilities. Religion, then, could perhaps be the first such attempt at explanation. It certainly contains many of the idiosyncrasies of morality and proposes a theistic basis for human moral capability. Religion removes ultimate moral responsibility from the individual, instead placing it upon the shoulders of a higher authority – god. The individual is tasked with simple obedience to the moral creeds passed down from those privileged few who are ‘touched’ with divine inspiration.

But this view does morality no justice. Certainly, if one does not subscribe to theistic beliefs then morality is in trouble; by this extreme positioning, morality is synonymous with religion, and one definitely cannot live without the other.

Conversely (and reassuringly), in modern society we have seen that morality does exist in individuals who lack spirituality. It has been reaffirmed as an intrinsically human trait with deeper roots than the scripture of religious texts. Moral understanding has matured beyond the point of appealing to a higher being and has reattached itself firmly to the human mind. The problem with this newfound interpretation is that in order for morality to be considered a naturally emergent product of biological systems, moral evolution is a necessary requirement. Put simply, natural examples of moral systems (consisting of cooperative behaviour and within-group preference) must be observable in the natural environment. Moral evolution must be a naturally occurring phenomenon.

A thought experiment known as the “Prisoner’s Dilemma” succinctly summarises the inherent problems with the natural evolution of mutually cooperative behaviour. The scenario consists of two prisoners who are seeking an early release from jail. Each is given the choice of either a) betraying their cellmate – ‘defecting’ – walking free if the other stays silent while the other has their sentence increased, or b) staying silent – ‘cooperating’ – with both receiving a shorter sentence if neither talks. It becomes immediately apparent that for both parties to benefit, both should remain silent and enjoy a reduced incarceration period. Unfortunately, and this is the catalyst for terming the scenario a dilemma, the real equilibrium point is for both parties to betray. Whatever your partner does, betrayal improves your own position: you walk free if they stay silent, and you avoid the worst sentence if they talk. Since the same logic applies to both, the dominant strategy results in betrayal by both parties, even though each would have fared better under mutual silence. In the case of humans, it seems that some sort of meta-analysis has to be done, an nth-order degree of separation (thinking about thinking about thinking), before settling on a strategy.
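The dominance argument can be made concrete in a few lines. The sentence lengths below are invented for illustration; any payoffs with the same ordering produce the same conclusion.

```python
# Hypothetical payoffs, expressed as years of jail time (lower is better).
# Keys are (my_move, their_move).
years = {
    ("cooperate", "cooperate"): 1,   # both stay silent: short sentences
    ("cooperate", "defect"):    10,  # I stay silent, my partner betrays me
    ("defect",    "cooperate"): 0,   # I betray and walk free
    ("defect",    "defect"):    5,   # mutual betrayal: medium sentences
}

def best_response(their_move):
    """Return the move that minimises my sentence, given my partner's move."""
    return min(("cooperate", "defect"), key=lambda m: years[(m, their_move)])

# Defection is the best response to either choice, so rational players
# land on mutual defection despite mutual cooperation being better for both.
print(best_response("cooperate"), best_response("defect"))  # defect defect
```

The dilemma is visible in the last line: the equilibrium (5 years each) is strictly worse for both than mutual cooperation (1 year each).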

Here we have an example of the end product; an advanced kind of morality resulting from social pressures and their influence on overall outcome (should I betray or cooperate – do I trust this person?). In order to look at the development of morality from its more primal roots, it is prudent to examine research in the field of evolutionary biology. One such empirical investigation (conducted by Aviles, 2002) that is representative of the field involves the mathematical simulation of interacting organisms. Modern computers lend themselves naturally to the task of genetic simulation. Due to the iterative nature of evolution, thousands of successive generations live, breed and die in the time it takes the computer’s CPU to crunch through the required functions. Aviles (2002) took this approach and created a mathematical model that begins at t = 0 and follows pre-defined rules of reproduction, genetic mutation and group formation. The numerical details are irrelevant; suffice it to say that cooperative behaviours emerged in combination with ‘cheaters’ and ‘freeloaders’. Thus we see the dichotomous appearance of a basic kind of morality that has evolved spontaneously and naturally, even though the individual may suffer a ‘fitness’ penalty. More on this later.

“[the results] suggest that the negative effect that freeloaders have on group productivity (by failing to contribute to communal activities and by making groups too large) should be sufficient to maintain cooperation under a broad range of realistic conditions even among nonrelatives and even in the presence of relatively steep fitness costs of cooperation” (Aviles, 2002)

Are these results translatable to reality? It is all well and good to speak of digital simulations with vastly simplified models guiding synthetic behaviour; the real test comes in observation of naturally occurring forms of life. Discussion by Kreft and Bonhoeffer (2005) lends support to the reality of single-celled cooperation, going so far as to suggest that “micro-organisms are ever more widely recognized as social”. This is surely an exaggerated caricature of the more common definition of ‘socialness’; however, the analogy is appropriate. Kreft and Bonhoeffer effectively summarise the leading research in this field, and put forward the resounding view that single-celled organisms can evolve to show altruistic (cooperative) behaviours. We should hope so; otherwise the multicellularity which led to the evolution of humanity would have been nullified before our species’ development even started!

But what happened to those pesky mutations that evolved to look out for themselves? Defectors (choosing not to cooperate) and cheaters (choosing to take advantage of altruists) are also naturally emergent. Counter-intuitively, such groups are shown to be kept in their place by the cooperators. Too many cheaters, and the group fails through exploitation. The key lies in the dynamic nature of this process. Aviles (2002) found that in every simulation, the number of cheaters was kept in control by the dynamics of the group. A natural equilibrium developed, with the total group size fluctuating according to the number of cheaters versus cooperators. In situations where cheaters ruled, the group size dropped dramatically, resulting in a lack of productive work and reduced reproductive rates. Thus, the number of cheaters is kept in check by the welfare of the group. It’s almost a love/hate relationship; the system hates exploiters, but it also tolerates their existence (in sufficient numbers).
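This equilibrium dynamic can be illustrated with a toy replicator model. To be clear, this is not Aviles' actual model; the fitness functions and parameter values below are invented for illustration. Cooperators pay a fixed cost to help the group, while cheaters avoid that cost but are penalised as cheaters accumulate (the "groups too large" effect from the quote above), so the cooperator fraction settles at an interior equilibrium rather than collapsing to all-cheaters.

```python
def cooperator_equilibrium(cost=0.2, crowding=0.5, benefit=1.0, steps=200):
    """Replicator-style update on the cooperator fraction x (all values invented).

    Cooperators pay `cost` to produce the shared `benefit`; cheaters pay
    nothing but suffer a `crowding` penalty proportional to how many
    cheaters already exist.
    """
    x = 0.9  # start mostly cooperative
    for _ in range(steps):
        fit_coop = benefit * x - cost                 # pays the cost of helping
        fit_cheat = benefit * x - crowding * (1 - x)  # hurt by a glut of cheaters
        avg = x * fit_coop + (1 - x) * fit_cheat
        x = x * fit_coop / avg  # reproduce in proportion to relative fitness
    return x

# Cheaters invade until their crowding penalty matches the cooperators'
# cost, giving an interior equilibrium x* = 1 - cost/crowding = 0.6 here.
print(round(cooperator_equilibrium(), 3))
```

The point of the sketch is qualitative: neither strategy takes over, and the mix of cooperators and cheaters is self-regulating, just as in the simulations described above.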

Extrapolating from these conclusions, a logical outcome would be the universal adoption of cooperative behaviours. There are prime examples of this in nature; bee and ant colonies, migratory birds, various aquatic species, even humans (to an extent) all work together towards the common good. The reason why we don’t see this more often, I believe, is due to convergent evolution – different species solving the same problem from different approaches. Take flight, for example: it has evolved separately in both birds and insects. The likelihood of cooperation is also affected by external factors; evolutionary ‘pressures’ that can guide the flow of genetic development. The physical structure of the individual, environmental changes and resource scarcity are all examples of such factors that can influence whether members of the same species work together.

Humanity is a prime example; intrinsically we seem to have a sense of inner morality and a tendency to cooperate when the conditions suit. The addition of consciousness complicates morality somewhat, in that we think about what others might do in the same situation, defer to group norms/expectations, conform to our own premeditated moral guidelines and are paralysed by indecisiveness. We also factor in environmental conditions, manipulating situations through false displays of ‘pseudo-morality’ to ensure our survival in the event of resource scarcity. But when the conditions are just so, humanity does seem to pull up its trousers and bind together as a singular, collective organism. When push comes to shove, humanity can work in unison. However, just as bacteria evolve cheaters and freeloaders, so too does humanity give birth to individuals who seem to lack a sense of moral guidance.

Morality must be a universal trait, a naturally emergent phenomenon that predisposes organisms to cooperate towards the common good. But just as moral ‘goodness’ evolves naturally, so too does immorality. Naturally emergent cheaters and freeloaders are an intrinsic part of the evolution of biological systems. Translating these results to the plight of humanity, it becomes apparent that such individual traits are also naturally occurring in society. Genetically, and to a lesser extent environmentally, traits from both ends of the moral scale will always be a part of human society. This surely has implications for the plans of a futurist society relying solely on humanistic principles. Moral equilibrium is ensured, at least biologically, for better or worse. Whether we can physically change the course of natural evolution and produce a purely cooperative species is a question that can only be answered outside the realms of philosophy.

When people attempt to describe their sense of self, what are they actually incorporating into the resultant definition? Personality is perhaps the most common conception of self, with vast amounts of empirical validation. However, our sense of self runs deeper than such superficial descriptions of behavioural traits. The self is an amalgamation of all that is contained within the mind; a magnificent average of every synaptic transmission and neuronal network. Like consciousness, it is an emergent phenomenon (the sum is greater than the parts). But unlike consciousness, the self ceases to be when individual components are removed or modified. For example, consciousness is virtually unchanged (in the sense of what it defines – directed, controlled thought) by the removal of successive faculties. We can remove physical brain structures such as the amygdala and still utilise our capacities for consciousness, albeit losing a portion of the informative inputs. The self, however, is a broader term, describing the current mental state of ‘what is’. It is both descriptive, providing a broad snapshot of what we are at time t, and prescriptive, in that the sense of self influences how behaviours are actioned and information is processed.

In this article I intend to firstly describe the basis of ‘traditional’ measures of the self; empirical measures of personality and cognition. Secondly I will provide a neuro-psychological outline of the various brain structures that could be biologically responsible for eliciting our perceptions of self. Finally, I wish to propose the view that our sense of self is dynamic, fluctuating daily based on experience and discuss how this could affect our preconceived notions of introspection.

Personality is perhaps one of the most measured variables in psychology. It is certainly one of the most well-known, through its portrayal in popular science as well as self-help psychology. Personality could also be said to comprise a major part of our sense of self, in that the way in which we respond to and process external stimuli (both physically and mentally) has major effects on who we are as an entity. Personality is also incredibly varied; whether due to genetics, environment or a combination of both. For this reason, psychological study of personality takes on a wide variety of forms.

The lexical hypothesis, proposed by Francis Galton in the 19th century, became the first stepping stone from which the field of personality psychometrics was launched. Galton’s proposal was that the sum of human language, its vocabulary (lexicon), contains the necessary ingredients from which personality can be measured. During the 20th century, others expanded on this hypothesis and refined Galton’s technique through the use of factor analysis (a mathematical model that summarises common variance into factors). Methodological and statistical criticisms of this method aside, the lexical hypothesis proved useful in classifying individuals into categories of personality. However, this model is purely descriptive; it simply summarises information, extracting no deeper meaning and providing no background theory with which to explain the etiology of such traits. Those wishing to learn more about descriptive measures of personality can find this information under the headings ‘The Big Five Inventory’ (OCEAN) and Hans Eysenck’s Three Factor model (PEN).
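The spirit of the lexical approach can be sketched with a toy example. The ratings below are invented, and the greedy correlation grouping is a crude stand-in for real factor analysis, but it shows the core idea: trait adjectives whose ratings co-vary across people collapse into a single underlying factor.

```python
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length rating lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Five people rate themselves 1-10 on four trait adjectives (invented data).
ratings = {
    "talkative": [9, 8, 2, 7, 3],
    "outgoing":  [8, 9, 1, 8, 2],
    "anxious":   [2, 3, 9, 1, 8],
    "tense":     [3, 2, 8, 2, 9],
}

def group_adjectives(data, threshold=0.8):
    """Greedily merge adjectives that correlate strongly with a group's first word."""
    groups = []
    for word, scores in data.items():
        for group in groups:
            if pearson(scores, data[group[0]]) > threshold:
                group.append(word)
                break
        else:
            groups.append([word])
    return groups

# Recovers two clusters: an extraversion-like and a neuroticism-like 'factor'.
print(group_adjectives(ratings))
```

Real factor analysis works on far larger adjective sets and extracts continuous factor loadings rather than hard clusters, but the input (co-variation in language-based ratings) is exactly what Galton's hypothesis pointed at.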

Neuropsychological methods of defining personality are less reliant on statistical methods and utilise a posteriori knowledge (as opposed to the lexical hypothesis, which relies on reasoning/deduction). Thus, such theories have a solid empirical background, with first-order experimental evidence to support the conclusions reached. One such theory is the BIS/BAS (behavioural inhibition/activation system). Proposed by Gray (1982), the BIS/BAS conception of personality builds upon individual differences in cortical activity to arrive at the observable differences in behaviour. Such a revision of personality turns the tables on traditional methods of research in this area, moving away from superficially describing the traits to explaining the underlying causality. Experimental evidence has lent support to this model through direct observation of cortical activity (functional MRI scans). Addicts and sensation seekers are found to have high scores on behavioural activation (associated with increased pre-frontal lobe activity), while introverts score high on behavioural inhibition. This seems to match up with our intuitive preconceptions of these personality groupings; sensation seekers are quick to action, in short they tend to act first and think later. Conversely, introverts act more cautiously, adhering to a policy of ‘looking before they leap’. Therefore, while not encapsulating as wide a variety of individual personality factors as the ‘Big Five’, the BIS/BAS model and others based on neurobiological foundations seem to be tapping into a more fundamental, materialistic/reductionist view of behavioural traits. The conclusion here is that directly observable events and the resulting individual differences arise from specific regions in the brain.

Delving deeper into this neurology, the sense of self may have developed as a means to an end; the end in this case being the prediction of others’ behaviour. Our sense of self and consciousness may thus have evolved as a way of internally simulating how our social competitors think, feel and act. V. Ramachandran (M.D.), in his Edge.org exclusive essay, calls upon his neurological experience and knowledge of neuroanatomy to provide a unique insight into the physiological basis of self. Mirror neurons are thought to act as mimicking simulators of external agents, in that they show activity both while performing a task and while observing someone else performing the same task. It is argued that such neuronal conglomerates evolved due to social pressures; a method of second-guessing the possible future actions of others. The ability to direct these networks inwards was an added bonus. The human capacity for constructing a valid theory of mind also gifted us with the ability to scrutinise the self from a meta-perspective (an almost ‘out-of-body’ experience, à la a ‘Jiminy Cricket’-style conscience).

Mirror neurons also act as empathy meters, firing during moments of emotional significance. In effect, our ability to recognise the feelings of others stems from a neuronal structure that actually elicits such feelings within the self. Our sense of self is thus inescapably intertwined with the selves of other agents. Like it or not, biological dependence on the group has resulted in the formation of neurological triggers which fire spontaneously and without our consent. In effect, the intangible self can be influenced by other intangibles, such as emotional displays. We view the world through ‘rose-coloured glasses’, with an emphasis on theorising the actions of others through how we would respond in the same situation.

So far we have examined the role of personality in explaining a portion of what the term ‘self’ conveys. In addition, a biological basis for self has been introduced which suggests that both personality and the neurological capacity for introspection are both anatomically definable features of the brain. But what else are we referring to when we speak of having a sense of self? Surely we are not doing this construct justice if all that it contains is differences in behavioural disposition and anatomical structure.

Indeed, the sense of self is dynamic. Informational inputs constantly modify and update our knowledge banks, which, in turn, has ramifications for the self. Intelligence, emotional lability, preferences, group identity, proprioception (spatial awareness); the list is endless. Although some of these categories of self may be collapsible into higher-order factors (personality could incorporate preference and group behaviour), it is arguable that to do so would result in a loss of information. The point here is that looking at the bigger picture may obscure the finer details that can lead to further enlightenment on what we truly mean when we discuss self.

Are you the same person you were 10 years ago? In most cases, if not all, the answer will be no. Core traits such as temperament may remain relatively stable; however, individuals arguably change and grow over time. Thus, their sense of self changes as well. Some people may become more attuned to their sense of self than others, developing a close relationship through introspective analysis. Others, sadly, seem to lack this capacity for meta-cognition: thinking about thinking, asking the questions ‘why’, ‘who am I’ and ‘how did I come to be’. I believe this has implications for the growth of humanity as a species.

Is a state of societal eudaimonia sustainable in a population that has varying levels of ‘selfness’? If self is linked to the ability to simulate the minds of others, which in turn depends upon both neurological structure (with genetic mutation possibly reducing or modifying such capacities) and empathic responses, the answer to this question is a resounding no. Whether due to nature or nurture, society will always have individuals who are more self-aware than others, and as a result, more attentive to and aware of the mental states of others. A lack of compassion for the welfare of others, coupled with an inability to analyse the self with any semblance of drive and purpose, spells doom for a harmonious society. Individuals lacking in self will refuse, through ignorance, to grow and become socially aware.

Perhaps collectivism is the answer; forcing groups to cohabit may introduce an increased appreciation for theory of mind. However, if the basis of this process is mainly biological (as it would seem to be), such a policy would be social suicide. The answer could dwell in the education system. Introducing children to the mental pleasures of psychology and, at a deeper level, philosophy, may result in the recognition of the importance of self-reflection. The question here is not only whether students will grasp these concepts with any enthusiasm, but also whether such traits can be taught via traditional methods. More research must be conducted into the nature of the self if we are to have an answer to this quandary. Is self related directly to biology (we are stuck with what we have), or can it be instilled via psycho-education and a modification of environment?

Self will always remain a mystery due to its dynamic and varied nature. It is with hope that we look to science and encourage its attempts to pin down the details of this elusive subject. Even if this quest fails to produce a universal theory of self, perhaps it will succeed in shedding at least some light onto the murky waters of self-awareness. In doing so, psychology stands to benefit from both a philosophical and a clinical perspective, increasing our knowledge of the causality underlying disorders of the self (body dysmorphia, depression/suicide, self-harming).

If you haven’t already done so, take a moment now to begin your journey of self discovery; you might just find something you never knew was there!

Most of us would like to think that we are independent agents in control of our destiny. After all, free-will is one of the unique phenomena that humanity can claim as its own – a fundamental part of our cognitive toolkit. Experimental evidence, in the form of neurological imaging, has been interpreted as an attack on mental freedom. Studies that highlight the possibility of unconscious activity preceding the conscious ‘will to act’ seem to almost sink the arguments from non-determinists (libertarians). In this article I plan to outline this controversial research and offer an alternative interpretation, one which does not infringe on our ability to act independently and of our own accord. I would then like to explore some of the situations where free-will could be ‘missing in action’ and suggest that the frequency at which this occurs is larger than expected.

A seminal investigation conducted by Libet et al (1983) first challenged (empirically) our preconceived notions of free-will. The setup consisted of an electroencephalograph (EEG, measuring overall electrical potentials through the scalp) connected to the subject, and a large clock with markings denoting various time periods. Subjects were required to simply flick their wrist whenever a feeling urged them to do so. The researchers were particularly interested in the “Bereitschaftspotential”, or readiness potential (RP); a signature EEG pattern of activity that signals the beginning of volitional initiation of movement. Put simply, the RP is a measurable spike in electrical activity from the pre-motor region of the cerebral cortex – a mental preparatory action that puts the wheels of movement into motion.

Results of this experiment indicated that the RP significantly preceded the subjects’ reported sensations of conscious awareness. That is, the act of wrist flicking seemed to precede conscious awareness of said act. While the actual delay between RP detection and conscious registration of the intent to move was small (by our standards), the half-second gap was more than enough to assert that a measurable difference had occurred. Libet interpreted these findings as having vast implications for free-will. It was argued that since electrical activity preceded conscious awareness of the intent to move, free-will to initiate movement was non-existent (Libet allowed free-will to control movements already in progress, that is, to modify their path or act as a final ‘veto’, allowing or disallowing the movement).

Many have taken the time to respond to Libet’s initial experiment. Daniel Dennett (in his book Freedom Evolves) provides an apt summary of the main criticisms. The most salient rebuttal comes in the form of signal delay. Consciousness is notoriously slow in comparison to the automated mental processes that act behind the scenes. Take the sensation of pain, for example. Stimulation of the nerve cells must first reach sufficient levels for an action potential to fire, causing the axon terminals to release neurotransmitters into the synaptic gap. The second-order neuron then receives these chemical messengers, modifying its electrical charge and causing another action potential to fire along its axon. Taking into account the distance that this signal must travel (at anywhere from 1–10 m/s), it will eventually arrive at the thalamus, the brain’s sensory ‘hub’, where it is then routed to consciousness. Consequently, there is a measurable gap between the external event and conscious awareness; perhaps made even larger if the signal is small (low pain) or the mind is distracted. In this instance too, electrical activity is taking place and preceding consciousness. Arguably the same phenomenon could be occurring in the Libet experiment.
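The transit time alone is easy to estimate. A back-of-the-envelope calculation, assuming a signal path of roughly a metre (say, foot to brain) and the 1–10 m/s conduction speeds mentioned above:

```python
# Rough conduction delay for a pain signal travelling about a metre,
# at the slow speeds typical of pain fibres. Distance is an assumption.
distance_m = 1.0

delay_slow = distance_m / 1.0    # 1.0 s at 1 m/s
delay_fast = distance_m / 10.0   # 0.1 s at 10 m/s
```

Even before any cortical processing, raw transit time spans roughly 100 to 1000 milliseconds, which is already in the same range as the half-second gap Libet measured.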

Delays are inevitably introduced when consciousness is involved in the equation. The brain is composed of a conglomerate of specialised compartments, each communicating with its neighbours and performing its own part of the process in turn. Evolution has drafted brains that act automatically first, and consciously second. Consequently, the automatic gains priority over the directed. Reflexes and instincts act to save our skins long before we are even aware of the problem. Naturally, electrical activity in the brain could thus precede conscious awareness.

In the Libet experiment, the experimental design itself could be misleading. Libet seems to equate his measurement of consciousness timing with free-will, when in actual fact the agent has already decided freely that they will follow instructions. What I am trying to say here is that free-will does not have to act as an initiator of every movement; rather, it acts to ‘set the stage’ for events and authorises the operation to go ahead. When told to move voluntarily, the agent’s will makes the decision to either comply or rebel. Compliance causes the agent to authorise movement, but the specifics are left up to chance. Perhaps a random input generator (quantum indeterminacy?) provides the catalyst with which this initial order combines to create the RP and eventual movement. Conscious registration of this fact only occurs once the RP is already starting to form.

Looking at things from this perspective, consciousness seems to play a constant game of ‘catch-up’ with the automated processes in our brains. Our will is content to act as a global authority, leaving the more menial and mundane tasks up to our brain’s automated sub-compartments. Therefore, free-will is very much alive and kicking, albeit sometimes taking a back-seat to the unconscious.

We began by exploring the nature of free-will and how it links in with consciousness. But what of those unconscious instincts that seek to override our sense of direction and regress humanity back to its more animalistic and primitive ancestry? Such instincts act covertly, sneaking into action whilst our will is otherwise indisposed. Left unchecked, an agent who gives themselves completely to urges and evolutionary drives could be said to be devoid of free-will, or at the very least somewhat lacking compared to more ‘aware’ individuals. Take sexual arousal, for instance. Like it or not, our bodies act on impulse, removing free-will from the equation with simplistic stimulus-response conditioning processes. Try as we might, sexual arousal (if allowed to follow its course) acts immediately upon visual or physical stimulation. It is only when consciousness kicks into gear and yanks on the leash attached to our unconscious that control is regained. Eventually, with enough training, it may be possible to override these primitive responses, but the conscious effort required to sustain such a project would be psychically draining.

Society also seeks to rob us of our free-will. People are pushed and pulled by group norms, the expectations of others and the messages that bombard us on a daily basis. Rather than encouraging individualism, modern society urges us to follow trends. Advertising is crafted in such a way that the individual may even be fooled into thinking they are arriving at decisions of their own volition (subliminal messaging), when in actual fact it is simply tapping into some basic human need for survival (food, sex, shelter/security, etc.).

Ironically, science itself could also be said to be reducing the amount of free-will we can exert. Scientific progress seeks to make the world deterministic; that is, totally predictable through increasingly accurate theories. While the jury is still out as to whether ‘ultimate’ accuracy in prediction will ever occur (arguably, there are not enough bits of information in the universe with which to construct a computer powerful enough to complete such a task), science is coming closer to a deterministic framework whereby the paths of individual particles can be predicted. Quantum physics is but the next hurdle to be overcome in this quest for omniscience. If the inherent randomness that lies within quantum processes is ever fully explained, perhaps we will be at a place (at least scientifically) to model an individual’s future actions based on a number of initial variables.

What could this mean for the nature of free-will? If past experiments are anything to go by (Libet et al.), it will rock our sense of self to the core. Are we but behaviouristic automatons, as the psychologist Skinner proposed? Delving deeper into the world of the quanta, will we ever be able to realistically model and predict the paths of individual particles, and thus the future course of the entire system? Perhaps the Heisenberg Uncertainty Principle will spare us from this bleak fate. The irreducible randomness of the quantum wave function could prove to be the final insurmountable obstacle that neurological researchers and philosophers alike will never be able to conquer.

While I am all for scientific progress and increasing the bulk of human knowledge, perhaps we are jumping the gun with this free-will business. Perhaps some things are better left mysterious and unexplained. A defeatist attitude if ever I saw one, but it could be justified. After all, how would you feel if you knew every action was decided before you were even a twinkle in your father’s eye? Would life even be worth living? Sure, but it would take a lot of reflection and a personality that could either deny or reconcile the feelings of unease that such a proposition brings.

They were right; ignorance really is bliss.

Compartmentalisation of consciousness