
The supreme scale and vast expanse of the Universe is awe-inspiring. Contemplation of its grandeur has been described as a kind of scientific spiritualism: broadening the mind's horizons in a vain attempt to grasp our place amongst such awesome magnitude. Containing some 200 billion stars (or 400 billion, depending on whom you ask), our relatively humble home in the Milky Way is but one of billions of such galaxies, each hosting countless billions of stars of its own. Likewise, our small blue dot of a planet is but one of possibly billions of similar planets spread throughout the Universe.

To think that we are alone in such a vast expanse of space is not only unlikely, but irrational. For eons, human egocentrism has blinkered ideology and spirituality. Our belief systems place humanity upon a pedestal, implying that we are alone and incredibly unique. The most salient example is the 'Almagest', Ptolemy's Earth-centred model of the Universe.

While we may be unique, the tendency of belief systems to invoke meaning in our continued existence leaves no place for humility. The result of this human-focussed Universe is one where our race arrogantly fosters its own importance. Consequently, the majority of the populace has little or no interest in cosmic contemplation, nor an appreciation of the truly objective realisation that Earth and our intelligent civilisation do not alone give definition to the cosmos. The Universe will continue to exist as it always has, whether we are around or not.

But to do otherwise would arguably have spelt doom for our civilisation, and it is easy to see why humans have placed so much importance upon themselves in the grand scheme of things. The Earth is home to just one intelligent species, namely us. If the Neanderthals had survived, it surely would have been a different story (in terms of the composition of social groups). Groups tend to unite against common foes, so a planet with two or more intelligent species would likely draw its distinctions less within each species and more between them. Given the situation we find ourselves in as the undisputed lords of this planet, it is no wonder we attach such special significance to ourselves as a species (and to discrediting the idea that we are not alone in the Universe).

It seems as if humanity needs its self-esteem bolstered when faced with the harsh reality that our existence is trivial when set against the likelihood of other forms of life and the grandeur of the Universe at large. Terror Management Theory is but one psychological hypothesis as to why this may be the case. The main postulate of this theory is that our mortality is the most salient factor throughout life. A tension is created because, on the one hand, death is inevitable, while on the other we are intimately aware of its approach and desperately try to minimise its effects on our lives. Thus it is proposed that humanity attempts to minimise the terror associated with impending death through cultural and spiritual beliefs (an afterlife, the notion of mind/body duality in which the soul continues on after death). TMT puts an additional spin on the situation by suggesting that cultural world-views serve the same terror-reducing function, hence the tendency for people to protect these values at all costs (reaffirming cultural beliefs by persecuting the views of others reduces the tension produced by the awareness of death).

While the empirical validity of TMT is questionable (experimental evidence is decidedly lacking), human belief systems do express an arrogance that prevents a more holistic system from emerging. The Ptolemaic view dominated scientific inquiry during the Middle Ages, most likely due to its adoption by the church. Having the Earth as the centre of the Universe coincided nicely with theological beliefs that humanity is the sole creation of god. It may also have improved the 'scientific' standing of theology, in that it was apparently supported by theory. What the scholars of this period failed to appreciate was the principle of Occam's Razor: of two theories that explain the same observations, the simpler is to be preferred. The Ptolemaic system could explain the orbits of planetary bodies, but only at the expense of simplicity, adding epicycles to account for the apparently anomalous motion of the planets.

Modern cosmology has thankfully overthrown such models; however, the ideology remains. Perhaps hampered and weighed down by daily activities, people simply do not have the time to consider an existence outside of their own immediate experience. From an evolutionary perspective, an individual would risk death if thought processes were wasted on external contemplation rather than the selfish and immediate satisfaction of biological needs. Now that society has progressed to a point where time can be spent on intellectual pursuits, it makes sense that outmoded beliefs regarding our standing in the Universe should be rectified.

But just how likely is the possibility of life elsewhere? Science-fiction has long been an inspiration in this regard, its tales of Martian invaders striking terror into generations of children. The first directed empirical venture in this area came about with the SETI conference at Green Bank, West Virginia in 1961. At this conference, not only were the efforts of radio-astronomers to detect foreign signals discussed in detail, but one particular formulation was also put forward. Known as the Drake Equation, it aimed to quantify and humanise the very large numbers that are thrown about when discussing the probability of life elsewhere in the galaxy.

Basically, the equation takes a series of values thought to contribute to the likelihood of intelligent life evolving, multiplies these factors together and outputs a single number: the projected number of communicating civilisations in the galaxy. Of course, the majority of the numbers used are little more than educated guesses. However, even with conservative values, this number comes out above 1. Promising stuff.
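
To make the arithmetic concrete, here is a minimal sketch of the calculation in Python. Every parameter value below is an illustrative assumption standing in for the 'educated guesses' mentioned above, not a measured quantity.

```python
# Drake Equation sketch: N = R* x fp x ne x fl x fi x fc x L
# All values are illustrative guesses, chosen only to show the calculation.
R_star = 7        # average rate of star formation in the galaxy (stars/year)
f_p    = 0.5      # fraction of stars with planetary systems
n_e    = 2        # planets per such system able to support life
f_l    = 0.33     # fraction of those planets where life actually appears
f_i    = 0.01     # fraction of life-bearing planets that evolve intelligence
f_c    = 0.01     # fraction of intelligent species producing detectable signals
L      = 10_000   # years a communicating civilisation remains detectable

N = R_star * f_p * n_e * f_l * f_i * f_c * L
print(f"Projected communicating civilisations in the galaxy: {N:.1f}")  # ~2.3
```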

Fortunately, with each astronomical advance these numbers are further refined, giving a (hopefully) more accurate picture of reality. The SETI project may have even found the first extra-terrestrial signal in 1977. Dubbed the ‘Wow!’ signal (based on the researcher’s margin comments on the printout sheet), this burst of activity bore all the hallmarks of artificial origin. Sadly, this result has not been replicated despite numerous attempts.

All hope is not lost. SETI has received a revitalising injection of funds from none other than Microsoft co-founder Paul Allen, as well as support from the immensely popular SETI@Home initiative, which utilises distributed computing to sort through the copious amounts of data generated. Opponents of SETI fall into two main camps: those who believe it is a waste of funds better spent on more Earthly concerns (a valid point) and those who perceive SETI as dangerous to our continued existence. The latter point is certainly plausible (albeit unlikely). The counter-claim in this instance is that if such a civilisation did exist and was sufficiently advanced to travel interstellar distances, the last thing on its mind would be the annihilation of our insignificant species.

The notion of Star Trek's 'Prime Directive' seems the most likely situation to have unfolded thus far. Extra-terrestrial civilisations would most likely adopt a policy of non-interference with our meagre planet, perhaps actively disguising their transmissions in an attempt to hide their activity and prevent 'cultural contamination'.

Now all we need is for the faster-than-light barrier to be crossed and the Vulcans will welcome us into the galactic society.


The human brain and the Internet share a key feature in their layout: a web-like structure of individual nodes acting in unison to transmit information between physical locations. In brains we have neurons, composed in turn of myelinated axons and dendrites. The Internet is built from similar entities, with connections such as fibre optics and Ethernet cabling acting as the mode of transport for information, while computers and routers act as gateways (boosting/re-routing) and originators of that information.

How can we describe the physical structure and complexity of these two networks? Does this offer any insight into their similarities and differences? What is the plausibility of a conscious Internet? These are the questions I would like to explore in this article.

At a very basic level, both networks are organic in nature (surprisingly, in the case of the Internet); that is, they are not the product of a ubiquitous 'designer' and are given the freedom to evolve as their environment sees fit. The Internet is permitted to grow without a directed plan; new nodes and capacity are added haphazardly. The naturally evolved topology of the Internet is a distributed one: the destruction of individual nodes has little effect on the overall operational effectiveness of the network. Each node has multiple connections, resulting in an intrinsic redundancy whereby traffic is automatically re-routed to its destination via alternate paths.

We can observe a similar behaviour in the human brain. Neurological plasticity serves a function akin to the distributed nature of the Internet. Following injury to regions of the brain, adjacent areas can compensate for lost abilities by restructuring neuronal patterns. For example, the effects of injuries to the motor area of the frontal cortex can be minimised by adjacent regions 're-learning' otherwise mundane tasks that were lost as a result of the injury. While such recoveries are entirely possible with extensive rehabilitation, two key factors determine the likelihood and efficiency of the process: the severity of the injury (percentage of brain tissue destroyed, location of injury) and, following from this, the length of recovery. These factors introduce the first discrepancy between the two networks.

Unlike the brain, the Internet is resilient to attacks on its infrastructure. Local downtime is a minor inconvenience as traffic moves around such bottlenecks by taking the next fastest path available. Destruction of multiple nodes has little effect on the overall web of information. Users may lose access to, or experience slowness in, certain areas, but compared to the remainder of possible locations (not to mention redundancies in content – simply obtain the information elsewhere) such lapses are just momentary inconveniences. But are we suffering from a lack of perspective when considering the similarities of the brain and the virtual world? Perhaps the problem is one of scale. The destruction of nodes (computers) could instead be interpreted in the brain as the removal of individual neurons. If one takes this proposition seriously, the differences begin to lose their lucidity.

An irrefutable difference, however, arises when one considers both the complexity and the purpose of the two networks. The brain contains some 100 billion neurons, whilst the Internet comprises a measly 1 billion users by comparison (with users roughly equating to the number of nodes, or access terminals, physically connected to the Internet). Brains are the direct product of evolution, created specifically to keep the organism alive in an unwelcoming and hostile environment. The Internet, on the other hand, is designed to accommodate a never-ending torrent of expanding human knowledge. Thus the dichotomy in purpose between these two networks is quite distinct, with the brain focusing on reactionary and automated responses to stimuli while the Internet aims to store information and process requests for its extraction to the end user.

Again we can take a step back and consider the similarities of these two networks. Looking at topology, it is apparent that the distributed nature of the Internet is similar to the structure and redundancy of the human brain. In addition, the Internet is described as a ‘scale-free’ or power-law network, indicating that a small percentage of highly connected nodes accounts for a very large percentage of the overall traffic flow. In effect, a targeted attack on these nodes would be successful in totally destroying the network. The brain, by comparison, appears to be organised into distinct and compartmentalised regions. Target just a few or even one of these collections of cells and the whole network collapses.

It would be interesting to empirically investigate the hypothesis that the brain is also a scale-free network whose connectivity follows a power law. Targeting the thalamus (a central hub through which sensory information is relayed) for destruction might have the same devastating effect on the brain as destroying the ICANN headquarters in the USA (responsible for domain name assignment) would have on the Internet.
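
As a rough illustration of the underlying idea (and emphatically not a model of the brain), the sketch below compares how a scale-free network and a size-matched random network cope with the targeted removal of their most-connected hubs. The node counts and removal fraction are arbitrary choices, and the exact numbers printed will vary with them.

```python
# Targeted-attack comparison: scale-free (power-law) vs random network.
# Parameters are illustrative only.
import networkx as nx

def largest_component_fraction(G):
    # Fraction of surviving nodes that remain in the largest connected piece
    return max(len(c) for c in nx.connected_components(G)) / G.number_of_nodes()

n = 2000
scale_free = nx.barabasi_albert_graph(n, 2, seed=1)                        # hub-dominated
random_net = nx.gnm_random_graph(n, scale_free.number_of_edges(), seed=1)  # same size

for name, G in [("scale-free", scale_free), ("random", random_net)]:
    G = G.copy()
    hubs = sorted(G.degree, key=lambda kv: kv[1], reverse=True)[: n // 50]  # top 2% hubs
    G.remove_nodes_from(node for node, _ in hubs)
    print(f"{name}: largest surviving component = {largest_component_fraction(G):.2f}")
```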

As mentioned above, the purposes of these two networks differ, yet they share the common bond of processing and transferring information. At this superficial level, the brain and the Internet are merely storage and retrieval devices, upon which the user (or directed thought process) is sent on a journey through a virtual world towards an intended target (notwithstanding the inevitable sidetracks along the way!). Delving deeper, the differences in purpose act as a deterrent when one considers the plausibility of consciousness and self-awareness.

Which brings us to the cusp of the article. Could the Internet, given sufficient complexity, become a conscious entity in the same vein as the human brain? Almost immediately the hypothesis is dashed by its rebellion against common sense. Surely it is impossible to propose that a communications network based upon binary machines and internet protocols could ever achieve a higher plane of existence. But the answer might not be as clear-cut as one would like to believe. Controversially, both networks could be said to be controlled by indeterminate processes. The brain, at its very essence, is governed by quantum unpredictability. Likewise, activity on the Internet is directed by self-aware, indeterminate beings (which are, in turn, the result of quantum processes). At what point does the flow of information over a sufficiently complex network result in an emergent complexity, most notably characterised by a self-aware intelligence? Just as neurons react to incoming electrical pulses of information, so too do the computers of the Internet pass along packets of data. Binary code is equated with action potentials: either information is transmitted or it is not.

Perhaps the most likely (and worrying) outcome in a futurist world would be the integration of an artificial self-aware intelligence with the Internet. Think Skynet from the Terminator franchise. In all likelihood such an agent would have the tools at its disposal to hijack the Internet's component nodes and reprogram them in such a fashion as to facilitate the growth of an even greater intelligence. The analogy here is that if the linking of human minds were possible, the resulting intelligence would be great indeed – imagine a distributed network of humanity, each individual brain linked to thousands of others in a grand web of shared knowledge and experience.

Fortunately such a doomsday outlook is most likely constrained within the realms of science fiction. Reality tends to have a reassuring banality about it that prevents the products of human creativity from becoming something more solid and tangible. Whatever the case may be in regards to the future of artificial intelligence, the Internet will continue to grow in complexity and penetration. As end user technology improves, we take a continual step closer towards an emergent virtual consciousness, whether it be composed of ‘uploaded’ human minds or something more artificial in nature. Let’s just hope that a superior intelligence can find a use for humanity in such a future society.

A recurring theme and technological prediction of futurists is one in which human intelligence supersedes that of the previous generation through artificial enhancement. This is a popular topic on the Positive Futurist website maintained by Dick Pelletier, and one which provides food for thought. Mr Pelletier outlines a near future (2030s) where a combination of nanotechnology and insight into the inner workings of the human brain facilitate an exponential growth of intelligence. While the accuracy of such a prediction is open to debate (specifically the technological possibilities of successful development within the given timeframe), if such a rosy future did come to fruition what would be the consequences on society? Specifically, would an increase of average intelligence necessarily result in an overall improvement to quality of life? If so, which areas would be mostly affected (eg morality, socio-economic status)? These are the questions I would like to explore in this article.

The main argument provided by futurists is that technological advances relating to nano-scale devices will soon be realised and implemented throughout society. By utilising these tiny automatons to the fullest extent possible, it is thought that both disease and aging could be eradicated by the middle of this century. This is due to the utility of nanobots, specifically their ability to carry out pre-programmed tasks in a collective and automated fashion without any conscious awareness on behalf of the host. In essence, nano devices could act as a controllable extension of the human body, giving health professionals the power to monitor and treat throughout the organism's lifespan. But the controllers of these instruments need to know what to target and how best to direct their actions; a possible sticking point in the futurists' plan. In all likelihood, however, such problems will prove to be only temporary hindrances, overcome through extensive testing and development.

Assuming that a) such technology is possible and b) it can be controlled to produce the desired results, the future looks bright for humanity. By further combining nanotechnology with cutting-edge neurological insight, it is feasible that intelligence could be artificially increased. The possibility of artificial intelligence and the development of an interface with the human mind almost ensures a future of rapid growth. To this end, an event aptly named the 'technological singularity' has been proposed: the point at which the extension of human ability through artificial means allows innovation to outpace unaided development; in short, humankind could advance technologically faster than the rate of its own input. While the plausibility of such an event is open to debate, it does sound feasible that artificial intelligence could assist us in developing new and exciting breakthroughs in science. If conscious, self-directed intelligence were to be artificially created, this might assist humanity even further; perhaps the design of specific minds would be possible (need a physics breakthrough? just create an artificial Einstein). Such an idea hinges entirely on the ability of neuroscientists to unlock the secrets of the human brain and allow the manipulation or 'tailoring' of specific abilities.

While the jury is still out on how such a feat will be made technologically possible, a rough outline of the methodologies involved in artificial augmentation could be enlightening. Already we are seeing the effects of a society increasingly driven by information systems. People want to know more in a shorter time; in other words, to increase efficiency and volume. To cope with the already torrential flood of information available across various mediums (the internet springs to mind), humanity relies increasingly on ways to filter, absorb and understand stimuli. We are seeing not only a trend towards artificial aids (search engines, database software, larger networks) but also a changing pattern in the way we scan and retain information. Internet users are now forced to make quick decisions and scan superficially at high speed to obtain information that would otherwise be lost amidst the backlog of detail. Perhaps this is one way in which humanity is guiding the course of evolution and retraining the mind's basic instincts away from more primitive methods of information gathering (perhaps it also explains our parents' ineptitude for anything related to the IT world!). This could be one of the first targets for augmentation: increasing the speed of information transfer via programmed algorithms that fuse our natural biological mechanisms of searching with the power of logical, machine-coded functions. Imagine being able to combine the biological capacity to effortlessly scan and recognise facial features with the speed of computerised processing.

How would such technology influence the structure of society today? The first assumption that must be made is the universal implementation and adoption of such technologies by society. Undoubtedly there will be certain populations who refuse, for whatever reason, most likely due to a perceived conflict with their belief system. It is important to preserve and respect such individuality, even if it means that these populations will be left behind in terms of intellectual enhancement. Critics of future societies and futurists in general argue that a schism will develop, akin to the rising disparities in wealth distribution present within today's society. In counter-argument, I would respond that an increase in intelligence would likewise cause a global rise in morality. While this relationship is entirely speculative, it is plausible to suggest that a person's level of moral goodness is at least related (if not directly) to their intelligence.

Of course, there are notable exceptions to this rule whereby intelligent people have suffered from moral ineptitude; however, an increased neurological understanding and a practical implementation of 'designer' augmentations (as they relate to improving morality) would negate the possibility of a majority 'superclass' that persecutes groups of 'naturals'. At the very worst, there may be a period of unrest at the implementation of such technology while the majority of the population catches up (in terms of perfecting the implantation/augmentation techniques and achieving the desired level of moral output). Such innovations may even act as a catalyst for developing a philosophically sound model of universal morality; something which the next generation of neurological 'upgrades' could, in turn, implement.

Perhaps we are already in the midst of our future society. Our planet’s declining environment may hasten the development of such augmentation to improve our chances of survival. Whether this process involves the discarding of our physical bodies for a more impervious, intangible machine-based life or otherwise remains to be seen. With the internet’s rising popularity and increasing complexity, a virtual ‘Matrix-esque’ world in which such programs could live might not be so far-fetched after all. Whatever the future holds, it is certainly an exciting time in which to live. Hopefully humanity can overcome the challenges of the future in a positive way and without too much disruption to our technological progress.

The monk sat meditating. Alone atop a sparsely vegetated outcrop, all external stimuli infusing psychic energy within his calm, receptive mind. Distractions merely added to his trance, assisting the meditative state to deepen and intensify. Without warning, the experience culminated unexpectedly with a fluttering of eyelids. The monk stood, content and empowered with newfound knowledge. He had achieved pure insight…

The term ‘insight’ is often attributed to such vivid descriptions of meditation and religious devotion. More specifically, religions such as Buddhism promote the concept of insight (vipassana) as a vital prerequisite for spiritual nirvana, or transcendence of the mind to a higher plane of existence. But does insight exist for the everyday folk of the world? Are the momentary flashes of inspiration and creativity part and parcel of the same phenomenon or are we missing out on something much more worthwhile? What neurological basis does this mental state have and how can its materialisation be ensured? These are the questions I would like to explore in this article.

Insight can be defined as the mental state whereby confusion and uncertainty are replaced with certainty, direction and confidence. It has many alternative meanings and contexts of use, ranging from a piece of obtained information to the psychological capacity to introspect objectively (as judged by some external observer – introspection is, by its very nature, subjective). Perhaps the most fascinating and generally applicable sense is the one described as 'an instantaneous flash of brilliance' or 'a sudden clearing of murky intellect and intense feelings of accomplishment'. In short, insight (in the sense I am interested in) is the quality attributed to the geniuses of society: those who seemingly gather tiny shreds of information and piece them together to solve a particularly challenging problem.

Archimedes is perhaps the most widely cited example of human insight. As the story goes, Archimedes was inspired by the displacement of water in his bathtub to formulate a method of calculating the volume of an irregular object. This technique was of great practical importance, as it allowed a reliable measure of density (framed in those ancient times as a question of 'purity', arising from fiscal motivations such as verifying gold content). The climax of the story describes a naked Archimedes running wildly through the streets, unable to control his excitement at this 'Eureka' moment. Whether the story is actually true has little bearing on the force of the argument: most of us have experienced such a moment at some point in our lives, best summarised as the overcoming of seemingly insurmountable odds to conquer a difficult obstacle or problem.
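
For the curious, the insight itself reduces to a one-line calculation: density is mass divided by the volume of water displaced. A toy version in Python, with entirely made-up numbers for the crown, might look like this.

```python
# Archimedes' check, with hypothetical figures: density from mass and displaced volume.
mass_g = 965.0               # mass of the crown in grams (made-up value)
displaced_water_cm3 = 52.0   # volume of water displaced in cm^3 (made-up value)
gold_density = 19.3          # approximate density of pure gold, g/cm^3

density = mass_g / displaced_water_cm3
print(f"Measured density: {density:.1f} g/cm^3")
print("Consistent with pure gold:", abs(density - gold_density) < 0.5)
```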

But where does this inspiration come from? It almost seems as though the ‘insightee’ is unaware of the mental efforts to arrive at a solution, perhaps feeling a little defeated after a day spent in vain. Insight then appears at an unexpected moment, almost as though the mind is working unconsciously and without direction, and offers a brilliant method for victory. The mind must have some unconscious ability to process and connect information regardless of our directed attention to achieve moments such as this. Seemingly unconnected pieces of information are re-routed and brought to our attention in the context of the previous problem. Thus could there be a neurobiological basis for insight? One that is able to facilitate a behind-the-scenes process?

Perhaps insight is encouraged by the physical storage and structure of neural networks. In the case of Archimedes, the solution was prompted by the mundane task of taking a bath; superficially unrelated to the problem, yet sharing a common neural pathway with it (low bathwater – insert leg – raised bathwater: similar to volumes and matter in general). That is, the neural pathways activated by taking a bath are somehow similar to those activated by rumination on the problem at hand. Alternatively, the unconscious mind may be able to draw basic cause-and-effect conclusions which are then pushed to the forefront of our minds if they are deemed useful (ie: immediately relevant to the task being performed). Whatever the case may be, it seems that at times our unconscious minds are smarter than our conscious attention.

The real question is whether insight is an intangible state of mind (à la 'getting into the zone') that can be turned on and off (thus making it useful for extending humanity's mental capabilities), or whether it is just a mental byproduct of overcoming a challenge (a hormonal response designed to encourage such thinking in the future). Can the psychological state of insight be induced via manipulation of the subject's neuronal composition and environmental characteristics (those conducive to achieving insight), or is it merely an evolved response that serves a (behaviourally) reinforcing purpose?

Undoubtedly the agent's environment plays a part in determining the likelihood of insight occurring. Taking into account personal preferences (does the person prefer quiet spaces for thinking?), the characteristics of the environment could hamper the induction of such a mental state if they are sufficiently irritating to the individual. Insight may also be closely linked with intelligence and, depending on your personal conception of this, neurological structure (if one purports a strictly biological basis of intelligence). If this postulate is taken at face value, we reach the conclusion that the degree of intelligence is directly related to the likelihood of insight, and perhaps also to the 'quality' of the insightful event (ie: a measure of its brilliance relative to inputs such as the level of available information and the difficulty of the problem).

But what of day-to-day insight? It seems to crop up in all sorts of situations. In this context, insight might require a grading scale for its level of brilliance if its use is to be justified in more menial situations and circumstances. Think of that moment when you forget a particular word and, try as you might, cannot remember it for the life of you. Recall also that flash of insight where the answer is simply handed to you on a platter without any conscious effort of retrieval. Paradoxically, it seems that the harder we try to solve the problem, the more difficult it becomes. Is this due to efficiency problems such as a 'bottlenecking' of information transfer, personality traits such as performance anxiety or frustration, or some underlying unconscious process that is able to retrieve information without conscious direction?

Whatever the case may be, our scientific knowledge on the subject is distinctly lacking; an empirical inquiry into the matter is therefore more than warranted (if it hasn't already been commissioned). Psychologically, the concept of insight could be tested experimentally by providing subjects with a problem to solve and manipulating the level of information available (eg 'clues') and its relatedness to the problem (with intelligence taken into account, perhaps via two groups of high and low intelligence). This may help to uncover whether insight is a matter of information processing or something deeper. If science can learn how to artificially induce a mental state akin to insight, the benefits for a positive-futurist society would be grand indeed.

Teleportation is no longer banished to the realm of science fiction. It is widely accepted that what was once considered a physical impossibility is now directly achievable through quantum manipulations of individual particles. While the methods involved are still in their infancy (single photons and individual atoms are the largest systems teleported so far), we can at least begin to appreciate and think about the possibilities on the basis of plausibility. Specifically, what are the implications for personal identity if this method of transportation becomes possible on a human scale? Destroying an individual atom by atom and reconstructing them at an alternate location could introduce problems for consciousness. Is this the same person, or simply an identical twin with its own thoughts, feelings and desires? These are the questions I would like to discuss in this article.

Biologically, much of the body is replaced several times over during one human lifetime. The cells of entire organs are renewed continually, with little thought given to the implications for self-identity. It is a phenomenon that is often overlooked, especially in relation to recent empirical developments in quantum teleportation. If we are biologically replaced with regularity, does this imply that our sense of self is likewise dynamic in nature and constantly evolving? There are reasonable arguments for both sides of this debate; maturity and daily experience do result in a varied mental environment. However, one wonders if this has more to do with innate processes such as information transfer, recollection and modification rather than purely the biological characteristics of individual cells (in relation to cell division and rejuvenation processes).

Thus it could be argued that identity is a largely conscious (in terms of seeking out information and creating internal schema of identity) and directed process. This does not totally rule out the potential for identity based upon changes to biological structure. Perhaps the effects are more subtle, modifying our identities in such a way as to facilitate maturity or even mental illness (if the duplication process is disturbed). Cell mutation (neurological tumor growth) is one such example whereby a malfunctioning biological process can result in direct and often drastic changes to identity.

However, I believe it is safe to assume that "normal" tissue regenerative processes do not result in any measurable changes to identity. So what makes teleportation so different? Quantum teleportation has been used to teleport the states of photons from one location to another and, more recently, of particles with mass (individual atoms). The process is decidedly less romantic than science-fiction authors would have us believe: classical transmission of information is still required, and a receiving station must still be established at the desired destination. What this means is that matter transportation, à la 'Star Trek' transporters, is still very much an unforeseeable fiction. In addition, something as complex as the human body would require incredible computing power to scan at sufficient detail, another limiting factor in its practicality. Fortunately, there are nearer-term uses for this technology, such as in the fledgling industry of quantum computing.

The process works around the limitations of the quantum Uncertainty Principle (which states that the exact properties of a quantum system can never be known in complete detail) through what is known as the "Einstein-Podolsky-Rosen" (EPR) effect. Einstein had real issues with quantum mechanics; he didn't like it at all (to quote the cliché, 'spooky action at a distance'). The EPR paper was aimed at demonstrating that quantum mechanics must be incomplete, using entangled pairs of particles as its central thought experiment. John Stewart Bell turned the Einstein proposition on its head when he showed that entangled particles exhibit correlations too strong to be explained by any pre-arranged 'hidden' properties (the measurement outcomes of the two particles agree more often than any local classical account would allow). The fact that entanglement does not violate the no-communication theorem is good news for our assumptions regarding reality, but more bad news for teleportation fans: information regarding the quantum state of the teleportee must still be transmitted via conventional methods for reassembly at the other end.
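
A small numerical sketch may help here. Under the standard quantum prediction for a spin-singlet pair, the correlation between measurements along directions a and b is E(a, b) = -cos(a - b); plugging this into the CHSH combination (one common form of Bell's inequality) gives a value beyond the classical limit of 2.

```python
# Bell/CHSH sketch: quantum correlations for a singlet pair exceed the classical bound.
import math

def E(a, b):
    # Quantum-mechanical correlation for a spin-singlet pair measured along angles a, b
    return -math.cos(a - b)

a, a2 = 0.0, math.pi / 2           # Alice's two measurement settings
b, b2 = math.pi / 4, -math.pi / 4  # Bob's two measurement settings

S = E(a, b) + E(a, b2) + E(a2, b) - E(a2, b2)
print(f"|S| = {abs(S):.3f} (local hidden variables: at most 2; quantum maximum: 2*sqrt(2))")
```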

Quantum teleportation begins with a pair of entangled particles, one sent to the transmitting station A and the other to the receiver at B. At A, the particle to be teleported is measured jointly with its entangled partner; this 'scan' is necessarily partial (measurement distorts the original – the harder you look, the more you disturb it) and destroys the original state in the process. The outcome of this measurement is then transmitted to B by conventional means, at no more than light speed. Entanglement ensures that the remainder of the information is already latent in the particle held at B, and by combining it with the classically transmitted result (justified by the EPR effect and Bell's correlations) the state of the original particle can be reconstructed at the distant location, B. While the exact mechanism is beyond the technical capacity of philosophy, it is prudent to say that the process works by marrying the entangled information held at B with the classical information scanned out of the original particle at A.
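
For readers who like to see the bookkeeping, below is a minimal state-vector simulation of the standard three-qubit teleportation protocol in Python/NumPy. It is a sketch of the textbook circuit, not of any particular experiment; the amplitudes alpha and beta are arbitrary illustrative values.

```python
# Textbook quantum teleportation, simulated with a 3-qubit state vector.
# Qubit 0 holds the state to teleport; qubits 1 and 2 form the shared Bell pair.
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

def kron3(a, b, c):
    return np.kron(np.kron(a, b), c)

alpha, beta = 0.6, 0.8j                       # arbitrary normalised amplitudes
psi = np.array([alpha, beta], dtype=complex)  # state to teleport
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
state = np.kron(psi, bell)                    # full 3-qubit state (qubit 0 leftmost)

# CNOT with control qubit 0 and target qubit 1, built as an 8x8 permutation matrix
CNOT01 = np.zeros((8, 8), dtype=complex)
for i in range(8):
    q0, q1, q2 = (i >> 2) & 1, (i >> 1) & 1, i & 1
    if q0:
        q1 ^= 1
    CNOT01[(q0 << 2) | (q1 << 1) | q2, i] = 1

state = CNOT01 @ state                        # sender entangles her two qubits
state = kron3(H, I2, I2) @ state              # Hadamard on qubit 0

# Measure qubits 0 and 1 (sample one outcome, then project onto it)
probs = np.abs(state) ** 2
outcome = np.random.choice(8, p=probs)
m0, m1 = (outcome >> 2) & 1, (outcome >> 1) & 1
keep = np.array([((i >> 2) & 1) == m0 and ((i >> 1) & 1) == m1 for i in range(8)])
state = np.where(keep, state, 0)
state /= np.linalg.norm(state)

# Receiver applies corrections based on the two classical bits (m0, m1)
if m1:
    state = kron3(I2, I2, X) @ state
if m0:
    state = kron3(I2, I2, Z) @ state

# Qubit 2 now carries the original state
base = (m0 << 2) | (m1 << 1)
print("teleported:", state[base:base + 2], " original:", psi)
```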

Casting practicality aside for the sake of philosophical discussion, if such a process became possible for a being as complex as a human, what would be the implications for consciousness and identity? Common sense tells us that if an exact replica can be produced, then how is this in any way different to the original? One would simply 'wake up' at the new location within the same body and mind as you left. Those who subscribe to a Cartesian view of separated body and mind would look upon teleportation with abhorrent revulsion: surely along the way we are losing a part of what makes us uniquely human, some sort of intangible soul or essence of mind which cannot be reproduced? This leads to similar thought experiments. What if another being somewhere in the Universe were born with the exact mental characteristics as yourself? Would this predispose the two of you to some sort of underlying, phenomenological connection? Perhaps this is supported by anecdotal evidence from studies of identical twins; it is thought such individuals share a common bond, demonstrating almost telepathic abilities at times. Although it could be argued that the nature of this mechanism is probably no more mystical than a familiar acquaintance predicting how you would react in a given situation, or similarities in brain structure predisposing twins to 'higher than average' mental convergence events.

Quantum teleportation of conscious beings also raises serious moral implications. Is it considered murder to deconstruct the individual at point A, or is this initial crime nullified once the reassembly is completed? Is it still considered immoral if someone else appears at the receiver due to error or quantum fluctuation? Others may argue that it is no different to conventional modes of transport: human error should be dealt with as such (a necessary condition for the label of crime or immorality), and naturally occurring disasters interpreted as nothing more than random events.

While it is doubtful that we will ever see teleportation on a macro scale, we should remain mindful of the philosophical and practical implications of emerging technologies. Empirical forces are occasionally blinded to these factors when such innovations are announced to the general public. While it is an important step in society that such processes are allowed to continue, the rate at which they are appearing can be cause for alarm if they impinge upon our human rights and the preservation of individuality. There has never been a more pressing time for philosophers to think about the issues and offer their wisdom to the world.

In the first part of this article, I outlined a possible definition of time and (keeping in touch with the article's title) offered a brief historical account of time measurement. This outline demonstrated humanity's changing perception of the nature of time, and how an increase in the accuracy with which it is measured can affect not only our understanding of this phenomenon, but also how we perceive reality. In this article I will begin with the very latest physical theory explaining the potential nature of time, followed by a discussion of several interesting observations concerning the fluctuations that seem to characterise humanity's chronological experience. Finally, I hope to promote a hypothesis (even though it may simply be stating the blatantly obvious) that the flow and experience of time is uniquely variable, in that the concept of 'absolute time' is as dead as the 'ether', the absolute reference frame of 19th century physics.

Classical physics dominated the concept of time up until the beginning of the 20th century. In this view, time (in the same vein as motion) was treated as having an 'absolute' reference point; that is, time was constant and consistent across the universe and for all observers, regardless of velocity or local gravitational effects. Of course, Einstein turned all this on its head with his theories of general and special relativity. Time dilation was a new and exciting concept in the physical measurement of this phenomenon: both the speed of the observer (special relativity) and the presence of a gravitational field (general relativity) were predicted to have an effect on the passage of time. The main point to consider, in combination with these predictions, is that by its very nature the theory insists that all events are relative – they change with perspective in respect to some external observer.

Consider two clocks (A and B), separated by distance x. According to special relativity, if clock B is accelerated to a very high speed (a sizeable fraction of the speed of light before the effects become readily noticeable), time dilation effects come into play. Relative to clock A (which is running on 'normal' Earth time), clock B will be seen to run slower. An observer travelling with clock B would not notice these effects – time would continue to pass normally within their frame of reference. It is only upon return, when the clocks are directly compared, that the discrepancy becomes apparent. Empirically, this effect is well established, and offers an explanation as to why muons (extremely short-lived particles) are able to make it to the Earth's surface before decaying. Cosmic rays slam into the Earth's atmosphere at high speed, producing sufficient energy when they collide with molecules to generate muons and neutrinos. These muons, which would normally decay within a distance of about 0.6 km (if stationary or moving slowly), are travelling so fast that time dilation slows their decay as seen from the ground. Thus, these particles survive much longer (penetrating some 700m underground) than they otherwise would.
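
A quick back-of-the-envelope check of the muon example (the muon speed used below, 0.998c, is an illustrative assumption rather than a measured figure):

```python
# Time dilation and the cosmic-ray muon: rest-frame range vs dilated range.
import math

c = 2.998e8      # speed of light, m/s
tau = 2.2e-6     # muon mean lifetime at rest, seconds
v = 0.998 * c    # assumed muon speed

gamma = 1 / math.sqrt(1 - (v / c) ** 2)
print(f"Range without dilation: {c * tau / 1000:.2f} km")          # ~0.66 km
print(f"Lorentz factor gamma:   {gamma:.1f}")
print(f"Range with dilation:    {gamma * v * tau / 1000:.1f} km")  # ~10 km
```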

General relativity also predicts an effect on our perception of time. Objects with large mass produce gravitational fields, which in turn are predicted to slow the passage of time in proportion to the observer's depth within the field. Suppose Clock A is on the Earth's surface, while Clock B is attached to an orbiting satellite. As Clock B is further from the centre of the Earth, it sits at a higher gravitational potential; the field there is weaker and exerts less of an effect. Consequently, more time elapses at B relative to Clock A (ie, Clock B runs faster). Again, this effect has been confirmed empirically, with clocks on board GPS satellites requiring regular adjustment to keep them in line with Earth-bound instrumentation (thus enabling accuracy in pinpointing locations). Interestingly, the two types of dilation are additive; the stronger effect wins out, resulting in either a net gain or loss of time. Objects moving quickly within a gravitational field experience both a slowing down and a speeding up of time relative to an external observer (this was in fact recorded in an experiment involving atomic clocks on board commercial airliners).
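
The GPS example can be estimated with a few lines of arithmetic; the orbital parameters below are approximate textbook values, so treat the output as indicative rather than exact.

```python
# Net clock drift for a GPS satellite: special vs general relativistic effects.
import math

GM = 3.986e14        # Earth's gravitational parameter, m^3/s^2
R_earth = 6.371e6    # Earth's radius, m
r_orbit = 2.657e7    # GPS orbital radius (~20,200 km altitude), m
c = 2.998e8          # speed of light, m/s
day = 86_400         # seconds per day

v = math.sqrt(GM / r_orbit)                       # circular orbital speed
special = -(v ** 2) / (2 * c ** 2)                # motion: satellite clock runs slower
general = (GM / R_earth - GM / r_orbit) / c ** 2  # weaker potential: clock runs faster

print(f"Special relativity: {special * day * 1e6:+.1f} microseconds/day")  # about -7
print(f"General relativity: {general * day * 1e6:+.1f} microseconds/day")  # about +46
print(f"Net drift:          {(special + general) * day * 1e6:+.1f} microseconds/day")
```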

Frustratingly, the physical basis for such dilation seems to be enmeshed in complicated mathematics and technical jargon. Why exactly does this dilation occur? Descriptions of the phenomenon seem to lack any real insight into this question, and instead proffer statements to the effect of 'this is simply what relativity predicts'. It is an important question to ask, I think, as philosophically the question of 'why' is just as important as the empirical 'how', and should follow as a natural consequence. By probing the metaphysical aspects of time we can aim to better understand how it influences the human sensory experience and adapt this new-found knowledge to practical applications.

Based on relativity's notion of a non-absolute framework of time, and incorporating the predictions of time dilation, it seems plausible that time could be reducible to a particulate origin. The field of quantum physics has already made great headway in proposing that all matter exhibits wave-particle duality: in the form of waves, photons and matter travel along all possible routes between two points, with the crests and troughs interfering with, or reinforcing, each other. As in the double-slit experiment (with its light and dark interference pattern), only the reinforced path remains, and the wave collapses (quantum decoherence) into a particle that we can directly observe and measure. This approach is known as the 'sum over histories' hypothesis, proposed by Richard Feynman (which also opens up the possibility of 'many worlds': alternative universes that branch off at each event in time).

In respect to time, perhaps re-imagining it as a particle could explain the effects of gravity and velocity upon it, in the form of dilation. One attempt is the envisaged 'Chronon', a quantised unit of time which disrupts the commonly held interpretation of a continuous experience. This idea draws support from the natural unit of Planck time, some 5.39121 x 10^-44 seconds. Below this limit, intervals of time are thought to be indistinguishable and the notion of separate events undefinable. Of course, we are taking a leap of faith here in assuming that time is a separate, definable entity. Perhaps the reality is entirely different.
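
As a quick check of the figure quoted above, the Planck time follows from three fundamental constants via t_P = sqrt(ħG/c^5):

```python
# Planck time from the reduced Planck constant, Newton's constant and the speed of light.
import math

hbar = 1.054571817e-34   # reduced Planck constant, J*s
G = 6.67430e-11          # gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8         # speed of light, m/s

t_planck = math.sqrt(hbar * G / c ** 5)
print(f"Planck time: {t_planck:.5e} s")   # ~5.39e-44 s
```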

Modern philosophy seems to fall over when attempting to interpret the implications of theoretical physics. Perhaps the subject matter is becoming increasingly complex, requiring dedicated study in order to grasp even the simplest concepts. Whatever the reason, the work of philosophers has moved away from the pursuits of science and towards topics such as language. What science needs is an army of evaluators, ready to test its theories with practical concerns in mind. Time has not escaped this fate either. Scientists seem content, even 'trigger happy', in their use of the anthropic principle to explain the origins of their theories and to deflect any practical inquiry as to why things are the way they are. Basically, any question of why evokes a response along the lines of 'well, if it were any different, conditions in the universe would not be sufficient for the evolution of intelligent beings such as ourselves, who are capable of asking the very question of why!'. Personally, I find this approach does make sense, but it has the distinct features of a 'cop-out' and of circularity; a lot of the underlying reasoning is missing, which prohibits deeper inquiry. It also allows theologians to promote arguments for the existence of a creator: 'god created the universe in such a way as to ensure our existence'.

What has this got to do with time? Well, put simply, the anthropicists propose that if time were to flow in a direction contrary to that which is experienced, the laws of science would not hold, thus excluding the possibility of our existence as well as violating the principles of CPT symmetry (C = particle/antiparticle replacement, P = taking the mirror image and T = the direction of time). Even Stephen Hawking weighs in on the debate and, in his A Brief History of Time, combines the CPT model with the second law of thermodynamics (entropy, or disorder, always increases). The arrow of time, thus, must correspond to and align with the directions of these cosmological tendencies (the universe expands, entropy increases in the same direction, and our psychological perception of time follows suit).

So, after millennia of studying chronology, we seem to be a long way off from a concrete definition and explanation of time. With the introduction of relativity, some insights into the nature of time have been extracted; however, philosophers still have a long way to go before practical implications are expounded from the very latest theories (quantum physics, string theory etc). Indeed, some scientists believe that if a grand unified theory is to be discovered, we need to further refine our definitions of time and work backwards towards the very instant of the big bang (at which point, it is proposed, all causality breaks down).

Biologically, is time perceived equally not only among humans but also across other species? Do days where time seems to 'stand still' share some common feature that could support the notion of time as a definable physical property of the universe (eg the Chronon particle)? On such days are we passing through a region of warped spacetime (thus a collective, shared experience), or do we each carry an internal psychological timepiece that ticks to its own tock, regardless of how others are experiencing it? When we die, is the final moment stretched to a relative infinity (relative to the deceased) as neurons lose their potential to carry signals (à la falling into a black hole, where the perception of time slows to an imperceptible halt), or does the blackness take us in an instant? Maybe time will never fully be understood, but it is an intriguing topic that warrants further discussion and, judging by the surplus of questions, one that is in no hurry to reveal its mysteries anytime soon.

The essence of mathematics cannot be easily discerned. This intellectual pursuit lurks behind a murky haze of complexity. Those fortunate enough to have natural ability in this field can manipulate algebraic equations as easily as the spoken word. However, for the vast majority of the population, mathematical expertise is elusive, receding away at each desperate grasp and attempt at comprehension. What exactly is this strange language of numerical shapes, with its logical rule-sets and quirky laws of commutativity? It seems as though the more intensely this concept is scrutinised, the faster its superfluous layers of complexity are peeled away. But what of these hidden foundations? Are mathematical formulations the key to understanding the nature of reality? Can all this complexity around which we eke out a meagre existence really condense into a single set of equations? If not, what are the implications for, and likelihood of, a purely mathematical and unified 'theory of everything'? These are the questions I would like to explore in this article.

The history of mathematics dates back to the dawn of civilisation. The earliest known examples of mathematical reasoning are believed to be some 70,000 years old. Geometric patterns and shapes on cave walls shed light onto how our ancestors may have thought about abstract concepts. These primitive examples also include rudimentary attempts at measuring the passage of time through measured, systematic notches and depictions of celestial cycles. Humankind's abilities progressed fairly steadily from this point, with the next major revolution in mathematics occurring around 3000–4000 BC.

Neolithic religious sites (such as Stonehenge, UK and Ġgantija, Malta) are thought to have made use of the growing body of mathematical knowledge and an increased awareness and appreciation of standardised observation. In a sense, these structures spawned appreciation of mathematical representation by encouraging measurement standardisation. For example, a static structure allows for patterns in constellations and deviations from the usual to stand out in prominence. Orion’s belt rises over stone X in January, progressing towards stone Y; what position will the heavens be in tomorrow?

Such observational practices allowed mathematics, through the medium of astronomy, to foster and grow. Humanity began to recognise the cyclical rhythm of nature and use this standardised base to extrapolate and predict future events. It was not until around 2000 BC that mathematics grew into some semblance of the formalised language we use today. Spurred on by the great ancient civilisations of Greece and Egypt, mathematical knowledge advanced at a rapid pace. Formalised branches of maths emerged around this time period, with construction projects inspiring minds to realise the underlying patterns and regularities in nature. Pythagoras' Theorem is but one prominent result of the inquiries of this time, as is Euclid's work on geometry and number theory. Mathematics grew steadily thereafter, although hampered by the 'Dark Ages' (and the entrenchment of the Ptolemaic model of the universe) and a subsequent waning interest in the scientific method.

Arabic scholars picked up this slack, contributing greatly to geometry, astronomy and number theory (the base-ten numeral system we use today reached the West through Arabic scholarship). Newton's Principia was perhaps the first widespread instance of formalised applied mathematics being used, in the form of generalised equations, to explain and predict physical events (geometry had previously been employed for centuries in explaining planetary orbits).

However, this brings us no closer to the true properties of mathematics. An examination of the historical developments in this field simply demonstrates that human ability began with rudimentary representations and has since progressed to a standardised, formalised institution. What, essentially, are its defining features? Building upon ideas proposed by Frayn (2006), our gift for maths arises from prehistoric attempts at grouping and classifying external objects. Humans (and lower forms of life) began with the primitive notion of 'big' versus 'small'; that is, the comparison of groupings (threats, friends or food sources). Mathematics comprises our ability to make analogies, recognise patterns and predict future events: a specialised language with which to conduct the act of mental juggling. Perhaps due to increasing encephalic volume and neuronal connectivity (spurred on by genetic mutation and social evolution), humankind progressed beyond the simple comparison of size and required a way of mentally manipulating objects in the physical world. Counting a small herd of sheep is easy; there is a finger, toe or stick notch with which to capture the property of small and large. But what happens when the herd becomes unmanageably large, or you wish to compare groups of herds (or even different animals)? Here, the power of maths comes into its own.

Referring back to the idea of social evolution acting as a catalyst for encephalic development, perhaps emerging social patterns also acted to improve mathematical ability. More specifically, the disparities in power as individuals became elevated above their compatriots would have precipitated a need to keep track of assets and levy taxation. Here we observe the leap from the simple comparison of external group sizes (leaning heavily on primal instincts of fight/flight and satiation) to a more abstract, representative use of mathematics. Social elevation brings about wealth and resources, and power over others necessitates some way of keeping track of these possessions (as the size of the wealth outgrows the managerial abilities of one person). Therefore, we see not only a cognitive but also a social aspect to mathematical evolution and development.

It is this move away from the direct and superficial towards abstract universality that heralded a new destiny for mathematics. Philosophers and scientists alike wondered (and still wonder today) whether the patterns and descriptions of reality offered by maths are really getting to the crux of the matter. Can mathematics be the one tool with which a unified theory of everything can be erected? Mathematical investigations are primarily concerned with underlying regularities in nature; patterns. However it is the patterns themselves that are the fundamental essence of the universe; mathematics simply describes them and allows for their manipulation. The use of numerals is arbitrary; interchange them with letters or even squiggles in the dirt and the only thing that changes is the rule-set to combine and manipulate them. Just as words convey meaning and grammatical laws are employed with conjunctions to connect (addition?) premises, numerals stand as labels and the symbols between them convey the operation to be performed. When put this way, mathematics is synonymous with language, it is just highly standardised and ‘to the point’.

However, this feature is a double-edged sword. The sterile nature of numerals (lacking such properties as metaphor, analogy and other semantic parlour tricks) leaves their interpretation open. A purely mathematical theory is only as good as its interpreter. Human thought descends upon formulae, picking them apart and extracting meaning like vultures squabbling haphazardly over a carcass. Thus the question moves from one of validating mathematics as an objective tool to a more fundamental evaluation of human perception and interpretation. Are the patterns we observe in nature some sort of objective reality, or are they simply figments of our over-active imagination: coincidences or 'brain puns' that just happen to align our thoughts with external phenomena?

If previous scientific progress is anything to go by, humanity is definitely onto something. As time progresses, our theories come closer and closer to unearthing the ‘true’ formulation of what underpins reality. Quantum physics may have dashed our hopes of ever knowing with complete certainty what a particle will do when poked and prodded, but at least we have a fairly good idea. Mathematics also seems to be the tool with which this lofty goal will be accomplished. Its ability to allow manipulation of the intangible is immense. The only concern is whether the increasing abstraction of physical theories is outpacing our ability to interpret and comprehend them. One only has to look at the plethora of alternative quantum interpretations to see evidence for this effect.

Recent developments in mathematics include the mapping of E8. From what can be discerned by a ‘lay-man’, E8 is a multi-dimensional geometric figure, the exact specifications of which had eluded mathematicians since the 19th century. It was only through a concerted effort involving hundreds of computers operating in parallel that its secrets were revealed. Even more exciting is the recent announcement of a potential ‘theory of everything’. The scientist behind this effort is not what could be called stereotypical; this ‘surfing scientist’ claims to have utilised the new-found knowledge of E8 to unite the four fundamental forces of nature under one banner. Whether the theory holds any water remains to be seen. The full paper can be obtained here.

This theory is not the easiest to understand; it is elegant but inherently complex, which are, intuitively, two very fitting characteristics of a potential theory of everything. The following explanation from Slashdot.org is perhaps the most easily grasped by the non-mathematically inclined.

“The 248-dimensions that he is talking about are not like the time-space dimensions, which particles move through. They describe the state of the particle itself – things like spin, charge, etc. The standard model has 6(?) properties. Some of the combinations of these properties are allowed, some are not. E8 is a very generalized mathematical model that has 248-properties, where only some of the combinations are allowed. What Garrett Lisi showed is that the rules that describe the allowed combinations of the 6 properties of the standard model show up in E8, and furthermore, the symmetries of gravity can be described with it as well.” Slashdot.org, (2007).

Therefore, E8 is a description of particle properties, not the ‘shape’ of some omnipresent, underlying force. The geometric characteristics of the shape outline the number of particles, their properties and the constraints on those properties (possible states, such as spin, charge etc.). In effect, the geometric representation is an illustration of underlying patterns and relationships amongst elementary particles. The biggest strength of this theory is that it offers testable elements and predictions of as yet undiscovered physical constituents of the universe.
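For readers who prefer something runnable, the idea of ‘allowed combinations of properties’ can be illustrated with a deliberately over-simplified Python sketch. To be clear, this has nothing to do with the actual structure of E8 or the real quantum numbers of the standard model; the property names and the selection rule below are invented solely to show how constraints over combinations of discrete properties can be enumerated.

```python
from itertools import product

# Invented, toy 'particle properties' -- not the real standard-model quantum numbers.
spins   = [-0.5, 0.5]
charges = [-1, 0, 1]
colours = ["r", "g", "b", None]          # None = colourless

def allowed(spin, charge, colour):
    """A made-up selection rule: pretend that charged particles must be colourless."""
    return not (charge != 0 and colour is not None)

states = [s for s in product(spins, charges, colours) if allowed(*s)]
print(f"{len(states)} allowed combinations out of "
      f"{len(spins) * len(charges) * len(colours)} possible")
```

In the same spirit, though at vastly greater sophistication, the geometry of E8 is claimed to encode which combinations of real particle properties can coexist.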

It is surely an exciting time to live, as these developments unfurl. At first glance, mathematics can be an incredibly complex undertaking, in terms of both comprehension and performance. Once the external layers of complexity are peeled away, we are left with the raw, fundamental feature: a description of underlying universals. As with every human endeavour, the conclusions are open to interpretation; however, with practice and an open mind free from prejudicial tendencies, humanity may eventually crack the mysteries of the physical universe. After all, we are a component of this universe, so it makes intuitive (if not empirical) sense that our minds should be relatively objective and capable of unearthing a comprehensive ‘theory of everything’.

Many of us take the capacity to sense the world for granted. Sight, smell, touch, taste and hearing combine to paint an uninterrupted picture of the technicolour apparition we call reality. Such lucid representations are what we use to define objects in space, plan actions and manipulate our environment. However, reality isn’t all that it’s cracked up to be. Namely, our role in defining the universe in which we live is much greater than we think. Humanity, through the use of sensory organs and the resulting interpretation of physical events, succeeds in weaving a scientific tapestry of theory and experimentation. This textile masterpiece may be large enough to ‘cover all bases’ (in terms of explaining the underlying etiology of observations), however it might not be made of the right material. How certain can we be that scientific observations carry a sufficient measure of objectivity? What role does the human mind and its modulation of sensory input play in creating reality? What constitutes objective fact, and how can we be sure that science is ‘on the right track’ with its model of empirical experimentation? Most importantly, is science at the cusp of an empirical ‘dark age’, where the limitations of perception fundamentally hamper the steady march of theoretical progress? These are the questions I would like to explore in this article.

The main assumption underlying scientific methodology is that the five sensory modalities employed by the human body are, by and large, uniform. That is, despite small individual fluctuations in fidelity, the performance of the human senses is mostly equal. Visual acuity and auditory perception are sources of potential variance, however the advent of certain medical technologies has circumvented most of these disadvantages (glasses and hearing aids, respectively). In some instances, such interventions may even improve the individual’s sensory experience, superseding ‘normal’ ranges through the use of further refined instruments. Such is the case with modern science, as the realm of classical observation becomes subverted by the need for new, revolutionary methods designed to observe both the very big and the very small. Satellites loaded with all manner of detection equipment have become our eyes for the ultra-macro; NASA’s COBE orbiter gave us the first view of early universal structure via detection of the cosmic microwave background radiation (CMB). Likewise, scanning probe microscopy (SPM) enables scientists to observe at the atomic scale, below the wavelength of visible light. In effect, we have extended and supplemented our ability to perceive reality.

But are these innovations also improving the objective quality of observations, or are we being led into a false sense of security? Are we becoming comfortable with the idea that what we see constitutes what is really ‘out there’? Human senses are notoriously prone to error. In addition, machines are only as good as their creator. Put another way, artificial intelligence has not yet superseded the human ‘home grown’ alternative. Therefore, can we rely on a human-made, artificial extension of perception with which to make observations? Surely we are compounding the innate inaccuracies, introducing a successive error rate with each additional sensory enhancement. Not to mention the interpretation of such observations and the role of theory in whittling down alternatives.

Consensus cannot be reached on whether what I perceive is anything like what you perceive. Is my perception of the colour green the same as yours? Empirically and philosophically, we are not yet in a position to answer this question with any objectivity. We can examine brain structure and compare regions of functional activity, however the ability to directly extract and record aspects of meaning/consciousness is still firmly in the realms of science fiction. The best we can do is simply compare and contrast our experiences through the medium of language (which introduces its own set of limitations). As mentioned above, the human sensory experience can, at times, become lost in translation.

Specifically, the ability of our minds to disentangle the information overload that unrelentingly flows through mental channels can wane due to a variety of influences. Internally, the quality of sensory inputs is governed at a fundamental level by biological constraints. Millions of years of evolution have resulted in a vast toolkit of sensory automation. Vision, for example, has developed in such a way as to become a totally unconscious and reflexive phenomenon. The biological structure of individual retinal cells predisposes them to respond to certain types of movement, shapes and colours. Likewise, the organisation of neurons within regions of the brain, such as the primary visual cortex in the occipital lobe, processes information in pre-defined ways. In the case of vision, the vast majority of processing is done automatically, thus reducing the overall level of awareness and direct control the conscious mind has over the sensory system. The conclusion here is that we are limited by physical structure rather than by differences in conscious discrimination.

The retina acts as both the primary source of input and a first-order processor of visual information. In brief, photons are absorbed by receptors on the back wall of the eye. These incoming packets of energy are captured by photopigment proteins housed in two types of receptor cell (rods – light intensity; cones – colour) and trigger action potentials in attached neurons. Low-level processing is accomplished by the lateral organisation of retinal cells; ganglionic neurons are able to communicate with their neighbours and influence the likelihood of their signal transmission. Communication of this kind facilitates basic feature recognition (specifically, edges and light/dark discrepancies) and motion detection.
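The neighbour-to-neighbour influence described above can be caricatured as a simple centre-surround calculation. The following Python sketch (using NumPy, with made-up intensity values) is not a model of real ganglion-cell physiology; it merely shows how subtracting the average of a cell's neighbours from its own input exaggerates the response at a light/dark boundary, which is the essence of edge detection.

```python
import numpy as np

# A 1D strip of light intensities with a sharp light/dark edge in the middle.
intensity = np.array([1.0, 1.0, 1.0, 1.0, 0.2, 0.2, 0.2, 0.2])

# Crude centre-surround response: each 'cell' signals its own input minus
# the mean of its two neighbours (values are illustrative only).
response = np.zeros_like(intensity)
for i in range(1, len(intensity) - 1):
    surround = (intensity[i - 1] + intensity[i + 1]) / 2
    response[i] = intensity[i] - surround

print(response)   # near zero everywhere except either side of the edge
```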

As with all the sensory modalities, information is then transmitted to the thalamus, a primitive brain structure that acts as a communications ‘hub’; its proximity to the brain stem (mid and hind brains) ensures that reflexes are privy to visual input prior to conscious awareness. The lateral geniculate nucleus is the region of the thalamus that splits incoming visual input into three main channels (M, P and K). Interestingly, each channel streams inputs with unique properties (e.g. exclusively colour or motion). In addition, the cross-lateralisation of visual input is a common feature of human brains. Left and right fields of view are diverted at the optic chiasm and processed on opposite hemispheres (the left field of view from both eyes is processed on the right side of the brain). One theory as to why this arrangement develops is that it minimises the impact of unilateral hemispheric damage – the ‘dual brain’ hypothesis (each hemisphere can act as an independent agent, reconciling and supplementing reductions in function due to damage).

We seem to fall back on these automated subsystems all too readily, never fully appreciating and flexing the full capabilities of our sensory apparatus. Michael Frayn, in his book ‘The Human Touch’, demonstrates this point aptly:

“Slowly, as you force yourself to observe and not to take for granted what seems so familiar, everything becomes much more complicated…That simple blueness that you imagined yourself seeing turns out to have been interpreted, like everything else, from the shifting, uncertain material on offer” (Frayn, 2006, p. 26)

Of course, we are all blissfully ignorant of these finer details when it comes to interpreting the sensory input gathered by our bodies. The consciousness acts ‘with what it’s got’, without a care as to the authenticity or objectivity of the observations. We can observe this first hand in a myriad of ways in which the unreal is treated as if it were real. Hallucinations are just one mechanism by which the brain is fooled. While we may know, to a degree (depending upon the etiology, e.g. schizophrenia), that such things are false, these visual disturbances are nonetheless able to provoke physiological and emotional reactions. In summary, the biological (and automated) component of perception very much determines how we react to, and observe, the external world. In combination with the human mind (consciousness), which introduces a whole new menagerie of cognitive baggage, a large amount of uncertainty is injected into our perceptual experience.

Expanding outwards from this biological launchpad, it seems plausible that the qualities which make up the human sensory experience should have an effect on how we define the world empirically. Scientific endeavour labours to quantify reality and strip away the superfluous extras, leaving only constitutive and fundamental elements. In order to accomplish this task, humanity employs empirical observation. The segue between the biological foundations of perception and the paradigm of scientific observation involves a shared sensory limitation. Classical observation was limited by ‘naked’ human senses. As the bulk of human knowledge grew, so too did the need to extend and improve methods of observation. Consequently, science is now possibly realising the limitation of the human mind to digest an overwhelming plethora of information.

Currently, science is restricted by the development of technology. Progress is only maintained through the ingenuity of the human mind in overcoming the biological limitations of observation. Finely tuned microscopes tap into quantum effects in order to measure individual atoms. Large radio-telescope arrays link together for an eagle-eyed view of the heavens. But as our methods and tools for observing grow in complexity, so too does the degree of abstract reasoning required to grasp the implications of their findings. Quantum theory is one such warning indicator.

Like a lighthouse sweeping the night sky to signal impending danger, quantum physics, or more precisely humanity’s inability to agree on any one consensus that accurately models reality, could be telling us something. Perhaps we are becoming too reliant on our tools of observation, using them as a crutch in a vain attempt to avoid our biological limitations. Is this a hallmark of our detachment from observation? Quantum ‘spookiness’ could simply be the result of a fundamental limitation of the human mind to internally represent and perceive increasingly abstract observations. Desperately trying to consume the reams of information that result from rapid progress and intense observation, scientific paradigms become increasingly specialised and divergent, increasing the degree of inter-departmental bureaucracy. It now takes a lifetime of training to even grasp the basics of current physical theory, let alone the time needed to dissect observations and truly grasp their essence.

In a sense, science is at a crossroads. One pathway leads to an empirical dead end; humanity has exhausted every possible route of explanation. The other involves either artificial augmentation (in essence, AI that can do the thinking for us) or a fundamental restructuring of how science conducts its business. Science is in danger of information overload; the limitations introduced by a generation of unrelenting technical advancement and increasingly complex tools of observation have taken their toll. Empirical progress is stalling, possibly due to a lack of understanding by those doing the observing. Science is detaching from its observations at an alarming rate and, if we aren’t careful, is in danger of losing sight of what the game is all about: the quest for knowledge and understanding of the universe in which we live.

Morality is a phenomenon that permeates both society as a whole and the consciousness of individual entities. It is a force that regularly influences our behaviour and is experienced (in some form or another) universally, species-wide. Intuitively, morality seems to be, at the very least, a sufficient condition for the creation of human groups. Without it, co-operation between individuals would be non-existent. But does morality run deeper? Is it, in fact, a necessary condition of group formation and a naturally emergent phenomenon that stems from the interaction of replicating systems? Or can morality only be experienced by organisms operating on a higher plane of existence – those that have the required faculties with which to weigh up pros and cons, engage in moral decision making and other empathic endeavours (related to theory of mind)?

The resolution to this question depends entirely on how one defines the term. If we take morality to encompass the act of mentally engaging in self-reflective thought as a means of guiding observable behaviours (acting in either selfish or selfless interests), then the answer to our question is yes; morality seems to be inescapably and exclusively linked to humanity. However, if we tweak this definition and look at the etiology of morality (where the term draws its roots and how it developed over time), one finds that even the co-operative behaviours of primitive organisms could be said to constitute some sort of basic morality. If we delve even deeper and ask how such behaviours came to be, we find that the answer is not quite so obvious. Can a basic version of morality (observable through cooperative behaviours) result as a natural consequence of interactions beyond the singular?

When viewed from this perspective, cooperation and altruism seem highly unlikely; a system of individually competing organisms, logically, would evolve to favour the individual rather than the group. This question is especially pertinent when studying cooperative behaviours in bacteria or more complex, multicellular forms of life, as they lack a consciousness capable of considering delayed rewards or benefits from selfless acts.

In relation to humanity, why are some individuals born altruistic while others take advantage without cause for guilt? How can ‘goodness’ evolve in biological systems when it runs counter to the benefit of the individual? These are the questions I would like to explore in this article.

Morality, in the traditional, philosophical sense, is often constructed in a way that describes the meta-cognitions humans experience in creating rules for appropriate (or inappropriate) behaviour (inclusive of mental activity). Morality can take on a vast array of flavours: evil at one extreme, goodness at the other. We use our sense of morality to plan and justify our thoughts and actions, incorporating it into our mental representations of how the world functions and conveys meaning. Morality is dynamic; it changes with the flow of time, the composition of society and the maturity of the individual. We use it to evaluate not only our own intentions and behaviours, but also those of others. In this sense, morality is an overarching, egoistic ‘book of rules’ which the consciousness consults in order to determine whether harm or good is being done. Thus, it seeps into many of our mental sub-compartments: decision making, behavioural modification, information processing, emotional response/interpretation and mental planning (‘future thought’), to name a few.

As morality occupies such a privileged omnipresence, humanity has, understandably, long sought not only to provide standardised ‘rules of engagement’ regarding moral conduct but also to explain the underlying psychological processes and development of our moral capabilities. Religion could perhaps be the first of such attempts at explanation. It certainly contains many of the idiosyncrasies of morality and proposes a theistic basis for human moral capability. Religion removes ultimate moral responsibility from the individual, instead placing it upon the shoulders of a higher authority – god. The individual is tasked with simple obedience to the moral creeds passed down from those privileged few who are ‘touched’ with divine inspiration.

But this view does morality no justice. Certainly, if one does not subscribe to theistic beliefs, then by this extreme positioning morality is in trouble; morality becomes synonymous with religion, and one definitely cannot exist without the other.

Conversely (and reassuringly), in modern society we have seen that morality does exist in individuals who lack spirituality. It has been reaffirmed as an intrinsically human trait with deeper roots than the scripture of religious texts. Moral understanding has matured beyond the point of appealing to a higher being and has reattached itself firmly to the human mind. The problem with this newfound interpretation is that in order for morality to be considered a naturally emergent product of biological systems, moral evolution is a necessary requirement. Put simply, natural examples of moral systems (consisting of cooperative behaviour and within-group preference) must be observable in the natural environment. Moral evolution must be a naturally occurring phenomenon.

A thought experiment known as the ‘Prisoner’s Dilemma’ succinctly summarises the inherent problems with the natural evolution of mutually cooperative behaviour. The scenario consists of two prisoners who are each seeking an early release from jail. They are given the choice of either a) betraying their cellmate and walking free while the other has their sentence increased (‘defecting’) or b) staying silent and mutually receiving a shorter sentence (‘cooperating’). It becomes immediately apparent that in order for both parties to benefit, both should remain silent and enjoy a reduced incarceration period. Unfortunately, and this is the catalyst for terming the scenario a dilemma, the real equilibrium point is for both parties to betray: whatever one prisoner chooses, the other is individually better off defecting, tempted by the largest pay-off of walking free while their partner in crime remains behind with an increased sentence. In the case of humans, it seems that some sort of meta-analysis has to be done, an nth-order degree of separation (thinking about thinking about thinking), with the dominant strategy resulting in betrayal by both parties.
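The dilemma is easier to see with the payoffs laid out explicitly. The short Python sketch below uses jail sentences chosen purely for illustration (only their ordering matters) and checks each prisoner's best response to whatever the other might do; betrayal wins in both cases, even though mutual silence would leave both better off.

```python
# Years in jail for (my_choice, their_choice); lower is better.
# The exact numbers are illustrative -- only their ordering matters.
SENTENCE = {
    ("silent", "silent"): 1,    # both cooperate: short sentence each
    ("silent", "betray"): 10,   # I stay silent, they betray: I get the long sentence
    ("betray", "silent"): 0,    # I betray, they stay silent: I walk free
    ("betray", "betray"): 5,    # both betray: a moderately long sentence each
}

def best_response(their_choice):
    """Pick whichever choice minimises my own sentence, given the other's choice."""
    return min(["silent", "betray"], key=lambda mine: SENTENCE[(mine, their_choice)])

for theirs in ["silent", "betray"]:
    print(f"If the other prisoner chooses {theirs!r}, my best response is "
          f"{best_response(theirs)!r}")
# Betrayal is the best response either way, so (betray, betray) is the equilibrium,
# despite (silent, silent) giving both prisoners a lighter sentence.
```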

Here we have an example of the end product: an advanced kind of morality resulting from social pressures and their influence on overall outcome (should I betray or cooperate – do I trust this person?). In order to look at the development of morality from its more primal roots, it is prudent to examine research in the field of evolutionary biology. One such empirical investigation, representative of the field, is Aviles (2002), which involves the mathematical simulation of interacting organisms. Modern computers lend themselves naturally to the task of genetic simulation. Due to the iterative nature of evolution, thousands of successive generations live, breed and die in the time it takes the computer’s CPU to crunch through the required functions. Aviles (2002) took this approach and created a mathematical model that begins at t = 0 and follows pre-defined rules of reproduction, genetic mutation and group formation. The numerical details are irrelevant; suffice to say that cooperative behaviours emerged in combination with ‘cheaters’ and ‘freeloaders’. Thus we see the dichotomous appearance of a basic kind of morality that has evolved spontaneously and naturally, even though the individual may suffer a ‘fitness’ penalty. More on this later.

“[the results] suggest that the negative effect that freeloaders have on group productivity (by failing to contribute to communal activities and by making groups too large) should be sufficient to maintain cooperation under a broad range of realistic conditions even among nonrelatives and even in the presence of relatively steep fitness costs of cooperation” Aviles, (2002).
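To give a flavour of how such simulations behave, here is a heavily simplified, replicator-style sketch in Python. It is emphatically not Aviles’ (2002) model; the fitness rules and parameter values are invented for illustration. The crowding penalty on freeloaders stands in, very loosely, for the effect described in the quote above (freeloaders dragging down group productivity and making groups too large), and it is enough to hold the cheater fraction at a stable equilibrium rather than letting it take over.

```python
# A deliberately simplified replicator-style sketch -- NOT Aviles' (2002) model.
# Cooperators pay a cost to boost group productivity; freeloaders skip the cost
# but suffer increasingly as their own numbers swell.
BENEFIT_SAVED = 0.10   # what a freeloader saves by not contributing (invented value)
CROWDING = 0.40        # penalty growing with the freeloader share (invented value)

coop = 0.5             # initial fraction of cooperators in the group
for generation in range(200):
    cheat = 1.0 - coop
    coop_fitness = coop                                   # productivity tracks cooperator share
    cheat_fitness = coop + BENEFIT_SAVED - CROWDING * cheat
    average = coop * coop_fitness + cheat * cheat_fitness
    coop = coop * coop_fitness / average                  # reproduce in proportion to fitness

print(f"Cooperator share after 200 generations: {coop:.2f}")   # settles near 0.75
```

Run it and the population settles into a mixture: mostly cooperators, with a persistent minority of freeloaders kept in check by the damage they do to their own group.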

Are these results translatable to reality? It is all well and good to speak of digital simulations with vastly simplified models guiding synthetic behaviour; the real test comes in observation of naturally occurring forms of life. Discussion by Kreft and Bonhoeffer (2005) lends support to the reality of single-celled cooperation, going so far as to suggest that “micro-organisms are ever more widely recognized as social”. This is surely an exaggerated caricature of the more common definition of ‘socialness’, but the analogy is appropriate. Kreft and Bonhoeffer effectively summarise the leading research in this field and put forward the resounding view that single-celled organisms can evolve to show altruistic (cooperative) behaviours. We should hope so; otherwise the cellular cooperation underpinning multicellularity, and with it the evolution of humanity, would have been nullified before it even started!

But what happened to those pesky mutants that evolved to look out for themselves? Defectors (choosing not to cooperate) and cheaters (choosing to take advantage of altruists) are also naturally emergent. Counter-intuitively, such groups are shown to be kept in their place by the cooperators. Too many cheaters and the group fails through exploitation. The key lies in the dynamic nature of this process. Aviles (2002) found that in every simulation, the number of cheaters was kept in control by the dynamics of the group. A natural equilibrium developed, with the total group size fluctuating according to the number of cheaters versus cooperators. In situations where cheaters ruled, the group size dropped dramatically, resulting in a lack of productive work and reduced reproductive rates. Thus, the number of cheaters is kept in check by the welfare of the group. It’s almost a love/hate relationship; the system hates exploiters, but it also tolerates their existence (in sufficiently small numbers).

Extrapolating from these conclusions, a logical outcome would be the universal adoption of cooperative behaviours. There are prime examples of this in nature: bee and ant colonies, migratory birds, various aquatic species, even humans (to an extent) all work together towards the common good. The reason why we don’t see this more often, I believe, is convergent evolution – different species have solved the same problem via different approaches. Take flight, for example: it has been solved at separate times in history by both birds and insects. The likelihood of cooperation is also affected by external factors; evolutionary ‘pressures’ that can guide the flow of genetic development. The physical structure of the individual, environmental changes and resource scarcity are all examples of such factors that can influence whether members of the same species work together.

Humanity is a prime example; intrinsically we seem to have a sense of inner morality and a tendency to cooperate when the conditions suit. The addition of consciousness complicates morality somewhat, in that we think about what others might do in the same situation, defer to group norms/expectations, conform to our own premeditated moral guidelines and are paralysed by indecisiveness. We also factor in environmental conditions, manipulating situations through false displays of ‘pseudo-morality’ to ensure our survival in the event of resource scarcity. But when the conditions are just so, humanity does seem to pull up its trousers and bind together as a singular, collective organism. When push comes to shove, humanity can work in unison. However, just as bacteria evolve cheaters and freeloaders, so too does humanity give birth to individuals who seem to lack a sense of moral guidance.

Morality, then, must be a universal trait: a naturally emergent phenomenon that predisposes organisms to cooperate towards the common good. But just as moral ‘goodness’ evolves naturally, so too does immorality. Naturally emergent cheaters and freeloaders are an intrinsic part of the evolution of biological systems. Translating these results to the plight of humanity, it becomes apparent that such individual traits are also naturally occurring in society. Genetically, and to a lesser extent environmentally, traits from both ends of the moral scale will always be a part of human society. This surely has implications for the plans of any futurist society relying solely on humanistic principles. Moral equilibrium is ensured, at least biologically, for better or worse. Whether we can physically change the course of natural evolution and produce a purely cooperative species is a question that can only be answered outside the realms of philosophy.

When people attempt to describe their sense of self, what are they actually incorporating into the resultant definition? Personality is perhaps the most common conception of self, with vast amounts of empirical validation. However, our sense of self runs deeper than such superficial descriptions of behavioural traits. The self is an amalgamation of all that is contained within the mind; a magnificent average of every synaptic transmission and neuronal network. Like consciousness, it is an emergent phenomenon (the sum is greater than the parts). But unlike consciousness, the self ceases to be when individual components are removed or modified. For example, consciousness is virtually unchanged (in the sense of what it defines – directed, controlled thought) by the removal of successive faculties. We can remove physical brain structures such as the amygdala and still utilise our capacities for consciousness, albeit losing a portion of the informative inputs. The self, however, is a broader term, describing the current mental state of ‘what is’. It is both descriptive, a snapshot providing a broad overview of what we are at time t, and prescriptive, in that the sense of self has an influence over how behaviours are actioned and information is processed.

In this article I intend firstly to describe the basis of ‘traditional’ measures of the self: empirical measures of personality and cognition. Secondly, I will provide a neuro-psychological outline of the various brain structures that could be biologically responsible for eliciting our perception of self. Finally, I wish to propose the view that our sense of self is dynamic, fluctuating daily based on experience, and to discuss how this could affect our preconceived notions of introspection.

Personality is perhaps one of the most measured variables in psychology. It is certainly one of the most well known, through its portrayal in popular science as well as self-help psychology. Personality could also be said to comprise a major part of our sense of self, in that the way we respond to and process external stimuli (both physically and mentally) has major effects on who we are as an entity. Personality is also incredibly varied, whether due to genetics, environment or a combination of both. For this reason, psychological study of personality takes on a wide variety of forms.

The lexical hypothesis, proposed by Francis Galton in the 19th century, became the first stepping stone from which the field of personality psychometrics was launched. Galton posited that the sum of human language, its vocabulary (lexicon), contains the necessary ingredients from which personality can be measured. During the 20th century, others expanded on this hypothesis and refined Galton’s technique through the use of factor analysis (a statistical model that summarises common variance into factors). Methodological and statistical criticisms of this method aside, the lexical hypothesis proved to be useful in classifying individuals into categories of personality. However, this model is purely descriptive; it simply summarises information, extracting no deeper meaning and providing no background theory with which to explain the etiology of such traits. Those wishing to learn more about descriptive measures of personality can find this information under the headings of the ‘Big Five Inventory’ (OCEAN) and Hans Eysenck’s Three Factor model (PEN).
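As a rough illustration of the statistical machinery involved (and emphatically not Galton's actual procedure), the Python sketch below runs a factor analysis over made-up self-report ratings of four trait adjectives using scikit-learn. The adjectives, the synthetic ratings and the choice of two factors are all assumptions for the example; the point is simply that shared variance among many observed items gets summarised by a handful of latent factors.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Made-up ratings: rows = respondents, columns = trait adjectives (illustrative only).
adjectives = ["talkative", "outgoing", "anxious", "moody"]
rng = np.random.default_rng(0)
extraversion = rng.normal(size=200)            # hidden trait 1
neuroticism = rng.normal(size=200)             # hidden trait 2
ratings = np.column_stack([
    extraversion + rng.normal(scale=0.3, size=200),   # 'talkative'
    extraversion + rng.normal(scale=0.3, size=200),   # 'outgoing'
    neuroticism + rng.normal(scale=0.3, size=200),    # 'anxious'
    neuroticism + rng.normal(scale=0.3, size=200),    # 'moody'
])

# Summarise the common variance among the four adjectives into two latent factors.
fa = FactorAnalysis(n_components=2, random_state=0).fit(ratings)
for adjective, loadings in zip(adjectives, fa.components_.T):
    print(adjective, np.round(loadings, 2))   # inspect which factor each adjective loads on
```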

Neuropsychological methods of defining personality are less reliant on statistical methods and utilise a posteriori knowledge (as opposed to the lexical hypothesis, which relies on reasoning/deduction). Thus, such theories have a solid empirical background, with first-order experimental evidence to support the conclusions reached. One such theory is the BIS/BAS (behavioural inhibition/activation system). Proposed by Gray (1982), the BIS/BAS conception of personality builds upon individual differences in cortical activity in order to arrive at the observable differences in behaviour. Such a revision of personality turns the tables on traditional methods of research in this area, moving away from superficially describing traits towards explaining their underlying causality. Experimental evidence has lent support to this model through direct observation of cortical activity (functional MRI scans). Addicts and sensation seekers are found to have high scores on behavioural activation (associated with increased pre-frontal lobe activity), while introverts score high on behavioural inhibition. This seems to match our intuitive preconceptions of these personality groupings: sensation seekers are quick to action; in short, they tend to act first and think later. Conversely, introverts act more cautiously, adhering to a policy of ‘looking before they leap’. Therefore, while not encapsulating as wide a variety of individual personality factors as the ‘Big Five’, the BIS/BAS model and others based on neurobiological foundations seem to be tapping into a more fundamental, materialistic/reductionist view of behavioural traits. The conclusion here is that directly observable events, and the resulting individual differences, arise from specific regions in the brain.

Delving deeper into this neurology, the sense of self may have developed as a means to an end, the end in this case being the prediction of others’ behaviour. Our sense of self and consciousness, in other words, may have evolved as a way of internally simulating how our social competitors think, feel and act. V. Ramachandran (M.D.), in his Edge.org exclusive essay, calls upon his neurological experience and knowledge of neuroanatomy to provide a unique insight into the physiological basis of self. Mirror neurons are thought to act as mimicking simulators of external agents, in that they show activity both when performing a task and while observing someone else performing the same task. It is argued that such neuronal conglomerates evolved due to social pressures, as a method of second-guessing the possible future actions of others. The ability to direct these networks inwards was thus an added bonus. The human capacity for constructing a valid theory of mind also gifted us with the ability to scrutinise the self from a meta-perspective (an almost ‘out-of-body’ experience, à la a ‘Jiminy Cricket’-style conscience).

Mirror neurons also act as empathy meters, firing during moments of emotional significance. In effect, our ability to recognise the feelings of others stems from a neuronal structure that actually elicits such feelings within the self. Our sense of self is thus inescapably intertwined with the selves of other agents. Like it or not, biological dependence on the group has resulted in the formation of neurological triggers which fire spontaneously and without our consent. In effect, the intangible self can be influenced by other intangibles, such as emotional displays. We view the world through ‘rose-coloured glasses’, with an emphasis on theorising the actions of others through how we would respond in the same situation.

So far, we have examined the role of personality in explaining a portion of what the term ‘self’ conveys. In addition, a biological basis for self has been introduced, suggesting that personality and the neurological capacity for introspection are both anatomically definable features of the brain. But what else are we referring to when we speak of having a sense of self? Surely we are not doing this construct justice if all that it contains is differences in behavioural disposition and anatomical structure.

Indeed, the sense of self is dynamic. Informational inputs constantly modify and update our knowledge banks, which in turn has ramifications for the self. Intelligence, emotional lability, preferences, group identity, proprioception (spatial awareness); the list is endless. Although some of these categories of self may be collapsible into higher-order factors (personality could incorporate preference and group behaviour), it is arguable that to do so would result in the loss of information. The point here is that looking at the bigger picture may obscure the finer details that can lead to further enlightenment on what we truly mean when we discuss the self.

Are you the same person you were 10 years ago? In most cases, if not all, the answer will be no. Core traits such as temperament may remain relatively stable; however, individuals arguably change and grow over time. Thus, their sense of self changes as well. Some people may become more attuned to their sense of self than others, developing a close relationship through introspective analysis. Others, sadly, seem to lack this ability of meta-cognition; thinking about thinking, asking the questions of ‘why’, ‘who am I’ and ‘how did I come to be’. I believe this has implications for the growth of humanity as a species.

Is a state of societal eudaimonia sustainable in a population that has varying levels of ‘selfness’? If self is linked to the ability to simulate the minds of others, which is in turn dependent upon both neurological structure (with genetic mutation possibly reducing or modifying such capacities) and empathic responses, the answer to this question is a resounding no. Whether due to nature or nurture, society will always have individuals who are more self-aware than others and, as a result, more attentive and aware of the mental states of others. A lack of compassion for the welfare of others, coupled with an inability to analyse the self with any semblance of drive and purpose, spells doom for a harmonious society. Individuals lacking in self will refuse, through ignorance, to grow and become socially aware.

Perhaps collectivism is the answer; forcing groups to cohabit may introduce an increased appreciation for theory of mind. However, if the basis of this process is mainly biological (as it would seem to be), such a policy would be social suicide. The answer could dwell in the education system. Introducing children to the mental pleasures of psychology and, at a deeper level, philosophy may result in the recognition of the importance of self-reflection. The question here is not only whether students will grasp these concepts with any enthusiasm, but also whether such traits can be taught via traditional methods. More research must be conducted into the nature of the self if we are to have an answer to this quandary. Is self related directly to biology (we are stuck with what we have), or can it be instilled via psycho-education and a modification of environment?

The self will always remain something of a mystery due to its dynamic and varied nature. It is with hope that we look to science and encourage its attempts to pin down the details of this elusive subject. Even if this quest fails to produce a universal theory of self, perhaps it will be successful in shedding at least some light onto the murky waters of self-awareness. In doing so, psychology stands to benefit from both a philosophical and a clinical perspective, increasing our knowledge of the causality underlying disorders of the self (body dysmorphia, depression/suicide, self-harming).

If you haven’t already done so, take a moment now to begin your journey of self discovery; you might just find something you never knew was there!