
We are all fascinatingly unique beings. Our individuality not only defines who we are, but also binds us together as a society. Each individual contributes unique talents to a collaborative pool of human endeavour, in effect enabling modern civilisation to exist as it does today. We have the strange ability to preserve an exclusive sense of self whilst also contributing to the greater good through cooperative effort, losing a little of our independence to conformity in the process. But what does this sense of self comprise? How do we come to be the distinct beings that we are despite the best efforts of conformist group dynamics, and how can we apply such insights towards a future society that respects individual liberty?

The nature versus nurture debate has raged for decades, with little ground won on either side. Put simply, the schism formed between those who believed our individuality is innate and those who subscribed to the 'tabula rasa' or blank slate approach, whereby our uniqueness is a product of the environment in which we live. Like most debates in science, there is no definitive answer. In practice, both variables interact and combine to produce variation in the human condition. The original question is therefore no longer valid; it shifts from a choice between two polarised opposites to one of quantity (how much variation is attributable to nature, and how much to nurture?).

Twin and adoption studies have provided the bulk of empirical evidence in this case, and with good reason. Studies involving monozygotic twins allow researchers to control for heritability (nature) when examining behavioural traits. This group can then be compared with twins reared apart (a manipulation of environment) or with fraternal twins and adopted siblings (same environment, different genes). Of course, limitations remain: it is impossible to exhaustively list, let alone control, every environmental variable. The interaction of genes with environment is another source of confusion, as is the expression of random traits which seem to correlate with neither nature nor nurture.
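To make the quantitative version of the question concrete, the classical 'ACE' decomposition estimates how much trait variance is genetic from the gap between identical and fraternal twin correlations. A minimal sketch in Python, using illustrative figures rather than data from any particular study:

```python
# Falconer-style ACE decomposition (a standard textbook heuristic).
# r_mz, r_dz: trait correlations for identical (MZ) and fraternal (DZ)
# twin pairs. The numbers below are illustrative only.
r_mz, r_dz = 0.86, 0.60

h2 = 2 * (r_mz - r_dz)   # A: additive genetic variance ('nature')
c2 = r_mz - h2           # C: shared environment ('nurture')
e2 = 1 - r_mz            # E: non-shared environment + measurement error

print(f"heritability = {h2:.2f}, shared env = {c2:.2f}, unique env = {e2:.2f}")
```

With these figures roughly half the variance lands on 'nature', yet a third still falls to the shared environment, which is precisely why the debate becomes one of proportion rather than of sides.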

Can the study of personality offer any additional insight into the essence of individuality? The majority of theories within this paradigm of psychology are purely descriptive. That is, they serve only to summarise a range of observable behaviours and nuances into key factors. The 'Big Five' Inventory is one illustrative example. By measuring an individual's standing on each dimension of personality (through responses to predetermined questions), it is thought that variation between people can be psychometrically measured and defined according to scores on five separate dimensions. Using mathematical techniques such as factor analysis, a plethora of personality measures have been developed. Subjective interpretations of the mathematical results, combined with cultural differences and experimental variation between samples, have produced many similar theories that differ only in the labels applied to the measured core traits.
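As an illustration of the factor-analytic machinery behind such inventories, the sketch below generates synthetic questionnaire responses driven by two hidden traits and lets scikit-learn's FactorAnalysis recover them. The item structure and loadings are invented for the example; real inventories use far more items and respondents.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

# Synthetic questionnaire: 500 respondents, 10 continuous item scores.
# By construction, items 0-4 load on one latent trait and items 5-9 on
# another (think 'extraversion' and 'conscientiousness' in a Big Five
# style inventory).
n = 500
trait_a = rng.normal(size=(n, 1))
trait_b = rng.normal(size=(n, 1))
noise = rng.normal(scale=0.5, size=(n, 10))
items = np.hstack([trait_a * rng.uniform(0.7, 1.0, 5),
                   trait_b * rng.uniform(0.7, 1.0, 5)]) + noise

fa = FactorAnalysis(n_components=2)
fa.fit(items)

# Each row of components_ shows how strongly every item 'loads' on a
# factor; the two planted traits are recovered from raw correlations.
print(np.round(fa.components_, 2))
```

The point of the exercise is the one made above: the mathematics only groups correlated items; what the recovered factors are *called* is a human, and therefore subjective, decision.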

Other empirical theories attempt to improve on the superficiality of such descriptive scales by introducing biological (nature) fundamentals. One such example is the "BIS/BAS" measure. By attributing personality (specifically, behavioural inhibition and activation) to variation in neurological structure and function, this theory goes beyond surface description. Rather than simply summarising dimensions of personality, neuro-biological theories allow causality to be attributed to underlying features of the individual's physiology. In short, such theories propose that there exists a physical substrate to which neuropsychologists can begin to attach the "essence of I".

Not to be forgotten, enquiries into the effects of nurture, or one's environment, on personal development have borne many relevant and intriguing fruits. Bronfenbrenner's Ecological Systems theory is one such empirical development, attempting to classify the various influences on an individual's development (and their level of impact). The theory is ecological in nature due to the nested arrangement of its various 'spheres of influence'. Each tier of the model corresponds to an environmental stage further removed from the direct experience of the individual. For example, the innermost Microsystem pertains to immediate factors, such as family, friends and neighbourhood. Further out, the Macrosystem covers influences such as culture and political climate; while not exerting a direct effect, these components of society still shape the way we think and behave.

But we seem to be only scratching the surface of what it actually means to be a unique individual. René Descartes was one of many philosophers with an opinion on where our sense of self originates. He postulated a particular kind of dualism, whereby the mind and body exist as two separate entities. The mind was thought to influence the body (and vice versa) through the pineal gland (a small neurological structure that, in reality, secretes hormones). Mind was also equated with 'soul', perhaps to justify the intangible nature of this seat of consciousness. Thus, such philosophies of mind seem to indirectly support the nature argument: humans are born with souls, souls are intangible aspects of reality, and therefore souls cannot be directly influenced by perceived events and experiences. Descartes seemed intuitively aware of this limitation, however, and built in a handy escape clause: the pineal gland. Revolutionary for its time, Descartes' account changed the way philosophers thought about the sense of self, going so far as to suggest that the intangible soul operated on a bi-directional system (mind influences body, body influences mind).

The more one discusses self, the deeper and murkier the waters become. Self in the popular sense refers to mental activity distinct from our external reality and the minds of others (I doubt, I think, therefore I am). However, self comprises a menagerie of sub-components: identity, consciousness, free-will, self-actualisation, self-perception (esteem, confidence, body image) and moral identity, to name but a few. Philosophically and empirically, our sense of self has evolved markedly, seemingly following popular trends throughout the ages. Beginning with a very limited and crude sense of self within proto-human tribes, the concept expanded to an extension of god's will (under theistic influences) and, more recently, to a reductionist and materialist sense in which individual expression and definition are key tenets. Ironically, our sense of self would not have been possible without the existence of other 'selves' against which comparisons could be made and intellects clashed.

Inspiration is one of the most effective behavioural motivators. In this day and age it is difficult to ignore society's pressures to conform. Paradoxically, success in life is often a product of creativity and individuality; some of the wealthiest people are distinctly different from the banality of normality. Modern society seems to encourage the mundane, but I believe this is changing. The Internet has ushered in a new era of self-expression. Social networking sites allow people to share ideas, collaborate with others and produce fantastic results. As access to information becomes ever easier and more commonplace, ignorance will no longer be a valid excuse. People will be under increased pressure to diverge from the path of average if they are to be seen and heard. My advice: seek out experiences as if they were gold. Use the individuality of others to mould and shape your values, beliefs and knowledge into a framework within which you feel at ease. Find, treasure and respect your "essence of I"; it is a part of every one of us that can easily become lost or confused in this chaotic world in which we live.

Closely tied to our conceptions of morality, conspiracy occurs when the truth is deliberately obscured. Conspiracy is often intimately involved with, and precipitated by, political entities who seek to minimise the negative repercussions of such truth becoming public knowledge. But what exactly does a conspiracy involve? According to numerous examples from popular culture, conspiracies arise from smaller, constituent and autonomous units within governmental bodies and/or military organisations, and usually involve some degree of 'cover-up': deliberate misinformation or clouding of actual events that have taken place. Such theories, while potentially having some credible basis, are for the most part ridiculed as neurotic fantasies with no grounding in reality. How then do individuals maintain such obviously false ideas in the face of societal pressure? What are the characteristics of a 'conspiracy theorist', and how do these traits distinguish them from society as a whole? What do conspiracy theories tell us about human nature? These are the questions I would like to explore in this article.

As a child I was intensely fascinated by various theories regarding alien activity on Earth. Surely a cliché in today's world, but the alleged events at Roswell, Tunguska and Rendlesham Forest are a conspirator's dream. Fortunately I no longer hold these events in any factual regard; rather, as I have aged and matured, so too has my ability to examine evidence rationally (something that conspiracy theorists seem unable to accomplish). Introspection on my childhood motivations for believing these theories potentially reveals key characteristics of believers in conspiracy. Aliens were a subject of great personal fear as a young child, encouraging a sort of morbid fascination and a desire to understand and explain (perhaps in an attempt to regain some control over entities that could supposedly appear at will). Indeed, a fear of alien abduction seems merely the modern reincarnation of earlier childhood fears, such as goblins and demons. Coupled with the 'pseudo-science' that accompanies conspiracy theories, it is no wonder that the young and otherwise impressionable are quickly mesmerised and enlisted into the cause. A strong emotional bond connects the beliefs with the evidence in an attempt to relieve uncomfortable feelings.

Conspiracy theories may act as a quasi-scientific attempt to explain the unknown, not too dissimilar to religion (and perhaps utilising the same neurological mechanisms). While a child can be excused for believing such fantasies, it is intriguing how adults can maintain and perpetuate wild conspiracy beliefs without regret. Cognitive dissonance may act as an underlying regulator and maintainer of such beliefs: the more radical they become, the more strongly they are subscribed to, in an attempt to minimise the psychological discomfort that internal hypocrisy brings. But where do these theories come from? Surely there must be at least some factual basis for their creation. Indeed there is; however, the evidence is often misinterpreted, or there is sufficient cause for distrust in the credibility of the information (in light of the deliverer's past history). Two main factors therefore determine whether information will be interpreted as a conspiracy: the level of trust an individual ascribes to the information source (taking into account past dealings with the agent, personality and the presence of neurotic disorders), and the degree of ambiguity in the events themselves (a personal interpretation differing from that reported, or a perceptual experience sufficiently vivid to cause disbelief in alternate explanations).

To take the alleged alien craft crash landing at Roswell as a case in point, it becomes obvious where, and for what reasons, the conspiracy began to develop within the chronology of events. Roswell also demonstrates the importance of maintaining trust in authority; the initial newspaper report of a recovered 'flying disc' was quickly retracted and replaced with a more mundane 'weather balloon' explanation. Reportedly, this explanation was accepted by the people of the time and all claims of alien spacecraft were forgotten until the 1970s, some 30 years after the actual event. The conspiracy was revitalised by the efforts of a single individual (perhaps seeking his own fifteen minutes of fame), demonstrating the power of one person's belief when supported by others in authority (the primary researcher, Friedman, was a nuclear physicist and respected writer). Coupled with conveniently ambiguous circumstantial evidence and an aggressive interpretation of it, the alleged incident at Roswell has since risen to global fame. Taken in historical context (the aftermath of WW2 and the beginnings of the Cold War, with their increase in top-secret military projects), it is no wonder that imagination began to replace reality; people now had a means of attributing a cause and explanation to something they clearly had no substantiated understanding of. There was also a catalyst for thinking that governments engaged in trickery, given the numerous special operations conducted in a clandestine manner and quickly covered up when things went awry (e.g. the Bay of Pigs incident).

Thus the power of conspiracy is demonstrated. Originating from a single individual's private beliefs, the fable seems to strike a common chord within those susceptible. As epitomised by Mulder's office poster in The X-Files, people 'want to believe'. That is, the hypocrisy of maintaining such obviously false beliefs is downplayed through a conscious effort to misinterpret counter-evidence and emphasise minor details that support the theory. As mentioned above, pseudo-science does wonders to support conspiracy theories and increase their attractiveness to those who would otherwise discount the proposition. By merging the harsh reality of science with the obvious fantasy that is the subject matter of most conspiracies, people gain a semi-plausible framework within which to construct their theories and establish consistency in defending their position. The phenomenon is quite similar to religion: the misuse and misinterpretation of "evidence" to satisfy humanity's desire to regain control over the unexplainable, and to support a belief in a corrupt hidden agenda (distrust of authority).

There is little that distinguishes the characteristics of conspiracy theorists from those of religious fundamentalists; both share a common bond in their single-mindedness and perceived superiority over the 'disbelievers'. But there are subtle differences. Conspiracy theorists undertake a lifelong crusade to uncover the truth; an adversarial relationship develops in which the theorist is elevated to a level of moral and intellectual superiority (at having uncovered the conspiracy and thwarted any attempt at deception). The religious, on the other hand, seem to take their gospel at face value, perhaps at a deeper level and with a greater certainty than the theorists (possibly due to religion's much longer history and firm establishment within society). The point here is that while there may be small differences between the two groups, the underlying psychological mechanisms could be quite similar; they certainly seem related through their common grounding in our belief system.

Psychologically, conspiracies are thought to arise for a number of reasons. As already mentioned, cognitive dissonance is one mechanism that may perpetuate these beliefs in the face of overwhelming contradictory evidence. The psychoanalytic concept of projection is another theorised catalyst in the formulation of conspiracy theories: the theorist subconsciously projects their own perceived vices onto the target in the form of conspiracy and deception. Thus the conspirator becomes an embodiment of what the theorist despises, regardless of the objective truth. A second leading psychological cause is the tendency to apply 'rules of thumb' to social events. Humans believe that significant events have significant causes, the death of a celebrity being a prime example; there has been no shortage of such occasions even in recent months, with the untimely deaths of Hollywood actors and local celebrities. Such events rock the foundations of our worldviews, often to such an extent that artificial causes are attributed in order to reassure ourselves that the world is predictable (even if the resulting theory is so artificially complex that any plausibility quickly evaporates).

It is interesting to note that the capacity to form beliefs based on large amounts of imagination and very little fact is present within most of us. Take a moment to recall what you thought the day the twin towers came down, or when Princess Diana was killed. Did you formulate some radical postulations based on your own interpretations and hidden agendas? For the vast majority of us, time proves the ultimate adjudicator and dismisses fanciful ideas out of hand. But for some, the attraction of having one up on their fellow citizens, at having uncovered some secret ulterior motive, reinforces such beliefs until they become fused with the person's sense of identity. The truth is nice to have, but some things in life simply do not have explanations rooted in the deception of some higher power. Random events do happen, without any need for a hidden omnipresent force dictating events from behind the scenes.

PS: Elvis isn’t really dead, he’s hanging out with JFK at Area 51 where they faked the moon landings. Pardon me whilst I don my tin-foil hat, I think the CIA is using my television to perform mind control…

The topic of free-will is one of the largest problems facing modern philosophers. An increasing empirical onslaught has done little to clear these murky waters. In actuality, each scientific breakthrough has resulted in greater philosophical confusion, whether due to the impractical knowledge base needed to interpret the results or to counter-intuitive outcomes (the RP signal, where brain activity precedes conscious action). My own attempts to shed light on the matter are equally feeble, which has precipitated the present article. What is the causal nature of the universe? Is each action determined and directly predictable from a sufficiently detailed starting point, or is there a degree of inherent uncertainty? How can we reconcile the observation that free-will appears to be a valid characteristic of humanity with mounting scientific evidence to the contrary (e.g. the search for a Grand Unified Theory)? These are the questions I would like to discuss.

'Emergent' seems to be the latest buzzword in popular science. While the word is appealing when describing how complexity can arise from relatively humble beginnings, it does very little to actually explain the underlying process. The two states are simply presented on a platter, the lining of which is composed of fanciful 'emergent' conjurings. While there is an underlying science to the process, involving dynamic systems (modelled on biological growth and movement), there does seem to be an element of hand-waving and mystique.

This state of affairs does nothing to help the current philosophical floundering. Intuitively, free-will is an attractive feature of the universe. People feel comfortable knowing that they have a degree of control over the course of their lives. A loss of such control could even be construed as a facilitator of mental illness (depression, bipolar disorder). The attempts of science to develop a unified theory of complete causal prediction therefore seem to undermine our very nature as human beings. Certainly, some would embrace the notion of a deterministic universe with open arms, happy to put uncertainty to an end. However, one would do well (from a eudaimonic point of view) to cognitively reframe anxiety about the future into an expectation of surprise and anticipation of the unknown.

While humanity is firmly divided over its preference for a predictable or an uncertain universe, the problem remains that we appear to have a causally determined universe containing individual freedom of choice and action. Quantum theory has undermined determinism and causality to an extent, with the phenomenon of spontaneous vacuum energy supporting the possibility of events occurring without any obvious cause. Such evidence is snapped up happily by proponents of free-will with little regard for its real-world plausibility. This is another example of philosophical hand-waving, where the real problem involves a form of question begging; that is, an argument whose conclusion is smuggled into its premises. For example, the following argument is often used (formalised in the sketch after the list):

  1. Assume quantum fluctuations really are indeterminate in nature (an underlying causality, à la 'String Theory', not being applicable).
  2. Free-will requires indeterminacy as a physical prerequisite.
  3. Therefore, quantum fluctuations are responsible for free-will.
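Put into rough propositional form (my reconstruction, not a quotation of any particular author), the gap becomes visible: indeterminacy is offered as a *necessary* condition for free-will in premise 2, yet the conclusion treats it as a *sufficient* one.

```latex
% I: "quantum events are indeterminate"    F: "free-will exists"
%   Premise 1:  I                (the assumed indeterminacy)
%   Premise 2:  F \Rightarrow I  (indeterminacy is necessary for free-will)
%   Conclusion: F                (does not follow)
\[
  \frac{I \qquad F \Rightarrow I}{\therefore\; F}
  \quad \text{(invalid: affirms the consequent)}
\]
```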

To give credit where it is due, the actual arguments used are more refined than the outline above; however, the basic structure is similar. Basic premises can be set out and postulates put forward describing the possible form of neurological free-will, but as with most developing fields the supporting evidence is scant at best. And to make matters worse, quantum theory has shown that human intuition is often not the best method of attempting an explanation.

However, if we work with what we have, perhaps something useful will result. This includes informal accounts such as anecdotal evidence. Consideration of such evidence has led me to two 'maxims' that seem to summarise the evidence presented in regard to determinism and free-will.

Maxim one: the degree of determinism within a system is reliant upon the scale of measurement; a macro form of measurement yields a predominantly deterministic outcome, while a micro form of measurement yields an outcome that is predominantly 'free', or unpredictable. What this says is that determinism and freedom can be directly reconciled and can coexist within the same construct of reality. Rather than existing as two distinctly separate entities, these universal characteristics should be reconceptualised as two extremities on a sliding scale of some fundamental quality. Akin to Einstein's general relativity, the notions of determinism and freedom are relative to the observer. In other words, how we examine the fabric of reality (at large or small scale) results in a worldview that is either free or constrained by predictability. Specifically, quantum-scale measurements allow for an indeterministic universe, while larger-scale phenomena are increasingly easy to predict (with a corresponding decrease in the accuracy of the measurement tool). In short, determinism (or free-will) is not a physical property of the universe, but a characteristic of perception and an artifact of the measurement method used. While this maxim seems commonsensical and almost obvious, I believe the idea that both determinism and free-will are reconcilable features of this universe warrants further investigation.
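One way to make this concrete (a statistical illustration of scale-dependence, not a claim about any particular physical mechanism) is the law of large numbers: each 'micro' event below is maximally unpredictable, yet the 'macro' average becomes ever more sharply determined as the scale of measurement grows.

```python
import numpy as np

rng = np.random.default_rng(1)

# A 'micro' measurement: one random event is unpredictable.
# A 'macro' measurement: the average of many such events is
# almost completely determined (it converges on 0.5).
for scale in (1, 100, 10_000, 1_000_000):
    samples = rng.random(scale)   # uniform 'micro' events on [0, 1)
    print(f"n = {scale:>9}: mean = {samples.mean():.4f}")
```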

Maxim two: indeterminacy and free-will are naturally occurring results that emerge from the complex interaction of a sufficient number of interacting deterministic systems (actual mechanisms unknown). Once again we are falling back on the explanatory scapegoat of 'emergence'; however, its use is partially justified in the light of empirical developments. For example, investigations into fractal patterns and the modelling of chaotic systems seem to justify the existence of emergent complexity. Fractals are generated from a finite set of definable equations, yet result in intensely complicated geometric figures with infinite regress, the surface features undulating with each magnification (interestingly, fractal patterns occur naturally in the physical world, arising from biological growth patterns and magnetic field lines). Chaos is a similar phenomenon: beginning from reasonably humble initial circumstances, an amalgamation of interfering variables results in an overall system that is indeterminate and unpredictable (e.g. weather patterns). Perhaps this is the mechanism of human consciousness and freedom of will; individual (and deterministic) neurons contribute en masse to an overall emergent system that is unpredictable. As a side note, such a position also supports the possibility of artificial intelligence: build something sufficiently complex, and 'human-like' consciousness and freedom will result.
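The chaotic half of this maxim is easy to demonstrate with the textbook logistic map, a fully deterministic one-line rule (my example; the text above names chaos only in general terms): two trajectories that start a millionth apart become completely uncorrelated within a few dozen iterations.

```python
# Logistic map x -> r * x * (1 - x): deterministic, yet effectively
# unpredictable in the chaotic regime (r = 4).
r = 4.0
x, y = 0.400000, 0.400001   # two starting points differing by 1e-6

for step in range(1, 41):
    x, y = r * x * (1 - x), r * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step}: x = {x:.6f}, y = {y:.6f}, gap = {abs(x - y):.6f}")
```

By step 40 the 'gap' is of the same order as the values themselves: perfect determinism at the level of the rule, practical indeterminacy at the level of the long-run trajectory.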

The two maxims proposed may seem obvious on cursory inspection; however, it can be argued that the proposal of a universe in which determinism and freedom of will form two alternative interpretations of a common, underlying reality is unique. Philosophically, the topic is difficult to investigate and discuss due to limitations on empirical knowledge and an increasing requirement for specialised technical insight into the field.

The ultimate goal of modern empiricism is to reduce reality to a strictly deterministic foundation. In keeping with this aim, experimentation hopes to arrive at physical laws of nature that are increasingly accurate and versatile in their generality. Quantum theory has since put this inexorable march on hold while futile attempts are made to circumvent the obstacle that is the uncertainty principle.

Yet perhaps there is a light at the end of the tunnel, however dim the journey may be. Science may yet produce a grand unified theory that reduces free-will to causally valid, ubiquitous determinism. More likely, as theories of free-will come closer to explaining the etiology of this entity, we will find a clear and individually applicable answer receding frustratingly into the distance. From a humanistic perspective, it is to be hoped that some degree of freedom will be preserved in this way. After all, the freedom to act independently, and uncertainty about the future, are what make life worth living!

Teleportation is no longer banished to the realm of science fiction. It is widely accepted that what was once considered a physical impossibility is now directly achievable through quantum manipulation of individual particles. While the methods involved are still in their infancy (single particles remain the heaviest objects yet teleported), we can at least begin to think about the possibilities on the basis of plausibility. Specifically, what are the implications for personal identity if this method of transportation becomes possible on a human scale? Atomically destructing and reconstructing an individual at an alternate location could introduce problems for consciousness. Is this the same person, or simply an identical twin with its own thoughts, feelings and desires? These are the questions I would like to discuss in this article.

Biologically, we lose our bodies several times over during one human lifetime. The cells of entire organs are replaced over months and years with little thought given to the implications for self-identity. It is a phenomenon that is often overlooked, especially in relation to recent empirical developments in quantum teleportation. If we are biologically replaced with regularity, does this imply that our sense of self is likewise dynamic in nature and constantly evolving? There are reasonable arguments on both sides of this debate; maturity and daily experience do result in a varied mental environment. However, one wonders if this has more to do with innate processes such as information transfer, recollection and modification than with the purely biological characteristics of individual cells (in relation to cell division and rejuvenation processes).

Thus it could be argued that identity is a largely conscious and directed process (in terms of seeking out information and creating internal schemas of identity). This does not totally rule out the potential for identity to be based upon changes to biological structure. Perhaps the effects are more subtle, modifying our identities in such a way as to facilitate maturity, or even mental illness (if the duplication process is disturbed). Cell mutation (neurological tumour growth) is one example whereby a malfunctioning biological process can result in direct and often drastic changes to identity.

However, I believe it is safe to assume that "normal" tissue-regenerative processes do not result in any measurable changes to identity. What makes teleportation so different? Quantum teleportation has been used to teleport photons from one location to another and, more recently, particles with mass. The process is decidedly less romantic than science-fiction authors would have us believe: classical transmission of information is still required, and a receiving station must be established at the desired destination. This means that matter transportation, à la 'Star Trek' transporters, remains very much an unforeseeable fiction. In addition, something as complex as the human body would require incredible computing power to scan at sufficient detail, another limit on practicality. Fortunately, there are potential uses for this technology, such as in the fledgling industry of quantum computing.

The process works around the limitations of the quantum uncertainty principle (which states that the exact properties of a quantum system can never be known in complete detail) through a process known as the "Einstein-Podolsky-Rosen" effect. Einstein had real issues with quantum mechanics; he didn't like it at all (to quote the cliché, 'spooky action at a distance'). The EPR paper aimed to show, via entangled pairs of quantum particles, that quantum mechanics must be incomplete. John Stewart Bell turned the EPR proposition on its head when he demonstrated that measurements on entangled particles are correlated too strongly to be explained by chance or by any local, pre-arranged agreement between them. The fact that entanglement does not violate the no-communication theorem is good news for our assumptions about reality, but more bad news for teleportation fans: information regarding the quantum state of the teleportee must still be transmitted via conventional methods for reassembly at the other end.

Quantum teleportation begins by distributing a pair of entangled particles, one to each teleportation station. At A, the particle to be teleported interacts with entangled particle 1 and its quantum state is partially scanned, with care taken not to cause too much disruption (measurement distorts the original; the harder you look, the more uncertain the result). The results of this partial scan are transmitted, at light speed at best, to the receiver at B, while entanglement ensures that the remaining, unscanned information is reflected instantaneously in entangled particle 2. Utilising the principles of the EPR effect and Bell's correlations, it is then possible to reconstruct the state of the original particle at the distant location, B. While the exact mechanism is beyond the scope of a philosophical article, it is fair to say that the process works by combining the entangled information carried by particle 2 with the classically transmitted information scanned out of the original particle at A.
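For readers who want the protocol spelled out, the standard single-qubit teleportation circuit can be simulated directly. The sketch below is a generic textbook implementation, not a description of any specific experiment (the amplitudes and variable names are mine): a shared Bell pair, a joint measurement at A yielding two classical bits, and a correction at B.

```python
import numpy as np

rng = np.random.default_rng(42)

# Basis index = q0*4 + q1*2 + q2, where q0 is the state to teleport,
# q1 is Alice's half of the Bell pair and q2 is Bob's half.

def apply_single(state, qubit, gate):
    """Apply a 2x2 gate to one qubit of a 3-qubit state vector."""
    new = np.zeros_like(state)
    for i in range(8):
        b = (i >> (2 - qubit)) & 1                 # current bit value
        for b_new in range(2):
            j = i ^ ((b ^ b_new) << (2 - qubit))   # basis state with bit set to b_new
            new[j] += gate[b_new, b] * state[i]
    return new

def apply_cnot(state, control, target):
    new = np.zeros_like(state)
    for i in range(8):
        c = (i >> (2 - control)) & 1
        new[i ^ (c << (2 - target))] = state[i]    # flip target iff control is 1
    return new

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

# 1. The unknown state |psi> = a|0> + b|1> that A wants to send.
a, b = 0.6, 0.8j
psi = np.array([a, b])

# 2. A and B share the Bell pair (|00> + |11>)/sqrt(2) on qubits 1 and 2.
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
state = np.kron(psi, bell)

# 3. A's joint (Bell-state) measurement: CNOT(q0 -> q1), Hadamard on q0,
#    then measure q0 and q1 to obtain two classical bits.
state = apply_cnot(state, 0, 1)
state = apply_single(state, 0, H)
outcome = rng.choice(8, p=np.abs(state) ** 2)
m0, m1 = (outcome >> 2) & 1, (outcome >> 1) & 1

# Collapse the state onto the measured values of q0 and q1.
mask = np.array([((i >> 2) & 1) == m0 and ((i >> 1) & 1) == m1 for i in range(8)])
state = np.where(mask, state, 0)
state /= np.linalg.norm(state)

# 4. The two classical bits travel to B conventionally; B corrects q2.
if m1:
    state = apply_single(state, 2, X)
if m0:
    state = apply_single(state, 2, Z)

# 5. B's qubit is now in the original state |psi>; the original at A
#    has been destroyed by the measurement (no cloning takes place).
bob = np.array([state[m0 * 4 + m1 * 2], state[m0 * 4 + m1 * 2 + 1]])
print("teleported:", bob, " original:", psi)
```

Note that the two classical bits are indispensable: without them B cannot choose the right correction, which is exactly why the protocol cannot outrun light and why, as stated above, it respects the no-communication theorem.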

Casting practicality aside for the sake of philosophical discussion, if such a process became possible for a being as complex as a human, what would be the implications for consciousness and identity? Common sense tells us that if an exact replica can be created, it is in no way different from the original. One would simply 'wake up' at the new location within the same body and mind one left. Those who subscribe to a Cartesian view of separated body and mind would look upon teleportation with abhorrent revulsion: surely along the way we are losing a part of what makes us uniquely human, some intangible soul or essence of mind which cannot be reproduced? This leads to similar thought experiments. What if another being somewhere in the universe were born with the exact mental characteristics of yourself? Would this predispose the two of you to some underlying phenomenological connection? Perhaps this is supported by anecdotal evidence from empirical studies of identical twins, who are thought to share a common bond and to demonstrate almost telepathic abilities at times. Then again, the nature of this mechanism is probably no more mystical than a familiar acquaintance predicting how you would react in a given situation, or similarities in brain structure predisposing twins to 'higher than average' mental convergence.

Quantum teleportation of conscious beings also raises serious moral implications. Is it murder to deconstruct the individual at point A, or is this initial crime nullified once reassembly is completed? Is it still immoral if someone else appears at the receiver due to error or quantum fluctuation? Others may argue that it is no different from conventional modes of transport: human error should be dealt with as such (a necessary condition for the label of crime or immorality), and naturally occurring disasters interpreted as nothing more than random events.

While it is doubtful that we will ever see teleportation on a macro scale, we should remain mindful of the philosophical and practical implications of emerging technologies. Empirical forces are occasionally blind to these factors when such innovations are announced to the general public. While it is important that such processes are allowed to continue, the rate at which they are appearing can be cause for alarm if they impinge upon our human rights and the preservation of individuality. There has never been a more pressing time for philosophers to think about the issues and offer their wisdom to the world.

Many of us take the capacity to sense the world for granted. Sight, smell, touch, taste and hearing combine to paint an uninterrupted picture of the technicolour apparition we call reality. Such lucid representations are what we use to define objects in space, plan actions and manipulate our environment. However, reality isn't all it's cracked up to be. Namely, our role in defining the universe in which we live is much greater than we think. Humanity, through the use of sensory organs and the resulting interpretation of physical events, succeeds in weaving a scientific tapestry of theory and experimentation. This textile masterpiece may be large enough to 'cover all bases' (in terms of explaining the underlying etiology of observations), but it might not be made of the right material. With what certainty do scientific observations carry a sufficient portion of objectivity? What role do the human mind and its modulation of sensory input play in creating reality? What constitutes objective fact, and how can we be sure that science is 'on the right track' with its model of empirical experimentation? Most importantly, is science at the cusp of an empirical 'dark age', where the limitations of perception fundamentally hamper the steady march of theoretical progress? These are the questions I would like to explore in this article.

The main assumption underlying scientific methodology is that the five sensory modalities employed by the human body are, by and large, uniformly employed. That is, despite small individual fluctuations in fidelity, the performance of the human senses is mostly equal. Visual acuity and auditory perception are sources of potential variance; however, certain medical technologies have circumvented and nullified most of these disadvantages (glasses and hearing aids, respectively). In some instances, such interventions may even improve the individual's sensory experience, exceeding 'normal' ranges through the use of further refined instruments. Such is the case with modern science, as the realm of classical observation becomes subverted by the need for new, revolutionary methods designed to observe both the very big and the very small. Satellites loaded with all manner of detection equipment have become our eyes for the ultra-macro; NASA's COBE orbiter gave us the first view of early universal structure via detection of the cosmic microwave background radiation (CMB). Likewise, scanning probe microscopy (SPM) enables scientists to observe on the atomic scale, below the threshold of visible light. In effect, we have extended and supplemented our ability to perceive reality.

But are these innovations also improving the objective quality of observations, or are we being led into a false sense of security? Are we becoming comfortable with the idea that what we see constitutes what is really ‘out there’? Human senses are notoriously prone to error. In addition, machines are only as good as their creator. Put another way, artificial intelligence has not yet superseded the human ‘home grown’ alternative. Therefore, can we rely on a human-made, artificial extension of perception with which to make observations? Surely we are compounding the innate inaccuracies, introducing a successive error rate with each additional sensory enhancement. Not to mention the interpretation of such observations and the role of theory in whittling down alternatives.

Consensus cannot be reached on whether what I perceive is anything like what you perceive. Is my perception of the colour green the same as yours? Empirically and philosophically, we are not yet in a position to answer this question with any objectivity. We can examine brain structure and compare regions of functional activity, but the ability to directly extract and record aspects of meaning or consciousness remains firmly in the realms of science fiction. The best we can do is compare and contrast our experiences through the medium of language (which introduces its own set of limitations). As mentioned above, the human sensory experience can, at times, become lost in translation.

Specifically, the ability of our minds to disentangle the information overload that unrelentingly flows through mental channels can wane due to a variety of influences. Internally, the quality of sensory inputs is governed at a fundamental level by biological constraints. Millions of years of evolution have resulted in a vast toolkit of sensory automation. Vision, for example, has developed in such a way as to become a totally unconscious and reflexive phenomenon. The biological structure of individual retinal cells predisposes them to respond to certain types of movement, shapes and colours. Likewise, the organisation of neurons within regions of the brain, such as the primary visual cortex in the occipital lobe, processes information in pre-defined ways. In the case of vision, the vast majority of processing is done automatically, reducing the overall level of awareness and direct control the conscious mind has over the sensory system. The conclusion here is that we are limited by physical structure rather than by differences in conscious discrimination.

The retina acts as both the primary source of input and a first-order processor of visual information. In brief, photons are absorbed by receptors on the back wall of the eye. These incoming packets of energy are absorbed by special proteins (rods for light intensity, cones for colour) and trigger electrical signals in the attached neurons. Low-level processing is accomplished by the lateral organisation of retinal cells: ganglion neurons communicate with their neighbours and influence the likelihood of each other's signal transmission. Cells communicating in this manner facilitate basic feature recognition (specifically, edges and light/dark discrepancies) and motion detection.
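The lateral-inhibition idea can be captured in a toy model: treating each ganglion cell as excited by its own receptor and inhibited by its neighbours amounts to convolving the input with a centre-surround kernel. The numbers below are arbitrary, chosen only to show the edge-enhancing effect.

```python
import numpy as np

# A toy retina: a 1-D row of receptor activations with a light/dark edge.
light = np.array([1.0] * 8 + [0.2] * 8)

# Lateral inhibition as a centre-surround kernel: each 'ganglion cell'
# is excited by its own receptor and inhibited by its two neighbours.
kernel = np.array([-0.5, 1.0, -0.5])
response = np.convolve(light, kernel, mode="same")

print(np.round(response, 2))
# The response is near zero across the uniform interior and swings
# sharply at the boundary: low-level edge detection, as described above.
```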

As with all the sensory modalities, information is then transmitted to the thalamus, a primitive brain structure that acts as a communications 'hub'; its proximity to the brain stem (mid and hind brain) ensures that reflexes are privy to visual input prior to conscious awareness. The lateral geniculate nucleus is the region of the thalamus which splits incoming visual input into three main streams (M, P and K). Interestingly, these channels carry signals with unique properties (e.g. exclusively colour, motion, etc.). In addition, the cross-lateralisation of visual input is a common feature of human brains: left and right fields of view are diverted at the optic chiasm and processed on opposite hemispheres (the left field of view from both eyes is processed on the right side of the brain). One theory as to why this system develops is that it minimises the impact of unilateral hemispheric damage; under the 'dual brain' hypothesis, each hemisphere can act as an independent agent, reconciling and supplementing reductions in function due to damage.

We seem to fall back lazily on these automated subsystems, never fully appreciating and flexing the full capabilities of our sensory appendages. Michael Frayn, in his book 'The Human Touch', demonstrates this point aptly:

“Slowly, as you force yourself to observe and not to take for granted what seems so familiar, everything becomes much more complicated…That simple blueness that you imagined yourself seeing turns out to have been interpreted, like everything else, from the shifting, uncertain material on offer” (Frayn, 2006, p. 26)

Of course, we are all blissfully ignorant of these finer details when it comes to interpreting the sensory input gathered by our bodies. The consciousness acts 'with what it's got', without a care as to the authenticity or objectivity of the observations. We can observe this first-hand in a myriad of ways in which the unreal is treated as if it were real. Hallucinations are just one mechanism by which the brain is fooled. While we know, to a degree, that such things are false (depending upon the etiology, e.g. schizophrenia), such visual disturbances are nonetheless able to provoke physiological and emotional reactions. In summary, the biological (and automated) component of perception very much determines how we react to, and observe, the external world. In combination with the human mind (consciousness), which introduces a whole new menagerie of cognitive baggage, a large amount of uncertainty is injected into our perceptual experience.

Expanding outwards from this biological launchpad, it seems plausible that the qualities which make up the human sensory experience should affect how we define the world empirically. Scientific endeavour labours to quantify reality and strip away the superfluous extras, leaving only constitutive and fundamental elements. To accomplish this task, humanity employs empirical observation. The segue between the biological foundations of perception and the paradigm of scientific observation involves a shared sensory limitation. Classical observation was limited by 'naked' human senses. As the bulk of human knowledge grew, so too did the need to extend and improve methods of observation. Consequently, science is now perhaps realising the limitation of the human mind to digest an overwhelming plethora of information.

Currently, science is restricted by the development of technology. Progress is maintained only through the ingenuity of the human mind in overcoming the biological disadvantages of observation. Finely tuned microscopes tap into quantum effects in order to measure individual atoms. Large radio-telescope arrays link together for an eagle's-eye view of the heavens. But as our methods and tools for observing grow in complexity, so too does the degree of abstract reasoning required to grasp the implications of their findings. Quantum theory is one such warning indicator.

Like a lighthouse sweeping the night sky to signal impending danger, quantum physics, or more precisely humanity's inability to agree on any one consensus model of reality, could be telling us something. Perhaps we are becoming too reliant on our tools of observation, using them as a crutch in a vain attempt to avoid our biological limitations. Is this a hallmark of our detachment from observation? Quantum 'spookiness' could simply be the result of a fundamental limitation of the human mind to internally represent and perceive increasingly abstract observations. Desperately trying to consume the reams of information that result from rapid progress and intense observation, scientific paradigms become increasingly specialised and divergent, increasing the degree of inter-departmental bureaucracy. It now takes a lifetime of training even to grasp the basics of current physical theory, let alone to dissect observations and truly grasp their essence.

In a sense, science is at a crossroads. One pathway leads to an empirical dead end, where humanity has exhausted every possible route of explanation. The other involves either artificial augmentation (in essence, AI that can do the thinking for us) or a fundamental restructuring of how science conducts its business. Science is in danger of information overload; the limitations introduced by a generation of unrelenting technical advancement, and increasingly complex tools with which to observe, have taken their toll. Empirical progress is stalling, possibly due to a lack of understanding by those doing the observing. Science is detaching from its observations at an alarming rate and, if we aren't careful, is in danger of losing sight of what the game is all about: the quest for knowledge and understanding of the universe in which we live.

Most of us would like to think that we are independent agents in control of our destiny. After all, free-will is one of the unique phenomena that humanity can claim as its own, a fundamental part of our cognitive toolkit. Experimental evidence, in the form of neurological imaging, has been interpreted as an attack on this mental freedom. Studies highlighting the possibility of unconscious activity preceding the conscious 'will to act' seem almost to sink the arguments of non-determinists (libertarians). In this article I plan to outline this controversial research and offer an alternative interpretation, one which does not infringe upon our ability to act independently and of our own accord. I would then like to explore some of the situations where free-will could be 'missing in action', and suggest that this occurs more frequently than expected.

A seminal investigation conducted by Libet et al. (1983) first challenged (empirically) our preconceived notions of free-will. The setup consisted of an electroencephalograph (EEG, measuring overall electrical potentials through the scalp) connected to the subject, and a large clock with markings denoting various time periods. Subjects were required simply to flick their wrist whenever a feeling urged them to do so. The researchers were particularly interested in the "Bereitschaftspotential", or readiness potential (RP): a signature EEG pattern of activity that signals the volitional initiation of movement. Put simply, the RP is a measurable spike in electrical activity from the pre-motor region of the cerebral cortex, a mental preparatory action that puts the wheels of movement into motion.

Results of this experiment indicated that the RP significantly preceded the subjects' reported sensations of conscious awareness. That is, the act of wrist flicking seemed to precede conscious awareness of the intent to act. While the actual delay between RP detection and conscious registration of intent to move was small (by our standards), the roughly half-second gap was more than enough to assert that a measurable difference had occurred. Libet interpreted these findings as having vast implications for free-will. He argued that since electrical activity preceded conscious awareness of the intent to move, free-will to initiate movement was non-existent (Libet allowed free-will to control movements already in progress, that is, to modify their path or act as a final 'veto', allowing or disallowing the act).

Many have taken the time to respond to Libet's initial experiment. Daniel Dennett (in his book Freedom Evolves) provides an apt summary of the main criticisms. The most salient rebuttal comes in the form of signal delay. Consciousness is notoriously slow in comparison to the automated mental processes that act behind the scenes. Take the sensation of pain, for example. Stimulation of the nerve endings must first reach a sufficient level for an action potential to fire, causing the axon terminals to flood neurotransmitters into the synaptic gap. The second-order neuron receives these chemical messengers, modifying its electrical charge and causing another action potential to fire along its myelinated axon. Taking into account the distance this signal must travel (at anywhere from 1-10 m/s), it then arrives at the thalamus, the brain's sensory 'hub', where it is routed to consciousness. Consequently, there is a measurable gap between the external event and conscious awareness, made larger still if the signal is weak (low pain) or the mind is distracted. In this instance, too, electrical activity takes place before consciousness. Arguably, the same phenomenon could be occurring in the Libet experiment.
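The arithmetic behind this rebuttal is simple enough to spell out. Using the conduction velocities quoted above, plus an order-of-magnitude limb-to-cortex distance of one metre (my assumption for the example), the travel time alone spans tens to hundreds of milliseconds, a substantial fraction of Libet's half-second gap:

```python
# Rough signal-delay arithmetic. Distance is an assumed order of
# magnitude (hand to brain ~1 m); velocities follow the 1-10 m/s
# range quoted above, with 100 m/s added for fast myelinated fibres.
distance_m = 1.0
for velocity in (1.0, 10.0, 100.0):
    delay_ms = distance_m / velocity * 1000
    print(f"{velocity:>5.0f} m/s -> {delay_ms:6.1f} ms before arrival at cortex")
```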

Delays are inevitably introduced when consciousness enters the equation. The brain is composed of a conglomerate of specialised compartments, each communicating with its neighbours and performing its own part of the process in turn. Evolution has drafted brains that act automatically first and consciously second; consequently, the automatic gains priority over the directed. Reflexes and instincts act to save our skins long before we are even aware of the problem. Naturally, electrical activity in the brain could thus precede conscious awareness.

The design of the Libet experiment itself could also be misleading. Libet seems to equate his measurement of consciousness timing with free-will when, in actual fact, the agent has already freely decided to follow instructions. What I am trying to say is that free-will need not act as the initiator of every movement; rather, it 'sets the stage' for events and authorises the operation to go ahead. When told to move voluntarily, the agent's will makes the decision to comply or rebel. Compliance causes the agent to authorise movement, but the specifics are left to chance. Perhaps a random input generator (quantum indeterminacy?) provides the catalyst with which this initial order combines to create the RP and the eventual movement. Conscious registration of this fact occurs only once the RP is already forming.

Looking at things from this perspective, consciousness seems to play a constant game of ‘catch-up’ with the automated processes in our brains. Our will is content to act as a global authority, leaving the more menial and mundane tasks up to our brain’s automated sub-compartments. Therefore, free-will is very much alive and kicking, albeit sometimes taking a back-seat to the unconscious.

We have begun by exploring the nature of free-will and how it links in with consciousness. But what of those unconscious instincts that override our sense of direction and seek to regress humanity to its more animalistic and primitive ancestry? Such instincts act covertly, sneaking into action whilst our will is otherwise indisposed. Left unabated, an agent who gives themselves completely to urges and evolutionary drives could be said to be devoid of free-will, or at the very least somewhat lacking compared to more 'aware' individuals. Take sexual arousal, for instance. Like it or not, our bodies act on impulse, removing free-will from the equation through simple stimulus-response conditioning. Try as we might, sexual arousal (if allowed to follow its course) acts immediately upon visual or physical stimulation. It is only when consciousness kicks into gear and yanks on the leash attached to our unconscious that control is regained. Eventually, with enough training, it may be possible to override these primitive responses, but the conscious effort required to sustain such a project would be mentally draining.

Society also seeks to rob us of our free-will. People are pushed and pulled by group norms, the expectations of others and the messages that bombard us daily. Rather than encouraging individualism, modern society urges us to follow trends. Advertising is crafted so that individuals may even be fooled into thinking they have arrived at decisions of their own volition (subliminal messaging), when in actual fact it is simply tapping into basic human needs for survival (food, sex, shelter/security, etc.).

Ironically, science itself could be said to be reducing the amount of free-will we can exert. Scientific progress seeks to render the world deterministic; that is, totally predictable through increasingly accurate theories. While the jury is still out as to whether 'ultimate' accuracy in prediction will ever be achieved (arguably, there are not enough bits of information in the universe with which to construct a computer powerful enough for the task), science is coming closer to a deterministic framework whereby the paths of individual particles can be predicted. Quantum physics is but the next hurdle to be overcome in this quest for omniscience. If the inherent randomness that lies within quantum processes is ever fully explained, perhaps we will be in a position (at least scientifically) to model an individual's future actions from a number of initial variables.

What could this mean for the nature of free-will? If past experiments are anything to go by (Libet et al.), it will rock our sense of self to the core. Are we but behaviouristic automatons, as the psychologist Skinner proposed? Delving deeper into the world of the quanta, will we ever be able to realistically model and predict the paths of individual particles, and thus the future course of the entire system? Perhaps the Heisenberg uncertainty principle will spare us from this bleak fate. The irreducible randomness of the quantum wave function could be the final insurmountable obstacle that neurological researchers and philosophers alike will never conquer.

While I am all for scientific progress and increasing the bulk of human knowledge, perhaps we are jumping the gun with this free-will business. Perhaps some things are better left mysterious and unexplained. A defeatist attitude if ever I saw one, but it could be justified. After all, how would you feel if you knew every action was decided before you were even a twinkle in your father's eye? Would life even be worth living? Sure, but it would take a lot of reflection, and a personality that could either deny or reconcile the feelings of unease that such a proposition brings.

They were right; ignorance really is bliss.


A common criticism I have come across during my philosophical wanderings is the accusation that thinkers and dreamers cannot possibly expect their ideas ever to take hold in society. "What is the point of philosophy", they cry, "if the very musings being proposed cannot be realistically and pragmatically implemented?" The subtle power of this argument is often overlooked; its point is more than valid. If all philosophy can do is outline an individual's thoughts in a clear and concise manner, without even a hint of how to implement said ideas, then what is the point in even airing them? Apart from the intellectual stimulation such discussion brings, of course, it seems as though the observations of philosophers are wasted.

In the modern world, the philosopher takes a backseat when it comes to government policy and the daily operation of the state. Plato painted a far rosier picture in his ideal Republic, which placed philosophers directly in the ruling class and rested on their supposed ability to lead effectively.

“Until philosophers rule as kings or those who are now called kings and leading men genuinely and adequately philosophise, that is, until political power and philosophy entirely coincide, while the many natures who at present pursue either one exclusively are forcibly prevented from doing so, cities will have no rest from evils,… nor, I think, will the human race.” (Republic 473c-d)

But is this really attainable? Was Plato correct in stating 'until (my italics, TC) philosophers rule as kings'? The implication is that philosophers currently lack certain qualities that would make them suitable for the role of leadership. Was Plato referring to a lack of practicality, a lack of confidence in their abilities to lead, or something more mundane, such as the public's intrinsic distrust of intellectualism? Certainly, looking at the qualities of today's leaders, it seems that one requires expert skills in the art of social deception and persuasion in order to succeed. When Plato speaks of "those who love the sight of truth" in his description of the ideal "philosopher kings" who would rule the republic, it seems at loggerheads with the reality of modern politics.

So in order to become a successful leader in the modern world, one must be socially skilled and able to sway the opinions of others, even without ever delivering. The balancing act becomes one of pleasing the majority (either by actually delivering on election promises or by 'pulling the wool over our eyes' until we forget about them) while upsetting only the minority. Politicians need to know how to 'play the system' to their advantage. They must also exude power, real or imaginary, relying on unconscious processes such as social dominance through both verbal and non-verbal communication. Smear campaigns taint the reputations of adversaries, and deals are brokered with the powerful few who can fund the election campaign with a 'win at all costs' attitude (in return for favours once the candidate is elected).

So why do such individuals gain a place above the world's thinkers? Plato would surely be turning in his grave if he knew that his ideal republic remains unrealised. I intend to argue that it is their pragmatism, their ability to turn policies into realities, that makes politicians preferable to philosophers in the public eye. Politicians seem to know the best ways of pleasing everyone at once, even if the outcome is not the best course of action. They can simply snap their fingers and make a problem disappear; 'swept under the carpet', temporarily at least, until their term ends and the aftermath must be dealt with by another political hopeful.

Philosophers are inherently unpopular. Not because they are wrinkly old men with white beards who mumble and smoke pipes indoors, but because they tell the truth. The scary thing is, the public does not want to hear about how things should be done; they just want problems gone with the least possible inconvenience to their own lives. This is where philosophy runs into trouble.

The whole ethos of philosophy is to objectively consider the evidence and plan for every contingency. It relies on criticism and deliberation in order to arrive at the most efficient outcome possible; and even after all that, philosophers are still humble enough to admit they may be wrong. Is this what the public detests so much? Can they not bring themselves to respect a humble attitude that is open to the possibility of error and willing to make changes for the sake of growth and improvement? It seems this way; society would rather be lied to and feel safe in its false sense of security than be led by individuals who genuinely have the best interests of humanity at heart.

Of course, there is a dark side to philosophy that could destroy its chances of ever becoming a ruling class. The adoption of certain moral standpoints, for instance, is a cause for argument insofar as the majority would never be able to arrive at the consensus needed to enact them. Philosophers have a lot of work remaining if they are ever to unite under a banner of cooperation and agreement on their individual positions. Perhaps a search for universals amongst the menagerie of current philosophical paradigms is needed before a ruling body can emerge. As it currently stands, there is simply too much disagreement between individuals over the best course of action to form a governing body. At least the present system is organised under political parties whose members share a common ideology, making deliberations far more efficient than those of a group of philosophers fundamentally opposed not only in their beliefs but also in their plans of action.

Does a philosophical dictatorship offer a way out of this mess? While the concept seems at heart totally counter to what the discipline stands for, perhaps it is the only way forward; at least in the sense that a solitary individual would hold greater authoritative power over a lower council of advisors and informants. This arrangement eliminates the problems that arise from disagreement, but seems fundamentally flawed in that the distribution of power is unequal.

The struggle between the mental and the practical is not limited to the realm of politics and philosophy. An individual's sense of self seems to be split into two distinct entities: one that is intangible, rational, conscious and impractical (the thinker), while the other is its inverse, a practical incarnation of 'you' that deals with the unpredictabilities of the world with ease but exists mostly at some unconscious level. People are adept at planning future events using their mental capacities, yet the vast majority of the time the unconscious 'pragmatist' takes over and manages to destroy such carefully laid plans (think of how you plan to tell your loved one you are going out for the night; it never goes quite as smoothly as you planned). Does this problem stem from the inherent inaccuracy of our 'mental simulators', which prevents every possible outcome from arising in conscious consideration prior to action? Or does our automatic, unconscious self have a much further reach than we might have hoped? If the latter is correct, the very existence of free-will could be in jeopardy (the possibility of actions arising before conscious thought – to be explored at a later date).

So what of a solution to this quandary? Thus far, it could be argued that this article simply follows in the footsteps of previous philosophy, advocating a strictly 'thought only' debate without any real call to action or suggestions for practical implementation. First and foremost, I believe philosophers have a lot to learn from politicians (and, quite rightly, vice versa). The notion of Plato's republic ruled by mental giants experienced in the philosophy of knowledge, ethics and meaning seems, at face value, attractive. Perhaps this is the next step for governmental systems on this planet, if it can be realised in a practical and attainable fashion.

Perhaps we are already on our way towards Plato's goal. Rising education levels could be approaching the point where they act as a catalyst for political and ideological revolution. But just as philosophers tend to forget about the realities of the world, so too are we getting a bit ahead of ourselves. Education levels are not uniform across the globe, and even intelligence (which we cannot yet measure properly) varies greatly between individuals. Therefore, the problem remains: how do we introduce the philosophical principles of meta-knowledge, respect for truth and deliberated moral codes of conduct? Is such a feat even possible, given the variety of intellects on this planet?

One thing is certain. If philosophers (and individuals alike) are ever to overcome the problems that arise from transferring ideas into reality, they must take a regular 'reality check' and ensure that their discourse remains applicable to society. This is not in any way, shape or form advocating the outlawing of discussion of impractical thought exercises and radical new ideas, but rather persuading more philosophers to reason about worldly concerns rather than the abstract. The public needs a new generation of leaders to guide it, rather than push or sweep aside, through the troublesome times that surely lie ahead. Likewise, politicians need to start leading passionately and genuinely, with the interests of their citizens at the forefront of every decision and policy amendment. They need to wear their hearts on their sleeves, advocating not only a pragmatic, law-abiding mentality within society, but also a redesign and revitalisation of morality itself. Politicians should be wholly open to criticism, indeed encouraging it, in order to truly lead their people with confidence.

Finally, we as individuals should also take time out to think of ways to give that little deliberating voice inside our heads a bit more power to enact itself on the outside world, rather than letting it be silenced by the unconscious, animalistic and unfairly dominating automaton that so often causes more harm than good. The phrase 'look before you leap' takes on a whole new meaning if this point contains even a grain of truth.