
We are all fascinatingly unique beings. Our individuality not only defines who we are, but also binds us together as a society. Each individual contributes unique talents to a collaborative pool of human endeavour, in effect enabling modern civilisation to exist as it does today. We have the strange ability to simultaneously preserve an exclusive sense of self whilst also contributing to the greater good through cooperative effort, losing a little of our independence to conformity in the process. But what does this sense of self comprise? How do we come to be the distinct beings that we are despite the best efforts of conformist group dynamics, and how can we apply such insights towards building a future society that respects individual liberty?

The nature versus nurture debate has raged for decades, with little ground won on either side. Put simply, a schism formed between those who believed our individuality is innate, present from birth, and those who subscribed to the 'tabula rasa' or blank slate approach, whereby our uniqueness is a product of the environment in which we live. Like most debates in science, there is no definitive answer. In practice, both variables interact and combine to produce variation in the human condition. The original question is therefore no longer valid; it shifts from a choice between two polarised opposites to one of quantity (how much variation is attributable to nature and how much to nurture).

Twin and adoption studies have provided the bulk of empirical evidence in this case, and with good reason. Studies involving monozygotic twins allow researchers to control for the heritability (nature) of certain behavioural traits. This group can then be compared with identical twins reared apart (a manipulation of environment) or with fraternal twins and adopted siblings (same environment, different genes). Of course, limitations remain: it is impossible to list, let alone control, every environmental variable. The interaction of genes with environment is another source of confusion, as is the expression of traits which seem to have no clear correlation with either nature or nurture.
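To make the 'how much' question concrete, behavioural geneticists often start from the classic Falconer decomposition of twin correlations. Below is a minimal sketch of that arithmetic in Python; the correlation values are illustrative assumptions, not results from any particular study.

```python
# Minimal sketch of Falconer's twin-study decomposition.
# The correlations below are illustrative assumptions, not real data.

def falconer_decomposition(r_mz: float, r_dz: float) -> dict:
    """Estimate variance components from twin correlations.

    h2 (heritability)       = 2 * (r_mz - r_dz)
    c2 (shared environment) = 2 * r_dz - r_mz
    e2 (unique environment) = 1 - r_mz
    """
    return {
        "heritability": round(2 * (r_mz - r_dz), 3),
        "shared_environment": round(2 * r_dz - r_mz, 3),
        "unique_environment": round(1 - r_mz, 3),
    }

# Hypothetical correlations for some behavioural trait:
# identical (monozygotic) twins vs fraternal (dizygotic) twins.
print(falconer_decomposition(r_mz=0.70, r_dz=0.45))
# {'heritability': 0.5, 'shared_environment': 0.2, 'unique_environment': 0.3}
```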

Can the study of personality offer any additional insight into the essence of individuality? The majority of theories within this paradigm of psychology are purely descriptive in nature. That is, they serve only to summarise a range of observable behaviours and nuances into key factors. The 'Big Five' Inventory is one illustrative example. By measuring an individual's standing on each dimension of personality (through responses to predetermined questions), it is thought that variation between people can be psychometrically measured and defined according to scores on five separate dimensions. Using mathematical techniques such as factor analysis, a plethora of personality measures have been developed. Subjective interpretation of the mathematical results, combined with cultural differences and experimental variation between samples, has produced many similar theories that differ only in the labels applied to the measured core traits.
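As a rough illustration of the factor-analytic reduction described above, the following Python sketch fits a five-factor model to synthetic questionnaire responses. The data are randomly generated and the setup is purely hypothetical; a real Big Five analysis would use responses to a validated inventory.

```python
# Toy factor analysis on synthetic questionnaire data. The items and factors
# are hypothetical; this only illustrates the dimensional reduction idea.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_respondents, n_items, n_factors = 500, 20, 5

# Simulate respondents whose answers are driven by 5 latent traits plus noise.
latent = rng.normal(size=(n_respondents, n_factors))
loadings = rng.normal(size=(n_factors, n_items))
responses = latent @ loadings + rng.normal(scale=0.5, size=(n_respondents, n_items))

fa = FactorAnalysis(n_components=n_factors, random_state=0)
scores = fa.fit_transform(responses)  # each person's position on the 5 factors
print(scores.shape)            # (500, 5)
print(fa.components_.shape)    # (5, 20) -- how strongly each item loads on each factor
```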

Other empirical theories attempt to improve on the superficiality of such descriptive scales by introducing biological (nature) fundamentals. One such example is the “BIS/BAS” measure. By attributing personality (specifically behavioural inhibition and activation) to variation in neurological structure and function, this theory expands upon more superficial explanations. Rather than simply summarising and describing dimensions of personality, neuro-biological theories allow causality to be attributed to underlying features of the individual’s physiology. In short, such theories propose that there exists a physical thing to which neuropsychologists can begin to attach the “essence of I”.

Not to be forgotten, enquiries into the effects of nurture, or one's environment, on personal development have borne many relevant and intriguing fruits. Bronfenbrenner's Ecological Systems theory is one such empirical development, attempting to map the various influences on an individual's development and their level of impact. The theory is ecological in nature due to the nested arrangement of its various 'spheres of influence' (sketched below). Each tier of the model corresponds to an environmental layer that is further removed from the direct experience of the individual. For example, the innermost Microsystem pertains to immediate factors, such as family, friends and neighbourhood. Further out, the Macrosystem encompasses influences such as culture and political climate; while not exerting a direct effect, these components of society still shape the way we think and behave.
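The nesting can be laid out as a simple data structure. Only the Microsystem and Macrosystem are named above; the intermediate tiers (Mesosystem, Exosystem) come from the standard form of Bronfenbrenner's model, and the example influences are illustrative only.

```python
# Rough sketch of the nested 'spheres of influence'. The Microsystem and
# Macrosystem are named in the text; the Mesosystem and Exosystem come from
# the standard model, and the example influences are illustrative only.
ecological_systems = {
    "microsystem": ["family", "friends", "neighbourhood"],        # direct daily contact
    "mesosystem":  ["links between home, school and peer group"], # interactions between microsystems
    "exosystem":   ["a parent's workplace", "local media"],       # indirect settings
    "macrosystem": ["culture", "political climate"],              # broad societal backdrop
}

# Tiers listed from most direct to most remote influence on the individual.
for tier, influences in ecological_systems.items():
    print(f"{tier}: {', '.join(influences)}")
```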

But we seem to be only scratching the surface of what it actually means to be a unique individual. René Descartes was one of many philosophers with an opinion on where our sense of self originates. He postulated a particular kind of dualism, whereby the mind and body exist as two separate entities. The mind was thought to influence the body (and vice versa) through the pineal gland (a small neurological structure that actually secretes hormones). Mind was also equated with 'soul', perhaps to justify the intangible nature of this seat of consciousness. Thus, such philosophies of mind seem to indirectly support the nature argument: humans have a soul, humans are born with souls, souls are intangible aspects of reality, therefore souls cannot be directly influenced by perceived events and experiences. However, Descartes seemed intuitively aware of this limitation and built in a handy escape clause: the pineal gland. Revolutionary for its time, Descartes' work changed the way philosophers thought about the sense of self, going so far as to suggest that the intangible soul operated on a bi-directional system (mind influences body, body influences mind).

The more one discusses the self, the deeper and murkier the waters become. Self in the popular sense refers to mental activity distinct from our external reality and the minds of others (I doubt, I think, therefore I am). However, the self comprises a menagerie of sub-components, such as identity, consciousness, free will, self-actualisation, self-perception (esteem, confidence, body image) and moral identity, to name but a few. Philosophically and empirically, our sense of self has evolved markedly, seemingly following popular trends throughout the ages. Beginning with a very limited and crude sense of self within proto-human tribes, the concept of self expanded into an extension of god's will (under theistic influences) and, more recently, a more reductionist and materialist sense in which individual expression and definition are key tenets. Ironically, our sense of self would not have been possible without the existence of other 'selves' against which comparisons could be made and intellects clashed.

Inspiration is one of the most effective behavioural motivators. In this day and age it is difficult to ignore society's pressures to conform. Paradoxically, success in life is often a product of creativity and individuality; some of the wealthiest people are distinctly different from the banality of normality. It seems that modern society encourages the mundane, but I believe this is changing. The Internet has ushered in a new era of self-expression. Social networking sites allow people to share ideas, collaborate with others and produce fantastic results. As access to information becomes ever easier and more commonplace, ignorance will no longer be a valid excuse. People will be under increased pressure to diverge from the path of average if they are to be seen and heard. My advice: seek out experiences as if they were gold. Use the individuality of others to mould and shape values, beliefs and knowledge into a worthy framework within which you feel at ease. Find, treasure and respect your "essence of I"; it is a part of every one of us that can often become lost or confused in this chaotic world within which we live.

Evil is an intrinsic part of humanity, and it seems almost impossible to eradicate it from society without simultaneously removing a significant part of our human character. There will always be individuals who seek to gain advantage over others through harmful means. Evil can take on many forms, depending upon the definition one uses to encapsulate the concept. For instance, the popular definition includes elements of malicious intent or actions designed to cause injury or distress to others. But what of the individual who accidentally causes harm to another, or who takes a silent pleasure in seeing others' misfortune? Here we enter a grey area, the distinction between good and evil blurring ever so slightly, preventing us from making a clear judgement on the topic.

Religion deals with this human disposition towards evil in a depressingly cynical manner. Rather than suggesting ways in which the problem can be overcome, religion instead proposes that evil or "sin" is an inevitable temptation (or a part of the character into which we are born) that can only be overcome with conscious and directed effort. Invariably one will sin at some point in life, whereupon the person should ask for forgiveness from their nominated deity. Again we see a shifting of responsibility away from the individual, with the religious hypothesis leaning on concepts such as demonic possession and lapses of faith as explanations for the existence of evil (unwavering belief in the deity cures all manner of temptations and worldly concerns).

In its current form, religion does not offer a satisfactory explanation for the problem of evil. Humanity is relegated to the back seat in terms of moral responsibility, coerced into conformity through subservience to the Church's supposed ideals and ways of life. If our society is to break free of these shackles and embrace a humanistic future free from bigotry and conflict, moral guidance must be gained from within the individual. To this end, society should consider introducing moral education for its citizens, taking a lesson from the annals of history (specifically, ancient Greece with its celebration of individual philosophical growth).

Almost counter-intuitively, some of the earliest recorded philosophies actually advocated a utopian society that was atheistic in nature and deeply rooted in humanistic, individually managed moral and intellectual growth. One such example is the discipline of Stoicism, founded in the early 3rd century BC. This philosophical movement was perhaps one of the first true instances of humanism, whereby personal growth was encouraged through introspection and control of destructive emotions (anger, violence and so on). The Stoic way was to detach oneself from the material world (similar to Buddhist traditions), a tenet that is aptly summarised in the following quote:

“Freedom is secured not by the fulfilling of one’s desires, but by the removal of desire.”

Epictetus

Returning to the problem of evil, Stoicism proposed that the presence of evil in the world is an inevitable consequence of ignorance. The premise of this argument is that a universal reason, or logos, permeates reality, and evil arises when individuals act against this reason. I believe what the Stoics meant here is that a universal morality exists, a ubiquitous guideline accessible to our reality through conscious deliberation and reflective thought. When individuals act contrary to this universal standard, it is through ignorance of what the correct course of action actually is.

This Stoic ethos is personally appealing because it has a large humanistic component. Namely, all of humanity has the ability to grasp universal moral truths and overcome their 'ignorance' of the one true path towards moral enlightenment. Whether such truths actually exist is debatable, and the apathetic nature of Stoicism seems to dull the overall human experience (muted emotions, detachment from reality).

The ancient Greek notion of eudaimonia could be a more desirable philosophy by which to guide our moral lives. The basic translation of this term as 'greatest happiness' does not do it justice. It was first introduced by Socrates, who outlined a basic version of the concept as comprising two components: virtue and knowledge. Socrates' virtue was thus moral knowledge of good and evil, or having the psychological tools to reach the ultimate good. Plato and, later, Aristotle expanded on this original idea of sustained happiness by adding layers of complexity. For example, Aristotle believed that human activity tends towards the experience of maximum eudaimonia, and to achieve that end it was thought that one should cultivate rationality of judgement and 'noble' characteristics (honour, honesty, pride, friendliness). Epicurus again modified the definition of eudaimonia to include pleasure, thus also shifting the moral focus towards maximising the wellbeing of the individual through satisfaction of desire (the argument being that pleasure equates with goodness and pain with badness, so the natural conclusion is to maximise positive feeling).

We see that the problem of evil has been dealt with in a wide variety of ways. Even in our modern world it seems that people are becoming angrier, more impatient and more destructive towards their fellow human beings. Looking at our track record thus far, it seems that the mantra of 'fight fire with fire' is being followed by many countries when determining their foreign policy. Modern incarnations of religious moral codes (an eye for an eye) have resulted in a new wave of crusades with theistic beliefs at the forefront once again.

The wisdom of our ancient ancestors is refreshing and surprising, given that common sense suggests a positive relationship between knowledge and time (human progress increases with the passage of time). It is entirely possible that humanity has been following a false path towards moral enlightenment, and given the lack of progress on the religious front, perhaps a new approach is needed. By treating the problem of evil as one of cultural ignorance we stand to benefit greatly. The whole judicial system could be re-imagined as one in which offenders are actually rehabilitated through education, rather than simply breeding generations of hardened criminals. Treating evil as a form of improper judgement forces our society to take moral responsibility at the individual level, resulting in real and measurable changes for the better.

The monk sat meditating. Alone atop a sparsely vegetated outcrop, all external stimuli infusing psychic energy within his calm, receptive mind. Distractions merely added to his trance, assisting the meditative state to deepen and intensify. Without warning, the experience culminated with a fluttering of eyelids. The monk stood, content and empowered with newfound knowledge. He had achieved pure insight…

The term ‘insight’ is often attributed to such vivid descriptions of meditation and religious devotion. More specifically, religions such as Buddhism promote the concept of insight (vipassana) as a vital prerequisite for spiritual nirvana, or transcendence of the mind to a higher plane of existence. But does insight exist for the everyday folk of the world? Are the momentary flashes of inspiration and creativity part and parcel of the same phenomenon or are we missing out on something much more worthwhile? What neurological basis does this mental state have and how can its materialisation be ensured? These are the questions I would like to explore in this article.

Insight can be defined as the mental state whereby confusion and uncertainty are replaced with certainty, direction and confidence. The term has many alternative meanings and contexts, ranging from a piece of obtained information to the psychological capacity to introspect objectively (as judged by some external observer – introspection is by its very nature subjective). Perhaps the most fascinating and generally applicable context is one which can be described as 'an instantaneous flash of brilliance' or 'a sudden clearing of murky intellect and intense feelings of accomplishment'. In short, insight (in the sense I am interested in) is the kind attributed to the geniuses of society, those who seemingly gather tiny shreds of information and piece them together to solve a particularly challenging problem.

Archimedes is perhaps the most widely cited example of human insight. As the story goes, Archimedes was inspired by the displacement of water in his bathtub to formulate a method for calculating the volume of an irregular object. This technique was of great empirical importance, as it allowed a reliable measure of density (used in those ancient times to test 'purity', a largely fiscal motivation such as verifying the purity of gold). The climax of the story describes a naked Archimedes running wildly through the streets, unable to control his excitement at this 'Eureka' moment. Whether the story is actually true has little bearing on the force of the argument; most of us have experienced such a moment at some point in our lives, best summarised as the overcoming of seemingly insurmountable odds to conquer a difficult obstacle or problem.
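The arithmetic behind the 'Eureka' moment is simple enough to sketch: the displaced water gives the volume, and density follows as mass over volume. The figures below are invented for illustration only.

```python
# Illustrative displacement calculation: the volume of an irregular object
# equals the volume of water it displaces, so density = mass / displaced volume.
# All figures below are hypothetical.

mass_g = 1000.0              # mass of the crown
water_displaced_cm3 = 60.0   # rise in water level converted to a volume

density = mass_g / water_displaced_cm3
print(f"density = {density:.1f} g/cm^3")   # 16.7 g/cm^3

# Pure gold is roughly 19.3 g/cm^3, so a noticeably lower figure would suggest
# the metal had been adulterated with something lighter.
```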

But where does this inspiration come from? It almost seems as though the ‘insightee’ is unaware of the mental efforts to arrive at a solution, perhaps feeling a little defeated after a day spent in vain. Insight then appears at an unexpected moment, almost as though the mind is working unconsciously and without direction, and offers a brilliant method for victory. The mind must have some unconscious ability to process and connect information regardless of our directed attention to achieve moments such as this. Seemingly unconnected pieces of information are re-routed and brought to our attention in the context of the previous problem. Thus could there be a neurobiological basis for insight? One that is able to facilitate a behind-the-scenes process?

Perhaps insight is encouraged by the physical storage and structure of neural networks. In the case of Archimedes, the solution was prompted by the mundane task of taking a bath; superficially unrelated to the problem, yet connected to it by a common neural pathway (low bathwater – insert leg – raised bathwater maps onto volumes and matter in general). That is, the neural pathways activated by taking a bath are somehow similar to those activated by rumination on the problem at hand. Alternatively, the unconscious mind may be able to draw basic cause-and-effect conclusions which are then boosted to the forefront of our minds if they are deemed useful (ie: immediately relevant to the task being performed). Whatever the case may be, it seems that at times our unconscious minds are smarter than our conscious attention.

The real question is whether insight is an intangible state of mind (a la 'getting into the zone') that can be turned on and off (thus making it useful for extending humanity's mental capabilities), or whether it is just a mental byproduct of overcoming a challenge (a hormonal response designed to encourage such thinking in the future). Can the psychological concept of insight be induced via a manipulation of the subject's neuronal composition and environmental characteristics (those conducive to achieving insight), or is it merely an evolved response that serves a (behaviourally) reinforcing purpose?

Undoubtedly the agent's environment plays a part in determining the likelihood of insight occurring. Taking into account personal preferences (does the person prefer quiet spaces for thinking?), the characteristics of the environment could hamper the induction of such a mental state if they are sufficiently irritating to the individual. Insight may also be closely linked with intelligence and, depending on your personal conception of this, neurological structure (if one purports a strictly biological basis of intelligence). If this postulate is taken at face value, we reach the conclusion that the degree of intelligence is directly related to the likelihood of insight, and perhaps also to the 'quality' of the insightful event (ie: a measure of its brilliance in comparison to inputs such as the level of available information and the difficulty of the problem).

But what of day-to-day insight? It seems to crop up in all sorts of situations. In this context, insight might require a grading scale as to its level of brilliance if its use is to be justified in more menial situations and circumstances. Think of that moment when you forget a particular word and, try as you might, cannot remember it for the life of you. Recall also that flash of insight where the answer is simply handed to you on a platter without any conscious effort to retrieve it. Paradoxically, it seems that the harder we try to solve the problem, the more difficult it becomes. Is this due to efficiency problems such as 'bottlenecking' of information transfer, personality traits such as performance anxiety and frustration, or some underlying unconscious process that is able to retrieve information without conscious direction?

Whatever the case may be, our scientific knowledge on the subject is distinctly lacking, therefore an empirical inquiry into the matter is more than warranted (if it hasn't already been commissioned). Psychologically, the concept of insight could be tested experimentally by providing subjects with a problem to solve and manipulating the level of information (eg 'clues') and its relatedness to the problem (with consideration given to intelligence, perhaps via two groups of high and low intelligence). This may help to uncover whether insight is a matter of information processing or something deeper. If science can learn how to artificially induce a mental state akin to insight, the benefits for a positive-futurist society would be grand indeed.
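To make the proposed design concrete, here is a minimal simulation sketch of such an experiment: two (hypothetical) intelligence groups, two levels of clue information, and solution time as the outcome. All group labels and effect sizes are invented for illustration.

```python
# Minimal simulation of the suggested insight experiment: manipulate the amount
# of clue information across two (hypothetical) intelligence groups and compare
# solution times. Effect sizes are invented for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_per_cell = 40

def solve_times(mean_minutes: float) -> np.ndarray:
    """Simulated minutes taken to solve the problem in one condition."""
    return rng.normal(loc=mean_minutes, scale=3.0, size=n_per_cell)

conditions = {
    ("high intelligence", "many clues"): solve_times(10.0),
    ("high intelligence", "few clues"):  solve_times(14.0),
    ("low intelligence",  "many clues"): solve_times(13.0),
    ("low intelligence",  "few clues"):  solve_times(19.0),
}

for (group, clues), times in conditions.items():
    print(f"{group:17s} | {clues:10s} | mean = {times.mean():5.1f} min")

# Pooled comparison: does clue level matter overall?
many = np.concatenate([t for (g, c), t in conditions.items() if c == "many clues"])
few  = np.concatenate([t for (g, c), t in conditions.items() if c == "few clues"])
t_stat, p_value = stats.ttest_ind(many, few)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```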

Closely tied to our conceptions of morality, conspiracy occurs when the truth is deliberately obscured. Conspiracy is often intimately involved with, and precipitated by, political entities who seek to minimise the negative repercussions of such truth becoming public knowledge. But what exactly does a conspiracy involve? According to numerous examples from popular culture, conspiracies arise from smaller, autonomous units within governmental bodies and/or military organisations, and usually involve some degree of 'coverup' or deliberate misinformation and clouding of actual events that have taken place. Such theories, while potentially having some credible background, are for the most part ridiculed as neurotic fantasies that have no grounding in reality. How then do individuals maintain such obviously false ideas in the face of societal pressure? What are the characteristics of a 'conspiracy theorist' and how do these traits distinguish them from society as a whole? What do conspiracy theories tell us about human nature? These are the questions I would like to explore in this article.

As a child I was intensely fascinated with various theories regarding alien activity on earth. Surely a cliche in today's world, but the alleged events that occurred at Roswell, Tunguska and Rendlesham Forest are a conspirator's dream. Fortunately I no longer regard these events as factual; rather, as I have aged and matured, so too has my ability to examine evidence rationally (something that conspiracy theorists seem unable to accomplish). Introspection on my childhood motivations for believing these theories potentially reveals key characteristics of believers in conspiracy. Aliens were a subject of great personal fear as a young child, encouraging a sort of morbid fascination and desire to understand and explain (perhaps in an attempt to regain some control over these entities that could supposedly appear at will). Indeed, a fear of alien abduction seems merely to be the modern reincarnation of earlier childhood fears, such as goblins and demons. Coupled with the 'pseudo-science' that accompanies conspiracy theories, it is no wonder that the young and otherwise impressionable are quickly mesmerised and enlisted into the cause. A strong emotional bond connects the beliefs with the evidence in an attempt to relieve uncomfortable feelings.

Conspiracy theories may act as a quasi-scientific attempt to explain the unknown, not too dissimilar to religion (and perhaps utilising the same neurological mechanisms). While a child could be excused for believing such fantasies, it is intriguing how adults can maintain and perpetuate wild conspiracy beliefs without regret. Cognitive dissonance may act as an underlying regulator and maintainer of such beliefs, in that the more radical they become, the more strongly they are subscribed to (in an attempt to minimise the psychological discomfort that internal hypocrisy brings). But where do these theories come from? Surely there must be at least some factual basis for their creation. Indeed there is; however, the evidence is often misinterpreted, or there is sufficient cause for distrust in the credibility of the information (in light of the source's past history). Therefore we have two main factors that can determine whether information will be interpreted as a conspiracy: the level of trust an individual ascribes to the information source (taking into account that person's past dealings with the agent and their personality, including any neurotic disorders) and the degree of ambiguity in the events themselves (personal interpretation differing from that reported, perceptual experience sufficiently vivid to cause disbelief in the alternative explanation).

To take the alleged alien craft crash landing at Roswell as a case in point, it becomes obvious where in the chronology of events the conspiracy began to develop, and for what reasons. Roswell also demonstrates the importance of maintaining trust in authority; the initial printing of 'Flying Disc Recovered By USAF' in a local newspaper was quickly retracted and replaced with a more mundane and uninteresting 'weather balloon' explanation. Reportedly, this explanation was accepted by the people of the time and all claims of alien spacecraft were forgotten until the 1970s, some 30 years after the actual event. The conspiracy was revitalised by the efforts of a single individual (perhaps seeking his own 'five minutes of fame'), demonstrating the power of one person's belief supported by others in authority (the primary researcher, Friedman, was a nuclear physicist and respected writer). Coupled with conveniently ambiguous circumstantial evidence and an aggressive interpretation of it, the alleged incident at Roswell has since risen to global fame. Taken in its historical context (the aftermath of WW2 and the beginnings of the Cold War, with an increase in top-secret military projects), it is no wonder that imagination began to replace reality; people now had a means to attribute a cause and explanation to that which they clearly had no substantiated understanding of. There was also a catalyst for thinking that governments engaged in trickery, given the numerous special operations conducted in a clandestine manner and quickly covered up when things went awry (eg the Bay of Pigs incident).

Thus the power of conspiracy has been demonstrated. Originating from just a single individual's private beliefs, the fable seems to strike a common chord within those susceptible. As epitomised by Mulder's office poster in The X-Files, people 'want to believe'. That is, the hypocrisy in maintaining such obviously false beliefs is downplayed through a conscious effort to misinterpret counter-evidence and emphasise minimal details that support the theory. As aforementioned, pseudo-science does wonders to support conspiracy theories and increase their attractiveness to those who would otherwise discount the proposition. By merging the harsh reality of science with the obvious fantasy that is the subject matter of most conspiracies, people have a semi-plausible framework within which to construct their theories and establish consistency for defending their position. It is a phenomenon quite similar to religion: the misuse and misinterpretation of "evidence" to satisfy the desire of humanity to regain control over the unexplainable and to support a hidden agenda (distrust of authority).

There is little that distinguishes the characteristics of conspiracy theorists from those of religious fundamentalists; both share a common bond in their single-mindedness and perceived superiority over the 'disbelievers'. But there are subtle differences. Conspiracy theorists undertake a lifelong crusade to uncover the truth – an adversarial relationship develops in which the theorist is elevated to a level of moral and intellectual superiority (at having uncovered the conspiracy and thwarted any attempts at deception). On the other hand, the religious seem to take their gospel at face value, perhaps at a deeper level and with greater certainty than the theorists (perhaps due to the much longer history of religion and its firm establishment within society). The point here is that while there may be such small differences between the two groups, the underlying psychological mechanisms could be quite similar; they certainly seem to be related through their common grounding in our belief system.

Psychologically, conspiracies are thought to arise for a number of reasons. As already mentioned, cognitive dissonance is one psychic mechanism that may perpetuate these beliefs in the face of overwhelming contradictory evidence. The psychoanalytic concept of projection is another theorised catalyst proposed to dictate the formulation of conspiracy theories. It is thought that the theorist subconsciously projects their own perceived vices onto the target in the form of conspiracy and deception. Thus the conspirator becomes an embodiment of what the theorist despises, regardless of the objective truth. A second theorised cause of conspiracy theory creation involves a tendency to apply 'rules of thumb' to social events. Humans believe that significant events have significant causes, such as the death of a celebrity. There has been no shortage of such occasions even in recent months, with the untimely deaths of Hollywood actors and local celebrities. Such events rock the foundations of our worldviews, often to such an extent that artificial causes are attributed to reassure ourselves that the world is predictable (even if the resulting theory is so artificially complex that any plausibility quickly evaporates).

It is interesting to note that the capacity to form beliefs based on large amounts of imagination and very little fact is present within most of us. Take a moment to think about what you thought the day the twin towers came down, or when Princess Diana was killed. Did you formulate some radical postulations based on your own interpretations and hidden agendas? For the vast majority of us, time proves the ultimate adjudicator and dismisses fanciful ideas out of hand. But for some, the attractiveness of having one up on their fellow citizens at having uncovered some secretive ulterior motive reinforces such beliefs until they become infused with the person's sense of identity. The truth is nice to have, but some things in life simply do not have explanations rooted in the deception of some higher power. Random events do happen, without any need for a hidden omnipresent force dictating events from behind the scenes.

PS: Elvis isn’t really dead, he’s hanging out with JFK at Area 51 where they faked the moon landings. Pardon me whilst I don my tin-foil hat, I think the CIA is using my television to perform mind control…

In a previous article, I discussed the possibility of a naturally occurring morality; one that emerges from interacting biological systems and is characterised by cooperative, selfless behaviours. Nature is replete with examples of such morality, in the form of in-group favouritism, cooperation between species (symbiotic relationships) and the delicate interrelations between lower forms of life (cellular interaction). But we humans seem to have taken morality to a higher plane of existence, classifying behaviours and thoughts into a menagerie of distinct categories depending on the perceived level of good or bad done to external agents. Is morality a concept that is constant throughout the universe? If so, how could morality be defined in a philosophically 'universal' way, and how does it fit in with other universals? In addition, how can humans make the distinction between what is morally 'good' and 'bad'? These are the questions I would like to explore in this article.

When people speak about morality, they are usually referring to concepts of good and evil: things that help and things that hinder. A simplistic dichotomy into which behaviours and thoughts can be assigned. Humans have a long history with this kind of morality. It is closely intertwined with religion, with early scriptures and the resulting beliefs providing the means by which populations could be taught the virtues of acting in positive ways. The defining feature of religious morality finds its footing in a lack of faith in the human capacity to act for the good of the many. Religions are laced with prejudicial put-downs that seek to undermine our moral integrity. But they do touch on a twinge of truth: evolution has produced a (primarily) self-centred organism. Taking the cynical view, it can be argued that all human behaviour can be reduced to purely egotistical foundations.

Thus the problem becomes not one of definition, but of plausibility (in relation to humanity’s intrinsic capacity for acting in morally acceptable ways). Is religion correct in its assumptions regarding our moral ability? Are we born into a world of deterministic sin? Theistically, it seems that any conclusion can be supported via the means of unthinking faith. However, before this religiosity is dismissed out of hand, it might be prudent to consider the underlying insight offered.

Evolution has shown that organisms are primarily interested in survival of the self (propagation of genetic material). This fits with the religious view that humanity is fundamentally concerned with first-order, self-oriented consequences, and raises the question of whether selfish behaviour should be considered immoral. But what of moral phenomena such as altruism, cooperation and in-group behavioural patterns? These too can be reduced to the level of self-centred egoism, with the superficial layer of supposed generosity stripped away to reveal more meagre foundations.

Morality then becomes a means to an end, that end being the fulfilment of some personal requirement. Self-initiated sacrifice (altruism) elevates one's social standing, and provides the source of that 'warm, fuzzy feeling' we all know and love. Here we have dual modes of satiation, one external to the agent (increasing power and status) and one internal (an evolutionary mechanism for rewarding cooperation). Religious cynicism is again supported, in that humans seem to have great difficulty performing authentic moral acts. Perhaps our problem here lies not in the theistic stalker, laughing gleefully at our attempts to grasp at some sort of intrinsic human goodness, but rather in our use of the word 'authentic'. If one concedes that humans could simply lack the faculties for connotation-free morality, and instead proposes that moral behaviours be measured by their main direction of action (directed inwards, selfishly, or outwards, altruistically), we can arrive at a usable conceptualisation.

Reconvening, we now have a new operational definition of morality. Moral action is thus characterised by the focus of its attention (inward vs outward) as opposed to a polarised 'good vs evil', which manages to evade the controversy introduced by theism and evolutionary biology (two unlikely allies!). The resulting consequence is that we have a kind of morality which is not defined by its degree of 'correctness', which from any perspective is entirely relative. However, if we are to arrive at a meaningful and usable moral universal that is applicable to human society, we need to at least consider this problem of evil and good.

How can an act be defined as morally right or wrong? Considering this question alone conjures up a large degree of uncertainty and subjectivity. In the context of the golden rule (do unto others as you would have done unto yourself), we arrive at even murkier waters: what of the psychotic or sadist who prefers what society would consider abnormal treatment? In such a situation could 'normally' unacceptable behaviour be construed as morally correct? It is prudent to discuss the plausibility of defining morality in terms of universals that are not dependent upon subjective interpretation if this confusion is to be avoided.

Once again we have returned to the issue of objectively assessing an act for its moral content. Intuitively, evil acts cause harm to others and good acts result in benefits. But again we are falling far short of the region encapsulated by morality; specifically, acts can seem superficially evil yet arise from fundamentally good intentions. And thus we find a useful identifier (intention) by which to assess the moral worth of actions.

Unfortunately we are held back by the impervious nature of the assessing medium. Intention can only be ascertained through introspection and, to a lesser degree, psychometric testing. Intention can even be elusive to the individual, if their judgement is clouded by mental illness, biological abnormality or an unconscious repression of internal causality (deferring responsibility away from the individual). Therefore, with such a slippery method of assessing the authenticity and nature of a moral act, it seems unlikely that morality could ever be construed as a universal.

Universals are exactly what their name connotes: properties of the world we inhabit that are experienced across reality. That is to say, morality could be classed as a universal due to its generality among our species and its quality of superseding individual characterising and distinguishing features (in terms of mundane, everyday experience). If one is to class morality under the category of universals, one should modify the definition to incorporate features that are non-specific and objective. Herein lies the problem with morality: it is a highly variable phenomenon, with large fluctuations in individual perspective. From this point there are two main options available given current knowledge on the subject. Democratically, the qualities of a universal morality could be determined through majority vote. Alternatively, a select group of individuals or one definitive authority could propose and define a universal concept of morality. One is left with few options on how to proceed.

If a universal conceptualisation of morality is to be proposed, an individual perspective is the only avenue left with the tools we have at our disposal. We have already discussed the possibility of internal vs external morality (bowing to pressures that dictate human morality is indivisibly selfish, and removing the focus from considerations of good vs evil). This, combined with a weighted system that emphasises not the degree of goodness but rather the consideration of the self versus others, results in a usable measure of morality (for example, there will always be a small percentage of internal focus). But what are we using as the basis for our measurement? Intention has already proved elusive, as has objective observation of acts (moral behaviours can rely on internal reasoning to determine their moral worth, and some behaviours go unobserved or are ambiguous to an external agent). Discounting the possibility of a technological breakthrough enabling direct thought observation (and the ethical considerations such an invasion of privacy would bring), it is difficult to see how we can proceed.

Perhaps it is best to simply take a leap of faith, believing in humanity's ability to make judgements regarding moral behaviour. Instead of cynically discarding our intrinsic abilities (which surely vary in effectiveness within the population), we should trust that at least some of us have the insight to make the call. With morality, the buck definitely stops with the individual, a fact that most people have a hard time swallowing. Moral responsibility rests with the persons involved, and in combination with a universally expansive definition, this makes for some interesting assertions of blame, not to mention a pressing case for educating the populace on the virtues of fostering introspective skills.

After returning from a year-long sojourn in the United Kingdom and continental Europe, I thought it would be prudent to share my experiences. Having caught the travel bug several years ago when visiting the UK for the first time, a year-long overseas working holiday seemed like a dream come true. What I didn't envisage were the effects of this experience on my cognitions, specifically feelings of displacement, disorientation and dissatisfaction. In this article I aim to examine the effects of a changing environment on the human perceptual experience, as it relates to overseas, out-group exposure and the psychological mechanisms underlying these cognitive fluctuations.

It seems that the human need to belong runs deeper than most would care to admit. Having discounted any possibility of 'homesickness' prior to arrival in the UK, I was surprised to find myself unwittingly (or perhaps conforming to unconscious social expectation – but we aren't psychoanalysts here!) experiencing the characteristic symptomatology of depression, including negative affect, longing for a return home and feelings concurrent with social ostracism. This struck me as odd, in that if one is aware of an impending event, surely this awareness predisposes one to a lesser effect simply through mental preparation and conscious deflection of the expected symptoms. The fact that negative feelings were still experienced despite such awareness suggests an alternative etiology for the phenomenon of homesickness. Indeed, it offers a unique insight into the human condition; at a superficial level, our dependency on consistency and familiarity, and at a deeper, more fundamental level, a possible interpretation of the underlying cognitive processes involved in making sense of the world and responding to stimuli.

Taken at face value, a change in an individual's usual physical and social environment reveals the human reliance on group stability. From an evolutionary perspective, the prospect of travel to new and unfamiliar territories (and potential groups of other humans) is an altogether risky affair. On the one hand, the individual (or group) could face death or injury through anthropogenic means or from the physical environment. On the other hand, a lack of change reduces stimulation genetically (through interbreeding with biologically related group members), cognitively (reduced problem solving, mental stagnation once the initial challenges of the environment are overcome) and socially (exposure to familiar sights and sounds reduces the capacity for growth in language and, ipso facto, culture). In addition, the reduction of physical resources through consumption and degradation of the land via over-farming (and over-hunting) is another reason for moving beyond the confines of what is safe and comfortable. As the need for biological sustenance outranks all other human requirements (according to Maslow's hierarchy), it seems plausible that this is the main motivating factor behind human groups migrating and risking everything for the sake of exploring terra incognita.

The mere fact that we do, and have (as shown throughout history) uprooted our familiar ties and trundled off in search of a better existence seems to make the aforementioned argument a moot point. It is not something to be debated, it is merely something that humans just do. Evolution favours travel, with the potential benefits outweighing the risks by far. The promise of greener pastures on the other side is almost enough to guarantee success. The cognitive stimulation such travel brings may also improve the future chances of success in this operation through learnt experiences and the conquering of challenges, as facilitated by human ingenuity.

But what of the social considerations when travelling? Are our out-group prejudices so intense that the very notion of travel to uncharted waters causes waves of anxiety? Are we fearing the unknown, our ability to adapt and integrate, or the possibility that we may not make it out alive and survive to propagate our genes? Is personality a factor in predicting an individual's performance (in terms of adaptation to the new environment, integration with a new group and success at forging new relationships)? From personal experience, perhaps a combination of all these factors and more.

We can begin to piece together a rough working model of travel and its effects on an individual's social and emotional stability and wellbeing. A change in social and physical environment seems to activate certain evolutionary survival mechanisms, mediated by several conditions of the travel undertaken. Such conditions could involve: similarity of the target country to the country of origin (in terms of culture, language, ethnic diversity, political values etc), social support available to the individual (group size when travelling, facilities for contacting group members left behind), personality characteristics of the individual (impulsivity, extroversion vs introversion, attachment style, confidence) and cognitive ability to integrate and adapt (language skills, intelligence, social ability). Thus we have a (predicted) linear relationship whereby an increase in the degree of change (measured on a multitude of variables such as physical characteristics, social aspects and perceptual similarities) from the original environment to the target environment produces a resultant change in the psychological distress of the individual (increased or decreased depending upon the characteristics of the mediating variables).
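As a toy illustration of this working model, the sketch below expresses predicted distress as a function of the degree of environmental change, damped by a weighted mix of mediating factors. The variables, 0-1 scales and weights are invented assumptions, not empirically derived.

```python
# Toy version of the working model above: predicted distress grows with the
# degree of environmental change and is damped by mediating factors. The
# variables, 0-1 scales and weights are invented assumptions.

def predicted_distress(environment_change: float,
                       cultural_similarity: float,
                       social_support: float,
                       adaptability: float) -> float:
    """All inputs on a 0-1 scale; higher output = more predicted distress."""
    mediation = (0.4 * cultural_similarity
                 + 0.3 * social_support
                 + 0.3 * adaptability)
    return environment_change * (1.0 - mediation)

# Hypothetical traveller: big change of environment, moderately similar culture,
# little social support, good language/social skills.
print(predicted_distress(environment_change=0.9,
                         cultural_similarity=0.5,
                         social_support=0.2,
                         adaptability=0.7))   # ~0.48
```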

Perceptually, travel also seems to affect the salience and characteristics of experience. Here, deeper cognitive processes are activated which influence the human sensory experience at a fundamental level. The model employed is one of stimulus-response, handed down through evolutionary means from a distant ancestor. Direct observation of perceptual distortion while travelling is apparent when visiting a unique location. Personally, I would describe the experience as an increase in arousal to the point of hyper-vigilance. Compared to subsequent visits to the same location, the original seems somehow different in a perceptual sense. Colours, smells, sounds and tastes are all vividly unique. Details are stored in memory that are ignored and discounted after the first event. In essence, the second visit to a place seems to change the initial memory. It almost seems like a different place.

While I am unsure as to whether this is experienced by anyone apart from myself, evolutionarily it makes intuitive sense. The automation of a hyper-vigilant mental state would prove invaluable when placed in a new environment. Details spring forth and are accentuated without conscious effort, thus improving the organism’s chances of survival. When applied to modern situations, however, it is not only disorientating, but also very disconcerting (at least in my experience).

Moving back to social aspects of travel, I have found it to be both simultaneously a gift and a curse. Travel has enabled an increased understanding and appreciation of different cultures, ways of life and alternative methods for getting things done. In the same vein, however, it has instilled a distinct feeling of unease and dissatisfaction with things I once held dear. Some things you simply take for granted or fail to take notice of and challenge. In this sense, exposure to other cultures is liberating; especially in Europe where individuality is encouraged (mainly in the UK) and people expect more (resulting in a greater number of opportunities for those that work hard to gain rewards and recognition). The Australian way of life, unfortunately, is one that is intolerant of success and uniqueness. Stereotypical attitudes are abundant, and it is frustrating to know that there is a better way of living out there.

Perhaps this is one of the social benefits of travel: the more group members who travel, the greater the chance of changing ways of life towards more tolerant and efficient methods. Are we headed towards a world-culture where diversity is replaced with (cultural) conformity? Is this ethically viable or warranted? Could it do more harm than good? It seems to me that there would be some positive aspects to a global conglomerate of culture. Then again, the main attraction of travel lies in the experience of the foreign and unknown. To remove that would be to remove part of the human longing for exploration and a source of cognitive, social and physical stimulation. Perhaps instead we should encourage travel in society's younger generations, exposing them to such experiences and encouraging internal change based on better ways of doing things. After all, we are the ones that will be running the country someday.

Many of us take the capacity to sense the world for granted. Sight, smell, touch, taste and hearing combine to paint an uninterrupted picture of the technicolour apparition we call reality. Such lucid representations are what we use to define objects in space, plan actions and manipulate our environment. However, reality isn't all that it's cracked up to be. Namely, our role in defining the universe in which we live is much greater than we think. Humanity, through the use of sensory organs and the resulting interpretation of physical events, succeeds in weaving a scientific tapestry of theory and experimentation. This textile masterpiece may be large enough to 'cover all bases' (in terms of explaining the underlying etiology of observations), however it might not be made of the right material. How certain can we be that scientific observations carry a sufficient measure of objectivity? What role does the human mind and its modulation of sensory input play in creating reality? What constitutes objective fact and how can we be sure that science is 'on the right track' with its model of empirical experimentation? Most importantly, is science at the cusp of an empirical 'dark age' where the limitations of perception fundamentally hamper the steady march of theoretical progress? These are the questions I would like to explore in this article.

The main assumption underlying scientific methodology is that the five sensory modalities employed by the human body are, by and large, uniformly employed. That is, despite small individual fluctuations in fidelity, the performance of the human senses is mostly equal. Visual acuity and auditory perception are sources of potential variance, however the advent of certain medical technologies has circumvented and nullified most of these disadvantages (glasses and hearing aids, respectively). In some instances, such interventions may even improve the individual's sensory experience, superseding 'normal' ranges through the use of further refined instruments. Such is the case with modern science, as the realm of classical observation becomes subverted by the need for new, revolutionary methods designed to observe both the very big and the very small. Satellites loaded with all manner of detection equipment have become our eyes for the ultra-macro; NASA's COBE orbiter gave us the first detailed view of early universal structure via its map of the cosmic microwave background radiation (CMB). Likewise, scanning probe microscopy (SPM) enables scientists to observe on the atomic scale, below the threshold of visible light. In effect, we have extended and supplemented our ability to perceive reality.

But are these innovations also improving the objective quality of observations, or are we being led into a false sense of security? Are we becoming comfortable with the idea that what we see constitutes what is really ‘out there’? Human senses are notoriously prone to error. In addition, machines are only as good as their creator. Put another way, artificial intelligence has not yet superseded the human ‘home grown’ alternative. Therefore, can we rely on a human-made, artificial extension of perception with which to make observations? Surely we are compounding the innate inaccuracies, introducing a successive error rate with each additional sensory enhancement. Not to mention the interpretation of such observations and the role of theory in whittling down alternatives.

Consensus cannot be reached on whether what I perceive is anything like what you perceive. Is my perception of the colour green the same as yours? Empirically and philosophically, we are not yet in a position to answer this question with any objectivity. We can examine brain structure and compare regions of functional activity, however the ability to directly extract and record aspects of meaning and consciousness is still firmly in the realms of science fiction. The best we can do is compare and contrast our experiences through the medium of language (which introduces its own set of limitations). As aforementioned, the human sensory experience can, at times, become lost in translation.

Specifically, the ability of our minds to disentangle the information overload that unrelentingly flows through mental channels can wane due to a variety of influences. Internally, the quality of sensory inputs is governed at a fundamental level by biological constraints. Millions of years of evolution have resulted in a vast toolkit of sensory automation. Vision, for example, has developed in such a way as to become a totally unconscious and reflexive phenomenon. The biological structure of individual retinal cells predisposes them to respond to certain types of movement, shapes and colours. Likewise, the organisation of neurons within regions of the brain, such as the primary visual cortex in the occipital lobe, processes information in pre-defined ways. In the case of vision, the vast majority of processing is done automatically, thus reducing the overall level of awareness and direct control the conscious mind has over the sensory system. The conclusion here is that we are limited by physical structure rather than by differences in conscious discrimination.

The retina acts both as the primary source of input and as a first-order processor of visual information. In brief, photons are absorbed by receptor cells on the back wall of the eye. These incoming packets of energy are captured by photoreceptors (rods – light intensity, cones – colour) whose pigment proteins trigger action potentials in connected neurons. Low-level processing is accomplished by the lateral organisation of retinal cells; ganglion cells are able to communicate with their neighbours and influence the likelihood of their signal transmission. Cells communicating in this manner facilitate basic feature recognition (specifically edges, i.e. light/dark discrepancies) and motion detection.
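The edge-detection role of this lateral organisation can be illustrated with a toy centre-surround computation: an excitatory centre and inhibitory neighbours leave uniform regions untouched but amplify the response at a light/dark boundary. This is a simplified caricature, not a model of real retinal circuitry.

```python
# Toy centre-surround (lateral inhibition) sketch: uniform regions cancel out,
# while an edge produces a strong response. Not a model of real retinal wiring.
import numpy as np

# A 1-D 'image': a dark region followed by a bright region (one edge).
signal = np.array([1, 1, 1, 1, 1, 5, 5, 5, 5, 5], dtype=float)

# Excitatory centre, inhibitory neighbours.
kernel = np.array([-0.5, 1.0, -0.5])

response = np.convolve(signal, kernel, mode="same")
print(response)
# Interior values are ~0 in the uniform regions and peak/dip either side of the
# edge (array boundaries aside), which is the basic contrast enhancement
# attributed to laterally connected retinal cells.
```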

As with all the sensory modalities, information is then transmitted to the thalamus, a primitive brain structure that acts as a communications 'hub'; its proximity to the brain stem (mid and hind brain) ensures that reflexes are privy to visual input prior to conscious awareness. The lateral geniculate nucleus is the region of the thalamus which splits incoming visual input into three main channels (M, P and K). Interestingly, these channels stream inputs into signals with unique properties (eg exclusively colour, motion etc). In addition, the cross-lateralisation of visual input is a common feature of human brains. Left and right fields of view are diverted at the optic chiasm and processed on opposite hemispheres (the left field of view from both eyes is processed on the right side of the brain). One theory as to why this system develops is that it minimises the impact of unilateral hemispheric damage – the 'dual brain' hypothesis (each hemisphere can act as an independent agent, reconciling and supplementing reductions in function due to damage).

We seem to fall back lazily on these automated subsystems, never fully appreciating or flexing the full capabilities of our sensory apparatus. Michael Frayn, in his book ‘The Human Touch’, demonstrates this point aptly:

“Slowly, as you force yourself to observe and not to take for granted what seems so familiar, everything becomes much more complicated… That simple blueness that you imagined yourself seeing turns out to have been interpreted, like everything else, from the shifting, uncertain material on offer” (Frayn, 2006, p. 26).

Of course, we are all blissfully ignorant of these finer details when it comes to interpreting the sensory input gathered by our bodies. Consciousness acts ‘with what it’s got’, without a care as to the authenticity or objectivity of the observations. We can observe this first hand in a myriad of ways in which the unreal is treated as if it were real. Hallucinations are just one mechanism by which the brain is fooled. While we know, to a degree (depending upon the etiology, e.g. schizophrenia), that such things are false, these visual disturbances are nonetheless able to provoke physiological and emotional reactions. In summary, the biological (and automated) component of perception very much determines how we react to, and observe, the external world. In combination with the conscious mind, which introduces a whole new menagerie of cognitive baggage, a large amount of uncertainty is injected into our perceptual experience.

Expanding outwards from this biological launchpad, it seems plausible that the qualities which make up the human sensory experience should affect how we define the world empirically. Scientific endeavour labours to quantify reality and strip away the superfluous, leaving only its constitutive and fundamental elements. To accomplish this task, humanity employs empirical observation. The segue between the biological foundations of perception and the paradigm of scientific observation lies in a shared sensory limitation. Classical observation was limited by the ‘naked’ human senses. As the bulk of human knowledge grew, so too did the need to extend and improve methods of observation. Consequently, science may now be running up against the limitations of the human mind to digest an overwhelming plethora of information.

Currently, science is restricted by the development of technology. Progress is only maintained through the ingenuity of the human mind in overcoming the biological disadvantages of observation. Finely tuned microscopes tap into quantum effects in order to measure individual atoms. Large radio-telescope arrays link together for an eagle’s eye view of the heavens. But as our methods and tools for observing grow in complexity, so too does the degree of abstract reasoning required to grasp the implications of their findings. Quantum theory is one such warning indicator.

Just as a lighthouse sweeps the night sky to signal impending danger, quantum physics (or more precisely, humanity’s inability to agree on any one interpretation which accurately models reality) could be telling us something. Perhaps we are becoming too reliant on our tools of observation, using them as a crutch in a vain attempt to sidestep our biological limitations. Is this a hallmark of our detachment from observation? Quantum ‘spookiness’ could simply be the result of a fundamental limitation of the human mind to internally represent and perceive increasingly abstract observations. Desperately trying to consume the reams of information that result from rapid progress and intense observation, scientific paradigms become increasingly specialised and divergent, increasing the degree of inter-departmental bureaucracy. It now takes a lifetime of training even to grasp the basics of current physical theory, let alone to dissect observations and truly grasp their essence.

In a sense, science is at a crossroads. One pathway leads to an empirical dead end, where humanity has exhausted every possible route of explanation. The other involves either artificial augmentation (in essence, AI that can do the thinking for us) or a fundamental restructuring of how science conducts its business. Science is in danger of information overload; the limitations introduced by a generation of unrelenting technical advancement and increasingly complex tools of observation have taken their toll. Empirical progress is stalling, possibly due to a lack of understanding by those doing the observing. Science is detaching from its observations at an alarming rate and, if we aren’t careful, is in danger of losing sight of what the game is all about: the quest for knowledge and understanding of the universe in which we live.

Morality is a phenomenon that permeates both society as a whole and the consciousness of independent individuals. It is a force that regularly influences our behaviour and is experienced (in some form or another) universally, species-wide. Intuitively, morality seems to be at the very least a sufficient condition for the creation of human groups. Without it, co-operation between individuals would be non-existent. But does morality run deeper? Is it, in fact, a necessary condition of group formation and a naturally emergent phenomenon that stems from the interaction of replicating systems? Or can morality only be experienced by organisms operating on a higher plane of existence, those that have the required faculties with which to weigh up pros and cons, engage in moral decision-making and other empathic endeavours (related to theory of mind)?

The resolution to this question depends entirely on how one defines the term. If we take morality to encompass the act of mentally engaging in self-reflective thought as a means of guiding observable behaviours (acting in either selfish or selfless interests), then the answer to our question is yes: morality seems to be inescapably and exclusively linked to humanity. However, if we tweak this definition and look at the etiology of morality (where the term draws its roots and how it developed over time), one finds that even the co-operative behaviours of primitive organisms could be said to constitute some sort of basic morality. If we delve even deeper and ask how such behaviours came to be, we find that the answer is not quite so obvious. Can a basic version of morality (observable through cooperative behaviours) arise as a natural consequence of interactions beyond the singular?

When viewed from this perspective, cooperation and altruism seem highly unlikely; a system of individually competing organisms, logically, would evolve to favour the individual rather than the group. This question is especially pertinent when studying cooperative behaviours in bacteria or more complex, multicellular forms of life, as they lack a consciousness capable of considering delayed rewards or the benefits of selfless acts.

In relation to humanity, why are some individuals born altruistic while others take advantage without a twinge of guilt? How can ‘goodness’ evolve in biological systems when it runs counter to the benefit of the individual? These are the questions I would like to explore in this article.

Morality, in the traditional philosophical sense, is often constructed as a description of the meta-cognitions humans engage in when creating rules for appropriate (or inappropriate) behaviour (inclusive of mental activity). Morality can take on a vast array of flavours: evil at one extreme, goodness at the other. We use our sense of morality to plan and justify our thoughts and actions, incorporating it into our mental representations of how the world functions and conveys meaning. Morality is dynamic; it changes with the flow of time, the composition of society and the maturity of the individual. We use it to evaluate not only our own intentions and behaviours, but also those of others. In this sense, morality is an overarching, egoistic ‘book of rules’ which the consciousness consults in order to determine whether harm or good is being done. Thus, it seeps into many of our mental sub-compartments: decision making, behavioural modification, information processing, emotional response and interpretation, and mental planning (‘future thought’), to name a few.

As morality enjoys such a privileged omnipresence, humanity has, understandably, long sought not only to provide standardised ‘rules of engagement’ regarding moral conduct but also to explain the underlying psychological processes and development of our moral capabilities. Religion could perhaps be the first such attempt at explanation. It certainly contains many of the idiosyncrasies of morality and proposes a theistic basis for human moral capability. Religion removes ultimate moral responsibility from the individual, instead placing it upon the shoulders of a higher authority: god. The individual is tasked with simple obedience to the moral creeds passed down from those privileged few who are ‘touched’ with divine inspiration.

But this view does morality no justice. Certainly, if one does not subscribe to theistic beliefs then morality is in trouble; on this extreme positioning, morality is synonymous with religion, and one cannot exist without the other.

Conversely (and reassuringly), in modern society we have seen that morality does exist in individuals who lack spirituality. It has been reaffirmed as an intrinsically human trait with deeper roots than the scripture of religious texts. Moral understanding has matured beyond the point of appealing to a higher being and has reattached itself firmly to the human mind. The problem with this newfound interpretation is that for morality to be considered a naturally emergent product of biological systems, moral evolution is a necessary requirement. Put simply, natural examples of moral systems (consisting of cooperative behaviour and within-group preference) must be observable in the natural environment. Moral evolution must be a naturally occurring phenomenon.

A thought experiment known as the Prisoner’s Dilemma succinctly summarises the inherent problems with the natural evolution of mutually cooperative behaviour. The scenario involves two prisoners, each seeking an early release from jail. Each is given the choice of either a) betraying their cellmate and walking free while the other has their sentence increased (‘defecting’) or b) staying silent, so that both receive a shorter sentence (‘cooperating’). It becomes immediately apparent that for both parties to benefit, both should remain silent and enjoy a reduced incarceration period. Unfortunately, and this is the reason the scenario is termed a dilemma, the equilibrium is for both parties to betray: whatever your partner does, your own sentence is shorter if you defect (the greatest individual pay-off being to walk free while your partner in crime remains behind with an increased sentence). Two rational prisoners therefore each reason their way to betrayal, and both end up worse off than if they had simply kept quiet. In the case of humans, some sort of meta-analysis also takes place, an nth-order degree of separation (thinking about thinking about thinking), with the dominant strategy still resulting in betrayal by both parties.
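
To make the dominance argument concrete, here is a minimal sketch; the sentence lengths are illustrative assumptions of my own (any payoffs with the same ordering give the same result), not figures from any particular formulation of the dilemma.

```python
# Toy Prisoner's Dilemma: entries are years in jail (lower is better for you).
# The exact numbers are assumptions; only their ordering matters.
SENTENCES = {
    # (my_choice, their_choice): my_years
    ("cooperate", "cooperate"): 1,    # both stay silent
    ("cooperate", "defect"):    10,   # I stay silent, they betray me
    ("defect",    "cooperate"): 0,    # I betray, they stay silent
    ("defect",    "defect"):    5,    # mutual betrayal
}

def best_response(their_choice):
    """The choice that minimises my own sentence, given what they do."""
    return min(("cooperate", "defect"),
               key=lambda mine: SENTENCES[(mine, their_choice)])

for theirs in ("cooperate", "defect"):
    print(f"If they {theirs}, my best response is to {best_response(theirs)}")
# Both lines print "defect": betrayal dominates whatever the other player
# does, so two rational prisoners end up with five years each instead of one.
```

The tragedy is visible in the table itself: mutual defection (five years each) is strictly worse for both than mutual cooperation (one year each), yet it is the only stable outcome.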

Here we have an example of the end product: an advanced kind of morality resulting from social pressures and their influence on overall outcome (should I betray or cooperate – do I trust this person?). In order to look at the development of morality from its more primal roots, it is prudent to examine research in the field of evolutionary biology. One empirical investigation representative of the field (conducted by Aviles, 2002) involves the mathematical simulation of interacting organisms. Modern computers lend themselves naturally to the task of genetic simulation. Due to the iterative nature of evolution, thousands of successive generations live, breed and die in the time it takes the computer’s CPU to crunch through the required functions. Aviles (2002) took this approach and created a mathematical model that begins at t = 0 and follows pre-defined rules of reproduction, genetic mutation and group formation. The numerical details are irrelevant here; suffice to say that cooperative behaviours emerged alongside ‘cheaters’ and ‘freeloaders’. Thus we see the dichotomous appearance of a basic kind of morality that has evolved spontaneously and naturally, even though the individual may suffer a ‘fitness’ penalty. More on this later.

“[the results] suggest that the negative effect that freeloaders have on group productivity (by failing to contribute to communal activities and by making groups too large) should be sufficient to maintain cooperation under a broad range of realistic conditions even among nonrelatives and even in the presence of relatively steep fitness costs of cooperation” (Aviles, 2002).
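
The model in Aviles (2002) is considerably richer than anything that fits in a blog post, but a toy sketch conveys the flavour of how group structure can prop up cooperation. Everything below (group size, benefit, cost, population size) is an assumed parameter of my own, not a value taken from the paper.

```python
import random

# Toy group-structured simulation (NOT the Aviles 2002 model).  Cooperators
# pay a personal cost but add to a benefit shared by their whole group;
# freeloaders contribute nothing.  Within any mixed group the freeloader
# always does better than the cooperator standing next to it.
GROUP_SIZE, BENEFIT, COST = 5, 2.0, 0.2      # note BENEFIT / GROUP_SIZE > COST
POP_SIZE, GENERATIONS = 1000, 100

population = ["cooperator"] * (POP_SIZE // 2) + ["freeloader"] * (POP_SIZE // 2)

def fitness(member, group):
    shared = BENEFIT * group.count("cooperator") / len(group)
    return 1.0 + shared - (COST if member == "cooperator" else 0.0)

for _ in range(GENERATIONS):
    random.shuffle(population)
    groups = [population[i:i + GROUP_SIZE] for i in range(0, POP_SIZE, GROUP_SIZE)]
    members = [m for g in groups for m in g]
    weights = [fitness(m, g) for g in groups for m in g]
    # Each individual's chance of leaving offspring is proportional to its
    # fitness; the next generation refills the global pool.
    population = random.choices(members, weights=weights, k=POP_SIZE)

print("cooperator fraction:", population.count("cooperator") / POP_SIZE)
```

With these assumed numbers the cooperator fraction tends to climb, because cooperators disproportionately find themselves in productive groups even though every freeloader out-earns its own group-mates; make the cost steep enough relative to the shared benefit and the freeloaders win instead, which echoes the dependence on ‘fitness costs’ in the quote above.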

Are these results translatable to reality? It is all well and good to speak of digital simulations with vastly simplified models guiding synthetic behaviour; the real test comes in observing naturally occurring forms of life. A discussion by Kreft and Bonhoeffer (2005) lends support to the reality of single-celled cooperation, going so far as to suggest that “micro-organisms are ever more widely recognized as social”. This is surely an exaggerated caricature of the more common definition of ‘socialness’, but the analogy is appropriate. Kreft and Bonhoeffer effectively summarise the leading research in this field and put forward the resounding view that single-celled organisms can evolve to show altruistic (cooperative) behaviours. We should hope so; without cooperation between cells, the multicellularity that eventually led to humanity could never have evolved in the first place!

But what happened to those pesky mutants that evolved to look out for themselves? Defectors (choosing not to cooperate) and cheaters (choosing to take advantage of altruists) are also naturally emergent. Counter-intuitively, such groups are kept in their place by the cooperators. Too many cheaters, and the group fails through exploitation. The key lies in the dynamic nature of this process. Aviles (2002) found that in every simulation, the number of cheaters was kept in control by the dynamics of the group. A natural equilibrium developed, with the total group size fluctuating according to the balance of cheaters and cooperators. In situations where cheaters ruled, the group size dropped dramatically, resulting in a lack of productive work and reduced reproductive rates. Thus, the number of cheaters is kept in check by the welfare of the group. It’s almost a love/hate relationship: the system hates exploiters, yet tolerates their existence so long as their numbers remain sufficiently small.

Extrapolating from these conclusions, a logical outcome would be the universal adoption of cooperative behaviours. There are prime examples of this in nature: bee and ant colonies, migratory birds, various aquatic species, even humans (to an extent) all work together towards the common good. The reason we don’t see this more often, I believe, is convergent evolution: different species have solved the same problems via different approaches. Take flight, for example; it has evolved independently at separate points in history in both birds and insects. The likelihood of cooperation is also affected by external factors, evolutionary ‘pressures’ that can guide the flow of genetic development. The physical structure of the individual, environmental changes and resource scarcity are all examples of factors that can influence whether members of the same species work together.

Humanity is a prime example; intrinsically we seem to have a sense of inner morality and a tendency to cooperate when the conditions suit. The addition of consciousness complicates morality somewhat, in that we think about what others might do in the same situation, defer to group norms and expectations, conform to our own premeditated moral guidelines and can be paralysed by indecisiveness. We also factor in environmental conditions, manipulating situations through false displays of ‘pseudo-morality’ to ensure our survival in the event of resource scarcity. But when the conditions are just so, humanity does seem to pull up its trousers and bind together as a singular, collective organism. When push comes to shove, humanity can work in unison. However, just as bacteria evolve cheaters and freeloaders, so too does humanity give birth to individuals who seem to lack a sense of moral guidance.

Morality, then, seems to be a universal trait: a naturally emergent phenomenon that predisposes organisms to cooperate towards the common good. But just as moral ‘goodness’ evolves naturally, so too does immorality. Naturally emergent cheaters and freeloaders are an intrinsic part of the evolution of biological systems. Translating these results to the plight of humanity, it becomes apparent that such individual traits are also naturally occurring in society. Genetically, and to a lesser extent environmentally, traits from both ends of the moral scale will always be a part of human society. This surely has implications for the plans of a futurist society relying solely on humanistic principles. Moral equilibrium is ensured, at least biologically, for better or worse. Whether we can change the course of natural evolution and produce a purely cooperative species is a question that can only be answered outside the realms of philosophy.

When people attempt to describe their sense of self, what are they actually incorporating into the resulting definition? Personality is perhaps the most common conception of self, with vast amounts of empirical validation. However, our sense of self runs deeper than such superficial descriptions of behavioural traits. The self is an amalgamation of all that is contained within the mind; a magnificent average of every synaptic transmission and neuronal network. Like consciousness, it is an emergent phenomenon (the sum is greater than the parts). But unlike consciousness, the self ceases to be when individual components are removed or modified. For example, consciousness is virtually unchanged (in the sense of what it defines: directed, controlled thought) with the removal of successive faculties. We can remove physical brain structures such as the amygdala and still exercise our capacity for consciousness, albeit losing a portion of the informative inputs. The self, however, is a broader term, describing the current mental state of ‘what is’. It is both descriptive, providing a broad overview of what we are at time t, and prescriptive, in that the sense of self influences how behaviours are actioned and information is processed.

In this article I intend firstly to describe the basis of ‘traditional’ measures of the self: empirical measures of personality and cognition. Secondly, I will provide a neuropsychological outline of the various brain structures that could be biologically responsible for eliciting our perception of self. Finally, I wish to propose the view that our sense of self is dynamic, fluctuating daily based on experience, and to discuss how this could affect our preconceived notions of introspection.

Personality is perhaps one of the most measured variables in psychology. It is certainly one of the most well known, through its portrayal in popular science as well as self-help psychology. Personality could also be said to comprise a major part of our sense of self, in that the way we respond to and process external stimuli (both physically and mentally) has major effects on who we are as an entity. Personality is also incredibly varied, whether due to genetics, environment or a combination of both. For this reason, the psychological study of personality takes on a wide variety of forms.

The lexical hypothesis, proposed by Francis Galton in the 19th century, became the first stepping stone from which the field of personality psychometrics was launched. Galton posited that the sum of human language, its vocabulary (lexicon), contains the necessary ingredients from which personality can be measured. During the 20th century, others expanded on this hypothesis and refined Galton’s technique through the use of factor analysis (a statistical method that summarises common variance into factors). Methodological and statistical criticisms of this method aside, the lexical hypothesis proved useful in classifying individuals into categories of personality. However, this approach is purely descriptive; it simply summarises information, extracting no deeper meaning and providing no background theory with which to explain the etiology of such traits. Those wishing to learn more about descriptive measures of personality can find this information under the headings ‘The Big Five Inventory’ (OCEAN) and Hans Eysenck’s Three-Factor model (PEN).
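
For readers curious what ‘summarising common variance into factors’ actually looks like, here is a minimal, hypothetical sketch using scikit-learn. The synthetic ‘questionnaire’ and its two underlying traits are invented purely for illustration; real lexical studies involve hundreds of items and far more care over rotation and interpretation.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

# Hypothetical questionnaire: 500 respondents answer 6 items.  Items 0-2 are
# driven by one latent trait, items 3-5 by another, plus noise: the lexical
# idea that many surface descriptors reflect a handful of underlying factors.
n_people = 500
trait_a = rng.normal(size=(n_people, 1))
trait_b = rng.normal(size=(n_people, 1))
items = np.hstack([
    trait_a + 0.3 * rng.normal(size=(n_people, 3)),
    trait_b + 0.3 * rng.normal(size=(n_people, 3)),
])

# Factor analysis recovers the latent structure: each item "loads" mostly on
# one of the two extracted factors.
fa = FactorAnalysis(n_components=2, random_state=0)
fa.fit(items)
print(np.round(fa.components_, 2))   # rows = factors, columns = item loadings
```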

Neuropsychological methods of defining personality are less reliant on statistical methods and utilise a posteriori knowledge (as opposed to the lexical hypothesis, which relies on reasoning and deduction). Thus, such theories have a solid empirical background, with first-order experimental evidence to support the conclusions reached. One such theory is the BIS/BAS (behavioural inhibition/activation system). Proposed by Gray (1982), the BIS/BAS conception of personality builds upon individual differences in cortical activity to arrive at the observable differences in behaviour. Such a revision of personality turns the tables on traditional methods of research in this area, moving away from superficially describing traits towards explaining the underlying causality. Experimental evidence has lent support to this model through direct observation of cortical activity (functional MRI scans). Addicts and sensation seekers are found to score highly on behavioural activation (associated with increased pre-frontal lobe activity), while introverts score highly on behavioural inhibition. This seems to match our intuitive preconceptions of these personality groupings: sensation seekers are quick to action; in short, they tend to act first and think later. Conversely, introverts act more cautiously, adhering to a policy of ‘looking before they leap’. Therefore, while not encapsulating as wide a variety of individual personality factors as the ‘Big Five’, the BIS/BAS model and others based on neurobiological foundations seem to be tapping into a more fundamental, materialist/reductionist view of behavioural traits. The conclusion here is that directly observable events, and the resulting individual differences, arise from specific regions in the brain.

Delving deeper into the neurology, the sense of self may have developed as a means to an end, the end in this case being prediction of the behaviour of others. That is, our sense of self and consciousness may have evolved as a way of internally simulating how our social competitors think, feel and act. V. S. Ramachandran, in his Edge.org essay, calls upon his neurological experience and knowledge of neuroanatomy to provide a unique insight into the physiological basis of self. Mirror neurons are thought to act as mimicking simulators of external agents, in that they show activity both when performing a task and while observing someone else performing the same task. It is argued that such neuronal conglomerates evolved due to social pressures, as a method of second-guessing the possible future actions of others. The ability to direct these networks inwards was an added bonus. The human capacity for constructing a valid theory of mind also gifted us with the ability to scrutinise the self from a meta-perspective (an almost ‘out-of-body’ experience, à la a ‘Jiminy Cricket’-style conscience).

Mirror neurons also act as empathy meters, firing during moments of emotional significance. In effect, our ability to recognise the feelings of others stems from a neuronal structure that actually elicits such feelings within the self. Our sense of self is thus inescapably intertwined with that of other agents. Like it or not, biological dependence on the group has resulted in the formation of neurological triggers which fire spontaneously and without our consent. In effect, the intangible self can be influenced by other intangibles, such as emotional displays. We view the world through rose-coloured glasses, with an emphasis on theorising the actions of others in terms of how we would respond in the same situation.

So far we have examined the role of personality in explaining a portion of what the term ‘self’ conveys. In addition, a biological basis for self has been introduced, which suggests that both personality and the neurological capacity for introspection are anatomically definable features of the brain. But what else are we referring to when we speak of having a sense of self? Surely we are not doing this construct justice if all it contains is differences in behavioural disposition and anatomical structure.

Indeed, the sense of self is dynamic. Informational inputs constantly modify and update our knowledge banks, which in turn has ramifications for the self. Intelligence, emotional lability, preferences, group identity, proprioception (spatial awareness); the list is endless. Although some of these categories of self may be collapsible into higher-order factors (personality could incorporate preference and group behaviour), it is arguable that to do so would result in a loss of information. The point here is that looking only at the bigger picture may obscure the finer details that can lead to further enlightenment on what we truly mean when we discuss the self.

Are you the same person you were ten years ago? In most cases, if not all, the answer will be no. Core traits such as temperament may remain relatively stable, but individuals arguably change and grow over time. Thus, their sense of self changes as well. Some people may become more attuned to their sense of self than others, developing a close relationship with it through introspective analysis. Others, sadly, seem to lack this capacity for meta-cognition: thinking about thinking, asking the questions ‘why’, ‘who am I’ and ‘how did I come to be’. I believe this has implications for the growth of humanity as a species.

Is a state of societal eudaimonia sustainable in a population that has varying levels of ‘selfness’? If self is linked to the ability to simulate the minds of others, which in turn depends upon both neurological structure (with genetic variation possibly reducing or modifying such capacities) and empathic responses, the answer to this question is a resounding no. Whether due to nature or nurture, society will always contain individuals who are more self-aware than others and, as a result, more attentive to the mental states of others. A lack of compassion for the welfare of others, coupled with an inability to analyse the self with any semblance of drive and purpose, spells doom for a harmonious society. Individuals lacking in self will refuse, through ignorance, to grow and become socially aware.

Perhaps collectivism is the answer: forcing groups to cohabit may foster an increased appreciation for theory of mind. Yet if the basis of this process is mainly biological (as it would seem to be), such a policy would be social suicide. The answer could dwell in the education system. Introducing children to the mental pleasures of psychology and, at a deeper level, philosophy may result in recognition of the importance of self-reflection. The question here is not only whether students will grasp these concepts with any enthusiasm, but also whether such traits can be taught via traditional methods. More research must be conducted into the nature of the self if we are to have an answer to this quandary. Is the self tied directly to biology (we are stuck with what we have), or can it be instilled via psycho-education and a modification of environment?

The self will always remain something of a mystery due to its dynamic and varied nature. It is with hope that we look to science and encourage its attempts to pin down the details of this elusive subject. Even if this quest fails to produce a universal theory of self, perhaps it will succeed in shedding at least some light onto the murky waters of self-awareness. In doing so, psychology stands to benefit from both a philosophical and a clinical perspective, increasing our knowledge of the causality underlying disorders of the self (body dysmorphia, depression and suicide, self-harm).

If you haven’t already done so, take a moment now to begin your journey of self-discovery; you might just find something you never knew was there!

Most of us would like to think that we are independent agents in control of our destiny. After all, free will is one of the unique phenomena that humanity can claim as its own, a fundamental part of our cognitive toolkit. Experimental evidence, in the form of neurological imaging, has been interpreted as an attack on mental freedom. Studies that highlight the possibility of unconscious activity preceding the conscious ‘will to act’ seem almost to sink the arguments of non-determinists (libertarians). In this article I plan to outline this controversial research and offer an alternative interpretation, one which does not infringe on our ability to act independently and of our own accord. I would then like to explore some of the situations where free will could be ‘missing in action’ and suggest that this occurs more frequently than we might expect.

A seminal investigation conducted by Libet et al. (1983) was the first to challenge (empirically) our preconceived notions of free will. The setup consisted of an electroencephalograph (EEG, measuring overall electrical potentials through the scalp) connected to the subject, and a large clock with markings denoting various time periods. Subjects were required simply to flick their wrist whenever a feeling urged them to do so. The researchers were particularly interested in the ‘Bereitschaftspotential’ or readiness potential (RP): a signature EEG pattern of activity that signals the beginning of the volitional initiation of movement. Put simply, the RP is a measurable spike in electrical activity over the pre-motor region of the cerebral cortex, a preparatory event that sets the wheels of movement in motion.

The results of this experiment indicated that the RP significantly preceded the subjects’ reported sensation of conscious awareness. That is, the brain’s preparation for wrist flicking seemed to precede conscious awareness of the decision to act. While the actual delay between RP detection and conscious registration of the intent to move was small by everyday standards, the roughly half-second gap was more than enough to assert that a measurable difference had occurred. Libet interpreted these findings as having vast implications for free will. It was argued that since electrical activity preceded conscious awareness of the intent to move, free will to initiate movement was non-existent (Libet did allow free will to control movements already in progress, that is, to modify their path or act as a final ‘veto’ allowing or disallowing the act to occur).

Many have taken the time to respond to Libet’s initial experiment. Daniel Dennett (in his book Freedom Evolves) provides an apt summary of the main criticisms. The most salient rebuttal concerns signal delay. Consciousness is notoriously slow in comparison to the automated mental processes that act behind the scenes. Take the sensation of pain, for example. Stimulation of the peripheral nerve endings must first reach a sufficient level for an action potential to fire, which travels along the axon and causes the terminal to release neurotransmitters into the synaptic gap. The second-order neuron receives these chemical messengers, modifying its electrical charge and triggering another action potential along its myelinated axon. Taking into account the distance this signal must travel (at anywhere from 1–10 m/s for the slower fibre types), it then arrives at the thalamus, the brain’s sensory ‘hub’, where it is routed onwards and eventually reaches consciousness. Consequently, there is a measurable gap between the external event and conscious awareness, perhaps made even larger if the signal is weak (low pain) or the mind is distracted. In this instance, too, electrical activity is taking place and preceding consciousness. Arguably, the same phenomenon could be occurring in the Libet experiment.
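
As a rough back-of-the-envelope check on the signal-delay argument (the one-metre path length is my own ballpark assumption, and the 1–10 m/s range is the figure quoted above, not a value from the Libet study), the peripheral transmission time alone comes out at

$$ t = \frac{d}{v} \approx \frac{1\ \text{m}}{1\text{–}10\ \text{m/s}} \approx 0.1\text{–}1\ \text{s}, $$

which is already on the order of, or larger than, the half-second gap discussed above, even before any central processing is added. Sizeable lags between neural events and conscious registration are therefore unremarkable in themselves.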

Delays are inevitably introduced whenever consciousness enters the equation. The brain is composed of a conglomerate of specialised compartments, each communicating with its neighbours and performing its own part of the process in turn. Evolution has drafted brains that act automatically first and consciously second. Consequently, the automatic gains priority over the directed. Reflexes and instincts act to save our skins long before we are even aware of the problem. Naturally, then, electrical activity in the brain can precede conscious awareness.

In the Libet experiment, the experimental design itself could be misleading. Libet seems to equate the timing of conscious awareness with the exercise of free will, when in actual fact the agent has already freely decided that they will follow the instructions. What I am trying to say here is that free will does not have to act as the initiator of every movement; rather, it acts to ‘set the stage’ for events and authorises the operation to go ahead. When told to move voluntarily, the agent’s will makes the decision either to comply or to rebel. Compliance causes the agent to authorise movement, but the specifics are left up to chance. Perhaps a random input generator (quantum indeterminacy?) provides the catalyst with which this initial order combines to create the RP and the eventual movement. Conscious registration of this fact only occurs once the RP is already forming.

Looking at things from this perspective, consciousness seems to play a constant game of ‘catch-up’ with the automated processes in our brains. Our will is content to act as a global authority, leaving the more menial and mundane tasks to the brain’s automated sub-compartments. Therefore, free will is very much alive and kicking, albeit sometimes taking a back seat to the unconscious.

We have begun by exploring the nature of free will and how it links in with consciousness. But what of the unconscious instincts that seek to override our sense of direction and regress humanity back to its more animalistic and primitive ancestry? Such instincts act covertly, sneaking into action whilst our will is otherwise indisposed. Left unabated, an agent who gives themselves completely to urges and evolutionary drives could be said to be devoid of free will, or at the very least somewhat lacking compared to more ‘aware’ individuals. Take sexual arousal, for instance. Like it or not, our bodies act on impulse, removing free will from the equation through simple stimulus–response conditioning. Try as we might, sexual arousal (if allowed to follow its course) acts immediately upon visual or physical stimulation. It is only when consciousness kicks into gear and yanks on the leash attached to our unconscious that control is regained. Eventually, with enough training, it may be possible to override these primitive responses, but the conscious effort required to sustain such a project would be mentally draining.

Society also seeks to rob us of our free will. People are pushed and pulled by group norms, the expectations of others and the messages that bombard us on a daily basis. Rather than encouraging individualism, modern society instead urges us to follow trends. Advertising is crafted in such a way that the individual may even be fooled into thinking they are arriving at decisions of their own volition (subliminal messaging), when in actual fact it is simply tapping into some basic human need for survival (food, sex, shelter/security and so on).

Ironically, science itself could also be said to be reducing the amount of free will we can exert. Scientific progress seeks to make the world deterministic; that is, totally predictable through increasingly accurate theories. While the jury is still out on whether ‘ultimate’ accuracy in prediction will ever be achieved (arguably, there are not enough bits of information in the universe with which to construct a computer powerful enough to complete such a task), science is edging closer to a deterministic framework whereby the paths of individual particles can be predicted. Quantum physics is but the next hurdle to be overcome in this quest for omniscience. If the inherent randomness that lies within quantum processes is ever fully explained, perhaps we will be in a position (at least scientifically) to model an individual’s future actions from a number of initial variables.

What could this mean for the nature of free will? If past experiments are anything to go by (Libet et al.), it will rock our sense of self to the core. Are we but behaviouristic automatons, as the psychologist Skinner proposed? Delving deeper into the world of the quanta, will we ever be able to realistically model and predict the paths of individual particles, and thus the future course of the entire system? Perhaps the Heisenberg uncertainty principle will spare us from this bleak fate. The irreducible randomness of the quantum wave function could prove the final insurmountable obstacle that neither neurological researchers nor philosophers will ever be able to conquer.

While I am all for scientific progress and increasing the bulk of human knowledge, perhaps we are jumping the gun with this free will business. Perhaps some things are better left mysterious and unexplained. A defeatist attitude if ever I saw one, but it could be justified. After all, how would you feel if you knew that every action was decided before you were even a twinkle in your father’s eye? Would life even be worth living? Sure, but it would take a lot of reflection, and a personality that could either deny or reconcile the feelings of unease such a proposition brings.

They were right; ignorance really is bliss.

Compartmentalisation of consciousness