
The transhumanist movement continues to gain momentum through recognition by mainstream media and an ever-burgeoning army of empiricists, free thinkers and rationalists. Recently, the Australian incarnation of 60 Minutes interviewed David Sinclair, a biologist who has identified the potentially life-extending properties of resveratrol. All this attention has helped to swell awareness of transhumanism within the general community, most notably due to the inherently appealing nature of anti-senescent interventions. But what of the neurological side of transhumanism, specifically the artificial augmentation of our natural mental ability with implantable neurocircuitry? Does research in this area create moral questions regarding its implementation, or should we be embracing technological upgrades with open arms? Is it morally wrong to enhance the brain without effort on the individual level (i.e., are such methods just plain lazy)? These are the questions I would like to investigate in this article.

An emerging transhumanist e-zine, H+ Magazine, outlines several avenues currently under exploration by researchers who aim to improve the cognitive ability of the human brain through artificial enhancement. The primary area of focus at present (from an empirical point of view) lies in memory enhancement. The Innerspace Foundation (IF) is a not-for-profit organisation attempting to lead the charge in this area, with two main prizes offered to researchers who can 1) successfully create a device which can circumvent the traditional learning process, and 2) create a device which facilitates the extension of natural memory.

Pete Estep, chairman of IF, was interviewed by H+ Magazine about the foundation’s vision of what a device satisfying its award criteria might look like. Pete believes the emergence of this industry involves ‘baby steps’: achieving successful interfaces between biological and non-biological components. Electronic forms of learning, Pete believes, are certainly non-traditional, but still a valid possibility, and stand to revolutionise the human intellect in terms of capacity and quality of retrieval.

Fortunately, we seem to have already made progress on those ‘baby steps’ regarding the interface between brain and technology. Various neuroheadset products are poised to be released commercially in the coming months. For example, the EPOC headset utilises EEG technology to recognise brainwave activity that corresponds to various physical actions, such as facial expressions and the intent to move a limb. With concentrated effort and training, the operator can reliably reproduce the necessary EEG pattern to activate individual commands within the headset. These commands can then be mapped to an external device, allowing various tasks to be performed remotely.
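To make the mapping step concrete, here is a minimal sketch (in Python) of how a recognised EEG event might be dispatched to an external device. The classify() stub and the command names are hypothetical stand-ins; a real headset SDK would supply its own trained classifier and event API.

```python
# Hypothetical sketch only: classify() stands in for the headset's trained
# EEG pattern recogniser, and the command names are invented for illustration.
from typing import Callable, Dict, List

def classify(eeg_window: List[float]) -> str:
    """Stand-in for the trained classifier that labels a window of EEG samples."""
    return "push" if sum(eeg_window) > 0 else "neutral"

# Map each recognised mental command to an action on an external device.
COMMAND_MAP: Dict[str, Callable[[], None]] = {
    "push": lambda: print("wheelchair: move forward"),
    "lift": lambda: print("robotic arm: raise"),
    "neutral": lambda: None,
}

def dispatch(eeg_window: List[float]) -> None:
    COMMAND_MAP.get(classify(eeg_window), lambda: None)()

dispatch([0.4, 0.2, 0.1])  # prints "wheelchair: move forward"
```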

Having said this, such devices are still very much ‘baby’ in their steps. The actual stream of consciousness has not yet been decoded; the secrets of the brain are still very much a mystery. Recognition of individual brain patterns is a superficial solution to a profound problem. Returning to Searle’s almost clichéd Chinese Room thought experiment, we seem to be merely reading the symbols and decoding them; there is no actual understanding or comprehension going on here.

Even if such a solution is possible, and a direct mind/machine interface achieved, one small part of me wonders if it really is such a good thing. I imagine such a feeling is similar to the one felt by the quintessential school teacher when handheld calculators became the norm within the educational curriculum. By condoning such neuro-shortcuts, are we simply being lazy? Are the technological upgrades promised by transhumanism removing too much of the human element?

On a broader scale, I believe these concerns are elucidated by a societal shift towards passivity. Television is the numero-uno offender, with a captive audience of billions. The advent of neurological enhancements may only increase the exploitation of our attention, with television programs beamed directly into our brains. Rain, hail or shine, passive reception of entertainment would be accessible 24 hours a day. Likewise, augmentation of memory and circumvention of traditional learning processes may forge a society of ultimate convenience – slaves to a ‘Matrix-style’ mainframe salivating over their next neural-upload ‘hit’.

But having said all this, the earlier example of the humble calculator suggests that if such technological breakthroughs are used as an extension rather than a crutch, humanity may just benefit from the transhumanist revolution. I believe any technology aiming to enhance natural neurological processing power must only be used as such: a method to raise the bar of creativity and ingenuity, not simply a new avenue for bombarding the brain with more direct modes of passive entertainment. Availability must also be society-wide, in order to allow every human being to reach their true potential.

Of course, the flow-on effects of such technology on socio-economic status, intelligence, individuality, politics – practically every facet of human society – are certainly unknown and unpredictable. If used with extension and enhancement as a philosophy, transhumanism can usher in a new explosion of human ingenuity. If a more superficial ethos is adopted, it may only succeed in ushering in a new dark age. It’s the timeless battle between good (transcendence) and evil (exploitation, laziness). Perhaps a topic for a future article, but certainly food for thought.

Evil is an intrinsic part of humanity, and it seems almost impossible to eradicate it from society without simultaneously removing a significant part of our human character. There will always be individuals who seek to gain advantage over others through harmful means. Evil can take on many forms, depending upon the definition one uses to encapsulate the concept. For instance, the popular definition includes elements of malicious intent, or actions that are designed to cause injury or distress to others. But what of the individual who accidentally causes harm to another, or who takes a silent pleasure in seeing others’ misfortune? Here we enter a grey area, the distinction between good and evil blurring ever so slightly, preventing us from making a clear judgement on the topic.

Religion deals with this human disposition towards evil in a depressingly cynical manner. Rather than suggesting ways in which the problem can be overcome, religion instead proposes that evil or “sin” is an inevitable temptation (or a part of our character into which we are born) that can only be overcome with a conscious and directed effort. Invariably one will sin at some point in one’s life, whereupon the person should ask for forgiveness from their nominated deity. Again we see a shifting of responsibility away from the individual, with the religious hypothesis leaning on such concepts as demonic possession and lapses of faith as explanations for the existence of evil (unwavering belief in the deity cures all manner of temptations and worldly concerns).

In its current form, religion does not offer a satisfactory explanation for the problem of evil. Humanity is relegated to the back seat in terms of moral responsibility, coerced into conformity through subservience to the Church’s supposed ideals and ways of life. If our society is to break free of these shackles and embrace a humanistic future free from bigotry and conflict, moral guidance must come from within the individual. To this end, society should consider introducing moral education for its citizens, taking a lesson from the annals of history (specifically, ancient Greece with its celebration of individual philosophical growth).

Almost counter-intuitively, some of the earliest recorded philosophies actually advocated a utopian society that was atheistic in nature, and deeply rooted in humanistic, individually managed moral and intellectual growth. One such example is the discipline of Stoicism, founded by Zeno of Citium in the early 3rd century BC. This philosophical movement was perhaps one of the first true instances of humanism, whereby personal growth was encouraged through introspection and the control of destructive emotions (anger, violence etc). The stoic way was to detach oneself from the material world (similar to Buddhist traditions), a tenet that is aptly summarised in the following quote:

“Freedom is secured not by the fulfilling of one’s desires, but by the removal of desire.”

Epictetus

Returning to the problem of evil, Stoicism proposed that the presence of evil in the world is an inevitable consequence of ignorance. The premise of this argument is that a universal reason, or logos, permeates reality, and evil arises when individuals act against this reason. I believe what the Stoics meant here is that a universal morality exists – a ubiquitous guideline accessible through conscious deliberation and reflective thought. When individuals act contrary to this universal standard, it is through ignorance of what the correct course of action actually is.

This stoic ethos is personally appealing because it has a large humanistic component: namely, all of humanity has the ability to grasp universal moral truths and overcome their ‘ignorance’ of the one true path towards moral enlightenment. Whether such truths actually exist is debatable, and the apathetic nature of Stoicism seems to dull the overall human experience (muted emotions, detachment from reality).

The ancient Greek notion of eudaimonia could be a more desirable philosophy by which to guide our moral lives. The common translation of this term as ‘greatest happiness’ does not do it justice. It was first articulated by Socrates, who outlined a basic version of the concept as comprising two components: virtue and knowledge. Socrates’ virtue was thus moral knowledge of good and evil, or having the psychological tools to reach the ultimate good. Plato and, in turn, his student Aristotle expanded on this original idea of sustained happiness by adding layers of complexity. For example, Aristotle believed that human activity tends towards the experience of maximum eudaimonia, and that to achieve this end one should cultivate rationality of judgement and ‘noble’ characteristics (honour, honesty, pride, friendliness). Epicurus again modified the definition of eudaimonia to include pleasure, thus also shifting the moral focus to maximising the wellbeing of the individual through the satisfaction of desire (the argument here is that pleasure equates with goodness and pain with badness, so the natural conclusion is to maximise positive feeling).

We see that the problem of evil has been dealt with in a wide variety of ways. Even in our modern world it seems that people are becoming angrier, more impatient and more destructive towards their fellow human beings. Looking at our track record thus far, it seems that the mantra of ‘fight fire with fire’ is being followed by many countries when determining their foreign policy. Modern incarnations of religious moral codes (an eye for an eye) have resulted in a new wave of crusades with theistic beliefs at the forefront once again.

The wisdom of our ancient ancestors is refreshing and surprising, given that common sense suggests a positive relationship between knowledge and time (human progress increases with the passage of time). It is entirely possible that humanity has been following a false path towards moral enlightenment, and given the lack of progress from the religious front, perhaps a new approach is needed. By treating the problem of evil as one of cultural ignorance we stand to benefit at every level. The whole judicial system could be re-imagined as one in which offenders are actually rehabilitated through education, rather than simply breeding generations of hardened criminals. Treating evil as a form of improper judgement forces our society to take moral responsibility at the individual level, thus resulting in real and measurable changes for the better.

A recurring theme and technological prediction of futurists is one in which human intelligence supersedes that of the previous generation through artificial enhancement. This is a popular topic on the Positive Futurist website maintained by Dick Pelletier, and one which provides food for thought. Mr Pelletier outlines a near future (the 2030s) where a combination of nanotechnology and insight into the inner workings of the human brain facilitates an exponential growth of intelligence. While the accuracy of such a prediction is open to debate (specifically the technological possibility of successful development within the given timeframe), if such a rosy future did come to fruition, what would be the consequences for society? Specifically, would an increase in average intelligence necessarily result in an overall improvement to quality of life? If so, which areas would be most affected (e.g. morality, socio-economic status)? These are the questions I would like to explore in this article.

The main argument provided by futurists is that technological advances relating to nano-scale devices will soon be realised and implemented throughout society. By utilising these tiny automatons to the largest extent possible, it is thought that both disease and aging could be eradicated by the middle of this century. This is due to the utility of nanobots, specifically their ability to carry out pre-programmed tasks in a collective and automated fashion without any conscious awareness on behalf of the host. In essence, nano devices could act as a controllable extension of the human body, giving health professionals the power to monitor and treat the organism throughout its lifespan. But the controllers of these instruments need to know what to target and how best to direct their actions; a possible point of failure in the futurists’ plan. In all likelihood, however, such problems will only prove to be temporary hindrances and should be overcome through extensive testing and development phases.

Assuming that a) such technology is possible, and b) it can be controlled to produce the desired results, the future looks bright for humanity. By further extending nanotechnology with cutting-edge neurological insight, it is feasible that intelligence can be artificially increased. The possibility of artificial intelligence and the development of an interface with the human mind almost ensures a future filled with rapid growth. To this end, an event aptly named the ‘technological singularity’ has been proposed, which outlines the extension of human ability through artificial means. The singularity allows for innovation to exceed the rate of development; in short, humankind could advance (technologically) faster than the rate of input. While the plausibility of such an event is open to debate, it does sound feasible that artificial intelligence could assist us in developing new and exciting breakthroughs in science. If conscious, self-directed intelligence were to be artificially created, this may assist humanity even further; perhaps the design of specific minds would be possible (need a breakthrough in physics – just create an artificial Einstein). Such an idea hinges entirely on the ability of neuroscientists to unlock the secrets of the human brain and allow the manipulation or ‘tailoring’ of specific abilities.

While the jury is still out on how such a feat will be made technologically possible, a rough outline of the methodologies involved in artificial augmentation could be enlightening. Already we are seeing the effects of a society increasingly driven by information systems. People want to know more in a shorter time; in other words, to increase efficiency and volume. To cope with the torrents of information already available on various media (the internet springs to mind), humanity relies increasingly on ways to filter, absorb and understand stimuli. We are seeing not only a trend in artificial aids (search engines, database software, larger networks) but also a changing pattern in the way we scan and retain information. Internet users are now forced to make quick decisions and scan superficially at high speed to obtain information that would otherwise be lost amidst the backlog of detail. Perhaps this is one way in which humanity is guiding the course of evolution and retraining the mind’s basic instincts away from more primitive methods of information gathering (perhaps it also explains our parents’ ineptitude for anything related to the IT world!). This could be one of the first targets for augmentation: increasing the speed of information transfer via programmed algorithms that fuse our natural biological mechanisms of searching with the power of logical, machine-coded functions. Imagine being able to combine the biological capacity to effortlessly scan and recognise facial features with the speed of computerised programming.

How would such technology influence the structure of society today? The first assumption that must be made is the universal implementation and adoption of such technologies by society. Undoubtedly there will be certain populations who refuse, for whatever reason, most likely due to a perceived conflict with their belief system. It is important to preserve and respect such individuality, even if it means that these populations will be left behind in terms of intellectual enlightenment. Critics of future societies and futurists in general argue that a schism will develop, akin to the rising disparities in wealth distribution present within today’s society. In counter-argument, I would respond that an increase in intelligence would likewise cause a global rise in morality. While this relationship is entirely speculative, it is plausible to suggest that a person’s level of moral goodness is at least related (if not directly) to their intelligence.

Of course, there are notable exceptions to this rule whereby intelligent people have suffered from moral ineptitude; however, an increased neurological understanding and a practical implementation of ‘designer’ augmentations (as they relate to improving morality) would negate the possibility of a majority ‘superclass’ that persecutes groups of ‘naturals’. At the very worst, there may be a period of unrest at the implementation of such technology while the majority of the population catches up (in terms of perfecting the implantation/augmentation techniques and achieving the desired level of moral output). Such innovations may even act as a catalyst for developing a philosophically sound model of universal morality; something which would, in turn, allow the next generation of neurological ‘upgrades’ to implement it.

Perhaps we are already in the midst of our future society. Our planet’s declining environment may hasten the development of such augmentation to improve our chances of survival. Whether this process involves the discarding of our physical bodies for a more impervious, intangible machine-based life or otherwise remains to be seen. With the internet’s rising popularity and increasing complexity, a virtual ‘Matrix-esque’ world in which such programs could live might not be so far-fetched after all. Whatever the future holds, it is certainly an exciting time in which to live. Hopefully humanity can overcome the challenges of the future in a positive way and without too much disruption to our technological progress.

The monk sat meditating. Alone atop a sparsely vegetated outcrop, all external stimuli infusing psychic energy within his calm, receptive mind. Distractions merely added to his trance, assisting the meditative state to deepen and intensify. Without warning, the experience culminated unexpectedly in a fluttering of eyelids. The monk stood, content and empowered with newfound knowledge. He had achieved pure insight…

The term ‘insight’ is often attributed to such vivid descriptions of meditation and religious devotion. More specifically, religions such as Buddhism promote the concept of insight (vipassana) as a vital prerequisite for spiritual nirvana, or transcendence of the mind to a higher plane of existence. But does insight exist for the everyday folk of the world? Are the momentary flashes of inspiration and creativity part and parcel of the same phenomenon or are we missing out on something much more worthwhile? What neurological basis does this mental state have and how can its materialisation be ensured? These are the questions I would like to explore in this article.

Insight can be defined as the mental state whereby confusion and uncertainty are replaced with certainty, direction and confidence. It has many alternative meanings and contexts of use, ranging from a piece of obtained information to the psychological capacity to introspect objectively (as judged by some external observer – introspection is by its very nature subjective). Perhaps the most fascinating and generally applicable context is one which can be described as ‘an instantaneous flash of brilliance’ or ‘a sudden clearing of murky intellect and intense feelings of accomplishment’. In short, insight (in the context which interests me) is that attributed to the geniuses of society, those who seemingly gather tiny shreds of information and piece them together to solve a particularly challenging problem.

Archimedes is perhaps the most widely cited example of human insight. As the story goes, Archimedes was inspired by the displacement of water in his bathtub to formulate a method for calculating the volume of an irregular object. This technique was of great empirical importance as it allowed a reliable measure of density (referred to as ‘purity’ in those ancient times, and arising from a more fiscal motivation, such as verifying the purity of gold). The climax of the story describes a naked Archimedes running wildly through the streets, unable to contain his excitement at this ‘Eureka’ moment. Whether the story is actually true has little bearing on the force of the argument presented; most of us have experienced such a moment at some point in our lives, best summarised as the overcoming of seemingly insurmountable odds to conquer a difficult obstacle or problem.
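For the curious, the arithmetic behind the legend is straightforward: density is simply mass divided by the displaced volume. A worked toy example in Python, with numbers invented purely for illustration:

```python
# Invented numbers for illustration: a crown of mass 1.2 kg displacing 75 mL.
mass_kg = 1.2
displaced_volume_m3 = 75e-6        # 75 mL expressed in cubic metres

density = mass_kg / displaced_volume_m3
print(f"density = {density:.0f} kg/m^3")  # density = 16000 kg/m^3

# Pure gold is roughly 19300 kg/m^3, so a reading this low would betray
# a crown adulterated with a lighter metal such as silver.
print(density < 19300)                    # True: not pure gold
```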

But where does this inspiration come from? It almost seems as though the ‘insightee’ is unaware of the mental efforts to arrive at a solution, perhaps feeling a little defeated after a day spent in vain. Insight then appears at an unexpected moment, almost as though the mind is working unconsciously and without direction, and offers a brilliant method for victory. The mind must have some unconscious ability to process and connect information regardless of our directed attention to achieve moments such as this. Seemingly unconnected pieces of information are re-routed and brought to our attention in the context of the previous problem. Thus could there be a neurobiological basis for insight? One that is able to facilitate a behind-the-scenes process?

Perhaps insight is encouraged by the physical storage and structure of neural networks. In the case of Archimedes, the solution was prompted by the mundane task of taking a bath; superficially unrelated to the problem, but its relevance amplified by a common neural pathway (low bathwater – insert leg – raised bathwater, analogous to volumes and matter in general). That is, the neural pathways activated by taking a bath are somehow similar to those activated by rumination on the problem at hand. Alternatively, the unconscious mind may be able to draw basic cause-and-effect conclusions which are then boosted to the forefront of our minds if they are deemed useful (i.e. immediately relevant to the task being performed). Whatever the case may be, it seems that at times our unconscious minds are smarter than our conscious attention.

The real question is whether insight is an intangible state of mind (a la ‘getting into the zone’) that can be turned on and off (thus making it useful for extending humanity’s mental capabilities), or whether it is just a mental byproduct of overcoming a challenge (a hormonal response designed to encourage such thinking in the future). Can the psychological state of insight be induced via a manipulation of the subject’s neuronal composition and environmental characteristics (those conducive to achieving insight), or is it merely an evolved response that serves a (behaviourally) reinforcing purpose?

Undoubtedly the agent’s environment plays a part in determining the likelihood of insight occurring. Taking into account personal preferences (does the person prefer quiet spaces for thinking?), the characteristics of the environment could hamper the induction of such a mental state if they are sufficiently irritating to the individual. Insight may also be closely linked with intelligence and, depending on your personal conception of the latter, neurological structure (if one posits a strictly biological basis of intelligence). If this postulate is taken at face value, we reach the conclusion that the degree of intelligence is directly related to the likelihood of insight, and perhaps also to the ‘quality’ of the insightful event (i.e. a measure of its brilliance relative to inputs such as the level of available information and the difficulty of the problem).

But what of day-to-day insight? It seems to crop up in all sorts of situations. In this context, insight might require a grading scale for its level of brilliance if its use is to be justified in more menial situations and circumstances. Think of that moment when you forget a particular word and, try as you might, cannot remember it for the life of you. Recall also that flash of insight where the answer is simply handed to you on a platter, without any conscious effort of retrieval. Paradoxically, it seems that the harder we try to solve the problem, the more difficult it becomes. But is this due to efficiency problems such as ‘bottlenecking’ of information transfer, personality traits such as performance anxiety and frustration, or some underlying, unconscious process that is able to retrieve information without conscious direction?

Whatever the case may be, our scientific knowledge on the subject is distinctly lacking, so an empirical inquiry into the matter is more than warranted (if it hasn’t already been commissioned). Psychologically, the concept of insight could be tested experimentally by giving subjects a problem to solve while manipulating the amount of information provided (e.g. the number of ‘clues’) and its relatedness to the problem (with intelligence taken into consideration, perhaps via two groups of high and low intelligence). This may help to uncover whether insight is a matter of information processing or something deeper. If science can learn how to artificially induce a mental state akin to insight, the benefits for a positive-futurist society would be grand indeed.
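To make the proposed design concrete, here is a minimal sketch of how such a crossed manipulation might be analysed, assuming a between-subjects layout with clue quantity crossed against clue relatedness and a two-way ANOVA on solve times. The data and effect sizes are simulated inventions, not real findings.

```python
# Simulated data only: the cell means below are invented effect sizes.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(42)
n = 30  # hypothetical participants per cell
rows = []
for clues in ("few", "many"):
    for related in ("low", "high"):
        # Assumed model: more, better-related clues shorten solve time.
        mean = 120 - 15 * (clues == "many") - 25 * (related == "high")
        for t in rng.normal(mean, 20, n):
            rows.append({"clues": clues, "related": related, "solve_time": t})

df = pd.DataFrame(rows)
model = smf.ols("solve_time ~ C(clues) * C(related)", data=df).fit()
print(anova_lm(model, typ=2))  # main effects and their interaction
```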

The topic of free-will is one of the largest problems facing modern philosophers. An increasing empirical onslaught has done little to clear these murky waters. In actuality, each scientific breakthrough has resulted in greater philosophical confusion, whether due to the impractical knowledge base needed to interpret the results or to counter-intuitive outcomes (the readiness potential, or RP signal, where brain activity precedes conscious action). My own attempts to shed some light on this matter are equally feeble, which has precipitated the creation of the present article. What is the causal nature of the universe? Is each action determined and directly predictable from a sufficiently detailed starting point, or is there a degree of inherent uncertainty? How can we reconcile the observation that free-will appears to be a valid characteristic of humanity with mounting scientific evidence to the contrary (e.g. a Grand Unified Theory)? These are the questions I would like to discuss.

‘Emergent’ seems to be the latest buzzword in popular science. While the word is appealing when describing how complexity can arise from relatively humble beginnings, it does very little to actually explain the underlying process. The two states are simply presented on a platter, the lining of which is composed of fanciful ‘emergent’ conjurings. While there is an underlying science behind the process involving dynamic systems (modelled on biological growth and movement), there does seem to be an element of hand-waving and mystique.

This state of affairs does nothing to help current philosophical floundering. Intuitively, free-will is an attractive feature of the universe. People feel comfortable knowing that they have a degree of control over the course of their life. A loss of such control could even be construed as a facilitator of mental illness (depression, bipolar disorder). Therefore, the attempts of science to develop a unified theory of complete causal prediction seem to undermine our very nature as human beings. Certainly, some would embrace the notion of a deterministic universe with open arms, happy to put uncertainty to an end. However, one would do well (from a eudaimonic point of view) to cognitively reframe anxiety regarding the future into an expectation of surprise and anticipation at the unknown.

While humanity is firmly divided over its preference for a predictable or an uncertain universe, the problem remains that we appear to have a causally determined universe containing individual freedom of choice and action. Quantum theory has undermined determinism and causality to an extent, with the phenomenon of spontaneous vacuum energy supporting the possibility of events occurring without any obvious cause. Such evidence is snapped up happily by proponents of free-will with little regard for its real-world plausibility. This is another example of philosophical hand-waving, where the real problem involves a form of question begging; that is, a circular argument with the premise requiring a proof of itself in order to remain valid! For example, the following argument is often used:

  1. Assume quantum fluctuations really are indeterminate in nature (underlying causality a la ‘String Theory’ not applicable).
  2. Free-will requires indeterminacy as a physical prerequisite.
  3. Quantum fluctuations are responsible for free-will.

To give credit where it is due, the actual arguments used are more refined than the outline above, but the basic structure is similar. Basic premises can be stated and postulates put forward describing the possible form of neurological free-will; however, as with most developing fields, the supporting evidence is scant at best. And to make matters worse, quantum theory has shown that human intuition is often not the best guide when attempting an explanation.

However, if we work with what we have, perhaps something useful will result. This includes informal accounts such as anecdotal evidence. The consideration of such evidence has led to the creation of two ‘maxims’ that seem to summarise the evidence presented in regard to determinism and free-will.

Maxim One: The degree of determinism within a system depends upon the scale of measurement; a macro form of measurement yields a predominantly deterministic outcome, while a micro form of measurement yields an outcome that is predominantly ‘free’ or unpredictable. What this says is that determinism and freedom can be directly reconciled and coexist within the same construct of reality. Rather than existing as two distinctly separate entities, these universal characteristics should be reconceptualised as two extremities on a sliding scale of some fundamental quality. Akin to Einstein’s general relativity, the notions of determinism and freedom are also relative to the observer. In other words, how we examine the fabric of reality (at large or small scale) results in a worldview that is either free or constrained by predictability. Specifically, quantum-scale measurements allow for an indeterministic universe, while larger-scale phenomena are increasingly easier to predict (with a corresponding decrease in the precision of the measurement tool). In short, determinism (or free-will) is not a physical property of the universe, but a characteristic of perception and an artifact of the measurement method used. While this maxim seems commonsensical and almost obvious, I believe the idea that both determinism and free-will are reconcilable features of this universe is a valid proposition that warrants further investigation.
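A toy simulation illustrates the scale dependence this maxim describes, assuming we let fair coin flips stand in for ‘micro’ events: each individual outcome is unpredictable, yet the aggregate behaves with near-perfect regularity.

```python
# Toy model: fair coin flips stand in for indeterminate "micro" events.
import numpy as np

rng = np.random.default_rng(0)
flips = rng.integers(0, 2, size=1_000_000)  # micro scale: each flip is "free"

print(flips[:10])    # no discernible pattern at the level of single events
print(flips.mean())  # macro scale: ~0.500, predictable to three decimals
```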

Maxim Two: Indeterminacy and free-will are naturally occurring results that emerge from the complex interaction of a sufficient number of interacting deterministic systems (actual mechanisms unknown). Once again we are falling back on the explanatory scapegoat of ‘emergence’; however, its use is partially justified in light of empirical developments. For example, investigations into fractal patterns and the modelling of chaotic systems seem to justify the existence of emergent complexity. Fractals are generated from a finite set of definable equations and result in an intensely complicated geometric figure with infinite regress, the surface features undulating with each magnification (interestingly, fractal patterns are a naturally occurring feature of the physical world, arising from biological growth patterns and magnetic field lines). Chaos is a similar phenomenon, beginning from reasonably humble initial circumstances and, due to an amalgamation of interfering variables, resulting in an overall system of indeterminacy and unpredictability (e.g. weather patterns). Perhaps this is the mechanism of human consciousness and freedom of will: individual (and deterministic) neurons contribute en masse to an overall emergent system that is unpredictable. As a side note, such a position also supports the possibility of artificial intelligence; build something that is sufficiently complex and ‘human-like’ consciousness and freedom will result.
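The chaos half of this claim is easy to demonstrate. Below is a minimal sketch using the logistic map x' = r*x*(1-x) at r = 4, a textbook chaotic system: the update rule is fully deterministic, yet two trajectories differing by one part in a billion become unrecognisably different within a few dozen steps.

```python
# Deterministic rule, chaotic behaviour: the logistic map at r = 4.
def logistic(x: float, r: float = 4.0) -> float:
    return r * x * (1 - x)

a, b = 0.2, 0.2 + 1e-9  # starting points differing by one part in a billion
for _ in range(50):
    a, b = logistic(a), logistic(b)

print(abs(a - b))  # of order 1: the two trajectories are now unrelated
```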

The two maxims proposed may seem quite obvious on cursory inspection; however, it can be argued that the proposal of a universe in which determinism and freedom of will form two alternative interpretations of a common, underlying reality is unique. Philosophically, the topic is difficult to investigate and discuss due to limitations on empirical knowledge and an increasing requirement for specialised technical insight into the field.

The ultimate goal of modern empiricism is to reduce reality to a strictly deterministic foundation. In keeping with this aim, experimentation hopes to arrive at physical laws of nature that are increasingly accurate and versatile in their generality. Quantum theory has since put this inexorable march on hold while futile attempts are made to circumvent the obstacle that is the uncertainty principle.

Yet perhaps there is a light at the end of the tunnel, however dim the journey may be. Science may yet produce a grand unified theory that reduces free-will to causally valid, ubiquitous determinism. More than likely, as theories of free-will become closer to explaining the etiology of this entity, we will find a clear and individually applicable answer receding frustratingly into the distance. From a humanistic perspective, it is hoped that some degree of freedom will be preserved in this way. After all, the freedom to act independently and an uncertainty of the future is what makes life worth living!

The period of 470–1000 AD encompassed what is now popularly referred to as the medieval ‘dark age’. During this time, human civilisation in the West saw a stagnation not only of culture but of society itself. It was a time of great persecution, societal uncertainty and religious fanaticism. One cannot help noticing the similarities between this tumultuous period and the one we experience today. Some have even proposed that we stand on the brink of a new era, one set to repeat the stagnation of the medieval dark ages, albeit with a more modern flavour. Current world events seem to support such a conclusion. If we are at such a point in the history of modern civilisation, what form would a ‘new dark age’ take? What factors are conspiring against humanity to usher in a period of uncertainty and danger? Do dark ages occur in predictable cycles, and if so, should we embrace rather than fear this possible development? These are the questions I would like to discuss in this article.

Historically, the dark ages were only labelled so in retrospect, by scholars reflecting upon the past and embracing humanistic principles. It is with similar observations that we cast judgement upon the society of today. Even so, humanity struggles for an objective opinion, for it can be argued that every great civilisation wishes to live within a defining period of history. Keeping such a proposition in mind, it is nevertheless convincing to proffer the opinion that we are heading towards a defining societal moment. A great tension seems to be brewing: on the one hand there is the increasing dichotomy between religion and science, with sharply drawn battle lines and an unflinching ideology. On the other, we have mounting evidence suggesting that the planet is on the verge of environmental collapse. It may only be a matter of time before these factors destabilise the dynamic system that is modern society past its engineered limits.

Modern society seems to have an unhealthy obsession with routine and predictability. The uncertainty that these potential disasters foster acts to challenge this obsession, to the point that we seek reassurance. Problems arise when this reassurance takes the form of fanatical (and untenable) religious, philosophical or empirical belief structures. Such beliefs stand out like a signalling lighthouse, the search beam symbolising stability and certainty in stark contrast to the murky, dangerous waters of modern society. But just as the lighthouse guards against the danger of rocks, so too does the pillar of belief warn against corruption. For it is, sadly, intrinsic human nature to take advantage of every situation (to guarantee the survival of oneself through power and influence), and in combination with personality (a propensity towards exploitation of others), beliefs can be twisted to ensure personal gain or the elimination of opposition. It seems that such a phenomenon could be acting today. Religion provides a suitable system upon which to relieve mental anguish and distress at the state of the world (reassurance that a higher power remains in control). So too does science, as it proscribes the fallacies of spiritual belief while maintaining a similarly blind ‘faith’ in securing a technological solution to humanity’s problems. In that respect, empiricism and religion are quite similar (much to their mutual chagrin).

In such a system we see that de-stabilisation is inevitable; a handful of belief structures emerge from the chaos as dominant and compete for control. Progressively extreme positions are adopted (spurred on by manipulators exploiting the situation for personal gain), which in turn sets up the participants for escalating levels of conflict. Our loyalty to the group that aims to secure its survival ultimately (and ironically) leads to the demise of all involved. It is our lack of tolerance and our subservience to evolutionary mechanisms, coupled with a lack of insight into both our internal nature as persons and our social interactions, that precipitates such a conclusion.

This brings the article to its midpoint, and the suggestion that three main factors are responsible for the development of a new dark age.

Human belief systems

As argued above, humans have an intrinsic desire to subscribe to certain world views and spiritual beliefs. Whether due to a more fundamental need for explanation in the face of the unknown (being prepared for the unexpected) or simply the attraction of social groupings and initiation into a new hierarchy, the fact remains that humans like to organise their beliefs according to a certain structure. When other groups are encountered whose beliefs differ in some respect, the inevitable outcome is either submission (of the weaker group) or conflict. Perhaps an appropriate maxim summing up this phenomenon is ‘if you can’t convert them, kill them’. Thus we see, at one level, our beliefs acting as a catalyst for conflict with other groups of people. At a higher level, such beliefs are then modified or interpreted in varying ways so as to justify the acts committed, reassuring the group of its moral standing (the enemy is sub-human, ‘infidels’, wartime propaganda etc).

Belief is also a tool used to create a sense of identity, another feature that conscious beings seem to require. Those lacking in individuality and guidance take to belief systems in order to gain stability within their lives. Without identity we operate at a reduced capacity, as nothing more than automatons responding to stimuli, so in this respect belief can provide a useful source of motivation and structure for the individual. Problems arise when beliefs become so corrupted, or conflict so great, that any act can be justified without regard for long-term planning; only the complete destruction of the enemy is a viable outcome. The conflict spirals out of control and precipitates major change; another risk factor making a new dark age a plausible reality.

Economic/Political Collapse

Numerous socio-economic experiments have been conducted over the few millennia that organised civilisation has existed on this planet, with varying degrees of success. Democracy seems to be the greatest windfall of modern politics, ushering in a new era of liberation and equity. But has its time come to an end? Some would argue that the masses need control if certain standards are to be maintained. While a small proportion of society would be capable of living under such an arrangement, the reality that a large swath of the population cannot co-exist without social management and punitive methods calls into question the ultimate success of our political system. Communism failed spectacularly, most notably for its potential for abuse through corruption and dictatorship. Here we have the unfortunate state of affairs that those who come into power are also those who lack the qualities one would expect of a ruler. Islamic states don’t even enter the picture; the main aim of such societal systems is the establishment of a theocratic state that is perhaps even more susceptible to abuse (the combination of corrupted beliefs that justify atrocities, and the unification of church with state causing conflict with other populations whose beliefs differ).

Are democracy and capitalism running our planet into the ground? Some would point to the recent stock market collapse and record inflation as signs that, yes, perhaps human greed is allowed too much leeway. Others merely shake their heads and point to the cyclical nature of the economy; “it’s just a small downturn that will soon be corrected”, they proclaim. Mounting evidence seems to counter such a proposition, as rising interest rates, property prices and living costs force the population to work more and own less. Is our present system of political control and economic growth sustainable? Judging by recent world events, perhaps not; thus another factor that could lead to the establishment of a new dark age.

Ecological Destruction

Tied closely to the policies of modern politics and economics is the phenomenon of ‘global warming’, or more broadly, a lack of respect for our biosphere. It seems almost unbelievable that humanity has turned a blind eye to the mounting problems occurring within our planet. While global warming has arguments both for and against, I doubt that any respectable empiricist, or indeed responsible citizen, could deny that humanity has implemented some questionable environmental practices in the name of progress. Some may argue that the things we take for granted (even the laptop upon which I type this article) would not have been possible without such practices. But when the fate of the human race hangs in the balance, surely this is a high price to pay in such a high-stakes game. Human nature surely plays a part in this oversight; our brains are wired to consider the now, as opposed to the might or could. By focusing on the present in such a way, the immediate survival of the individual (and the group) is ensured. Long-term thought is not useful in the context of a tribal society where life is a daily struggle. Again we are hampered by primitive mechanisms that have outlived their usefulness. In short, humanity has advanced at a sufficiently rapid pace to overtake the ability of our faculties to adapt. Stuck playing a game of catch-up (the value and importance of which most neglect to see), society is falling short of the skills it needs to deal with the challenges that lie ahead. The destruction of this planet, coupled with our inability to reliably plan for and deal with future events, could (in combination with previous factors such as deliberate political and economic oversight of the problem) precipitate a new dark age in society.

But is a new dark age all doom and gloom? Certainly it would be a time of mass change and potential for great catastrophe, but an emergence out the other side could herald a new civilisation well equipped to manage the challenges of an uncertain future. Looking towards the future, one can’t help but feel a sense of trepidation. Overpopulation, dwindling resources and an increasing schism between religion and science are all contributing towards a great change in the structure of society. While it would be immoral to condone and encourage such a period in light of the monumental loss of order, perhaps it is ‘part of the grand plan’, so to speak, keeping humanity in check and ensuring that the Earth maintains its capacity for life. In effect, humanity is a parasite that has so thoroughly infected its host that the host’s life-giving organs are beginning to collapse. Perhaps a new dark age will provide the cleansing of mind and spirit that humanity needs to refocus its efforts on the things that really matter: each individual attaining personal perfection and living as the best they can possibly be.

When people attempt to describe their sense of self, what are they actually incorporating into the resultant definition? Personality is perhaps the most common conception of self, with vast amounts of empirical validation. However, our sense of self runs deeper than such superficial descriptions of behavioural traits. The self is an amalgamation of all that is contained within the mind; a magnificent average of every synaptic transmission and neuronal network. Like consciousness, it is an emergent phenomenon (the sum is greater than the parts). But unlike consciousness, the self ceases to be when individual components are removed or modified. For example, consciousness is virtually unchanged (in the sense of what it defines – directed, controlled thought) by the removal of successive faculties. We can remove physical brain structures such as the amygdala and still utilise our capacities for consciousness, albeit losing a portion of the informative inputs. The self, however, is a broader term, describing the current mental state of ‘what is’. It is both descriptive, providing a broad snapshot of what we are at time t, and prescriptive, in that the sense of self influences how behaviours are actioned and information is processed.

In this article I intend to firstly describe the basis of ‘traditional’ measures of the self; empirical measures of personality and cognition. Secondly I will provide a neuro-psychological outline of the various brain structures that could be biologically responsible for eliciting our perceptions of self. Finally, I wish to propose the view that our sense of self is dynamic, fluctuating daily based on experience and discuss how this could affect our preconceived notions of introspection.

Personality is perhaps one of the most measured variables in psychology. It is certainly one of the most well-known, through its portrayal in popular science as well as self-help psychology. Personality could also be said to comprise a major part of our sense of self, in that the way in which we respond to and process external stimuli (both physically and mentally) has major effects on who we are as an entity. Personality is also incredibly varied; whether due to genetics, environment or a combination of both. For this reason, psychological study of personality takes on a wide variety of forms.

The lexical hypothesis, proposed by Francis Galton in the 19th century, became the first stepping stone from which the field of personality psychometrics was launched. Galton posited that the sum of human language, its vocabulary (lexicon), contains the necessary ingredients from which personality can be measured. During the 20th century, others expanded on this hypothesis and refined Galton’s technique through the use of factor analysis (a mathematical model that summarises common variance into factors). Methodological and statistical criticisms of this method aside, the lexical hypothesis proved to be useful in classifying individuals into categories of personality. However, this model is purely descriptive; it simply summarises information, extracting no deeper meaning and providing no background theory with which to explain the etiology of such traits. Those wishing to learn more about descriptive measures of personality can find this information under the headings ‘The Big Five Inventory’ (OCEAN) and Hans Eysenck’s Three Factor model (PEN).
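For readers curious what ‘summarising common variance into factors’ looks like in practice, here is a minimal sketch using simulated ratings on six trait adjectives. The two underlying traits and all loadings are invented for the demonstration; with real questionnaire data the factors would be estimated rather than known in advance.

```python
# Simulated ratings: two latent traits exist by construction here; on real
# questionnaire data the factors would be estimated rather than known.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)
n = 500
latents = rng.normal(size=(n, 2))  # hidden "sociability" and "diligence"

adjectives = ["talkative", "outgoing", "quiet", "organised", "careful", "messy"]
loadings = np.array([
    [0.9, 0.0], [0.8, 0.0], [-0.7, 0.0],  # load on the first trait
    [0.0, 0.8], [0.0, 0.9], [0.0, -0.6],  # load on the second trait
])
ratings = latents @ loadings.T + rng.normal(scale=0.5, size=(n, 6))

fa = FactorAnalysis(n_components=2).fit(ratings)
for adj, row in zip(adjectives, fa.components_.T):
    print(f"{adj:>10}: {row.round(2)}")  # recovered loadings per adjective
```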

Neuropsychological methods of defining personality are less reliant on statistical methods and utilise a posteriori knowledge (as opposed to the lexical hypothesis, which relies on reasoning and deduction). Thus, such theories have a solid empirical background, with first-order experimental evidence supporting the conclusions reached. One such theory is the BIS/BAS (behavioural inhibition/activation system). Proposed by Gray (1982), the BIS/BAS conception of personality builds upon individual differences in cortical activity to arrive at the observable differences in behaviour. Such a revision of personality turns the tables on traditional methods of research in this area, moving away from superficially describing the traits towards explaining the underlying causality. Experimental evidence has lent support to this model through direct observation of cortical activity (functional MRI scans). Addicts and sensation seekers are found to have high scores on behavioural activation (associated with increased pre-frontal lobe activity), while introverts score high on behavioural inhibition. This seems to match our intuitive preconceptions of these personality groupings; sensation seekers are quick to action, in short they tend to act first and think later. Conversely, introverts act more cautiously, adhering to a policy of ‘looking before they leap’. Therefore, while not encapsulating as wide a variety of individual personality factors as the ‘Big Five’, the BIS/BAS model and others based on neurobiological foundations seem to be tapping into a more fundamental, materialist/reductionist view of behavioural traits. The conclusion here is that directly observable events and the resulting individual differences arise from specific regions in the brain.

Delving deeper into this neurology, the sense of self may have developed as a means to an end; the end in this case being the prediction of others’ behaviour. Our sense of self and consciousness may thus have evolved as a way of internally simulating how our social competitors think, feel and act. V. Ramachandran (M.D.), in his Edge.org essay, calls upon his neurological experience and knowledge of neuroanatomy to provide a unique insight into the physiological basis of self. Mirror neurons are thought to act as mimicking simulators of external agents, in that they show activity both when performing a task and while observing someone else performing the same task. It is argued that such neuronal conglomerates evolved due to social pressures, as a method of second-guessing the possible future actions of others. The ability to direct these networks inwards was an added bonus. The human capacity for constructing a valid theory of mind also gifted us with the ability to scrutinise the self from a meta-perspective (an almost ‘out-of-body’ experience, a la a ‘Jiminy Cricket’ style conscience).

Mirror neurons also act as empathy meters, firing during moments of emotional significance. In effect, our ability to recognise the feelings of others stems from a neuronal structure that actually elicits such feelings within the self. Our sense of self, thus, is inescapably intertwined with other agents’ selves. Like it or not, biological dependence on the group has resulted in the formation of neurological triggers which fire spontaneously and without our consent. In effect, the intangible self can be influenced by other intangibles, such as emotional displays. We view the world through ‘rose-coloured glasses’, with an emphasis on theorising the actions of others through how we would respond in the same situation.

So far we have examined the role of personality in explaining a portion of what the term ‘self’ conveys. In addition, a biological basis for self has been introduced which suggests that both personality and the neurological capacity for introspection are both anatomically definable features of the brain. But what else are we referring to when we speak of having a sense of self? Surely we are not doing this construct justice if all that it contains is differences in behavioural disposition and anatomical structure.

Indeed, the sense of self is dynamic. Informational inputs constantly modify and update our knowledge banks, which in turn has ramifications for self. Intelligence, emotional lability, preferences, group identity, proprioception (spatial awareness); the list is endless. Although some of these categories of self may be collapsible into higher-order factors (personality could incorporate preference and group behaviour), it is arguable that to do so would result in the loss of information. The point here is that looking at the bigger picture may obscure the finer details that can lead to further enlightenment on what we truly mean when we discuss self.

Are you the same person you were 10 years ago? In most cases, if not all, the answer will be no. Core traits such as temperament may remain relatively stable; however, individuals arguably change and grow over time. Thus, their sense of self changes as well. Some people may become more attuned to their sense of self than others, developing a close relationship through introspective analysis. Others, sadly, seem to lack this ability of meta-cognition; thinking about thinking, asking the questions of ‘why’, ‘who am I’ and ‘how did I come to be’. I believe this has implications for the growth of humanity as a species.

Is a state of societal eudaimonia sustainable in a population that has varying levels of ‘selfness’? If self is linked to the ability to simulate the minds of others, which in turn depends upon both neurological structure (genetic mutation possibly reducing or modifying such capacities) and empathic responses, the answer to this question is a resounding no. Whether due to nature or nurture, society will always have individuals who are more self-aware than others and, as a result, more attentive to the mental states of others. A lack of compassion for the welfare of others, coupled with an inability to analyse the self with any semblance of drive and purpose, spells doom for a harmonious society. Individuals lacking in self will refuse, through ignorance, to grow and become socially aware.

Perhaps collectivism is the answer; forcing groups to cohabit may foster an increased appreciation for theory of mind. If the basis of this process is mainly biological (as it would seem to be), such a policy would be social suicide. The answer could dwell in the education system. Introducing children to the mental pleasures of psychology and, at a deeper level, philosophy may lead them to recognise the importance of self-reflection. The question here is not only whether students will grasp these concepts with any enthusiasm, but also whether such traits can be taught via traditional methods. More research must be conducted into the nature of the self if we are to have an answer to this quandary. Is self related directly to biology (we are stuck with what we have), or can it be instilled via psycho-education and a modification of environment?

Self will always remain something of a mystery due to its dynamic and varied nature. We look to science with hope and encourage its attempts to pin down the details of this elusive subject. Even if this quest fails to produce a universal theory of self, perhaps it will succeed in shedding at least some light onto the murky waters of self-awareness. In doing so, psychology stands to benefit from both a philosophical and a clinical perspective, increasing our knowledge of the causality underlying disorders of the self (body dysmorphia, depression and suicide, self-harm).

If you haven't already done so, take a moment now to begin your journey of self-discovery; you might just find something you never knew was there!

In contrast to our recent discussions on religious extremism, transhumanism offers an alternative position that is no less radical yet potentially rewarding. The ideology of transhumanism is comparable to secular humanism in that both advocate the importance of individuality and personal growth. Where the two positions diverge, however, is with regard to the future of human evolution. In this article I would like first to offer a broad definition of transhumanism, followed by the arguments both for and against its implementation. Finally, I would like to discuss the possibility of society adopting a transhumanist position in order to fully realise our human potential.

Transhumanism proposes that in order to take full advantage of our natural abilities, a complete embrace of technological progress is necessary. Specifically, and where this position differs from the more conservative and broader topic of humanism, transhumanists believe that self-enhancement through the use of emerging technology is an entirely justifiable means to this goal. The proposed modifications span a large variety of breakthrough technologies; individual transhumanists vary in their preferences, although the end goal is similar. Cryonics, mind-digitalisation, genetic engineering and bionic enhancement are all possible methods proposed to usher in a 'post-human' era.

A secondary goal of transhumanism (flowing as a consequence from the first) is the elimination of human suffering and inadequacy. By removing mental and physical inequalities through a process of self-directed evolution (enhancement or prenatal genetic screening and selection), the transhumanist argues that social divides will also be eliminated. Specifically, improving human faculties through cybernetic augmentation is thought to close the gap between intellects, putting society on an equal intellectual footing. Likewise, the genetic engineering approach hopes to select for intellect and physical prowess either pre-birth or post-birth through genetic modification. Mind-transfer, or digitalisation, proposes to extend both our lifespans (indefinitely) and our mental capacities; the trade-off here is the loss of the physical.

Many transhumanists regard such enhancements as not only natural, but necessary if humanity is to truly understand the world in which we live. They argue that the natural process of evolution and 'old-fashioned' practice and training are too slow to equip us with the skills that future research will demand. One example is space travel. Human bodies are arguably not designed for prolonged exposure to the rigours of space. Bones become brittle, and radiation vastly increases the chances of cancer developing (not to mention the unknown psychological and physiological effects of permanent space habitation). Eliminating such 'weaknesses' would allow humans to conquer space more efficiently by removing the need for costly habitation modules and protective shielding. But does self-augmentation create more problems than it solves?

Certainly, from a moral point of view, there is a multitude of arguments levelled at transhumanism. While many of these arguments hold merit, I intend to argue that once the initial opposition based on emotional responses is set aside, the core principles of transhumanism really can improve the quality of life for many disadvantaged people on this planet. The attacks on transhumanism come in many forms, but I will concentrate here on the moral implications of endorsing this position.

The threat to morality posed by transhumanism has been raised by the theistic and scientific communities alike. The argument postulates that 1) 'contempt of the flesh' is immoral, in the sense that rejecting our natural form and processes is also a rejection of god's power and intent, and 2) rather than removing divides, transhumanism will actually operate in reverse, creating increased discrepancies between those with the ability to improve themselves and those without; the creation of a 'super-class' of human and vast disparities in wielded power. The first point is easy enough to dismiss (from an atheistic point of view). Delving deeper, philosophical naturalism proposes, to a degree, that natural effects arise from natural causes; the introduction of artificial causes thus results in artificial effects. The problem lies in our being created from natural 'stuff': how can we predict with any accuracy or confidence the outcome of unnatural processes? The second point proposes that democracy itself may be threatened by transhumanism. The potential for abuse by an emergent 'superhuman' class is easy enough to see. The only rebuttal I offer here is the hope that self-improvement would aim to enhance not only our rational faculties but also our emotional ones; humans would naturally seek to improve our ability to empathise, cooperate and generally act in a morally acceptable manner.

The divide between the intellectually and physically rich and poor can only be closed if transhumanism is enacted uniformly. Unfortunately, the capitalist society in which we live most likely ensures that only the monetarily rich will benefit. Since money does not necessarily equate with moral goodness or intelligence, we are in dire straits: transhumanist ideology would quickly be abandoned in the pursuit of dominance and power. On this view, transhumanism is probably the world's most dangerous idea (Fukuyama, 2004). The potential for great evil is dizzying. Fortunately, the reverse is also true.

The elimination of inequality is a noble goal of transhumanism, and it is attainable from two main angles of attack: through the means, universal adoption of technology that removes the necessary conditions for suffering to occur (e.g. disability, sustenance, shelter; consider uploaded minds stored on digital media), and through the ends, augmentation and improvement that creates superior organisms living harmoniously. Perhaps this is a necessary step for humanity to fully realise its potential; taking charge of our species' destiny in a more directed and controlled manner than blind evolution can ever hope to achieve.

But arguably, the transhumanist dream is already happening. Society, in a way, is habituating us to the changes that must occur if transhumanism is to be adopted. Psychologically and philosophically, the ideas are out there and being debated regularly. The details, while not finalised, are being worked over and improved using (mostly) rational methodology. The internet and other communications technologies have begun the process of 'disembodiment' that the digitalisation of human minds surely requires. The internet has facilitated an exponential growth of non-traditional social interaction, existing mostly on the digital plane. Thus, we are already developing the mindsets and modifications to etiquette that transhumanism requires. Cosmetic surgery, while not an altogether morally appropriate example (due to its use and abuse), is also part of a trend in society towards self-modification. On the other hand, negative examples such as self-harm and anorexia are salient reminders of how these trends can manifest themselves in untoward ways.

Therefore, the fate of transhumanism rests squarely on its ability to tread carefully across a moral tightrope; too liberal and abuse is inevitable, too conservative and its full potential goes unrealised. Left-wing supporters of transhumanism (Marvin Minsky et al.) are, unfortunately, the main public face of this ideology. Their ideas are too liberal, and dangerous if used as a springboard for implementing transhumanist principles. Such examples only serve to highlight the potential for this position to be abused for personal gain: aging scientists, desperate to continue life without the frailties of decaying flesh, looking to the future the way a boy dreams of living out the space-age tales of science-fiction novels. This is not what transhumanism is supposed to be about. It is the practical realisation of a humanist life philosophy; how we might use the technological tools at our disposal to create a utopian society and encourage exponential individual growth.

Unfortunately, many obstacles remain in the path of a future where humanity transcends its shortcomings. Morally, the question comes down to a simple decision: why should we be afraid to improve what we currently leave to chance? Surely it is 'more moral' to realise the potential of every individual than to leave it to the roll of a die. Giving a child a life in which all opportunities are open to them, rather than one of disability and suffering, has to be morally preferable. The only uncertainty in this equation is whether the ends justify the means.

Transhumanist ideals must be regulated and monitored if they are to be implemented appropriately and uniformly. Just as there are people now who choose not to embrace modern technology, so too will there be people who choose not to augment themselves with improvements. Such people must be respected if transhumanism is to be morally just and is not to relegate groups of people to lower levels of status, or to the exhibits of future museums. Just as liberty underwrites the choice to proceed with technological advancement, so too must the liberty of those who decline be protected and cherished. After all, diversity is what makes us human in the first place. To sacrifice that for the sake of 'progress' would be a travesty and an ideological genocide of the worst kind.

Extremism today comes in many forms, not only the clear-cut religious fundamentalist type so often portrayed in popular media. Strong atheism could be described in the same vein as religious extremism; the two share similar traits (vocal opposition to conflicting views, 'fundamentalist' leaders who are unwavering in their views, use of strong, emotive and persuasive language). Many atheists would balk at being compared to theists, and to a small degree they have a valid point. Atheists do tolerate and consider opposing views with more appreciation than religion seems to. But these views must come from their own group; generally, 'strong' atheists will only validate the arguments of other atheists or religiously unaffiliated individuals. This is comparable to the Christian considering an alternative viewpoint on minor biblical details while neglecting to even acknowledge the opinion of the skeptic. Atheists are more similar to their religious brethren than either side would care to admit. In this article I intend to argue that there are two main processes at work, interacting to perpetuate extremism on both sides of the theistic debate.

Cognitive dissonance is a psychological phenomenon in which the mind acts automatically and unconsciously to reduce tension between conflicting thoughts. It is thought to occur when the individual experiences two contrasting cognitions, or when behaviour does not match belief. The theory broadly defines 'cognitions' as any mental event (emotion, attitude, belief). Dissonance, or tension between cognitions, is inherently unpleasant for human brains, and this unpleasantness forms the motivation to reduce dissonance by either filtration (ignoring, denying, reducing) or modification (changing cognitions to increase consistency). Humans dislike inconsistency because we are hard-wired to trust our mental processes; they form the primary source of evidence in our dealings with the outside world. Internal hypocrisy not only causes inefficiencies due to inaction and conscious internal debate but also overwhelms our information-processing capabilities. In short, mental hypocrisy prevents us from knowing with certainty what is happening both outside and inside our brains.
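To make the filtration/modification distinction concrete, here is a toy numerical sketch. To be clear, the representation of cognitions as numbers, the function names and the update rules are all invented for illustration; they carry no claim about the formal structure of dissonance theory itself:

```python
# A toy sketch of dissonance reduction (illustrative only; all names
# and arithmetic are hypothetical, not an established model).

def dissonance(belief: float, incoming: float) -> float:
    """Tension as the distance between two cognitions in [-1, 1]."""
    return abs(belief - incoming)

def filtration(belief: float, incoming: float, discount: float = 0.9) -> float:
    """Ignore/deny: keep the belief, damping the incoming cognition's impact."""
    return belief + (1 - discount) * (incoming - belief)

def modification(belief: float, incoming: float, rate: float = 0.5) -> float:
    """Change the belief itself, shifting it toward the incoming cognition."""
    return belief + rate * (incoming - belief)

belief, incoming = 0.8, -0.6          # strongly held view vs contrary evidence
print(dissonance(belief, incoming))   # 1.4 -> high tension
print(filtration(belief, incoming))   # ~0.66: belief barely moves
print(modification(belief, incoming)) # 0.1: belief substantially revised
```

Both strategies reduce the measured tension; the difference lies in whether the incoming evidence or the existing belief absorbs the change.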

Group behaviour encapsulates a myriad of interconnected theories, and the end result is a very complicated process. Such behaviours can be simplified if one steps back from the theoretical details and considers the overall picture. Groups evolved long ago out of our ancient primate lineage. Primates are social animals, so it makes sense that closely related species share similar behavioural properties. Humans have vastly improved upon the rudimentary social groupings evident in primate populations, but the basics are all there. 'Social grooming' in primates closely mirrors our own close relationships, although we tend to augment physical contact with communication. Keeping relationships functioning can be hard work, as a large percentage of an individual's time is spent maintaining friendships and 'grooming' others. As a way of improving the efficiency of this process, groups may have begun to form around individuals with similar beliefs about the world. This reduced the time needed to maintain loyalty, as group norms (unwritten rules of conduct and belief) gradually arose to police the attitudes and behaviours of members, ensuring their similarity.

Evolution soon favoured group-like behaviour, as many hands make light work. Hunting and gathering became much easier and more efficient overall (perhaps too efficient, as nomadic tribes cleared out entire areas before moving on to greener pastures; a sign of things to come?). Thus the fundamental human need to belong was written into our evolutionary makeup. Humans began to fear ostracism by the group, as quite often it meant certain death, whether from competing tribes or from the hardships of finding food and shelter alone. We now have the basic outline of group behaviour: the group maintains similar beliefs and worldviews through an evolutionary predisposition (group norms and ostracism) to minimise infighting and improve the individual's chances of survival (through the survival of the group). Overall cohesiveness and efficiency is the end product; in effect, individuals sacrifice some of their freedom for the good of the many.

Group identification could be construed as a defence mechanism that protects against assimilation by competing groups. Individuals soon learnt to identify their fellow group members through physical characteristics (race, gender, skin colour, clothing) as well as commonly held beliefs. Outgroups were a threat, whether through attempts to claim resources, territory or mating partners. Thus, when physical combat was too risky (and it often was), a psychological subterfuge evolved: outgroup beliefs attempted to erode the ingroup and convert members en masse to their cause. This explains the harshness with which we treat traitors, even in modern times (treason often carries the death penalty, not to mention social stigmatisation).

It is my proposition that the relationship between group identification and extremism is moderated by the degree of cognitive dissonance experienced by the individual. More specifically, the extent to which a person identifies with the group's beliefs and practices (at the highest level becoming totally enmeshed within the group and losing a sense of self) is not directly predictive of extremism. Rather, the individual must also be predisposed to 'filtration' and 'modification' of dissonant cognitions for extremist positions to be adopted. This explains individuals who identify strongly with a chosen group but are content to live their lives without disturbing the beliefs of others, remaining open to at least considering conflicting ideas. Extremists, on the other hand, are not at all open to criticism of their group. Their group-protecting instincts instead kick into 'hyper-drive', fuelled by active denial or modification of incoming cognitions that are dissonant with their own. Hence the phrase 'like talking to a brick wall'. Extremists simply cannot receive criticisms in their unedited form; their brains act automatically to shield their beliefs from corruption, thus ensuring the protection and perpetuation of the group.
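For readers who prefer the statistical framing: 'moderation' here means an interaction effect. Below is a minimal simulation sketch of the hypothesis, in which extremism is driven mainly by the product of identification and dissonance-proneness rather than by either alone. All variable names and coefficients are hypothetical, chosen purely to illustrate the shape of the claim, not drawn from any real dataset:

```python
# Hypothetical illustration of a moderation (interaction) effect.
# Nothing here is fitted to real data; coefficients are invented.
import numpy as np

rng = np.random.default_rng(0)
n = 500
identification = rng.normal(size=n)    # strength of group identification
dissonance_prone = rng.normal(size=n)  # tendency to filter/modify cognitions

# Hypothesised process: identification alone barely matters; extremism
# emerges from its interaction with dissonance-proneness.
extremism = (0.1 * identification
             + 0.1 * dissonance_prone
             + 0.8 * identification * dissonance_prone
             + rng.normal(scale=0.5, size=n))

# Recover the coefficients with ordinary least squares.
X = np.column_stack([np.ones(n), identification, dissonance_prone,
                     identification * dissonance_prone])
coefs, *_ = np.linalg.lstsq(X, extremism, rcond=None)
print(coefs)  # the interaction term (~0.8) dominates the main effects
```

The point of the sketch is simply that a strong identifier with a low dissonance-proneness score would, under this hypothesis, show little extremism; both ingredients are required.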

I believe this is especially apparent in the Middle East conflict. Suicide bombers make the ultimate sacrifice; total group identification mixed with cognitive dissonance reactions and a corrupted interpretation of a religion is a recipe for disaster. Any religion that promises an afterlife of multiple partners in exchange for the mass killing of largely innocent people is, in my opinion, not religion but religion misconstrued into something evil. This theory can be applied to any group, religious or otherwise. To an extent, the atheist movement can also be said to suffer from a degree of extremism, although I hope the majority of its members are both open-minded and empirical enough to maintain their professionalism and consider all the evidence, not just that which happens to agree with pre-established beliefs. That said, it is also a timely reminder to such persons that the instinctual and automatic nature of human thought processes must be made conscious if the evidence is to be weighed with a pure methodology.

There may not be much that we as a society can do to overcome the maddening frustration that cognitive dissonance breeds. There is nothing more pitiful and wasteful than an individual who 'goes down with their ship', so to speak. The human mind is a beautiful creation; some people need to realise its potential and use it to the degree for which it is designed. To do nothing is a waste not just of one's own mental capabilities, but of the potential of every mind in the group. Religious (and secondarily atheist) extremism must look beyond its own group and towards the future if the 'zombification' that cognitive dissonance breeds is to be overthrown.

But how could this be done practically? Is it acceptable, even moral, to allow extremist groups to continue down their self-destructive paths until their beliefs are so twisted, warped and modified (in order to accommodate rising dissonant cognitions) that they end up doing more harm than good? Some might argue we are already at this point in the Middle East; given the level of carnage that region is currently experiencing, to do nothing would be morally unacceptable. Thus we all share the responsibility for this situation; thinkers, dreamers, philosophers and scientists alike must work together to identify the factors that underlie extremism and put in place a plan of action that can reconcile group differences and embrace humanity in a sea of tolerance. Perhaps we should redouble our efforts to identify extraterrestrial life; nothing unites humanity like a threat from outer space (well, it works in the movies!).

Secular humanism is fast becoming one of the most popular ideological fads of this age. An increasing unrest is brewing within the world's intellectual elite as religion and atheism go head to head. As we stand at this crossroads, it is important to take a moment and reflect upon what this trend means for a modern society. In this article I aim to examine the current conflict between atheism and theism, and how it is driving the opposing parties towards increasing fundamentalism. Secondly, I wish to introduce the life philosophy of Secular Humanism, an alternative value system that allows spirituality and intellectual skepticism to co-exist.

Teleological thinking seems to dominate the human mind, as we attempt to look beyond what is in front of us and seek some deeper meaning or absolute truth about the world. 'Stronger' religions gain footholds among the populace, then snowball and spread like contagion through the minds of the world. In this context, a strong religion is one that 1) seems plausible to the agent, 2) appeals to human nature and 3) is easily passed between people. Weak religions, by contrast, could be likened to cults: ideas that appeal to a small group of deluded individuals and involve overly complex ritualistic ceremonies (reducing their appeal through a lack of understanding). Thus religion as we know it is a natural, emergent outcome of this process; easily communicable between individuals and groups alike, regardless of nationality or ethnicity, fiercely infectious, and appealing to the inner human need to explain the unknown.
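To make the contagion metaphor concrete, here is a toy sketch in the spirit of epidemiological threshold models. The three scores map loosely onto the criteria above, and every number, name and threshold is invented for illustration; this is not drawn from any actual model of religious transmission:

```python
# Toy 'idea contagion' model (illustrative only; the scoring is invented).

def idea_spreads(plausibility: float, appeal: float,
                 transmissibility: float,
                 contacts_per_carrier: float = 5.0) -> bool:
    """An idea takes hold when each carrier converts more than one new
    person on average, an analogue of the epidemiological R0 > 1 rule."""
    p_adopt = plausibility * appeal * transmissibility  # each in [0, 1]
    return p_adopt * contacts_per_carrier > 1.0

# A 'strong' religion scores highly on all three criteria...
print(idea_spreads(0.8, 0.9, 0.7))  # True  (expected converts ~2.5)
# ...while a 'weak' cult fails the threshold and stays marginal.
print(idea_spreads(0.3, 0.4, 0.2))  # False (expected converts ~0.1)
```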

Spirituality is undoubtedly an intrinsically human characteristic, dating back to the birth of civilisation. It therefore seems illogical to try to deny that which comes as second nature. It can be argued, however, that religion in its most pure and authentic form is becoming increasingly scarce. The core principles of religion are not to blame; spirituality is a human trait that should be protected at all costs. Rather, it is the distortion of religion by those in power that creates problems. The Dark Ages in medieval Europe are a prime example of such corruption. During this period of cultural and intellectual stagnation, religion came to be recognised as a source of power and control over a populace. Tapping into and exploiting the human 'soft spot' for spirituality not only changed the way in which religion was taught, but created a fusion of church and state. Fortunately this has been revised in most (I use this word with emphasis, given the presence of Middle Eastern governments based on an interpretation of religion) modern constitutions, and a separation of church and state is recognised as not only fair and just, but also the ethically and morally correct thing to do.

In more recent times, the rising rate of education and the promotion of scientific principles have culminated in an emerging trend towards strong atheism; that is, explicitly declared, proud atheism, with individuals actively asserting their disbelief in god(s) and generally rejecting traditional religious ritual. Strong atheism has been spearheaded (most prominently) by the biologist Richard Dawkins and the philosopher Daniel Dennett, two very vocal advocates of disbelief. While their methods and tone could be construed as (ironically) verging on the fundamentalist, it has been argued that such a strong stance is necessary in order to counter the matching (and disturbing) rise in fundamentalist religiosity. I propose that it is no coincidence that this increase (particularly in radical Islamic groups) is occurring in third-world countries that lag behind the Western world. Original religious teachings are becoming distorted as the evil power of theism is once again realised and abused by those in authority.

Atheism is finally becoming 'fashionable' (for lack of a better word). While the concept has existed since ancient Greece (indeed, Socrates was executed for impiety towards the Greek gods), those who professed it were met with unflinching retribution. This is where we really get to the crux of the issue with religion: the way in which it can be corrupted to play out the delusions of a powerful few, and the way in which its teachings are so often taken literally. Adding to the problem is religion's unwavering stance against criticism and introspection. This is where modern society comes in, with its rising distaste for those who do not have the courage to look inward and accept the possibility of error. The education system (to a degree) promotes a healthy skepticism and questioning attitude, which is finally causing a critical mass of doubters to turn around and challenge the monopoly that religion has held over our minds for so long.

There are those of us who seem to have been born with a natural inclination towards atheism, others who sit in the middle, content to hold some belief while doubting the minor details, and finally the fundamentalists, who are indoctrinated at an early age. It is to this middle group that this article appeals. Secular Humanism is not merely a collection of ideas and philosophical stances; it matches the ability of religion to provide a framework by which to guide conduct. Some of us seem to require such structure within our belief systems, as it seems to be human nature to hold a cynical attitude towards the behaviour of others and our own capacity for self-control.

Secular Humanism was formalised in 1980 by Paul Kurtz, with the original declaration undergoing several revisions and now supported by a plethora of leading intellectuals and scientists. It is an amalgamation of all things scientific and intellectual; a guide to living created by smart people, for smart people who want the structure and organisation of a religion but also the freedom to criticise, revise and generally act in an inquisitive manner.

Ten main principles form the basis of the Humanist declaration. None are unexpected, having been selected for their universality and applicability with a scientific ethos in mind. Secular Humanism promotes the ideals of:

  • Free inquiry
  • Separation of church and state
  • The ideal of freedom
  • Ethics based on critical intelligence
  • Moral education
  • Religious skepticism
  • Reason
  • Science and technology
  • Evolution
  • Education

All are self-explanatory, so I will not go into the finer details. Suffice it to say, the nub of the proposition is that humans should have the fundamental right to choose the course of their lives. Children should not be 'born' into a religion; essentially, every person is born an implicit atheist (having no knowledge of religion, they cannot make an informed choice regarding their affiliation). Equally important principles of Humanism are the freedom to evaluate critically and the empowerment of individuals to make their own moral decisions.

Predictably, the first counter-blow from religion comes in the form of a cynical attack: 'People are incapable of making their own moral choices; religion is needed in order for people to behave morally.' This argument equates religion itself with morality, which is simply not true. Religious advocates should be gracious enough to extend the same level of faith to their fellow humans that they do to a faceless, silent god.

Certainly, there are those in society who do lack the level of freedom required to adopt the Secular Humanist position. This lack of freedom predisposes them to commit crimes, ruminate over inappropriate thoughts and otherwise act maliciously towards society. Whether due to biological malformation or environmental upbringing (or a combination of both), such individuals simply cannot be held responsible (in the sense of being free to choose the course of their actions) for the crimes they commit, and therefore they should not be granted such freedom in the first place.

I am not advocating a policy of preemptive incarceration, but rather a change in mindset from lumping such people together in institutions (and arguably increasing the problem through intense exposure to other like-minded individuals for long periods of time) to re-educating them and assisting them to live a harmonious life.

But is this so-called 'rise of the atheists' without its share of doom and gloom? We must tread carefully, or risk an increasing divide between the intellectually 'rich and poor'. Those who can adopt the Humanist position freely and without reservation must ensure and respect the freedom of those who do not wish to participate. Human diversity, even when it results in the negative, is worth preserving at all costs. Without it there would be no critical opinion, no discussion, and a stagnation of society. Opposition breeds improvement, and Secular Humanism is only too willing to hear and learn from the criticisms that the disgruntled have to offer. The days of fundamentalist religion are numbered. Secular Humanism is at the forefront of this war, empowering society to question and challenging it to grow into maturity.