The transhumanist movement continues to gain momentum through recognition by mainstream media and an ever-burgeoning army of empiricists, free thinkers and rationalists. Recently, the Australian incarnation of 60 Minutes interviewed David Sinclair, a biologist who has identified the potentially life-extending properties of resveratrol. All this attention has helped swell awareness of transhumanism within the general community, most notably due to the inherently appealing nature of anti-senescent interventions. But what of the neurological side of transhumanism, specifically the artificial augmentation of our natural mental ability with implantable neurocircuitry? Does research in this area create moral questions regarding its implementation, or should we be embracing technological upgrades with open arms? Is it morally wrong to enhance the brain without effort on the individual level (i.e. are such methods just plain lazy)? These are the questions I would like to investigate in this article.

An emerging transhumanist e-zine, H+ Magazine, outlines several avenues currently under exploration by researchers who aim to improve the cognitive ability of the human brain through artificial enhancement. The primary area of focus at present (from an empirical point of view) lies in memory enhancement. The Innerspace Foundation (IF) is a not-for-profit organisation attempting to lead the charge in this area, with two main prizes offered to researchers who can (1) successfully create a device which can circumvent the traditional learning process and (2) create a device which facilitates the extension of natural memory.

Pete Estep, chairman of IF, was interviewed by H+ Magazine about the foundation’s vision of what a device satisfying its award criteria might look like. Pete believes the emergence of this industry involves ‘baby steps’: achieving successful interfaces between biological and non-biological components. Electronic forms of learning, Pete believes, are certainly non-traditional, but they remain a valid possibility and stand to revolutionise the human intellect in terms of capacity and quality of retrieval.

Fortunately, we seem to have already made progress on those ‘baby steps’ regarding the interface between brain and technology. Various neuroheadset products are poised to be released commercially in the coming months. For example, the EPOC headset utilises EEG technology to recognise brainwave activity that corresponds to various physical actions, such as facial expressions and the intent to move a limb. With concentrated effort and training, the operator can reliably reproduce the necessary EEG pattern to activate individual commands within the headset. These commands can then be mapped to an external device, allowing various tasks to be performed remotely.
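To make that pipeline concrete, here is a minimal sketch of how recognised mental commands might be mapped to external actions. It is purely illustrative: classify_epoch, COMMAND_MAP and the device actions are hypothetical placeholders and bear no relation to the actual EPOC SDK.

```python
import random

# Hypothetical mapping from trained mental commands to remote device actions.
COMMAND_MAP = {
    "smile": "camera.take_photo",
    "push": "wheelchair.forward",
    "lift": "drone.ascend",
}

def classify_epoch(eeg_samples):
    """Stand-in for a trained classifier: a real system would extract
    band-power features from the EEG window and return the best-matching
    trained command plus a confidence score."""
    label = random.choice(list(COMMAND_MAP))   # mock prediction
    confidence = random.uniform(0.5, 1.0)      # mock confidence
    return label, confidence

def dispatch(eeg_samples, threshold=0.8):
    """Trigger a remote action only when the classifier is confident
    that the operator has reproduced a trained EEG pattern."""
    label, confidence = classify_epoch(eeg_samples)
    if confidence >= threshold:
        print(f"Executing remote action: {COMMAND_MAP[label]}")

dispatch(eeg_samples=[0.0] * 256)  # one mock 256-sample EEG window
```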

Having said this, such devices are still very much ‘baby’ in their steps. The actual stream of consciousness has not yet been decoded; the secrets of the brain are still very much a mystery. Recognition of individual brain patterns is a superficial solution to a profound problem. Returning to Searle’s almost clichéd Chinese Room thought experiment, we seem to be merely reading the symbols and decoding them; there is no actual understanding or comprehension going on here.

Even if such a solution is possible, and a direct mind/machine interface achieved, one small part of me wonders if it really is such a good thing. I imagine such a feeling is similar to the one felt by the quintessential school teacher when handheld calculators became the norm within the educational curriculum. By condoning such neuro-shortcuts, are we simply being lazy? Are the technological upgrades promised by transhumanism removing too much of the human element?

On a broader scale, I believe these concerns are elucidated by a societal shift towards passivity. Television is the numero-uno offender, with a captive audience of billions. The arrival of neurological enhancements may serve only to increase the exploitation of our attention, with television programs beamed directly into our brains. Rain, hail or shine, passive reception of entertainment would be accessible 24 hours a day. Likewise, augmentation of memory and circumvention of traditional learning processes may forge a society of ultimate convenience – slaves to a ‘Matrix-style’ mainframe salivating over their next neural-upload ‘hit’.

But having said all this, the earlier example of the humble calculator suggests that if such technological breakthroughs are used as an extension rather than a crutch, humanity may just benefit from the transhumanist revolution. I believe any technology aiming to enhance natural neurological processing power must be used only as such: a method to raise the bar of creativity and ingenuity, not simply a new avenue for bombarding the brain with more direct modes of passive entertainment. Availability must also be society-wide, in order to allow every human being to reach their true potential.

Of course, the flow-on effects of such technology on socio-economic status, intelligence, individuality, politics (practically every facet of human society) are unknown and unpredictable. If used with extension and enhancement as a philosophy, transhumanism can usher in a new explosion of human ingenuity. If a more superficial ethos is adopted, it may only succeed in ushering in a new dark age. It’s the timeless battle between good (transcendence) and evil (exploitation, laziness). Perhaps a topic for a future article, but certainly food for thought.

We are all fascinatingly unique beings. Our individuality not only defines who we are, but also binds us together as a society. Each individual contributes unique talents towards a collaborative pool of human endeavour, in effect enabling modern civilisation to exist as it does today. We have the strange ability to simultaneously preserve an exclusive sense of self whilst also contributing to the greater good through cooperative effort – losing a bit of our independence through conformity in the process. But what does this sense of self consist of? How do we come to be the distinct beings that we are despite the best efforts of conformist group dynamics, and how can we apply such insights towards the establishment of a future society that respects individual liberty?

The nature versus nurture debate has raged for decades, with little ground won on either side. Put simply, the schism formed between those who believed our individuality is innate (nature) and those who subscribed to the ‘tabula rasa’ or blank slate approach, whereby our uniqueness is a product of the environment in which we live (nurture). Like most debates in science, there is no definitive answer. In practice, both variables interact and combine to produce variation in the human condition. Therefore, the original question is no longer valid; it shifts from a choice between two polarised opposites to a question of quantity (how much variation is attributable to nature and how much to nurture).

Twin and adoption studies have provided the bulk of empirical evidence in this case, and with good reason. Studies involving monozygotic twins allow researchers to hold genetic factors constant and so estimate the heritability (nature) of certain behavioural traits. This group can then be compared to twins reared separately (manipulation of environment) or a group of fraternal twins/adopted siblings (same environment, different genes). Of course, limitations remain: it is impossible to catalogue, let alone control, every environmental variable. The interaction of genes with environment is another source of confusion, as is the expression of random traits which seem to have no correlation with either nature or nurture.
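To give a feel for how twin data are turned into numbers, Falconer’s classic formula estimates broad heritability from the gap between identical and fraternal twin correlations. The correlations below are invented purely for illustration; real estimates vary widely by trait and study.

```python
# Falconer's formula: h^2 ~ 2 * (r_MZ - r_DZ), with shared environment
# c^2 ~ r_MZ - h^2 and unique environment e^2 ~ 1 - r_MZ.
# The correlations here are illustrative, not real study results.
r_mz = 0.70   # trait correlation between identical twins (hypothetical)
r_dz = 0.45   # trait correlation between fraternal twins (hypothetical)

heritability = 2 * (r_mz - r_dz)   # genetic contribution (nature)
shared_env = r_mz - heritability   # shared/family environment
unique_env = 1 - r_mz              # unique environment plus measurement error

print(f"h^2 = {heritability:.2f}, c^2 = {shared_env:.2f}, e^2 = {unique_env:.2f}")
# h^2 = 0.50, c^2 = 0.20, e^2 = 0.30
```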

Can the study of personality offer any additional insight into the essence of individuality? The majority of theories within this paradigm of psychology are purely descriptive in nature. That is, they only serve to summarise a range of observable behaviours and nuances into key factors. The ‘Big Five’ Inventory is one illustrative example. By measuring an individual’s standing on each dimension of personality (through responses to predetermined questions), it is thought that variation between people can be psychometrically measured and defined according to scores on five separate dimensions. By utilising mathematical techniques such as factor analysis, a plethora of personality measures have been developed. Each subjective interpretation of the mathematical results, combined with cultural differences and experimental variation between samples, has produced many similar theories that differ only in the labels applied to the measured core traits.
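For a sense of how factor analysis condenses questionnaire items into a handful of trait dimensions, here is a minimal sketch on synthetic data. The random responses and the choice of five factors are assumptions for demonstration only; real inventories use carefully validated items.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_respondents, n_items = 500, 25
responses = rng.integers(1, 6, size=(n_respondents, n_items))  # 1-5 Likert answers

fa = FactorAnalysis(n_components=5)   # extract five latent dimensions
scores = fa.fit_transform(responses)

print(scores.shape)          # (500, 5): each person scored on five factors
print(fa.components_.shape)  # (5, 25): how strongly each item loads on each factor
```

The labels we then attach to those five factors (extraversion, neuroticism and so on) are exactly the subjective interpretation step described above.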

Other empirical theories attempt to improve on the superficiality of such descriptive scales by introducing biological (nature) fundamentals. One such example is the “BIS/BAS” measure. By attributing personality (specifically behavioural inhibition and activation) to variation in neurological structure and function, this theory expands upon more superficial explanations. Rather than simply summarising and describing dimensions of personality, neuro-biological theories allow causality to be attributed to underlying features of the individual’s physiology. In short, such theories propose that there exists a physical thing to which neuropsychologists can begin to attach the “essence of I”.

Not to be forgotten, enquiries into the effects of nurture, or one’s environment, on personal development have borne many relevant and intriguing fruits. Bronfenbrenner’s Ecological Systems theory is one such empirical development, attempting to classify the various influences on an individual’s development and their level of impact. The theory is ecological in nature due to the nested arrangement of its various ‘spheres of influence’. Each tier of the model corresponds to an environmental stage that is further removed from the direct experience of the individual. For example, the innermost Microsystem pertains to immediate factors, such as family, friends and neighbourhood. Further out, the Macrosystem defines influences such as culture and political climate; while not exerting a direct effect, these components of society still shape the way we think and behave.

But we seem to be only scratching the surface of what it actually means to be a unique individual. Rene Descartes was one of many philosophers with an opinion on where our sense of self originates. He postulated a particular kind of dualism, whereby the mind and body exist as two separate entities. The mind was thought to influence the body (and vice versa) through the pineal gland (a small neurological structure that actually secretes hormones). Mind was also equated with ‘soul’, perhaps to justify the intangible nature of this seat of consciousness. Thus, such philosophies of mind seem to indirectly support the nature argument: humans have souls, humans are born with souls, souls are intangible aspects of reality, therefore souls cannot be directly influenced by perceived events and experiences. However, Descartes seemed to be intuitively aware of this limitation and built in a handy escape clause: the pineal gland. Revolutionary for its time, his account changed the way philosophers thought about the sense of self, going so far as to suggest that the intangible soul operated on a bi-directional system (mind influences body, body influences mind).

The more one discusses the self, the deeper and murkier the waters become. Self in the popular sense refers to mental activity distinct from our external reality and the minds of others (I doubt, I think, therefore I am). However, the self comprises a menagerie of sub-components, such as identity, consciousness, free will, self-actualisation, self-perception (esteem, confidence, body image) and moral identity, to name but a few. Philosophically and empirically, our sense of self has evolved markedly, seemingly following popular trends throughout the ages. Beginning with a very limited and crude sense of self within proto-human tribes, the concept of self expanded into an extension of god’s will (theistic influences) and, more recently, into a more reductionist and materialist sense in which individual expression and definition are key tenets. Ironically, our sense of self would not have been possible without the existence of other ‘selves’ against which comparisons could be made and intellects clashed.

Inspiration is one of the most effective behavioural motivators. In this day and age it is difficult to ignore society’s pressures to conform. Paradoxically, success in life is often a product of creativity and individuality; some of the wealthiest people are distinctly different from the banality of normality. It seems that modern society encourages the mundane, but I believe this is changing. The Internet has ushered in a new era of self-expression. Social networking sites allow people to share ideas, collaborate with others and produce fantastic results. As access to information becomes ever easier and more commonplace, ignorance will no longer be a valid excuse. People will be under increased pressure to diverge from the path of average if they are to be seen and heard. My advice: seek out experiences as if they were gold. Use the individuality of others to mould and shape values, beliefs and knowledge into a worthy framework within which you feel at ease. Find, treasure and respect your “essence of I”; it is a part of every one of us that can often become lost or confused in this chaotic world in which we live.

It is not often that we think of events as isolated incidents separated by a vast divide in both physical and virtual distance. In our day-to-day existence, with near-instantaneous methods of communication and a pervasively global information network, significant events are easily taken note of. But when the distance separating the event from the recipient exceeds our Earthly bounds, an interesting phenomenon occurs. Even on the scale of the solar system, light from the Sun takes approximately 8 minutes to reach our sunny skies here on Earth. If the Sun happened to go supernova, we would have no knowledge of the fact until some 8 minutes after the event actually occurred. While not completely revolutionary, this concept has deeper ramifications if the distances are again increased to a Universal scale.
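The eight-minute figure falls straight out of the distances involved; a quick back-of-the-envelope check:

```python
# Light travel time from the Sun to the Earth.
earth_sun_km = 1.496e8      # mean Earth-Sun distance (1 AU) in km
light_speed_km_s = 2.998e5  # speed of light in km/s

delay_s = earth_sun_km / light_speed_km_s
print(f"{delay_s:.0f} seconds = {delay_s / 60:.1f} minutes")  # ~499 s, about 8.3 minutes
```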

While we are accustomed to thinking of light as travelling at a fixed speed limit, it is not often that one thinks of gravity as a force that requires time to cross intergalactic distances. But indeed it does. Gravitational waves propagate at the speed of light; the accelerations of incredibly massive objects (e.g. neutron stars or binary star systems) act as the catalyst for these disturbances. Gravitational waves pass through matter essentially unimpeded, yet they warp the very nature of spacetime, contracting and expanding distances between objects as the wave passes through a particular locality.

Here on Earth, information is similarly transferred quickly along the Internet and other communication pathways, on average at close to the speed of light. Delays only arise when traffic is heavy or infrastructure fails (severed pathways, technical problems, increased use). As the distances involved are relatively small in comparison to the speed of the transfer, communication between two points is practically instantaneous. But what if we slow down the speed of travel? Imagine an event occurring in an isolated region of desert. The message can only be transmitted via a physical carrier, thus mimicking the vast distances involved in an interstellar environment. Observer B, waiting to receive the message, has no knowledge of what has happened until that message arrives.
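To put rough numbers on the desert thought experiment, compare a light-speed signal with a physical courier over the same distance (the figures below are arbitrary illustrative choices):

```python
# A light-speed signal versus a physical courier over the same distance.
distance_km = 500           # arbitrary stretch of desert
light_speed_km_s = 3.0e5
courier_speed_km_h = 100

signal_delay_s = distance_km / light_speed_km_s
courier_delay_h = distance_km / courier_speed_km_h

print(f"Signal: {signal_delay_s * 1000:.2f} ms, courier: {courier_delay_h:.1f} hours")
# ~1.7 ms versus 5 hours: same information, vastly different arrival times.
```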

Revisiting the scenario of the Sun exploding, it seems strange that mammoth events in the Universe could occur without our immediate knowledge. It is strangely reminiscent of the old philosophical riddle: does a falling tree make a sound if no one is around to hear it? Cosmic events are particularly relevant in this respect, as they most certainly do have immense ramifications (they ‘make a noise’). If the Universe suddenly collapsed at the periphery (unlikely, but considered for the purposes of this exercise), our tiny speck of a planet would not know about it for possibly many millions of years. It is even possible that parts of the distant Universe have already ‘ceased to exist’, the fabric of time and space from the epicentre of this great event expanding like a tidal wave of doom. What does this mean for a concept of Universal time? Surely it cannot be dependent upon physical reality, for if it were, such a catastrophic event would signal the cessation of time across the entire cosmos. Rather, it would be a gradual process that rushes forth and eliminates regions of both space and time sequentially. The final remaining island of ‘reality’ would thus act as a steadily diminishing safe haven for the remaining inhabitants of the cosmos. Such an event would certainly make an interesting science-fiction story!

Einstein became intimately aware of this universal fact of locality, making it a central tenet of his grand Theory of Relativity. He even offered comments regarding this ‘principle of locality’ (which became a recognised physical law):

“The following idea characterises the relative independence of objects far apart in space (A and B): external influence on A has no direct influence on B; this is known as the Principle of Local Action, which is used consistently only in field theory.”

A horribly simplified description of relativity states that what I experience is not necessarily the same as what you will experience. Depending on how fast you are travelling and in what direction relative to me (taking into account the speed and direction at which I am travelling), our experiences of time and space will differ; quite markedly if we approach the speed of light. Even the flow of time is affected, as observers aboard objects travelling at high velocities experience a slower passage of time compared to their stationary colleagues. It would be intriguing to experience this phenomenon first hand in order to determine whether the change in flow is psychologically detectable. Perhaps it would be experienced as an exaggerated and inverted version of the overly clichéd ‘time flies when you’re having fun’.
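That ‘quite markedly’ can be quantified with the Lorentz factor, γ = 1/√(1 − v²/c²), which describes how much a travelling clock slows relative to a stationary observer. A small sketch:

```python
import math

def lorentz_factor(v_fraction_of_c):
    """Time dilation factor gamma for a speed given as a fraction of c."""
    return 1.0 / math.sqrt(1.0 - v_fraction_of_c ** 2)

for v in (0.1, 0.5, 0.9, 0.99):
    gamma = lorentz_factor(v)
    print(f"v = {v:.2f}c -> one second aboard lasts {gamma:.2f} seconds for a stationary observer")
# At 10% of light speed the effect is negligible (gamma ~ 1.005);
# at 99% of light speed the moving clock runs roughly 7 times slower.
```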

Locality in Einstein’s sense is more about the immediate space surrounding objects than about causes and their effects (although the two are undoubtedly interrelated). Planetary bodies, for instance, are thought to affect their immediate surroundings (locality) by warping the fabric of space. While the metaphor here is mainly for the benefit of visualisation rather than a description of actual physical processes, orbiting bodies are described as locked into a perpetual spin, similar to the way in which a ball bearing revolves around a funnel. Reimagining Einstein’s notion of relativity and locality as causality (and the transmission of information between two points), the speed of light and gravity form the main policing forces managing events in the Universe. Information can only travel at some 300,000 km/s between points, and the presence of gravity can modify how that information is received (large masses can warp transmissions, as in gravitational lensing, and also influence how physical structures interact).

Quantum theory adds to the fray by further complicating matters of locality. Quantum entanglement, a phenomenon whereby a measurement at Point A instantaneously influences Point B, seems to circumvent the principle of locality. Two points in space dance to the same tune, irrespective of the distances involved. Another quantum phenomenon that exists independently of local space is the collapse of the wave function. While it is currently impossible to affirm whether this ‘wave’ actually exists, or what it means for the nature of reality (e.g. many worlds vs the Copenhagen interpretation), if it is taken as part of our reality then the act of collapse is surely a non-local phenomenon. There is no detectable delay in producing observable action. A kicked football does not pause while the wave function calculates probabilities and decides upon an appropriate trajectory. Likewise, individual photons seem to just ‘know’ where to go, instantly forming the familiar interference pattern behind a double-slit grating. The Universe at large simply arranges its particles in anticipation of these future events instantaneously, temptingly inviting notions of omniscience on its behalf.

Fortunately, our old-fashioned notions of cause and effect are preserved by quantum uncertainties. To commit the atrocious act of personifying the inanimate, it is as though Nature, through the laws of physics, protects our fragile Universe and our conceptions of it by limiting the amount of useful information we can extract from such a system. The Uncertainty Principle acts as the ubiquitous protector of information transfer, preventing instantaneous transfer between two points in space. This ‘safety barrier’ prevents us from extracting useful observations from entangled particles without a traditional message system (the measurements taken at Point A must be sent to Point B at light speed in order to make sense of the entangled particle). When we observe particles at a quantum level (spin, charge etc.), we disturb the quantum system irrevocably. Therefore the mere act of observing prevents us from using this system as a means of instantaneous communication.
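As a reminder of how tight this quantum ‘safety barrier’ is, the textbook position-momentum bound Δx·Δp ≥ ħ/2 can be evaluated directly. The electron example below is illustrative only and sits alongside, rather than replaces, the no-communication argument above.

```python
# Heisenberg bound: confining a particle's position forces a minimum
# spread in its momentum. Illustrative numbers for an electron.
HBAR = 1.054571817e-34     # reduced Planck constant, J*s
ELECTRON_MASS = 9.109e-31  # kg

delta_x = 1e-10                            # confine it to roughly one atomic diameter (m)
min_delta_p = HBAR / (2 * delta_x)         # minimum momentum uncertainty
min_delta_v = min_delta_p / ELECTRON_MASS  # corresponding velocity spread

print(f"delta_p >= {min_delta_p:.2e} kg*m/s, i.e. delta_v >= {min_delta_v:.2e} m/s")
# Pinning position down to atomic scales smears velocity by ~10^5-10^6 m/s.
```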

Causality is still a feature of the Universe that needs in-depth explanation. At a higher level sits the tireless battle between determinism and uncertainty (free will). If every event is predetermined based on the collisions of atoms at the instant of the Big Bang, causality (and locality) is a moot point. Good news for reductionists who hope to uncover a fundamental ‘theory of everything’ with equations to predict any outcome. If, on the other hand, the future really is uncertain, we certainly have a long way to go before an adequate explanation of how causality operates is proposed. Whichever camp one claims allegiance to, local events are still isolated events whose effects travel at a fixed speed. One wonders which result is more frustrating: not having knowledge of an important albeit distant event, or realising that whatever happens is inevitable. The Universe may already have ended; but should we really care?

The supreme scale and vast expanse of the Universe is awe-inspiring. Contemplation of its grandeur has been described as a type of scientific spiritualism, broadening the mind’s horizons in a vain attempt to grasp our place amongst such awesome magnitude. Containing some 200 billion stars (or 400 billion, depending on whom you ask), our relatively humble home, the Milky Way, is but one of billions of such galaxies, each home to countless billions of stars. Likewise, our small blue dot of a planet is but one of possibly billions of similar planets spread throughout the Universe.

To believe that we are alone in such a vast expanse of space seems not only unlikely, but irrational. For eons, human egocentrism has blinkered ideology and spirituality. Our belief systems place humanity upon a pedestal, implying that we are alone and incredibly unique. The most salient example is the ‘Almagest’, Ptolemy’s Earth-centred view of the Universe.

While we may be unique, the tendency of belief systems to invoke meaning in our continued existence leaves no place for humility. The result of this human-focussed Universe is one where our race arrogantly fosters its own importance. Consequently, the majority of the populace has little or no interest in cosmic contemplation, nor an appreciation of the truly objective realisation that Earth and our intelligent civilisation do not alone give definition to the cosmos. The Universe will continue to exist as it always has, whether we are around or not.

Then again, a species that paid no attention to its own interests would spell certain doom for its civilisation, and it is easy to see why humans have placed so much importance upon themselves in the grand scheme of things. The Earth is home to just one intelligent species, namely us. If the Neanderthals had survived, it would surely have been a different story (in terms of the composition of social groups). Groups tend to unite against common foes; a planet with two or more intelligent species would therefore draw distinctions less within each species and more between them. Given the situation we find ourselves in as the undisputed lords of this planet, it is no wonder we attach such special significance to ourselves as a species (and to discrediting the idea that we are not alone in the Universe).

It seems as if humanity needs its self-esteem bolstered when faced with the harsh reality that our existence is trivial compared to the likelihood of other forms of life and the grandeur of the Universe at large. Terror Management Theory is but one psychological hypothesis as to why this may be the case. The main postulate of this theory is that our mortality is the most salient factor throughout life. A tension is created because, on the one hand, death is inevitable and, on the other, we are intimately aware of its approach yet desperately try to minimise its effects on our lives. Thus it is proposed that humanity attempts to minimise the terror associated with impending death through cultural and spiritual beliefs (the afterlife, the notion of mind/body duality – the soul continuing on after death). TMT puts an additional spin on the situation by suggesting that cultural world-views, and the tendency for people to protect them at all costs, serve the same purpose (reaffirming one’s cultural beliefs by persecuting the views of others reduces the tension produced by the awareness of death).

While the empirical validity of TMT is questionable (experimental evidence is decidedly lacking), human belief systems do express an arrogance that prevents a more holistic system from emerging. The Ptolemaic view dominated scientific inquiry during the Middle Ages, most likely due to its adoption by the Church. Having the Earth at the centre of the Universe coincided nicely with theological beliefs that humanity is the sole creation of god. It may also have improved the ‘scientific’ standing of theology, in that it was apparently supported by theory. What the scholars of this period failed to apply was the principle of Occam’s Razor: the simpler theory is the better one, provided it still explains the same observations. The overly complicated Ptolemaic system could explain the orbits of planetary bodies, but only at the expense of simplicity (via the addition of epicycles to account for the apparently anomalous motion of the planets).

Modern cosmology has thankfully overthrown such models, however the ideology remains. Perhaps hampered and weighed down by daily activities, people simply do not have the time to consider an existence outside of their own immediate experience. From an evolutionary perspective, an individual would risk death if thought processes were wasted on external contemplation, rather than a selfish and immediate satisfaction of biological needs. Now that society has progressed to a point where time can be spent on intellectual pursuits, it makes sense that outmoded beliefs regarding our standing in the Universe should be rectified.

But just how likely is the possibility of life elsewhere? Science-fiction has long been an inspiration in this regard, its tales of Martian invaders striking terror into generations of children. The first directed empirical venture in this area came about with the SETI conference at Green Bank, West Virginia in 1961. At this conference, not only were the efforts of radio-astronomers to detect foreign signals discussed in detail, but one particular formulation was also put forward. Known as the Drake Equation, it was aimed at quantifying and humanising the very large numbers that are thrown about when discussing intergalactic probabilities.

Basically, the equation takes a series of values thought to contribute to the likelihood of intelligent life evolving, multiplies the probabilities together and outputs a single number: the projected number of intelligent civilisations in the galaxy. Of course, the majority of the numbers used are little more than educated guesses. However, even with reasonably conservative values, this number comes out above 1. Promising stuff.
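For the curious, here is a minimal sketch of the equation itself, N = R* · fp · ne · fl · fi · fc · L. Every value below is a toy guess of the kind used in such back-of-the-envelope exercises, not an established measurement.

```python
# Drake Equation: N = R* * fp * ne * fl * fi * fc * L
# All parameter values are illustrative guesses; most remain unknown.
r_star = 1.5    # average rate of star formation in the galaxy (stars/year)
f_p = 0.5       # fraction of stars with planetary systems
n_e = 1.0       # habitable planets per system that has planets
f_l = 0.5       # fraction of habitable planets where life arises
f_i = 0.1       # fraction of those where intelligence evolves
f_c = 0.1       # fraction of those that develop detectable technology
lifetime = 1000 # years a civilisation remains detectable

n_civilisations = r_star * f_p * n_e * f_l * f_i * f_c * lifetime
print(f"Estimated communicating civilisations in the galaxy: {n_civilisations:.1f}")
# With these toy guesses the estimate comes out at roughly 3.8 civilisations;
# the answer is exquisitely sensitive to every one of the inputs.
```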

Fortunately, with each astronomical advance these numbers are further refined, giving a (hopefully) more accurate picture of reality. The SETI project may have even found the first extra-terrestrial signal in 1977. Dubbed the ‘Wow!’ signal (based on the researcher’s margin comments on the printout sheet), this burst of activity bore all the hallmarks of artificial origin. Sadly, this result has not been replicated despite numerous attempts.

All hope is not lost. SETI has received a revitalising injection of funds from none other than Microsoft co-founder Paul Allen, as well as support from the immensely popular SETI@Home initiative, which utilises distributed computing to sort through the copious amounts of generated data. Opponents of SETI fall into two main camps: those who believe it is a waste of funds better spent on more Earthly concerns (a valid point) and those who perceive SETI as dangerous to our continued existence. The latter point is certainly plausible (albeit unlikely). The counter-claim in this instance is that if such a civilisation did exist and was sufficiently advanced to travel intergalactic distances, the last thing on its mind would be the annihilation of our insignificant species.

The notion of Star Trek’s ‘Prime Directive’ seems the most likely situation to have unfolded thus far. Extra-terrestrial civilisations would most likely adopt a policy of non-interference with our meagre planet, perhaps actively disguising their transmissions in an attempt to hide their activity and prevent ‘cultural contamination’.

Now all we need is for the faster-than-light barrier to be crossed and the Vulcans will welcome us into the galactic society.

Evil is an intrinsic part of humanity, and it seems almost impossible to eradicate it from society without simultaneously removing a significant part of our human character. There will always be individuals who seek to gain advantage over others through harmful means. Evil can take on many forms, depending upon the definition one uses to encapsulate the concept. For instance, the popular definition includes elements of malicious intent or actions that are designed to cause injury or distress to others. But what of the individual who accidentally causes harm to another, or who takes silent pleasure in seeing others’ misfortune? Here we enter a grey area, the distinction between good and evil blurring ever so slightly, preventing us from making a clear judgement on the topic.

Religion deals with this human disposition towards evil in a depressingly cynical manner. Rather than suggesting ways in which the problem can be overcome, religion instead proposes that evil or “sin” is an inevitable temptation (or a part of the character into which we are born) that can only be overcome with conscious and directed effort. Invariably one will sin at some time in one’s life, whereupon the person should ask for forgiveness from their nominated deity. Again we see a shifting of responsibility away from the individual, with the religious hypothesis leaning on such concepts as demonic possession and lapses of faith as explanations for the existence of evil (unwavering belief in the deity cures all manner of temptations and worldly concerns).

In its current form, religion does not offer a satisfactory explanation for the problem of evil. Humanity is relegated to the back seat in terms of moral responsibility, coerced into conformity through subservience to the Church’s supposed ideals and ways of life. If our society is to break free of these shackles and embrace a humanistic future free from bigotry and conflict, moral guidance must come from within the individual. To this end, society should consider introducing moral education for its citizens, taking a lesson from the annals of history (specifically, ancient Greece with its celebration of individual philosophical growth).

Almost counter-intuitively, some of the earliest recorded philosophies actually advocated a utopian society that was atheistic in nature and deeply rooted in humanistic, individually managed moral and intellectual growth. One such example is the discipline of Stoicism, founded around the beginning of the 3rd century BC. This philosophical movement was perhaps one of the first true instances of humanism, whereby personal growth was encouraged through introspection and the control of destructive emotions (anger, violence and so on). The stoic way was to detach oneself from the material world (similar to Buddhist traditions), a tenet that is aptly summarised in the following quote:

“Freedom is secured not by the fulfilling of one’s desires, but by the removal of desire.”

Epictetus

Returning to the problem of evil, Stoicism proposed that the presence of evil in the world is an inevitable consequence of ignorance. The premise of this argument is that a universal reason, or logos, permeates reality, and evil arises when individuals act against this reason. I believe what the Stoics mean here is that a universal morality exists: a ubiquitous guideline accessible to us through conscious deliberation and reflective thought. When individuals act contrary to this universal standard, it is through ignorance of what the correct course of action actually is.

This stoic ethos is personally appealing because it seems to have a large humanistic component. Namely, all of humanity has the ability to grasp universal moral truths and overcome their ‘ignorance’ of the one true path towards moral enlightenment. Whether such truths actually exist is debatable, and the apathetic nature of Stoicism seems to depress the overall human experience (dulled down emotions, detachment from reality).

The ancient Greek notion of eudaimonia could be a more desirable philosophy by which to guide our moral lives. The basic translation of this term as ‘greatest happiness’ does not do it justice. It was first introduced by Socrates, who outlined a basic version of the concept comprising two components: virtue and knowledge. Socrates’ virtue was thus moral knowledge of good and evil, or having the psychological tools to reach the ultimate good. Plato and, in turn, Aristotle expanded on this original idea of sustained happiness by adding layers of complexity. For example, Aristotle believed that human activity tends towards the experience of maximum eudaimonia, and to achieve that end it was thought that one should cultivate rationality of judgement and ‘noble’ characteristics (honour, honesty, pride, friendliness). Epicurus again modified the definition of eudaimonia to include pleasure, thus also shifting the moral focus to maximising the wellbeing of the individual through the satisfaction of desire (the argument being that pleasure equates with goodness and pain with badness, so the natural conclusion is to maximise positive feeling).

We see that the problem of evil has been dealt with in a wide variety of ways. Even in our modern world it seems that people are becoming angrier, more impatient and more destructive towards their fellow human beings. Looking at our track record thus far, it seems that the mantra of ‘fight fire with fire’ is being followed by many countries when determining their foreign policy. Modern incarnations of religious moral codes (an eye for an eye) have resulted in a new wave of crusades, with theistic beliefs at the forefront once again.

The wisdom of our ancient ancestors is refreshing and surprising, given that common sense suggests a positive relationship between knowledge and time (human progress increases with the passage of time). It is entirely possible that humanity has been following a false path towards moral enlightenment, and given the lack of progress on the religious front, perhaps a new approach is needed. By treating the problem of evil as one of cultural ignorance we stand to benefit greatly. The whole judicial system could be re-imagined as one where offenders are actually rehabilitated through education, rather than one that simply breeds generations of hardened criminals. Treating evil as a form of improper judgement forces our society to take moral responsibility at the individual level, resulting in real and measurable changes for the better.

The human brain and the Internet share a key feature in their layout: a web-like structure of individual nodes acting in unison to transmit information between physical locations. In brains we have neurons, composed in turn of axons (often myelinated) and dendrites. The Internet is composed of similar entities, with connections such as fibre optics and ethernet cabling acting as the mode of transport for information, while computers and routers act as gateways (boosting and re-routing) and originators of that information.

How can we describe the physical structure and complexity of these two networks? Does this offer any insight into their similarities and differences? What is the plausibility of a conscious Internet? These are the questions I would like to explore in this article.

At a very basic level, both networks are organic in nature (surprisingly, in the case of the Internet); that is, they are not the product of a ubiquitous ‘designer’ and are given the freedom to evolve as their environment sees fit. The Internet grows without a directed plan; new nodes and capacity are added haphazardly. The naturally evolved topology of the Internet is a distributed one: the destruction of nodes has little effect on the overall operational effectiveness of the network. Each node has multiple connections, resulting in an intrinsic redundancy whereby traffic is automatically re-routed to the target destination via alternative paths.

We can observe a similar behaviour in the human brain. Neurological plasticity serves a function akin to the distributed nature of the Internet. Following injury to regions of the brain, adjacent areas can compensate for lost abilities by restructuring neuronal patterns. For example, the effects of injuries to the motor areas of the frontal cortex can be minimised by adjacent regions ‘re-learning’ otherwise mundane tasks that were lost as a result of the injury. While such recoveries are entirely possible with extensive rehabilitation, two key factors determine the likelihood and efficiency of the process: the severity of the injury (percentage of brain tissue destroyed, location of injury) and, following from this, the length of recovery. These factors introduce the first discrepancy between the two networks.

Unlike the brain, the Internet is resilient to attacks on its infrastructure. Local downtime is a minor inconvenience, as traffic moves around such bottlenecks by taking the next fastest path available. Destruction of multiple nodes has little effect on the overall web of information. Users may lose access to certain areas, or experience slowness, but compared to the remainder of possible locations (not to mention redundancies in content – simply obtain the information elsewhere) such lapses are just momentary inconveniences. But are we suffering from a lack of perspective when considering the similarities of the brain and the virtual world? Perhaps the problem is one of scale. The destruction of nodes (computers) could instead be interpreted, in the brain, as the removal of individual neurons. If one takes this proposition seriously, the differences begin to lose their sharpness.

An irrefutable difference, however, arises when one considers both the complexity and the purpose of the two networks. The brain contains some 100 billion neurons, whilst the Internet comprises a measly 1 billion users by comparison (with users roughly equating to the number of nodes, or access terminals, physically connected to the Internet). Brains are the direct product of evolution, created specifically to keep the organism alive in an unwelcoming and hostile environment. The Internet, on the other hand, is designed to accommodate a never-ending torrent of expanding human knowledge. Thus the dichotomy in purpose between these two networks is quite distinct, with the brain focusing on reactionary and automated responses to stimuli while the Internet aims to store information and process requests for its delivery to the end user.

Again we can take a step back and consider the similarities of these two networks. Looking at topology, it is apparent that the distributed nature of the Internet is similar to the structure and redundancy of the human brain. In addition, the Internet is described as a ‘scale-free’ or power-law network, indicating that a small percentage of highly connected nodes accounts for a very large percentage of the overall traffic flow. In effect, a targeted attack on these nodes could succeed in crippling the entire network. The brain, by comparison, appears to be organised into distinct and compartmentalised regions. Target just a few, or even one, of these collections of cells and the whole network collapses.
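The fragility of scale-free networks to targeted attack can be illustrated with a small simulation. The sketch below uses a Barabási-Albert preferential-attachment graph as a crude stand-in for Internet-like topology, which is a simplifying assumption rather than a map of the real network.

```python
import random
import networkx as nx

# A scale-free (power-law) network as a rough stand-in for Internet topology.
g = nx.barabasi_albert_graph(n=2000, m=2, seed=42)

def largest_component_fraction(graph):
    """Fraction of the original 2000 nodes still in the biggest connected cluster."""
    return max(len(c) for c in nx.connected_components(graph)) / 2000

def remove_nodes(graph, nodes):
    h = graph.copy()
    h.remove_nodes_from(nodes)
    return h

n_removed = 100  # knock out 5% of nodes

random.seed(0)
random_failures = random.sample(list(g.nodes), n_removed)
hubs = sorted(g.nodes, key=lambda n: g.degree[n], reverse=True)[:n_removed]

print("after random failures:   ", largest_component_fraction(remove_nodes(g, random_failures)))
print("after targeted hub attack:", largest_component_fraction(remove_nodes(g, hubs)))
# Random failures barely dent connectivity, while removing the most highly
# connected hubs fragments the network far more severely.
```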

It would be interesting to empirically investigate the hypothesis that the brain is also a scale-free network whose degree distribution follows a power law. Targeting the thalamus for destruction (a central hub through which sensory information is relayed) might have the same devastating effect on the brain as destroying the ICANN headquarters in the USA (responsible for domain name assignment) would have on the Internet.

As mentioned, the purposes of these two networks are different, yet they share the common bond of processing and transferring information. At a superficial level, the brain and the Internet are merely storage and retrieval devices upon which the user (or a directed thought process) is sent on a journey through a virtual world towards an intended target (notwithstanding the inevitable sidetracks along the way!). Delving deeper, the differences in purpose act as a deterrent when one considers the plausibility of consciousness and self-awareness.

Which brings us to the crux of the article. Could the Internet, given sufficient complexity, become a conscious entity in the same vein as the human brain? Almost immediately the hypothesis is dashed due to its rebellion against common sense. Surely it is impossible to propose that a communications network based upon binary machines and internet protocols could ever achieve a higher plane of existence. But the answer might not be as clear-cut as one would like to believe. Controversially, both networks could be said to be controlled by indeterminate processes. The brain, at its very essence, is arguably governed by quantum unpredictability. Likewise, activity on the Internet is directed by self-aware, indeterminate beings (which, in turn, are the result of quantum processes). At what point does the flow of information over a sufficiently complex network result in an emergent complexity, most notably characterised by a self-aware intelligence? Just as neurons react to incoming electrical pulses of information, so too do the computers of the Internet pass along packets of data. Binary code can be equated with action potentials: either information is transmitted or it is not.

Perhaps the most likely (and worrying) outcome in a futurist world would be the integration of an artificial self-aware intelligence with the Internet. Think Skynet from the Terminator franchise. In all possibility such an agent would have the tools at its disposal to hijack the Internet’s constituent nodes and reprogram them in such a fashion as to facilitate the growth of an even greater intelligence. The analogy here is that if the linking of human minds were possible, the resulting intelligence would be great indeed – imagine a distributed network of humanity, each individual brain linked to thousands of others in a grand web of shared knowledge and experience.

Fortunately, such a doomsday outlook is most likely constrained within the realms of science fiction. Reality tends to have a reassuring banality about it that prevents the products of human creativity from becoming something more solid and tangible. Whatever the case may be regarding the future of artificial intelligence, the Internet will continue to grow in complexity and penetration. As end-user technology improves, we take a continual step closer towards an emergent virtual consciousness, whether it be composed of ‘uploaded’ human minds or something more artificial in nature. Let’s just hope that a superior intelligence can find a use for humanity in such a future society.

A recurring theme and technological prediction of futurists is one in which human intelligence supersedes that of the previous generation through artificial enhancement. This is a popular topic on the Positive Futurist website maintained by Dick Pelletier, and one which provides food for thought. Mr Pelletier outlines a near future (2030s) where a combination of nanotechnology and insight into the inner workings of the human brain facilitate an exponential growth of intelligence. While the accuracy of such a prediction is open to debate (specifically the technological possibilities of successful development within the given timeframe), if such a rosy future did come to fruition what would be the consequences on society? Specifically, would an increase of average intelligence necessarily result in an overall improvement to quality of life? If so, which areas would be mostly affected (eg morality, socio-economic status)? These are the questions I would like to explore in this article.

The main argument provided by futurists is that technological advances relating to nano-scale devices will soon be realised and implemented throughout society. By utilising these tiny automatons to the fullest extent possible, it is thought that both disease and ageing could be eradicated by the middle of this century. This is due to the utility of nanobots, specifically their ability to carry out pre-programmed tasks in a collective and automated fashion without any conscious awareness on behalf of the host. In essence, nano devices could act as a controllable extension of the human body, giving health professionals the power to monitor and treat throughout the organism’s lifespan. But the controllers of these instruments need to know what to target and how best to direct their actions; a possible sticking point in the futurists’ plan. In all likelihood, however, such problems will only prove to be temporary hindrances and should be overcome through extensive testing and development phases.

Assuming that (a) such technology is possible and (b) it can be controlled to produce the desired results, the future looks bright for humanity. By further combining nanotechnology with cutting-edge neurological insight, it is feasible that intelligence could be artificially increased. The possibility of artificial intelligence and the development of an interface with the human mind almost ensures a future filled with rapid growth. To this end, an event aptly named the ‘technological singularity’ has been proposed, which describes the extension of human ability through artificial means. The singularity allows innovation to exceed the rate of development; in short, humankind could advance (technologically) faster than the rate of input. While the plausibility of such an event is open to debate, it does sound feasible that artificial intelligence could assist us in developing new and exciting breakthroughs in science. If conscious, self-directed intelligence were to be artificially created, this might assist humanity even further; perhaps the design of specific minds would be possible (need a physics breakthrough? just create an artificial Einstein). Such an idea hinges entirely on the ability of neuroscientists to unlock the secrets of the human brain and allow the manipulation or ‘tailoring’ of specific abilities.

While the jury is still out on the details of how such a feat will be made technologically possible, a rough outline of the methodologies involved in artificial augmentation could be enlightening. Already we are seeing the effects of a society increasingly driven by information systems. People want to know more in a shorter time; in other words, to increase efficiency and volume. To cope with the already torrential flood of information available across various media (the Internet springs to mind), humanity relies increasingly on ways to filter, absorb and understand stimuli. We are seeing not only a trend in artificial aids (search engines, database software, larger networks) but also a changing pattern in the way we scan and retain information. Internet users are now forced to make quick decisions and scan superficially at high speed to obtain information that would otherwise be lost amidst the backlog of detail. Perhaps this is one way in which humanity is guiding the course of evolution, retraining the mind’s basic instincts away from more primitive methods of information gathering (perhaps it also explains our parents’ ineptitude for anything related to the IT world!). This could be one of the first targets for augmentation: increasing the speed of information transfer via programmed algorithms that fuse our natural biological mechanisms of searching with the power of logical, machine-coded functions. Imagine being able to combine the biological capacity to effortlessly scan and recognise facial features with the speed of computerised processing.

How would such technology influence the structure of society today? The first assumption that must be made is the universal implementation and adoption of such technologies by society. Undoubtedly there will be certain populations who refuse, for whatever reason, most likely due to a perceived conflict with their belief system. It is important to preserve and respect such individuality, even if it means that these populations will be left behind in terms of intellectual enlightenment. Critics of future societies, and of futurists in general, argue that a schism will develop, akin to the rising disparities in wealth distribution present within today’s society. In counter-argument, I would respond that an increase in intelligence would likewise cause a global rise in morality. While this relationship is entirely speculative, it is plausible to suggest that a person’s level of moral goodness is at least indirectly related to their intelligence.

Of course, there are notable exceptions to this rule, whereby intelligent people have suffered from moral ineptitude. However, an increased neurological understanding and the practical implementation of ‘designer’ augmentations (as they relate to improving morality) would reduce the possibility of a majority ‘superclass’ that persecutes groups of ‘naturals’. At the very worst, there may be a period of unrest upon the introduction of such technology while the majority of the population catches up (in terms of perfecting the implantation/augmentation techniques and achieving the desired level of moral output). Such innovations may even act as a catalyst for developing a philosophically sound model of universal morality; something which the next generation of neurological ‘upgrades’ could, in turn, implement.

Perhaps we are already in the midst of our future society. Our planet’s declining environment may hasten the development of such augmentation to improve our chances of survival. Whether this process involves the discarding of our physical bodies for a more impervious, intangible machine-based life or otherwise remains to be seen. With the internet’s rising popularity and increasing complexity, a virtual ‘Matrix-esque’ world in which such programs could live might not be so far-fetched after all. Whatever the future holds, it is certainly an exciting time in which to live. Hopefully humanity can overcome the challenges of the future in a positive way and without too much disruption to our technological progress.

The monk sat meditating. Alone atop a sparsely vegetated outcrop, all external stimuli infused psychic energy into his calm, receptive mind. Distractions merely added to his trance, assisting the meditative state to deepen and intensify. Without warning, the experience culminated unexpectedly with a fluttering of eyelids. The monk stood, content and empowered with newfound knowledge. He had achieved pure insight…

The term ‘insight’ is often attributed to such vivid descriptions of meditation and religious devotion. More specifically, religions such as Buddhism promote the concept of insight (vipassana) as a vital prerequisite for spiritual nirvana, or transcendence of the mind to a higher plane of existence. But does insight exist for the everyday folk of the world? Are the momentary flashes of inspiration and creativity part and parcel of the same phenomenon or are we missing out on something much more worthwhile? What neurological basis does this mental state have and how can its materialisation be ensured? These are the questions I would like to explore in this article.

Insight can be defined as the mental state whereby confusion and uncertainty are replaced with certainty, direction and confidence. It has many alternative meanings and contexts of use, ranging from a piece of obtained information to the psychological capacity to introspect objectively (as judged by some external observer – introspection is, by its very nature, subjective). Perhaps the most fascinating and generally applicable context is one which can be described as ‘an instantaneous flash of brilliance’ or ‘a sudden clearing of murky intellect and intense feelings of accomplishment’. In short, insight (in the sense that interests me here) is that which can be attributed to the geniuses of society: those who seemingly gather tiny shreds of information and piece them together to solve a particularly challenging problem.

Archimedes is perhaps the most widely cited example of human insight. As the story goes, Archimedes was inspired by the displacement of water in his bathtub to formulate a method of calculating the volume of an irregular object. This technique was of great empirical importance, as it allowed a reliable measure of density (referred to as ‘purity’ in those ancient times, and arising from a more fiscal motivation, such as assessing the purity of gold). The climax of the story describes a naked Archimedes running wildly through the streets, unable to contain his excitement at this ‘Eureka’ moment. Whether the story is actually true or not has little bearing on the force of the argument presented; most of us have experienced this moment at some point in our lives, and it is best summarised as the overcoming of seemingly insurmountable odds to conquer a difficult obstacle or problem.

But where does this inspiration come from? It almost seems as though the ‘insightee’ is unaware of the mental efforts to arrive at a solution, perhaps feeling a little defeated after a day spent in vain. Insight then appears at an unexpected moment, almost as though the mind is working unconsciously and without direction, and offers a brilliant method for victory. The mind must have some unconscious ability to process and connect information regardless of our directed attention to achieve moments such as this. Seemingly unconnected pieces of information are re-routed and brought to our attention in the context of the previous problem. Thus could there be a neurobiological basis for insight? One that is able to facilitate a behind-the-scenes process?

Perhaps insight is encouraged by the physical storage and structure of neural networks. In the case of Archimedes, the solution was prompted by the mundane task of taking a bath; superficially unrelated to the problem, but with the value of its properties amplified by a common neural pathway (low bathwater – insert leg – raised bathwater, analogous to volumes and matter in general). That is, the neural pathways activated by taking a bath are somehow similar to those activated by rumination on the problem at hand. Alternatively, the unconscious mind may be able to draw basic cause-and-effect conclusions which are then boosted to the forefront of our minds if they are deemed useful (i.e. immediately relevant to the task being performed). Whatever the case may be, it seems that at times our unconscious minds are smarter than our conscious attention.

The real question is whether insight is an intangible state of mind (à la ‘getting into the zone’) that can be turned on and off (thus making it useful for extending humanity’s mental capabilities), or whether it is merely a mental byproduct of overcoming a challenge (a hormonal response designed to encourage such thinking in the future). Can insight be induced by manipulating the subject’s neuronal composition and the characteristics of the environment (making it conducive to insight), or is it simply an evolved response that serves a behaviourally reinforcing purpose?

Undoubtedly the agent’s environment plays a part in determining the likelihood of insight occurring. Taking personal preferences into account (does the person prefer quiet spaces for thinking?), the characteristics of the environment could hamper the induction of such a mental state if they are sufficiently irritating to the individual. Insight may also be closely linked with intelligence and, depending on one’s conception of intelligence, with neurological structure (if one assumes a strictly biological basis). If this postulate is taken at face value, we arrive at the conclusion that the degree of intelligence is directly related to the likelihood of insight, and perhaps also to the ‘quality’ of the insightful event (i.e. a measure of its brilliance relative to inputs such as the amount of available information and the difficulty of the problem).

But what of day-to-day insight? It seems to crop up in all sorts of situations. In this context, insight might require a grading scale for its level of brilliance if the term is to be justified in more menial situations. Think of that moment when you forget a particular word and, try as you might, cannot remember it for the life of you. Recall also the flash of insight where the answer is simply handed to you on a platter, without any conscious effort of retrieval. Paradoxically, it seems that the harder we try to solve the problem, the more difficult it becomes. Is this due to efficiency problems such as a ‘bottleneck’ in information transfer, to personality traits such as performance anxiety and frustration, or to some underlying unconscious process that retrieves information without conscious direction?

Whatever the case may be, our scientific knowledge on the subject is distinctly lacking, so an empirical inquiry into the matter is more than warranted (if it has not already been commissioned). Psychologically, insight could be tested experimentally by giving subjects a problem to solve while manipulating the level of information provided (e.g. ‘clues’) and its relatedness to the problem, with intelligence taken into account (perhaps two groups, high and low); a rough sketch of such a design follows. This may help to uncover whether insight is a matter of information processing or something deeper. If science can learn how to artificially induce a mental state akin to insight, the benefits for a positive-futurist society would be grand indeed.
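
As a purely illustrative sketch (the factors, levels and outcome measures below are my own assumptions, not an established protocol), such a factorial design might be organised along these lines:

```python
# Hypothetical sketch of a factorial 'insight' experiment; nothing here is an
# established protocol, just one way of organising the design described above.
import itertools

# Two manipulated factors, plus one measured grouping variable (intelligence is
# screened beforehand with a standard test, not randomly assigned).
MANIPULATED = {
    "clue_count": ["few", "many"],        # level of information provided
    "clue_relatedness": ["low", "high"],  # how related the clues are to the problem
}
GROUPING = {"intelligence": ["low", "high"]}

# Cross all factor levels to enumerate the eight cells of the design.
cells = [
    dict(zip(list(MANIPULATED) + list(GROUPING), combo))
    for combo in itertools.product(*MANIPULATED.values(), *GROUPING.values())
]

for cell in cells:
    print(cell)

# Outcome measures might include time-to-solution and a self-reported 'aha'
# rating, compared across cells to see whether insight tracks information
# level, relatedness, intelligence, or some interaction of the three.
```

Whether the ‘aha’ moment can be captured by such coarse measures is of course an open question; the point is only that the hypothesis is testable in principle.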

Closely tied to our conceptions of morality, conspiracy occurs when the truth is deliberately obscured. It is often intimately involved with, and precipitated by, political entities that seek to minimise the negative repercussions of that truth becoming public knowledge. But what exactly does a conspiracy involve? According to numerous examples from popular culture, conspiracies arise from smaller, constituent and autonomous units within governmental bodies and/or military organisations, and usually involve some degree of ‘cover-up’: deliberate misinformation or clouding of the events that actually took place. Such theories, while potentially having some credible basis, are for the most part ridiculed as neurotic fantasies with no grounding in reality. How then do individuals maintain such obviously false ideas in the face of societal pressure? What are the characteristics of a ‘conspiracy theorist’, and how do these traits distinguish them from society as a whole? What do conspiracy theories tell us about human nature? These are the questions I would like to explore in this article.

As a child I was intensely fascinated by various theories of alien activity on Earth. Surely a cliché in today’s world, but the alleged events at Roswell, Tunguska and Rendlesham Forest are a conspiracist’s dream. Fortunately I no longer give these events any factual credence; as I have aged and matured, so too has my ability to examine evidence rationally (something conspiracy theorists seem unable to accomplish). Introspection on my childhood motivations for believing these theories potentially reveals key characteristics of believers in conspiracy. Aliens were a subject of great personal fear when I was young, encouraging a sort of morbid fascination and a desire to understand and explain (perhaps in an attempt to regain some control over entities that could supposedly appear at will). Indeed, a fear of alien abduction seems merely the modern reincarnation of older childhood fears, such as goblins and demons. Coupled with the ‘pseudo-science’ that accompanies conspiracy theories, it is no wonder that the young and otherwise impressionable are quickly mesmerised and enlisted into the cause. A strong emotional bond connects the beliefs with the evidence in an attempt to relieve uncomfortable feelings.

Conspiracy theories may act as a quasi-scientific attempt to explain the unknown, not unlike religion (and perhaps utilising the same neurological mechanisms). While a child could be excused for believing such fantasies, it is intriguing how adults can maintain and perpetuate wild conspiracy beliefs without regret. Cognitive dissonance may act as an underlying regulator and maintainer of such beliefs, in that the more radical they become, the more strongly they are subscribed to (in an attempt to minimise the psychological discomfort that internal hypocrisy brings). But where do these theories come from? Surely there must be at least some factual basis for their creation. Indeed there is; however, the evidence is often misinterpreted, or there is sufficient cause to distrust the credibility of the information (in light of the deliverer’s past history). We therefore have two main factors that determine whether information will be interpreted as a conspiracy: the level of trust the individual places in the information source (taking into account past dealings with that agent, personality, and the presence of neurotic disorders), and the degree of ambiguity in the events themselves (a personal interpretation that differs from what is reported, or a perceptual experience sufficiently vivid to cause disbelief in the alternative explanation).

Taking the alleged crash of an alien craft at Roswell as a case in point, it becomes obvious where in the chronology of events the conspiracy began to develop, and why. Roswell also demonstrates the importance of maintaining trust in authority: the initial headline ‘Flying Disc Recovered By USAF’ in a local newspaper was quickly retracted and replaced with a more mundane and uninteresting ‘weather balloon’ explanation. Reportedly, this explanation was accepted at the time, and all claims of alien spacecraft were forgotten until the 1970s, some 30 years after the actual event. The conspiracy was revitalised by the efforts of a single individual (perhaps seeking his own fifteen minutes of fame), demonstrating the power of one person’s belief when supported by others in authority (the primary researcher, Friedman, was a nuclear physicist and respected writer). Coupled with conveniently ambiguous circumstantial evidence and an aggressive interpretation of it, the alleged incident at Roswell has since risen to global fame. Taken in its historical context (the aftermath of WW2 and the beginnings of the Cold War, with a corresponding increase in top-secret military projects), it is no wonder that imagination began to replace reality; people now had a means of attributing a cause and explanation to something they clearly had no substantiated understanding of. There was also a ready catalyst for thinking that governments engaged in trickery, given the numerous special operations conducted clandestinely and quickly covered up when things went awry (e.g. the Bay of Pigs).

Thus the power of conspiracy is demonstrated. Originating from a single individual’s private beliefs, the fable seems to strike a common chord in those susceptible to it. As epitomised by Mulder’s office poster in The X-Files, people ‘want to believe’. That is, the hypocrisy of maintaining such obviously false beliefs is downplayed through a conscious effort to misinterpret counter-evidence and to emphasise the minimal details that support the theory. As mentioned above, pseudo-science does wonders to support conspiracy theories and increase their attractiveness to those who would otherwise discount the proposition. By merging the harsh reality of science with the obvious fantasy that is the subject matter of most conspiracies, people gain a semi-plausible framework within which to construct their theories and establish consistency in defending their position. The phenomenon is quite similar to religion: the misuse and misinterpretation of ‘evidence’ to satisfy humanity’s desire to regain control over the unexplainable, and to support a hidden agenda (distrust of authority).

There is little that distinguishes conspiracy theorists from religious fundamentalists; both share a common bond in their single-mindedness and perceived superiority over the ‘disbelievers’. But there are subtle differences. Conspiracy theorists undertake a lifelong crusade to uncover the truth – an adversarial relationship develops in which the theorist is elevated to a level of moral and intellectual superiority (at having uncovered the conspiracy and thwarted any attempt at deception). The religious, on the other hand, seem to take their gospel at face value, perhaps at a deeper level and with greater certainty than the theorists (perhaps owing to religion’s much longer history and firm establishment within society). The point is that while there may be such small differences between the two groups, the underlying psychological mechanisms could be quite similar; they certainly seem related through their common grounding in our belief system.

Psychologically, conspiracies are thought to arise for a number of reasons. As already mentioned, cognitive dissonance is one mechanism that may perpetuate these beliefs in the face of overwhelming contradictory evidence. The psychoanalytic concept of projection is another theorised catalyst in the formulation of conspiracy theories: the theorist subconsciously projects their own perceived vices onto the target in the form of conspiracy and deception, so that the conspirator becomes an embodiment of what the theorist despises, regardless of the objective truth. A second leading psychological cause is the tendency to apply ‘rules of thumb’ to social events. Humans believe that significant events must have significant causes, such as the death of a celebrity, and there has been no shortage of such occasions even in recent months, with the untimely deaths of Hollywood actors and local celebrities. Such events rock the foundations of our worldviews, often to such an extent that artificial causes are attributed to reassure ourselves that the world is predictable (even if the resulting theory is so artificially complex that any plausibility quickly evaporates).

It is interesting to note that the capacity to form beliefs based on large amounts of imagination and very little fact is present within most of us. Take a moment to think about what went through your mind the day the twin towers came down, or when Princess Diana was killed. Did you formulate some radical postulations based on your own interpretations and hidden agendas? For the vast majority of us, time proves the ultimate adjudicator and dismisses fanciful ideas out of hand. But for some, the attraction of having one up on their fellow citizens, of having uncovered some secret ulterior motive, reinforces such beliefs until they become infused with the person’s sense of identity. The truth is nice to have, but some things in life simply do not have explanations rooted in the deception of some higher power. Random events do happen, without any need for a hidden, omnipresent force dictating events from behind the scenes.

PS: Elvis isn’t really dead, he’s hanging out with JFK at Area 51 where they faked the moon landings. Pardon me whilst I don my tin-foil hat, I think the CIA is using my television to perform mind control…

The topic of free-will is one of the largest problems facing modern philosophers. An increasing empirical onslaught has done little to clear these murky waters. In actuality, each scientific breakthrough has resulted in greater philosophical confusion, whether due to the impractically specialised knowledge base needed to interpret the results or to counter-intuitive outcomes (e.g. the readiness potential, where brain activity precedes conscious action). My own attempts to shed some light on the matter are equally feeble, which has precipitated the present article. What is the causal nature of the universe? Is each action determined and directly predictable from a sufficiently detailed starting point, or is there a degree of inherent uncertainty? How can we reconcile the observation that free-will appears to be a valid characteristic of humanity with mounting scientific evidence to the contrary (e.g. the pursuit of a Grand Unified Theory)? These are the questions I would like to discuss.

‘Emergent’ seems to be the latest buzzword in popular science. While the word is appealing when describing how complexity can arise from relatively humble beginnings, it does very little to actually explain the underlying process. The two states are simply presented on a platter, the lining of which is composed of fanciful ‘emergent’ conjurings. While there is an underlying science to the process, involving dynamic systems (modelled on biological growth and movement), there does seem to be an element of hand-waving and mystique.

This state of affairs does nothing to help the current philosophical floundering. Intuitively, free-will is an attractive feature of the universe. People feel comfortable knowing that they have a degree of control over the course of their lives, and a loss of such control could even be construed as a facilitator of mental illness (depression, bipolar disorder). The attempts of science to develop a unified theory of complete causal prediction therefore seem to undermine our very nature as human beings. Certainly, some would embrace the notion of a deterministic universe with open arms, happy to put uncertainty to an end. However, one would do well (from a eudaimonic point of view) to cognitively reframe anxiety about the future into an expectation of surprise and anticipation of the unknown.

While humanity is firmly divided over its preference for a predictable or an uncertain universe, the problem remains that we appear to have a causally determined universe containing individuals with apparent freedom of choice and action. Quantum theory has undermined determinism and causality to an extent, with the phenomenon of spontaneous vacuum energy supporting the possibility of events occurring without any obvious cause. Such evidence is snapped up happily by proponents of free-will with little regard for its real-world plausibility. This is another example of philosophical hand-waving, and the real problem involves a form of question begging: a circular argument whose premise requires a proof of itself in order to remain valid. For example, the following argument is often used (a rough formalisation of its structure follows the list):

  1. Assume quantum fluctuations really are indeterminate in nature (no underlying causality à la ‘String Theory’).
  2. Free-will requires indeterminacy as a physical prerequisite.
  3. Therefore, quantum fluctuations are responsible for free-will.
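
To make the gap explicit, here is a rough formalisation of my own (a sketch, not drawn from any particular source), writing Q for ‘quantum events are indeterminate’, I for ‘indeterminacy exists’ and F for ‘free-will exists’:

```latex
% Author's own sketch of the argument's structure.
\begin{align*}
\text{P1:}\quad & Q              && \text{(quantum events are indeterminate)}\\
\text{P2:}\quad & F \Rightarrow I && \text{(free-will requires indeterminacy)}\\
\text{C:}\quad  & Q \Rightarrow F && \text{(quantum indeterminacy produces free-will)}
\end{align*}
```

Since Q is merely an instance of I, the conclusion only follows if indeterminacy were also sufficient for free-will; the premises establish only that it is necessary. The sufficiency is assumed rather than proven, which is precisely the circularity complained of above.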

To give credit where it is due, the actual arguments used are more refined than the one outlined above, but the basic structure is similar. Basic premises can be stated and postulates put forward describing the possible form of neurological free-will; however, as with most developing fields, the supporting evidence is scant at best. To make matters worse, quantum theory has shown that human intuition is often not the best guide when attempting an explanation.

However, if we work with what we have, perhaps something useful will result. This includes informal accounts such as anecdotal evidence. Consideration of such evidence has led me to two ‘maxims’ that seem to summarise the picture regarding determinism and free-will.

Maxim one: the degree of determinism within a system depends upon the scale of measurement; a macro form of measurement yields a predominantly deterministic outcome, while a micro form of measurement yields an outcome that is predominantly ‘free’ or unpredictable. In other words, determinism and freedom can be directly reconciled and coexist within the same construct of reality. Rather than existing as two distinctly separate entities, these universal characteristics should be reconceptualised as two extremes on a sliding scale of some fundamental quality. Akin to Einstein’s general relativity, the notions of determinism and freedom are relative to the observer: how we examine the fabric of reality (at large or small scale) results in a worldview that is either free or constrained by predictability. Specifically, quantum-scale measurements allow for an indeterministic universe, while larger-scale phenomena become progressively easier to predict (with a corresponding decrease in the precision of the measurement). In short, determinism (or free-will) is not a physical property of the universe but a characteristic of perception and an artifact of the measurement method used. While this maxim seems commonsensical and almost obvious, I believe the idea that both determinism and free-will are reconcilable features of the universe is a valid proposition that warrants further investigation.
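
As a purely illustrative sketch of this scale-dependence (my own toy example, not drawn from physics), consider nothing more exotic than coin flips: each individual flip is maximally unpredictable, yet the average over many flips is almost perfectly predictable.

```python
# Toy illustration of scale-dependent predictability (not a physical model).
# Micro level: a single fair coin flip is unpredictable.
# Macro level: the average over many flips is almost perfectly predictable.
import random

def macro_average(n_events: int) -> float:
    """Average of n_events independent fair coin flips (0 or 1)."""
    return sum(random.randint(0, 1) for _ in range(n_events)) / n_events

for scale in (1, 10, 1_000, 100_000):
    samples = [macro_average(scale) for _ in range(5)]
    spread = max(samples) - min(samples)
    print(f"scale={scale:>6}: spread between repeated measurements = {spread:.3f}")
# As the scale of measurement grows, the spread shrinks towards zero: the same
# underlying randomness looks 'free' at the micro level and 'determined' at the
# macro level.
```

Nothing mysterious is added at the macro level; the apparent determinism is simply a statistical consequence of aggregation, which is the spirit of the maxim.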

Maxim two: indeterminacy and free-will are naturally occurring results that emerge from the complex interaction of a sufficient number of deterministic systems (actual mechanisms unknown). Once again we are falling back on the explanatory scapegoat of ‘emergence’, but its use is partially justified in the light of empirical developments. For example, investigations into fractal patterns and the modelling of chaotic systems seem to support the existence of emergent complexity. Fractals are generated from a finite set of definable equations, yet they produce intensely complicated geometric figures with infinite regress, the surface features undulating with each magnification (interestingly, fractal patterns occur naturally in the physical world, for instance in biological growth patterns and magnetic field lines). Chaos is a similar phenomenon: a system beginning from reasonably humble initial circumstances becomes, through an amalgamation of interfering variables, indeterminate and unpredictable overall (e.g. weather patterns). Perhaps this is the mechanism of human consciousness and freedom of will: individual (and deterministic) neurons contributing en masse to an overall emergent system that is unpredictable. As a side note, such a position also supports the possibility of artificial intelligence; build something sufficiently complex and ‘human-like’ consciousness and freedom will result.
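
The ‘deterministic parts, unpredictable whole’ idea is easy to demonstrate with a toy system. The sketch below uses the logistic map, a textbook example of deterministic chaos; it is only an illustration, not a model of neurons or of the brain.

```python
# Toy illustration of deterministic chaos: the logistic map.
# The update rule is completely deterministic, yet trajectories starting from
# nearly identical points soon become effectively unpredictable.
def logistic_trajectory(x0, r=3.9, steps=40):
    """Iterate x -> r * x * (1 - x), a fully deterministic rule."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.200000)
b = logistic_trajectory(0.200001)  # starting point differs by one part in a million

for step in (0, 10, 20, 30, 40):
    print(f"step {step:>2}: {a[step]:.6f} vs {b[step]:.6f} "
          f"(difference = {abs(a[step] - b[step]):.6f})")
# Within a few dozen iterations the two trajectories bear no resemblance to one
# another, despite being generated by the same simple deterministic rule.
```

If anything like this behaviour scales up to networks of neurons, a system could be fully deterministic at the level of its parts while remaining unpredictable in practice at the level of the whole.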

The two maxims proposed may seem quite obvious on cursory inspection, but it can be argued that the proposal of a universe in which determinism and freedom of will form two alternative interpretations of a common, underlying reality is unique. Philosophically, the topic is difficult to investigate and discuss owing to the limits of current empirical knowledge and the increasing requirement for specialised technical insight into the field.

The ultimate goal of modern empiricism is to reduce reality to a strictly deterministic foundation. In keeping with this aim, experimentation hopes to arrive at physical laws of nature that are increasingly accurate and general. Quantum theory has put this inexorable march on hold while futile attempts are made to circumvent the obstacle that is the uncertainty principle.

Yet perhaps there is a light at the end of the tunnel, however dim the journey may be. Science may yet produce a grand unified theory that reduces free-will to causally valid, ubiquitous determinism. More likely, as theories of free-will come closer to explaining its aetiology, we will find a clear and individually applicable answer receding frustratingly into the distance. From a humanistic perspective, it is to be hoped that some degree of freedom is preserved in this way. After all, the freedom to act independently and an uncertainty about the future are what make life worth living!