
The human brain and the Internet share a key feature in their layout: a web-like structure of individual nodes acting in unison to transmit information between physical locations. In brains we have neurons, composed in turn of myelinated axons and dendrites. The Internet is built from similar entities, with connections such as fibre-optic and Ethernet cabling acting as the transport medium for information, while computers and routers act as originators and gateways (boosting and re-routing) of that information.

How can we describe the physical structure and complexity of these two networks? Does this offer any insight into their similarities and differences? What is the plausibility of a conscious Internet? These are the questions I would like to explore in this article.

At a very basic level, both networks are organic in nature (surprisingly, in the case of the Internet); that is, they are not the product of a ubiquitous ‘designer’ and are given the freedom to evolve as their environment sees fit. The Internet grows without a directed plan; new nodes and capacity are added haphazardly. The naturally evolved topology of the Internet is one that is distributed: the destruction of nodes has little effect on the overall operational effectiveness of the network. Each node has multiple connections, resulting in an intrinsic redundancy whereby traffic is automatically re-routed to its destination via alternate paths.

We can observe similar behaviour in the human brain. Neurological plasticity serves a function akin to the distributed nature of the Internet. Following injury to regions of the brain, adjacent areas can compensate for lost abilities by restructuring neuronal patterns. For example, deficits from injuries to the motor area of the frontal cortex can be minimised when adjacent regions ‘re-learn’ tasks that were lost as a result of the injury. While such recoveries are entirely possible with extensive rehabilitation, two key factors determine the likelihood and efficiency of the process: the severity of the injury (the percentage of brain tissue destroyed and its location) and, following from this, the length of the recovery period. These factors introduce the first discrepancy between the two networks.

Unlike the brain, the Internet is resilient to attacks on its infrastructure. Local downtime is a minor inconvenience, as traffic moves around such bottlenecks by taking the next fastest path available. Destruction of multiple nodes has little effect on the overall web of information. Users may lose access to, or experience slowness in, certain areas, but compared to the remainder of possible locations (not to mention redundancies in content – simply obtain the information elsewhere) such lapses are momentary inconveniences. But are we suffering from a lack of perspective when comparing the brain and the virtual world? Perhaps the problem is one of scale. The destruction of nodes (computers) could instead be interpreted in the brain as the removal of individual neurons. If one takes this proposition, the differences begin to lose their force.
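The re-routing behaviour described here can be sketched as a toy graph search: knock out a node, and a breadth-first search still finds an alternate path. The topology and node names below are purely illustrative, not a model of any real network.

```python
from collections import deque

def shortest_path(graph, start, goal, failed=frozenset()):
    """Breadth-first search that ignores failed nodes."""
    if start in failed or goal in failed:
        return None
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for nxt in graph.get(node, ()):
            if nxt not in seen and nxt not in failed:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no route survives the failures

# A small mesh with redundant links (hypothetical topology).
net = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C", "E"],
    "E": ["D"],
}

print(shortest_path(net, "A", "E"))                # route through B
print(shortest_path(net, "A", "E", failed={"B"}))  # B down: re-routed via C
```

Only when every redundant path is severed (here, both B and C) does traffic actually stop, which is the intrinsic redundancy the paragraph describes.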

An irrefutable difference, however, arises when one considers both the complexity and the purpose of the two networks. The brain contains some 100 billion neurons, whilst the Internet comprises a measly 1 billion users by comparison (with users roughly equating to the number of nodes, or access terminals physically connected to the Internet). Brains are the direct product of evolution, shaped specifically to keep the organism alive in an unwelcoming and hostile environment. The Internet, on the other hand, is designed to accommodate a never-ending torrent of expanding human knowledge. Thus the dichotomy in purpose between the two networks is quite distinct: the brain focuses on reactionary and automated responses to stimuli, while the Internet aims to store information and process requests for its delivery to the end user.

Again we can take a step back and consider the similarities of these two networks. Looking at topology, it is apparent that the distributed nature of the Internet is similar to the structure and redundancy of the human brain. In addition, the Internet is described as a ‘scale-free’ or power-law network, meaning that a small percentage of highly connected nodes (hubs) accounts for a very large percentage of the overall traffic flow. In effect, a targeted attack on these hubs could fragment the entire network. The brain, by comparison, appears to be organised into distinct and compartmentalised regions; target just a few, or even one, of these collections of cells and the whole network collapses.
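The ‘scale-free’ claim can be illustrated with a toy preferential-attachment simulation in the Barabási–Albert spirit (a sketch, not a model of the real Internet): nodes that attract links early keep attracting them, so a small minority of hubs ends up holding a disproportionate share of all connections.

```python
import random

def grow_scale_free(n, m=2, seed=1):
    """Toy preferential-attachment growth: each new node links to m
    existing nodes chosen in proportion to their current degree,
    so early, well-connected nodes snowball into hubs."""
    rng = random.Random(seed)
    edges = [(0, 1)]
    # 'ends' lists every edge endpoint; drawing uniformly from it is
    # equivalent to sampling nodes weighted by their degree.
    ends = [0, 1]
    for new in range(2, n):
        targets = set()
        while len(targets) < min(m, new):
            targets.add(rng.choice(ends))
        for t in targets:
            edges.append((new, t))
            ends += [new, t]
    return edges

edges = grow_scale_free(2000)
degree = {}
for a, b in edges:
    degree[a] = degree.get(a, 0) + 1
    degree[b] = degree.get(b, 0) + 1

top = sorted(degree.values(), reverse=True)
hub_share = sum(top[:len(top) // 10]) / sum(top)
print(f"top 10% of nodes hold {hub_share:.0%} of all link endpoints")
```

Removing those few hubs would disconnect far more of the graph than removing the same number of random nodes, which is exactly the vulnerability of scale-free networks noted above.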

It would be interesting to empirically investigate the hypothesis that the brain is also a scale-free network whose connectivity follows a power law. Targeting the thalamus (a central hub through which sensory information is relayed) might have the same devastating effect on the brain as destroying the ICANN infrastructure responsible for coordinating domain name assignment would have on the Internet.

As mentioned above, the purposes of these two networks differ, yet they share the common bond of processing and transferring information. At such a superficial level we see that the brain and the Internet are merely storage and retrieval devices, upon which the user (or directed thought process) is sent on a journey through a virtual world towards an intended target (notwithstanding the inevitable sidetracks along the way!). Delving deeper, the differences in purpose act as a deterrent when one considers the plausibility of consciousness and self-awareness.

Which brings us to the cusp of the article. Could the Internet, given sufficient complexity, become a conscious entity in the same vein as the human brain? Almost immediately the hypothesis seems dashed by its rebellion against common sense. Surely it is impossible to propose that a communications network based upon binary machines and internet protocols could ever achieve a higher plane of existence. But the answer might not be as clear-cut as one would like to believe. Controversially, both networks could be said to be controlled by indeterminate processes. The brain, at its very essence, is governed by quantum unpredictability. Likewise, activity on the Internet is directed by self-aware, indeterminate beings (which are, in turn, the result of quantum processes). At what point does the flow of information over a sufficiently complex network result in an emergent complexity, most notably characterised by a self-aware intelligence? Just as neurons react to incoming electrical pulses of information, so too do the computers of the Internet pass along packets of data. Binary code is equated with action potentials: either information is transmitted or it is not.
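The analogy between action potentials and binary signalling can be made concrete with a McCulloch–Pitts-style threshold unit; the weights, inputs and threshold below are arbitrary toy values chosen for illustration.

```python
def fires(inputs, weights, threshold=1.0):
    """All-or-nothing unit in the McCulloch-Pitts spirit: the weighted
    input sum either crosses threshold (spike, 1) or it doesn't (0) --
    the same binary alphabet a packet-switched network speaks."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Two excitatory inputs and one inhibitory input (toy weights):
print(fires([1, 1, 0], [0.6, 0.6, -1.0]))  # both excitatory inputs active
print(fires([1, 1, 1], [0.6, 0.6, -1.0]))  # inhibition suppresses the spike
```

Whether the signal is a spike down an axon or a packet across a router, the elementary event is the same: transmit or stay silent.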

Perhaps the most likely (and worrying) outcome in a futurist world would be the integration of an artificial self-aware intelligence with the Internet. Think Skynet from the Terminator franchise. In all likelihood such an agent would have the tools at its disposal to hijack the Internet’s constituent nodes and reprogram them in such a fashion as to facilitate the growth of an even greater intelligence. The analogy here is the linking of human minds: if that were possible, the resulting intelligence would be great indeed – imagine a distributed network of humanity, each individual brain linked to thousands of others in a grand web of shared knowledge and experience.

Fortunately such a doomsday outlook is most likely constrained within the realms of science fiction. Reality tends to have a reassuring banality about it that prevents the products of human creativity from becoming something more solid and tangible. Whatever the case may be in regards to the future of artificial intelligence, the Internet will continue to grow in complexity and penetration. As end user technology improves, we take a continual step closer towards an emergent virtual consciousness, whether it be composed of ‘uploaded’ human minds or something more artificial in nature. Let’s just hope that a superior intelligence can find a use for humanity in such a future society.

A recurring theme and technological prediction of futurists is one in which human intelligence supersedes that of the previous generation through artificial enhancement. This is a popular topic on the Positive Futurist website maintained by Dick Pelletier, and one which provides food for thought. Mr Pelletier outlines a near future (2030s) where a combination of nanotechnology and insight into the inner workings of the human brain facilitate an exponential growth of intelligence. While the accuracy of such a prediction is open to debate (specifically the technological possibilities of successful development within the given timeframe), if such a rosy future did come to fruition what would be the consequences on society? Specifically, would an increase of average intelligence necessarily result in an overall improvement to quality of life? If so, which areas would be mostly affected (eg morality, socio-economic status)? These are the questions I would like to explore in this article.

The main argument provided by futurists is that technological advances relating to nano-scale devices will soon be realised and implemented throughout society. By utilising these tiny automatons to the largest extent possible, it is thought that both disease and aging could be eradicated by the middle of this century. This is due to the utility of nanobots, specifically their ability to carry out pre-programmed tasks in a collective and automated fashion without any conscious awareness on behalf of the host. In essence, nano devices could act as a controllable extension of the human body, giving health professionals the power to monitor and treat throughout the organism’s lifespan. But the controllers of these instruments need to know what to target and how best to direct their actions; a possible sticking point in the futurists’ plan. In all likelihood, however, such problems will prove to be only temporary hindrances, overcome through extensive testing and development.

Assuming that a) such technology is possible and b) it can be controlled to produce the desired results, the future looks bright for humanity. By further extending nanotechnology with cutting-edge neurological insight, it is feasible that intelligence could be artificially increased. The possibility of artificial intelligence and the development of an interface with the human mind almost ensures a future filled with rapid growth. To this end, an event aptly named the ‘technological singularity’ has been proposed, which outlines the extension of human ability through artificial means. The singularity describes the point at which innovation becomes self-sustaining; in short, humankind could advance technologically faster than the rate of human input. While the plausibility of such an event is open to debate, it does seem feasible that artificial intelligence could assist us in developing new and exciting breakthroughs in science. If conscious, self-directed intelligence were to be artificially created, this might assist humanity even further; perhaps the design of specific minds would be possible (need a physics breakthrough? Just create an artificial Einstein). Such an idea hinges entirely on the ability of neuroscientists to unlock the secrets of the human brain and allow the manipulation or ‘tailoring’ of specific abilities.

While the jury is still out on how such a feat will be made technologically possible, a rough outline of the methodologies involved in artificial augmentation could be enlightening. Already we are seeing the effects of a society increasingly driven by information systems. People want to know more in a shorter time; in other words, to increase efficiency and volume. To cope with the torrent of information available on various mediums (the internet springs to mind), humanity relies increasingly on ways to filter, absorb and understand stimuli. We are seeing not only a trend in artificial aids (search engines, database software, larger networks) but also a changing pattern in the way we scan and retain information. Internet users are now forced to make quick decisions and scan superficially at high speed to obtain information that would otherwise be lost amidst the backlog of detail. Perhaps this is one way in which humanity is guiding the course of evolution and retraining the mind’s basic instincts away from more primitive methods of information gathering (perhaps it also explains our parents’ ineptitude for anything related to the IT world!). This could be one of the first targets for augmentation: increasing the speed of information transfer via programmed algorithms that fuse our natural biological mechanisms of searching with the power of logical, machine-coded functions. Imagine being able to combine the biological capacity to effortlessly scan and recognise facial features with the speed of computerised programming.

How would such technology influence the structure of society today? The first assumption that must be made is the universal implementation and adoption of such technologies by society. Undoubtedly there will be certain populations who refuse, for whatever reason, most likely a perceived conflict with their belief system. It is important to preserve and respect such individuality, even if it means that these populations will be left behind in terms of intellectual enlightenment. Critics of future societies and futurists in general argue that a schism will develop, akin to the rising disparities in wealth distribution present within today’s society. In counter-argument, I would respond that an increase in intelligence would likewise cause a global rise in morality. While this relationship is entirely speculative, it is plausible to suggest that a person’s level of moral goodness is at least related (if not directly) to their intelligence.

Of course, there are notable exceptions to this rule whereby intelligent people have suffered from moral ineptitude; however, an increased neurological understanding and a practical implementation of ‘designer’ augmentations (as they relate to improving morality) would negate the possibility of a majority ‘superclass’ that persecutes groups of ‘naturals’. At the very worst, there may be a period of unrest while the majority of the population catches up (in terms of perfecting the implantation/augmentation techniques and achieving the desired level of moral output). Such innovations may even act as a catalyst for developing a philosophically sound model of universal morality; something which the next generation of neurological ‘upgrades’ could, in turn, implement.

Perhaps we are already in the midst of our future society. Our planet’s declining environment may hasten the development of such augmentation to improve our chances of survival. Whether this process involves the discarding of our physical bodies for a more impervious, intangible machine-based life or otherwise remains to be seen. With the internet’s rising popularity and increasing complexity, a virtual ‘Matrix-esque’ world in which such programs could live might not be so far-fetched after all. Whatever the future holds, it is certainly an exciting time in which to live. Hopefully humanity can overcome the challenges of the future in a positive way and without too much disruption to our technological progress.

In a previous article, I discussed the possibility of a naturally occurring morality; one that emerges from interacting biological systems and is characterised by cooperative, selfless behaviours. Nature is replete with examples of such morality, in the form of in-group favouritism, cooperativity between species (symbiotic relationships) and the delicate interrelations between lower forms of life (cellular interaction). But we humans seem to have taken morality to a higher plane of existence, classifying behaviours and thoughts into a menagerie of distinct categories depending on the perceived level of good or bad done to external agents. Is morality a concept that is constant throughout the universe? If so, how could morality be defined in a philosophically ‘universal’ way, and how does it fit in with other universals? In addition, how can humans make the distinction between what is morally ‘good’ and ‘bad’? These are the questions I would like to explore in this article.

When people speak about morality, they are usually referring to concepts of good and evil; things that help and things that hinder. A simplistic dichotomy into which behaviours and thoughts can be assigned. Humans have a long history with this kind of morality. It is closely intertwined with religion, with early scriptures and the resulting beliefs providing the means by which populations could be taught the virtues of acting in positive ways. The defining feature of religious morality finds its footing in a lack of faith in the human capacity to act for the good of the many. Religions are laced with prejudicial put-downs that seek to undermine our moral integrity. But they do touch on a twinge of truth: evolution has seen the creation of a (primarily) self-centred organism. Taking the cynical view, it can be argued that all human behaviour can be reduced to purely egotistical foundations.

Thus the problem becomes not one of definition, but of plausibility (in relation to humanity’s intrinsic capacity for acting in morally acceptable ways). Is religion correct in its assumptions regarding our moral ability? Are we born into a world of deterministic sin? Theistically, it seems that any conclusion can be supported via the means of unthinking faith. However, before this religiosity is dismissed out of hand, it might be prudent to consider the underlying insight offered.

Evolution has shown that organisms are primarily interested in survival of the self (propagation of genetic material). This fits with the religious view that humanity is fundamentally concerned with first-order, self-oriented consequences, and raises the question of whether selfish behaviour should be considered immoral. But what of moral events such as altruism, cooperation and in-group behavioural patterns? These too can be reduced to the level of self-centred egoism, with the superficial layer of supposed generosity stripped away to more meagre foundations.

Morality then becomes a means to an end, that end being the fulfilment of some personal requirement. Self-initiated sacrifice (altruism) elevates one’s social standing, and provides the source for that ‘warm, fuzzy feeling’ we all know and love. Here we have dual modes of satiation: one external to the agent (increasing power and status) and one internal (an evolutionary mechanism for rewarding cooperation). Religious cynicism is again supported, in that humans seem to have great difficulty in performing authentic moral acts. Perhaps our problem lies not in the theistic stalker, laughing gleefully at our attempts to grasp at some sort of intrinsic human goodness, but rather in our use of the word ‘authentic’. If one concedes that humans may simply lack the faculties for connotation-free morality, and instead proposes that moral behaviours be measured by their main direction of action (directed inwards, selfishly, or outwards, altruistically), we arrive at a usable conceptualisation.

Reconvening, we now have a new operational definition of morality. Moral action is characterised by the focus of its attention (inward vs outward) as opposed to a polarised ‘good vs evil’, which manages to evade the controversy introduced by theism and evolutionary biology (two unlikely allies!). The resulting consequence is that we have a kind of morality which is not defined by its degree of ‘correctness’, which from any perspective is entirely relative. However, if we are to arrive at a meaningful and usable moral universal that is applicable to human society, we need to at least consider this problem of good and evil.

How can an act be defined as morally right or wrong? Considering this question alone conjures up a large degree of uncertainty and subjectivity. In the context of the golden rule (do unto others as you would have done unto yourself), we arrive at even murkier waters: what of the psychotic or sadist who prefers what society would consider abnormal treatment? In such a situation, could ‘normally’ unacceptable behaviour be construed as morally correct? If this confusion is to be avoided, it is prudent to discuss whether morality can plausibly be defined in terms of universals that are not dependent upon subjective interpretation.

Once again we have returned to the issue of objectively assessing an act for its moral content. Intuitively, evil acts cause harm to others and good acts result in benefits. But again we are falling far short of the region encapsulated by morality; specifically, acts can seem superficially evil yet arise from fundamentally good intentions. And thus we find a useful identifier (intention) with which to assess the moral worth of actions.

Unfortunately we are held back by the impervious nature of the assessing medium. Intention can only be ascertained through introspection and, to a lesser degree, psychometric testing. Intention can be elusive even to the individual, if their judgement is clouded by mental illness, biological deformity or an unconscious repression of internal causality (deferring responsibility away from the individual). With such a slippery method of assessing the authenticity and nature of a moral act, it seems unlikely that morality could ever be construed as a universal.

Universals are exactly what their name connotes: properties of the world we inhabit that are experienced across reality. That is to say, morality could be classed as a universal due to its generality among our species and its quality of superseding characterising and distinguishing features (in terms of mundane, everyday experience). If one is to class morality under the category of universals, one should modify the definition to incorporate features that are non-specific and objective. Herein lies the problem with morality: it is a variable phenomenon, with large fluctuations in individual perspective. From this point there are two main options available given current knowledge on the subject. Democratically, the qualities of a universal morality could be determined through majority vote. Alternatively, a select group of individuals or one definitive authority could propose and define a universal concept of morality. Either way, one is left with few attractive options on how to proceed.

If a universal conceptualisation of morality is to be proposed, an individual perspective is the only avenue left with the tools we have at our disposal. We have already discussed the possibility of internal vs external morality (bowing to pressures that dictate human morality is indivisibly selfish, and removing the focus from good vs evil considerations). This, combined with a weighted system that emphasises not the degree of goodness but the consideration of the self versus others, results in a useful measure of morality (for example, there will always be a small percentage of internal focus). But what are we using as the basis for our measurement? Intention has already proved elusive, as has objective observation of acts (moral behaviours can rely on internal reasoning to determine their moral worth, and some behaviours go unobserved or are ambiguous to an external agent). Discounting the possibility of a technological breakthrough enabling direct thought observation (and the ethical considerations such an invasion of privacy would bring), it is difficult to see how we can proceed.

Perhaps it is best to simply take a leap of faith, believing in humanity’s ability to make judgements regarding moral behaviour. Instead of cynically throwing away our intrinsic abilities (which surely vary in effectiveness within the population), we should trust that at least some of us have the insight to make the call. With morality, the buck definitely stops with the individual, a fact that most people have a hard time swallowing. Moral responsibility rests with the persons involved and, in combination with a universally expansive definition, makes for some interesting assertions of blame, not to mention a pressuring force to educate the populace on the virtues of fostering introspective skills.

Morality is a phenomenon that permeates through both society as a whole and also individually via the consciousness of independent entities. It is a force that regularly influences our behaviour and is experienced (in some form or another) universally, species-wide. Intuitively, morality seems to be at the very least, a sufficient condition for the creation of human groups. Without it, co-operation between individuals would be non-existent. But does morality run deeper? Is it, in fact, a necessary condition of group formation and a naturally emergent phenomenon that stems from the interaction of replicating systems? Or can morality only be experienced by organisms operating on a higher plane of existence – those that have the required faculties with which to weigh up pros and cons, engage in moral decision making and other empathic endeavors (related to theory of mind)?

The resolution to this question depends entirely on how one defines the term. If we take morality to encompass the act of mentally engaging in self-reflective thought as a means with which to guide observable behaviours (acting in either selfish or selfless interests), then the answer to our question is yes: morality seems to be inescapably and exclusively linked to humanity. However, if we tweak this definition and look at the etiology of morality – where the term draws its roots and how it developed over time – we find that even the co-operative behaviours of primitive organisms could be said to constitute some sort of basic morality. If we delve even deeper and ask how such behaviours came to be, the answer is not quite so obvious. Can a basic version of morality (observable through cooperative behaviours) result as a natural consequence of interactions beyond the singular?

When viewed from this perspective, cooperation and altruism seem highly unlikely; a system of individually competing organisms, logically, would evolve to favour the individual rather than the group. This question is especially pertinent when studying cooperative behaviours in bacteria or more complex, multicellular forms of life, as they lack a consciousness capable of considering delayed rewards or benefits from selfless acts.

In relation to humanity, why are some individuals born altruistic while others take advantage without cause for guilt? How can ‘goodness’ evolve in biological systems when it runs counter to the benefit of the individual? These are the questions I would like to explore in this article.

Morality, in the traditional, philosophical sense is often constructed in a way that describes the meta-cognitions humans experience in creating rules for appropriate (or inappropriate) behaviour (inclusive of mental activity). Morality can take on a vast array of flavours; evil at one extreme, goodness at the other. We use our sense of morality in order to plan and justify our thoughts and actions, incorporating it into our mental representations of how the world functions and conveys meaning. Morality is a dynamic; it changes with the flow of time, the composition of society and the maturity of the individual. We use it not only to evaluate the intentions and behaviours of ourselves, but also of others. In this sense, morality is an overarching egoistic ‘book of rules’ which the consciousness consults in order to determine whether harm or good is being done. Thus, it seeps into many of our mental sub-compartments; decision making, behavioural modification, information processing, emotional response/interpretation and mental planning (‘future thought’) to name a few.

As morality enjoys such a privileged omnipresence, humanity has, understandably, long sought not only to provide standardised ‘rules of engagement’ regarding moral conduct but also to explain the underlying psychological processes and development of our moral capabilities. Religion could perhaps be the first such attempt at explanation. It certainly contains many of the idiosyncrasies of morality and proposes a theistic basis for human moral capability. Religion removes ultimate moral responsibility from the individual, instead placing it upon the shoulders of a higher authority – god. The individual is tasked with simple obedience to the moral creeds passed down from those privileged few who are ‘touched’ with divine inspiration.

But this view does morality no justice. Certainly, if one does not subscribe to theistic beliefs then morality is in trouble; by this extreme positioning, morality is synonymous with religion and one definitely cannot live without the other.

Conversely (and reassuringly), in modern society we have seen that morality does exist in individuals who lack spirituality. It has been reaffirmed as an intrinsically human trait with deeper roots than the scripture of religious texts. Moral understanding has matured beyond the point of appealing to a higher being and has reattached itself firmly to the human mind. The problem with this newfound interpretation is that in order for morality to be considered a naturally emergent product of biological systems, moral evolution is a necessary requirement. Put simply, natural examples of moral systems (consisting of cooperative behaviour and within-group preference) must be observable in the natural environment. Moral evolution must be a naturally occurring phenomenon.

A thought experiment known as the ‘Prisoner’s dilemma’ succinctly summarises the inherent problems with the natural evolution of mutually cooperative behaviour. The scenario consists of two prisoners seeking an early release from jail. Each is given the choice of either a) betraying their cellmate and walking free while the other has their sentence increased – ‘defecting’ – or b) staying silent and mutually receiving a shorter sentence – ‘cooperating’. It becomes immediately apparent that for both parties to benefit, both should remain silent and enjoy a reduced incarceration period. Unfortunately – and this is the catalyst for terming the scenario a dilemma – the real equilibrium point is for both parties to betray: whatever the other chooses, betrayal offers the better individual pay-off (at best, walking free while your partner in crime remains behind with an increased sentence). In the case of humans, some sort of meta-analysis occurs, an nth-order degree of separation (thinking about thinking about thinking), with the dominant strategy resulting in betrayal by both parties.
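The dilemma's logic can be checked mechanically. Using the standard textbook pay-off values (not figures from the scenario above), defection is the best reply to either choice the other prisoner makes, even though mutual cooperation beats mutual defection:

```python
# Payoff to the row player, in years shaved off the sentence
# (higher is better). Standard textbook values, chosen for illustration.
PAYOFF = {
    ("cooperate", "cooperate"): 3,   # both stay silent
    ("cooperate", "defect"):    0,   # betrayed: longest sentence
    ("defect",    "cooperate"): 5,   # betray and walk free
    ("defect",    "defect"):    1,   # mutual betrayal
}

def best_response(their_move):
    """The row player's best move given the opponent's choice."""
    return max(("cooperate", "defect"),
               key=lambda mine: PAYOFF[(mine, their_move)])

# Defection dominates: it is the best reply to either choice...
print(best_response("cooperate"), best_response("defect"))
# ...yet mutual cooperation pays more than mutual defection.
print(PAYOFF[("cooperate", "cooperate")], PAYOFF[("defect", "defect")])
```

That gap between the dominant strategy and the mutually best outcome is precisely why the evolution of cooperation needs explaining.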

Here we have an example of the end product: an advanced kind of morality resulting from social pressures and their influence on the overall outcome (should I betray or cooperate – do I trust this person?). In order to look at the development of morality from its more primal roots, it is prudent to examine research in the field of evolutionary biology. One empirical investigation representative of the field (conducted by Aviles, 2002) involves the mathematical simulation of interacting organisms. Modern computers lend themselves naturally to the task of genetic simulation. Due to the iterative nature of evolution, thousands of successive generations live, breed and die in the time it takes the computer’s CPU to crunch through the required functions. Aviles (2002) took this approach and created a mathematical model that begins at t = 0 and follows pre-defined rules of reproduction, genetic mutation and group formation. The numerical details are irrelevant; suffice to say that cooperative behaviours emerged in combination with ‘cheaters’ and ‘freeloaders’. Thus we see the dichotomous appearance of a basic kind of morality that has evolved spontaneously and naturally, even though the individual may suffer a ‘fitness’ penalty. More on this later.

“[The results] suggest that the negative effect that freeloaders have on group productivity (by failing to contribute to communal activities and by making groups too large) should be sufficient to maintain cooperation under a broad range of realistic conditions even among nonrelatives and even in the presence of relatively steep fitness costs of cooperation” (Aviles, 2002).

Are these results translatable to reality? It is all well and good to speak of digital simulations with vastly simplified models guiding synthetic behaviour; the real test comes in the observation of naturally occurring forms of life. Discussion by Kreft and Bonhoeffer (2005) lends support to the reality of single-celled cooperation, going so far as to suggest that “micro-organisms are ever more widely recognized as social”. Surely an exaggerated caricature of the more common definition of ‘socialness’, but the analogy is appropriate. Kreft and Bonhoeffer effectively summarise the leading research in this field, and put forward the resounding view that single-celled organisms can evolve to show altruistic (cooperative) behaviours. We should hope so; without such cooperation, the multicellularity that led to the evolution of humanity could never have got started!

But what happened to those pesky mutations that evolved to look out for themselves? Defectors (choosing not to cooperate) and cheaters (choosing to take advantage of altruists) are also naturally emergent. Counter-intuitively, such groups are shown to be kept in their place by the cooperators. Too many cheaters, and the group fails through exploitation. The key lies in the dynamic nature of this process. Aviles (2002) found that in every simulation, the number of cheaters was kept in control by the dynamics of the group. A natural equilibrium developed, with the total group size fluctuating according to the ratio of cheaters to cooperators. In situations where cheaters ruled, the group size dropped dramatically, resulting in a lack of productive work and reduced reproductive rates. Thus, the number of cheaters is kept in check by the welfare of the group. It’s almost a love/hate relationship; the system hates exploiters, yet it also tolerates their existence (in sufficiently small numbers).
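This self-limiting equilibrium can be illustrated with a toy replicator model. The sketch below is only loosely inspired by the kind of simulation Aviles ran; the payoff functions are my own illustrative assumptions, not the paper's equations. The key ingredient is that a cheater's take shrinks as cooperators become scarce, because there is then less public good to exploit:

```python
def simulate(x0=0.99, b=2.0, c=0.3, steps=300):
    """Toy replicator dynamics for a cooperator/cheater mix.

    x is the cooperator fraction. Cooperators pay a cost c and everyone
    shares the public good b*x; a cheater's payoff also scales with x,
    since cheaters can only exploit the cooperators actually present
    (and crowd each other out when cooperators are scarce).
    All parameter values are illustrative assumptions.
    """
    x = x0
    history = [x]
    for _ in range(steps):
        share = b * x
        w_coop = 1 + share - c        # baseline fitness 1, minus cost of helping
        w_cheat = 1 + share * x       # exploitation dries up as cooperators vanish
        mean_w = x * w_coop + (1 - x) * w_cheat
        x = x * w_coop / mean_w       # discrete replicator update
        history.append(x)
    return history

traj = simulate()
# Starting from almost-all cooperators, cheaters invade but are then held
# at a stable interior equilibrium (near 0.816 for these parameters)
# rather than taking over the group.
print(round(traj[-1], 3))
```

Setting the two fitnesses equal gives the equilibrium condition b·x² − b·x + c = 0, whose stable root for b = 2, c = 0.3 is about 0.816 – a persistent minority of cheaters, exactly the love/hate tolerance described above.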

Extrapolating from these conclusions, a logical outcome would be the universal adoption of cooperative behaviours. There are prime examples of this in nature; bee and ant colonies, migratory birds, various aquatic species, even humans (to an extent) all work together towards the common good. The reason we don’t see this more often, I believe, is convergent evolution – different species solving the same problem from different approaches. Take flight, for example: this has been solved at separate points in history by both birds and insects. The likelihood of cooperation is also affected by external factors; evolutionary ‘pressures’ that can guide the flow of genetic development. The physical structure of the individual, environmental changes and resource scarcity are all examples of factors that can influence whether members of the same species work together.

Humanity is a prime example; intrinsically we seem to have a sense of inner morality and a tendency to cooperate when the conditions suit. The addition of consciousness complicates morality somewhat, in that we think about what others might do in the same situation, defer to group norms/expectations, conform to our own premeditated moral guidelines and are sometimes paralysed by indecisiveness. We also factor in environmental conditions, manipulating situations through false displays of ‘pseudo-morality’ to ensure our survival in the event of resource scarcity. But when the conditions are just so, humanity does seem to pull up its trousers and bind together as a singular, collective organism. When push comes to shove, humanity can work in unison. However, just as bacteria evolve cheaters and freeloaders, so too does humanity give birth to individuals who seem to lack a sense of moral guidance.

Morality must be a universal trait: a naturally emergent phenomenon that predisposes organisms to cooperate towards the common good. But just as moral ‘goodness’ evolves naturally, so too does immorality. Naturally emergent cheaters and freeloaders are an intrinsic part of the evolution of biological systems. Translating these results to the plight of humanity, it becomes apparent that such individual traits are also naturally occurring in society. Genetically, and to a lesser extent environmentally, traits from both ends of the moral scale will always be a part of human society. This surely has implications for the plans of a futurist society relying solely on humanistic principles. Moral equilibrium is ensured, at least biologically, for better or worse. Whether we can physically change the course of natural evolution and produce a purely cooperative species is a question that can only be answered outside the realms of philosophy.

In contrast to our recent discussions on religious extremism, transhumanism offers an alternative position that is no less radical yet potentially rewarding. The ideology of transhumanism is comparable to secular humanism in that both advocate the importance of individuality and personal growth. However, where these two positions diverge is in regards to the future of human evolution. In this article I would like to firstly offer a broad definition of transhumanism, followed by the arguments both for and against its implementation. Finally, I would like to discuss the possibility of society adopting a transhumanist position in order to fully realise our human potential.

Transhumanism proposes that in order to take full advantage of our natural abilities, a complete embracing of technological progress is necessary. Specifically, and where this position differs from the more conservative and broader topic of humanism, transhumanists believe that self-enhancement in pursuit of this goal through the use of emerging technology is entirely justifiable. The proposed modifications span a large variety of breakthrough technologies; transhumanists vary individually based on personal preference, although the end goal is similar. Cryogenics, mind-digitalisation, genetic engineering and bionic enhancement are all possible methods proposed to usher in a ‘post-human’ era.

A secondary goal of transhumanism (flowing as a consequence from the first) is the elimination of human suffering and inadequacy. By removing mental and physical inequalities through a process of self-directed evolution (enhancement, or prenatal genetic screening/selection), the transhumanist argues that social divides will also be eliminated. Specifically, the improvement of human faculties through cybernetic augmentation is thought to close the gaps in intellect, putting society on an equal intellectual footing. Likewise, the genetic engineering approach hopes to select for intellect and physical prowess either before birth or, through genetic modification, after it. Mind-transfer or digitalisation proposes to extend both our lifespans (indefinitely) and our mental capacities; the trade-off here is the loss of the physical.

Many transhumanists regard such enhancements as not only natural, but necessary if humanity is to truly understand the world in which we live. They argue that the natural process of evolution and ‘old-fashioned’ practice/training is too slow to equip us with the necessary skills with which to undertake research in the future. One example is space travel. Human bodies are arguably not designed for prolonged exposure to the rigors of space. Bones become brittle and radiation vastly increases the chances of cancer developing (not to mention the unknown psychological and physiological effects of permanent space-habitation). Eliminating such ‘weaknesses’ would allow humans to more efficiently conquer space by removing the need for costly habitation modules and protective shielding. But does self-augmentation create more problems than it solves?

Certainly, from a moral point of view, there is a multitude of arguments levelled at transhumanism. While the majority of these arguments hold merit, I intend to show that once the initial opposition based on emotional responses is set aside, the core principles of transhumanism really can improve the quality of life for many disadvantaged people on this planet. The attacks on transhumanism come in many different forms; I will be concentrating here on the moral implications of endorsing this position.

The threat to morality posed by transhumanism has been raised by the theistic and the scientific communities alike. The argument postulates that 1) ‘contempt of the flesh’ is immoral in the sense that rejecting our natural form and processes is also a rejection of god’s power and intent, and 2) rather than removing divides, transhumanism will actually operate in reverse, creating increased discrepancies between those who can afford to improve themselves and those who cannot – the creation of a ‘super-class’ of human and vast disparities in wielded power. The first point is easy enough to dismiss (from an atheistic point of view). Delving deeper, philosophical naturalism proposes, to a degree, that natural effects arise from natural causes; the introduction of artificial causes should therefore yield artificial effects. The problem lies in our being created from natural ‘stuff’: how, then, can we predict with any accuracy or confidence the outcome of unnatural processes? The second point proposes that democracy itself may be threatened by transhumanism. The potential for abuse by the emergent ‘superhuman’ class is easy enough to see. The only rebuttal I can offer here is the hope that self-improvement would aim to enhance not only our rational faculties but also our emotional ones – humans would naturally seek to improve their ability to empathise, cooperate and generally act in a morally acceptable manner.

The divide between the intellectually/physically rich and poor can only be closed if transhumanism is enacted uniformly. Unfortunately, the capitalist society in which we live most likely ensures that only the monetarily rich will benefit. Since money does not necessarily equate with moral goodness or intelligence, we would be in dire straits as transhumanist ideology is quickly abandoned in the pursuit of dominance and power. In this sense, transhumanism may well be, as Fukuyama (2004) branded it, the world’s most dangerous idea. The potential for great evil is dizzying. Fortunately, the reverse is also true.

Elimination of inequality is a noble goal of transhumanism, and it is attainable from two main angles of attack: through the means – universal adoption of technology that removes the necessary conditions for suffering to occur (e.g. disability, or the need for sustenance and shelter – think of uploaded minds stored on digital media) – and through the ends – augmentation and improvement that creates superior organisms living harmoniously. Perhaps this is a necessary step for humanity to fully realise its potential; taking charge of our species’ destiny in a more directed and controlled manner than blind evolution can ever hope to achieve.

But arguably, the transhumanist dream is already happening. Society, in a way, is habituating us to the changes that must occur if transhumanism is to be adopted. Psychologically and philosophically, the ideas are out there and debated regularly; the details, while not finalised, are being worked over and improved using (mostly) rational methodology. The internet and other wireless communication methods have begun the process of ‘disembodiment’ that the digitalisation of human minds surely requires. The internet has facilitated an exponential growth of non-traditional social interaction, existing mostly on the digital plane. Thus, we are already developing the mindsets and modifications to etiquette that transhumanism requires. Cosmetic surgery, while not altogether a morally appropriate example (due to its use and abuse), also marks a growing trend in society towards self-modification. On the other hand, psychological disorders such as self-harm and anorexia are salient reminders of how these trends can manifest themselves in untoward ways.

Therefore, the fate of transhumanism rests squarely on its ability to tread carefully across a moral tightrope: too liberal, and abuse is inevitable; too conservative, and its full potential goes unrealised. Left-wing supporters of transhumanism (Marvin Minsky et al) are, unfortunately, the main public face of this ideology. Their ideas are too liberal, and dangerous if used as a springboard for implementing transhumanist principles. Such examples only serve to highlight the potential for this position to be abused for personal gain: aging scientists desperate to continue life without the frailties of decaying flesh, looking to the future as a boy dreams of living out the space-age tales of science-fiction novels. This is not what transhumanism is supposed to be about. It is the practical realisation of a humanist life philosophy; how we might use the technological tools at our disposal to create a utopian society and encourage exponential individual growth.

Unfortunately, many obstacles remain in the path of a future where humanity transcends its shortcomings. Morally, the question comes down to a simple decision: why should we be afraid to improve what we currently leave to chance? Surely it is ‘more moral’ to realise the potential of every individual rather than leaving it to the roll of a die. Allowing a child a life where all opportunities are open to them, as opposed to one of disability and suffering, has to be morally preferable. The only uncertainty in this equation is whether the means justify the ends.

Transhumanist ideals must be regulated and monitored if they are to be implemented appropriately and uniformly. Just as there are people now who choose not to embrace modern technology, so too will there be people who choose not to augment themselves with improvements. Such people must be respected if transhumanism is to be morally just; they must not be relegated to a lower level of status or to the exhibits of future museums. Just as liberty was used to create the choice to proceed with technological advancement, so too must the liberty of those who choose otherwise be protected and cherished. After all, diversity is part of what makes us human in the first place. To sacrifice it for the sake of ‘progress’ would be a travesty and an ideological genocide of the worst kind.

Social aptitude is something that is often at the forefront of my mind. Not because I am particularly interested in the topic, but rather because I worry about the perceptions others hold, whether I am ‘performing’ socially at an adequate level and where the next social challenge will come from. But where do these fears and observations stem from? Maslow’s hierarchy of needs places social interaction (the need to belong) as a fundamental requirement of human nature. Evolutionary psychologists correlate brain size with group size: through a process of ‘social grooming’, bigger brains (able to keep track of more individuals) become a necessary evolutionary requirement if the individual is to remain competitive. Linguists, philosophers and evolutionary psychologists alike theorise that the growth of language arose from the need for efficient communication in large social groups. According to the philosopher Daniel Dennett, primitive ‘warning-call’ languages that used a single word for each unique event (high-pitched cry = lion approaching) quickly exhausted the minds of our ancestors, encouraging the development of a versatile language in which words can take on multiple meanings and be used interchangeably in syntactically acceptable ways.
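The combinatorial poverty of one-call-per-event signalling can be illustrated with a quick count. The vocabulary size of 20 below is an arbitrary assumption for illustration:

```python
from itertools import product

words = 20  # arbitrary vocabulary size, chosen only for illustration

# A 'warning-call' lexicon dedicates one signal to each event, so
# 20 calls can only ever describe 20 distinct situations.
warning_call_coverage = words

# A compositional language reuses the same words in ordered combinations,
# so coverage grows exponentially with utterance length.
two_word_utterances = len(list(product(range(words), repeat=2)))
three_word_utterances = len(list(product(range(words), repeat=3)))

print(warning_call_coverage, two_word_utterances, three_word_utterances)
# → 20 400 8000
```

A mind that composes messages from reusable parts thus escapes the need to memorise a dedicated signal per event, which is the pressure Dennett's account appeals to.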

The conclusive intersection reached by all of these variations on our social heritage seems to be that social interaction is a fundamental need of humanity. The questions I intend to explore in this article are: what does this conclusion mean for day-to-day life and where did it come from; should we be worried about how we (and others) are performing socially; and is such a reliance on social exchange wise in a society that is moving away from such contact, at least in the traditional sense (replacing face-to-face interactions with those of a purely distant, digital form)?

The origins of our social nature might lie in ancient survival techniques. We are relatively vulnerable creatures, and following our descent from the trees, became even more so. Mad dashes between the relative safety of foliage could have acted as the first catalyst for communal behaviour. Warning calls and proto-language enabled groups to share the task of vigilance, while at the same time allowing some individuals to exploit the warning system, using it to cheat others out of food (faking danger to eliminate competition). This actually occurs in the real world with primates. An example I can only dimly recall from an Evolutionary Psychology lecture goes a little something like this: a particular species of monkey can be observed in the wild faking the warning calls that signal an approaching predator. This drives other group members away from a newly spotted food source and allows the faker to steal it from under their noses. What is more astounding is that upon discovering this deception, the remaining group members will attack the faker and beat them to within an inch of their life! Thus the raw ingredients of human social characteristics (in a psychologically comparative sense), such as deception and a basic morality that punishes wrongdoers, seem to have their roots in the dynamics that govern large groupings of animals.

Questions that still linger on the horizon are 1) why aren’t there species at various stages of cognitive/social evolution (providing us with irrefutable proof of the missing human/primate link), and 2) what made us (or more specifically, our ancestors who diverged from the chimp lineage) so different from all other species; what acted as the catalyst for the development of bigger brains and society? The answer to 1) is easy: we killed them. Neanderthals and earlier hominids such as Australopithecus, whether due to climate change, competition with Homo sapiens or simply genes less suited to adaptability than ours, were all wiped out. It is interesting to note that Neanderthal brains were larger than those of modern humans, possibly (though not necessarily) indicating higher intelligence, although perhaps their brains were organised in a different or less efficient way (or perhaps they were simply less war-like and bloodthirsty than us!).

The second question of our uniqueness is a little tougher to explain. I believe that the combination of random genetic mutation, physical traits and environmental change all acted as evolutionary momentum towards the creation of language and society. An analogy using the imagery of mountains and channels of water aptly describes this evolutionary process. Pressures on a species to change/adapt, whether environmental or genetic, combine to create a large peak; the height of the peak is determined by the urgency of the change. For example, the sudden onset of a new ice age combined with a hairless body would cause the formation of a large peak, as the organism would be genetically selecting (through evolution) for offspring that could survive the change (a warm coat of fur is one solution). Like a flowing river, evolution follows the easiest path downhill to its ultimate destination. Again in analogy to a river, external influence can alter its course or slow its speed, sometimes with drastic results (an evolutionary dead end and extinction of the species). So quite possibly, the combination of our differing physical traits (Homo sapiens were taller and carried less muscle mass than the stocky, well-built Neanderthals) combined with some sort of environmental change (climate change, or the elimination of a primary food source – Homo sapiens were omnivores whilst Neanderthals are thought to have relied solely on meat) gave our species the evolutionary push down the mountain slope that outpaced that of our competition.
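The peak-and-river analogy can be sketched as a simple model in which selection closes a fixed fraction of the gap to the optimum each generation, with that fraction standing in for the 'height of the peak' (the urgency of the change). All numbers here are illustrative assumptions, not measurements:

```python
def evolve(trait, optimum, pressure, generations=100):
    """Each generation, selection pulls the population's mean trait toward
    the optimum; a taller 'peak' (stronger pressure) closes the gap faster.
    This is a deterministic toy model, ignoring mutation and drift."""
    history = [trait]
    for _ in range(generations):
        trait += pressure * (optimum - trait)
        history.append(trait)
    return history

gentle = evolve(0.0, 1.0, pressure=0.02)   # mild pressure: slow drift
urgent = evolve(0.0, 1.0, pressure=0.20)   # sudden ice age: rapid change

# After 100 generations the urgent lineage has all but reached the optimum,
# while the gentle one still lags well behind.
print(round(gentle[-1], 3), round(urgent[-1], 3))
```

The gap shrinks geometrically, by a factor of (1 − pressure) per generation, which is one way to picture a steep slope being descended faster than a shallow one.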

The creation of society and language is even more uncertain. The main problem is deciding whether language was a gradual process or emerged ‘all at once’. But again I digress. We now have enough background knowledge to proceed with a discussion on modern social traits and what they mean for everyday interaction.

Owing to a long history of evolutionary pressure to establish pro-social traits in the human psyche, we now have many processes that operate autonomously and automatically on a sub-conscious level. Conscious thought patterns are prone to error, creating the need for these processes to occur without our awareness. The nature of humanity – a majority acting as intrinsically ‘good’ and moral beings and a minority acting purely in their own self-interest – shaped the form and detail of most of these processes. On one hand, we require co-operative patterns of interaction, with pro-social attitudes such as conflict resolution (because fighting is a lot riskier than talking) and the fair division of resources. On the other hand, we also require a defence against deception (and the capability to commit it ourselves). A fine balance has to be struck between knowing when to take more than you need and sharing out for the greater good. In a society where all are treated as equal, communal behaviour rules. But like any evolutionary system, occasionally ‘cheaters’ will enter the fray: those that evolve patterns of behaviour and thinking that seek to take more, gorging themselves at the expense of others. A society full of such organisms is prone to failure, therefore the natural solution is to limit the number of such agents. Perhaps this formed the basis for language and self-actualisation; a defensive mechanism invented by communal individuals to defend against cheaters and test the sincerity of other communals.

So from a purely animalistic perspective, yes we should be worried about our social performance. As most of this behaviour is rooted in unconscious processes that are often beyond our control (at least if we aren’t aware that they are occurring), other individuals will unknowingly be testing each other at each interaction. At first thought this seems paralysingly frightening. Every interaction is a test, a mental duel, a theatrical performance with both agents vying for supremacy. Unconsciously we begin to form impressions based upon animal urges (are they sexually attractive, are they a cheater or a communal, are they a threat to my power) and often such stereotyping and categorising brings out the worst in human behaviour.

More to the point: as most social interaction is based upon unconscious processes that stem from millennia of evolution, is it wise for us to proceed ‘as planned’ without challenging the ways in which we exchange information with other people? Especially in a society that is moving quickly towards one that favours anonymity and deception as a way of life. The internet and information revolution have prompted a change in the way we conduct social interactions. The elimination of non-verbal communication and direct observation of physical features during socialisation is good in the sense that our animal evolutionary processes are being stunted, but bad in that this newfound anonymity makes it easier for the deviants of society to hide behind a digital veil. The world has, without a doubt, become ever more social as the internet revolution reaches full intensity, but is this at the expense of meaningful and truthful social interaction? On the digital savannah, truth flies out the window as the insecure and otherwise socially inept gain a vast arsenal with which to present themselves in a more favourable light. Avatars replace face-to-face communication, allowing people to exist in a pseudo-reality where one can take on any attributes they desire.

Is the new path of social interaction leading us to an increase in cognitive development, or is it stunting our growth and setting us back hundreds of years in social evolution? The jury is still out; what is certain is that social situations remain highly valued exchanges between individuals. This makes them no less scary, especially for those individuals who tend to bring the unconscious to the forefront of their minds (myself included). Doing so is paralysing, as the frontal lobes take over a task usually handled at an unconscious level: each word is evaluated for its social merit, the mind constantly simulating the reactions of others and hypothetically testing each sentence to ensure it makes sense. It is only through a relaxation of this monitoring process, and a relapse towards the unconscious, that conversation flows and the individual is socially effective. But as we have seen, these unconscious processes are steeped in ancient and primitive principles that have little use in a future society that aims to be inclusive, intellectualised and transcendent. One thing is certain: the changes sociology is currently experiencing must be monitored to determine whether they are accentuating dysfunction or, more positively, steering our evolutionary course away from animalistic urges and towards genuine social interactions that value the input of both members, without subtle undertones and jockeying for higher authority.