
In a previous article, I discussed the possibility of a naturally occurring morality; one that emerges from interacting biological systems and is characterised by cooperative, selfless behaviours. Nature is replete with examples of such morality, in the form of in-group favouritism, cooperation between species (symbiotic relationships) and the delicate interrelations between lower forms of life (cellular interaction). But we humans seem to have taken morality to a higher plane of existence, classifying behaviours and thoughts into a menagerie of distinct categories depending on the perceived level of good or bad done to external agents. Is morality a concept that is constant throughout the universe? If so, how could morality be defined in a philosophically ‘universal’ way, and how does it fit in with other universals? In addition, how can humans make the distinction between what is morally ‘good’ and ‘bad’? These are the questions I would like to explore in this article.

When people speak about morality, they are usually referring to concepts of good and evil. Things that help and things that hinder. A simplistic dichotomy into which behaviours and thoughts can be assigned. Humans have a long history with this kind of morality. It is closely intertwined with religion, with early scriptures and the resulting beliefs providing the means by which populations could be taught the virtues of acting in positive ways. The defining feature of religious morality is its lack of faith in the human capacity to act for the good of the many. Religions are laced with prejudicial put-downs that seek to undermine our moral integrity. But they do touch on a grain of truth; evolution has seen the creation of a (primarily) self-centred organism. Taking the cynical view, it can be argued that all human behaviour can be reduced to purely egotistical foundations.

Thus the problem becomes not one of definition, but of plausibility (in relation to humanity’s intrinsic capacity for acting in morally acceptable ways). Is religion correct in its assumptions regarding our moral ability? Are we born into a world of deterministic sin? Theistically, it seems that any conclusion can be supported through unthinking faith. However, before this religiosity is dismissed out of hand, it might be prudent to consider the underlying insight it offers.

Evolution has shown that organisms are primarily interested in survival of the self (propagation of genetic material). This fits in with the religious view that humanity is fundamentally concerned with first-order, self-oriented consequences, and raises the question of whether selfish behaviour should be considered immoral. But what of moral events such as altruism, cooperation and in-group behavioural patterns? These too can be reduced to the level of self-centred egoism, with the superficial layer of supposed generosity stripped away to reveal more meagre foundations.

Morality then becomes a means to an end, that end being the fulfilment of some personal requirement. Self-initiated sacrifice (altruism) elevates one’s social standing, and provides the source for that ‘warm, fuzzy feeling’ we all know and love. Here we have dual modes of satiation, one external to the agent (increasing power and status) and one internal (an evolutionary mechanism for rewarding cooperation). Religious cynicism is again supported, in that humans seem to have great difficulty performing authentic moral acts. Perhaps our problem here lies not in the theistic stalker, laughing gleefully at our attempts to grasp at some sort of intrinsic human goodness, but rather in our use of the word ‘authentic’. If one concedes that humans may simply lack the faculties for connotation-free morality, and instead proposes that moral behaviours be measured by their main direction of action (directed inwards, selfishly, or outwards, altruistically), we can arrive at a usable conceptualisation.

Reconvening, we now have a new operational definition of morality. Moral action is thus characterised by the focus of its attention (inward vs outward) as opposed to a polarised ‘good vs evil’, which manages to evade the controversy introduced by theism and evolutionary biology (two unlikely allies!). The consequence is that we have a kind of morality which is not defined by its degree of ‘correctness’, which from any perspective is entirely relative. However, if we are to arrive at a meaningful and usable moral universal that is applicable to human society, we need to at least consider this problem of good and evil.

How can an act be defined as morally right or wrong? Considering this question alone conjures up a large degree of uncertainty and subjectivity. In the context of the golden rule (do unto others as you would have done unto yourself), we arrive at even murkier waters; what of the psychotic or sadist who prefers what society would consider abnormal treatment? In such a situation, could ‘normally’ unacceptable behaviour be construed as morally correct? If this confusion is to be avoided, it is prudent to discuss the plausibility of defining morality in terms of universals that are not dependent upon subjective interpretation.

Once again we have returned to the issue of objectively assessing an act for its moral content. Intuitively, evil acts cause harm to others and good acts result in benefits. But again we are falling far short of the region encapsulated by morality; specifically, acts can seem superficially evil yet arise from fundamentally good intentions. And thus we find a useful identifier (intention) with which to assess the moral worth of actions.

Unfortunately we are held back by the impervious nature of the assessing medium. Intention can only be ascertained through introspection and, to a lesser degree, psychometric testing. Intention can even be elusive to the individual, if their judgement is clouded by mental illness, biological deformity or an unconscious repression of internal causality (deferring responsibility away from the individual). Therefore, with such a slippery method of assessing the authenticity and nature of the moral act, it seems unlikely that morality could ever be construed as a universal.

Universals are exactly what their name connotes; properties of the world we inhabit that are experienced across reality. That is to say, morality could be classed as a universal due to its generality among our species and its quality of superseding characterising and distinguishing features (in terms of mundane, everyday experience). If one is to class morality under the category of universals, one should modify the definition to incorporate features that are non-specific and objective. Herein lies the problem with morality; it is such a variable phenomenon, with large fluctuations in individual perspective. Given current knowledge on the subject, two main options present themselves. Democratically, the qualities of a universal morality could be determined through majority vote. Alternatively, a select group of individuals or one definitive authority could propose and define a universal concept of morality. Either way, one is left with few options on how to proceed.

If a universal conceptualisation of morality is to be proposed, an individual perspective is the only avenue left with the tools we have at our disposal. We have already discussed the possibility of internal vs external morality (bowing to pressures that dictate human morality is indivisibly selfish, and removing the focus from good vs evil considerations). This, combined with a weighted system that emphasises not the degree of goodness, but rather the consideration of the self versus others, results in a useful measure of morality (for example, there will always be a small percentage of internal focus). But what are we using as the basis for our measurement? Intention has already proved elusive, as has objective observation of acts (moral behaviours can rely on internal reasoning to determine their moral worth, and some behaviours go unobserved or can be ambiguous to an external agent). Discounting the possibility of a technological breakthrough enabling direct thought observation (and the ethical considerations such an invasion of privacy would bring), it is difficult to see how we can proceed.

Perhaps it is best to simply take a leap of faith, believing in humanity’s ability to make judgements regarding moral behaviour. Instead of cynically throwing away our intrinsic abilities (which surely do vary in effectiveness within the population), we should trust that at least some of us would have the insight to make the call. With morality, the buck stops with the individual, a fact that most people have a hard time swallowing. Moral responsibility rests with the persons involved, and in combination with a universally expansive definition, makes for some interesting assertions of blame, not to mention a pressuring force to educate the populace on the virtues of fostering introspective skills.

After returning from a year-long sojourn in the United Kingdom and continental Europe, I thought it would be prudent to share my experiences. Having caught the travel bug several years ago when visiting the UK for the first time, a year-long overseas working holiday seemed like a dream come true. What I didn’t envisage were the effects of this experience on my cognitions; specifically, feelings of displacement, disorientation and dissatisfaction. In this article I aim to examine the effects of a changing environment on the human perceptual experience, as it relates to overseas, out-group exposure and the psychological mechanisms underlying these cognitive fluctuations.

It seems that the human need to belong runs deeper than most would care to admit. Having discounted any possibility of ‘homesickness’ prior to arrival in the UK, I was surprised to find myself unwittingly (or perhaps conforming to unconscious social expectation – but we aren’t psychoanalysts here!) experiencing the characteristic symptomatology of depression, including sub-signs of negative affect, longing for a return home and feelings concurrent with social ostracism. This struck me as odd, in that if one is aware of an impending event, surely this awareness predisposes one to a lesser effect simply through mental preparation and conscious deflection of the expected symptoms. The fact that negative feelings were still experienced despite such awareness suggests an alternative etiology for the phenomenon of homesickness. Indeed, it offers a unique insight into the human condition; at a superficial level, our dependency on consistency and familiarity, and at a deeper, more fundamental level, a possible interpretation of the underlying cognitive processes involved in making sense of the world and responding to stimuli.

Taken at face value, a change in an individual’s usual physical and social environment displays the human reliance on group stability. From an evolutionary perspective, the prospect of travel to new and unfamiliar territories (and potential groups of other humans) is an altogether risky affair. On the one hand, the individual (or group) could face death or injury through anthropogenic means or from the physical environment. On the other hand, a lack of change reduces stimulation genetically (through interbreeding with biologically related group members), cognitively (reduced problem solving, mental stagnation once the initial challenges of the environment are overcome) and socially (exposure to familiar sights and sounds reduces the capacity for growth in language and, ipso facto, culture). In addition, the reduction of physical resources through consumption and degradation of the land via over-farming (hunting) is another reason for moving beyond the confines of what is safe and comfortable. As the need for biological sustenance outranks all other human requirements (according to Maslow’s hierarchy), inductively, it seems plausible that this was the main motivating factor behind human groups migrating and risking everything for the sake of exploring the unconquered territories of terra incognita.

The mere fact that we do (and, as shown throughout history, always have) uproot our familiar ties and trundle off in search of a better existence seems to make the aforementioned argument a moot point. It is not something to be debated; it is merely something that humans do. Evolution favours travel, with the potential benefits far outweighing the risks. The promise of greener pastures on the other side is almost enough to guarantee success. The cognitive stimulation such travel brings may also improve future chances of success through learnt experiences and the conquering of challenges, as facilitated by human ingenuity.

But what of the social considerations when travelling? Are our out-group prejudices so intense that the very notion of travel to uncharted waters causes waves of anxiety? Are we fearing the unknown, our ability to adapt and integrate, or the possibility that we may not make it out alive and survive to propagate our genes? Is personality a factor in predicting an individual’s performance (in terms of adaptation to the new environment, integration with a new group and success at forging new relationships)? From personal experience, perhaps a combination of all these factors and more.

We can begin to piece together a rough working model of travel and its effects on an individual’s social and emotional stability/wellbeing. The change in social and physical environment seems to predict the activation of certain evolutionary survival mechanisms that are mediated by several conditions of the travel undertaken. Such conditions could involve: similarity of the target country to the country of origin (in terms of culture, language, ethnic diversity, political values etc), social support to the individual (group size when travelling, facilities to make contact with group members left behind), personality characteristics of the individual (impulsivity, extroversion vs introversion, attachment style, confidence) and cognitive ability to integrate and adapt (language skills, intelligence, social ability). Thus we have a (predicted) linear relationship whereby an increase in the degree of change (measured on a multitude of variables such as physical characteristics, social aspects and perceptual similarities) from the original environment to the target environment produces a change in the psychological distress of the individual (either increased or decreased depending on the characteristics of the mediating variables).
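To make the shape of this predicted relationship concrete, here is a minimal sketch in Python. The function, variable names and weights are all hypothetical illustrations of the model described above, not empirically derived values.

```python
# Toy sketch of the predicted relationship: distress rises with the degree
# of environmental change, dampened by moderating variables. All names and
# weights here are hypothetical illustrations, not measured quantities.

def predicted_distress(degree_of_change, social_support, adaptability):
    """Return a rough distress score in [0, 1].

    degree_of_change: 0 (identical environment) to 1 (totally foreign)
    social_support:   0 (none) to 1 (strong support network)
    adaptability:     0 (rigid) to 1 (highly adaptable personality)
    """
    # Moderators buffer the raw effect of environmental change.
    buffer = 0.5 * social_support + 0.5 * adaptability
    distress = degree_of_change * (1.0 - buffer)
    return max(0.0, min(1.0, distress))

# A solo traveller moving to a very different culture, with few contacts:
print(predicted_distress(degree_of_change=0.9, social_support=0.2, adaptability=0.4))
# The same move with strong support and a flexible temperament:
print(predicted_distress(degree_of_change=0.9, social_support=0.8, adaptability=0.8))
```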

Perceptually, travel also seems to have an effect on the salience and characteristics of the experience. In this instance, deeper cognitive processes are activated which influence the human sensory experience on a fundamental level. The model employed here is one of stimulus-response, handed down through evolutionary means from a distant ancestor. Direct observation of perceptual distortion while travelling is apparent when visiting a unique location. Personally, I would describe the experience as an increase in arousal to the point of hyper-vigilance. Compared to subsequent visits to the same location, the original seems somehow different in a perceptual sense. Colours, smells, sounds and tastes are all vividly unique. Details are stored in memory that are ignored and discounted after the first event. In essence, the second visit to a place seems to change the initial memory. It almost seems like a different place.

While I am unsure as to whether this is experienced by anyone apart from myself, evolutionarily it makes intuitive sense. The automation of a hyper-vigilant mental state would prove invaluable when placed in a new environment. Details spring forth and are accentuated without conscious effort, thus improving the organism’s chances of survival. When applied to modern situations, however, it is not only disorientating, but also very disconcerting (at least in my experience).

Moving back to the social aspects of travel, I have found it to be simultaneously a gift and a curse. Travel has enabled an increased understanding and appreciation of different cultures, ways of life and alternative methods for getting things done. In the same vein, however, it has instilled a distinct feeling of unease and dissatisfaction with things I once held dear. Some things you simply take for granted, or fail to take notice of and challenge. In this sense, exposure to other cultures is liberating; especially in Europe, where individuality is encouraged (mainly in the UK) and people expect more (resulting in a greater number of opportunities for those who work hard to gain rewards and recognition). The Australian way of life, unfortunately, is one that is intolerant of success and uniqueness. Stereotypical attitudes are abundant, and it is frustrating to know that there is a better way of living out there.

Perhaps this is one of the social benefits of travel; the more group members that travel, the greater the chance of changing ways of life towards more tolerant and efficient methods. Are we headed towards a world-culture where diversity is replaced with (cultural) conformity? Is this ethically viable or warranted? Could it do more harm than good? It seems to me that there would be some positive aspects to a global conglomerate of culture. Then again, the main attraction of travel lies in the experience of the foreign and unknown. To remove that would be to remove part of the human longing for exploration and a source of cognitive, social and physical stimulation. Perhaps instead we should encourage travel in society’s younger generations, exposing them to such experiences and encouraging internal change based on better ways of doing things. After all, we are the ones that will be running the country someday.

Morality is a phenomenon that permeates both society as a whole and the consciousness of individual entities. It is a force that regularly influences our behaviour and is experienced (in some form or another) universally, species-wide. Intuitively, morality seems to be, at the very least, a sufficient condition for the creation of human groups. Without it, co-operation between individuals would be non-existent. But does morality run deeper? Is it, in fact, a necessary condition of group formation and a naturally emergent phenomenon that stems from the interaction of replicating systems? Or can morality only be experienced by organisms operating on a higher plane of existence – those that have the required faculties with which to weigh up pros and cons, engage in moral decision making and other empathic endeavours (related to theory of mind)?

The resolution to this question depends entirely on how one defines the term. If we take morality to encompass the act of mentally engaging in self-reflective thought as a means with which to guide observable behaviours (acting in either selfish or selfless interests), then the answer to our question is yes, morality seems to be inescapably and exclusively linked to humanity. However, if we tweak this definition and look at the etiology of morality – where the term draws its roots and how it developed over time – one finds that even the co-operative behaviours of primitive organisms could be said to constitute some sort of basic morality. If we delve even deeper and ask how such behaviours came to be, we find that the answer is not quite so obvious. Can a basic version of morality (observable through cooperative behaviours) result as a natural consequence of interactions beyond the singular?

When viewed from this perspective, cooperation and altruism seem highly unlikely; a system of individually competing organisms, logically, would evolve to favour the individual rather than the group. This question is especially pertinent when studying cooperative behaviours in bacteria or more complex, multicellular forms of life, as they lack a consciousness capable of considering delayed rewards or benefits from selfless acts.

In relation to humanity, why are some individuals born altruistic while others take advantage without cause for guilt? How can ‘goodness’ evolve in biological systems when it runs counter to the benefit of the individual? These are the questions I would like to explore in this article.

Morality, in the traditional, philosophical sense, is often constructed in a way that describes the meta-cognitions humans experience in creating rules for appropriate (or inappropriate) behaviour (inclusive of mental activity). Morality can take on a vast array of flavours; evil at one extreme, goodness at the other. We use our sense of morality to plan and justify our thoughts and actions, incorporating it into our mental representations of how the world functions and conveys meaning. Morality is dynamic; it changes with the flow of time, the composition of society and the maturity of the individual. We use it to evaluate not only our own intentions and behaviours, but also those of others. In this sense, morality is an overarching egoistic ‘book of rules’ which the consciousness consults in order to determine whether harm or good is being done. Thus, it seeps into many of our mental sub-compartments; decision making, behavioural modification, information processing, emotional response/interpretation and mental planning (‘future thought’), to name a few.

As morality enjoys such a privileged omnipresence, humanity has, understandably, long sought not only to provide standardised ‘rules of engagement’ regarding moral conduct, but also to explain the underlying psychological processes and development of our moral capabilities. Religion could perhaps be the first such attempt at explanation. It certainly contains many of the idiosyncrasies of morality and proposes a theistic basis for human moral capability. Religion removes ultimate moral responsibility from the individual, instead placing it upon the shoulders of a higher authority – god. The individual is tasked with simple obedience to the moral creeds passed down from those privileged few who are ‘touched’ with divine inspiration.

But this view does morality no justice. Certainly, if one does not subscribe to theistic beliefs then morality is in trouble; by this extreme positioning, morality is synonymous with religion and one definitely cannot live without the other.

Conversely (and reassuringly), in modern society we have seen that morality does exist in individuals who lack spirituality. It has been reaffirmed as an intrinsically human trait with deeper roots than the scripture of religious texts. Moral understanding has matured beyond the point of appealing to a higher being and has reattached itself firmly to the human mind. The problem with this newfound interpretation is that in order for morality to be considered a naturally emergent product of biological systems, moral evolution is a necessary requirement. Put simply, natural examples of moral systems (consisting of cooperative behaviour and within-group preference) must be observable in the natural environment. Moral evolution must be a naturally occurring phenomenon.

A thought experiment known as the ‘Prisoner’s Dilemma’ succinctly summarises the inherent problems with the natural evolution of mutually cooperative behaviour. This scenario consists of two prisoners, each seeking an early release from jail. Each is given the choice of either a) betraying their cellmate and walking free while the other has their sentence increased – ‘defecting’, or b) staying silent and mutually receiving a shorter sentence – ‘cooperating’. It becomes immediately apparent that for both parties to benefit, both should remain silent and enjoy a reduced incarceration period. Unfortunately, and this is the catalyst for terming the scenario a dilemma, the real equilibrium point is for both parties to betray: whatever your partner does, betrayal leaves you personally better off, so the dominant strategy for each player leads to mutual betrayal, even though both would have fared better by staying silent. In the case of humans, it seems that some sort of meta-analysis has to be done, an nth-order degree of separation (thinking about thinking about thinking), with the most dominant stratagem resulting in betrayal by both parties.
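To see why betrayal dominates, here is a minimal sketch of the payoff logic in Python. The sentence lengths are illustrative placeholders; only their ordering matters.

```python
# Payoffs are years in jail (lower is better), indexed by
# (my_choice, their_choice). The exact numbers are arbitrary illustrations.
payoffs = {
    ("cooperate", "cooperate"): 1,   # both stay silent: short sentences
    ("cooperate", "defect"):    10,  # I stay silent, my cellmate betrays me
    ("defect",    "cooperate"): 0,   # I betray and walk free
    ("defect",    "defect"):    5,   # mutual betrayal
}

for their_choice in ("cooperate", "defect"):
    # My best reply is whichever choice minimises my own sentence.
    best = min(("cooperate", "defect"),
               key=lambda mine: payoffs[(mine, their_choice)])
    print(f"If they {their_choice}, my best reply is to {best}.")
# Both lines print "defect": betrayal dominates, so mutual betrayal is the
# equilibrium, even though mutual silence would leave both better off.
```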

Here we have an example of the end product; an advanced kind of morality resulting from social pressures and their influence on overall outcome (should I betray or cooperate – do I trust this person?). In order to look at the development of morality from its more primal roots, it is prudent to examine research in the field of evolutionary biology. One such empirical investigation that is representative of the field (conducted by Aviles, 2002) involves the mathematical simulation of interacting organisms. Modern computers lend themselves naturally to the task of genetic simulation. Due to the iterative nature of evolution, thousands of successive generations live, breed and die in the time it takes the computer’s CPU to crunch through the required functions. Aviles (2002) took this approach and created a mathematical model that begins at t = 0 and follows pre-defined rules of reproduction, genetic mutation and group formation. The numerical details are irrelevant; suffice to say that cooperative behaviours emerged in combination with ‘cheaters’ and ‘freeloaders’. Thus we see the dichotomous appearance of a basic kind of morality that has evolved spontaneously and naturally, even though the individual may suffer a ‘fitness’ penalty. More on this later.

“[the results] suggest that the negative effect that freeloaders have on group productivity (by failing to contribute to communal activities and by making groups too large) should be sufficient to maintain cooperation under a broad range of realistic conditions even among nonrelatives and even in the presence of relatively steep fitness costs of cooperation” (Aviles, 2002).
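Aviles’s actual equations aren’t reproduced here, but a heavily simplified toy in the same spirit can illustrate how group productivity props up cooperation. Every name and parameter value below is an arbitrary illustration, not the published model.

```python
import random

# Within a group, freeloaders out-reproduce cooperators by dodging the cost
# of cooperating; between groups, cooperator-rich groups are more productive
# and so found more of the next round's groups.

def run(n_groups=40, group_size=20, generations=200, cost=0.2, mut=0.02, seed=3):
    random.seed(seed)

    def reproduce(k):
        """Sample a group's next generation given k cooperators out of group_size."""
        total = k * (1.0 - cost) + (group_size - k)    # summed fitness
        p = k * (1.0 - cost) / total if total else 0.0
        new_k = 0
        for _ in range(group_size):
            coop = random.random() < p
            if random.random() < mut:                  # rare mutation, both ways
                coop = not coop
            new_k += coop
        return new_k

    groups = [group_size // 2] * n_groups              # start each group 50/50
    for _ in range(generations):
        groups = [reproduce(k) for k in groups]        # within-group selection
        weights = [k + 0.01 for k in groups]           # productivity ~ cooperators
        groups = random.choices(groups, weights=weights, k=n_groups)
    return sum(groups) / (n_groups * group_size)

# With these (arbitrary) settings, cooperators typically persist at an
# intermediate frequency rather than being driven out entirely.
print(f"Final cooperator fraction: {run():.2f}")
```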

Are these results translatable to reality? It is all well and good to speak of digital simulations with vastly simplified models guiding synthetic behaviour; the real test comes in observation of naturally occurring forms of life. Discussion by Kreft and Bonhoeffer (2005) lends support to the reality of single-celled cooperation, going so far as to suggest that “micro-organisms are ever more widely recognized as social”. This is surely an exaggerated caricature of the more common definition of ‘socialness’; however, the analogy is appropriate. Kreft and Bonhoeffer effectively summarise the leading research in this field, and put forward the resounding view that single-celled organisms can evolve to show altruistic (cooperative) behaviours. We should hope so; otherwise the multicellularity which led to the evolution of humanity would have nullified our species’ development before it even started!

But what happened to those pesky mutations that evolved to look out for themselves? Defectors (choosing not to cooperate) and cheaters (choosing to take advantage of altruists) are also naturally emergent. Counter-intuitively, such groups are shown to be kept in their place by the cooperators. Too many cheaters, and the group fails through exploitation. The key lies in the dynamic nature of this process. Aviles (2002) found that in every simulation, the number of cheaters was kept in control by the dynamics of the group. A natural equilibrium developed, with the total group size fluctuating according to the number of cheaters versus cooperators. In situations where cheaters ruled, the group size dropped dramatically, resulting in a lack of productive work and reduced reproductive rates. Thus, the number of cheaters is kept in check by the welfare of the group. It’s almost a love/hate relationship; the system hates exploiters, but it also tolerates their existence (provided their numbers remain sufficiently small).

Extrapolating from these conclusions, a logical outcome would be the universal adoption of cooperative behaviours. There are prime examples of this in nature; bee and ant colonies, migratory birds, various aquatic species, even humans (to an extent) all work together towards the common good. The reason we don’t see this more often, I believe, is due to convergent evolution – different species solving the same problem via different approaches. Take flight, for example – this has been solved independently several times in history, by both birds and insects. The likelihood of cooperation is also affected by external factors; evolutionary ‘pressures’ that can guide the flow of genetic development. The physical structure of the individual, environmental changes and resource scarcity are all examples of such factors that can influence whether members of the same species work together.

Humanity is a prime example; intrinsically we seem to have a sense of inner morality and a tendency to cooperate when the conditions suit. The addition of consciousness complicates morality somewhat, in that we think about what others might do in the same situation, defer to group norms/expectations, conform to our own premeditated moral guidelines and are paralysed by indecisiveness. We also factor in environmental conditions, manipulating situations through false displays of ‘pseudo-morality’ to ensure our survival in the event of resource scarcity. But when the conditions are just so, humanity does seem to pull up its trousers and bind together as a singular, collective organism. When push comes to shove, humanity can work in unison. However, just as bacteria evolve cheaters and freeloaders, so too does humanity give birth to individuals that seem to lack a sense of moral guidance.

Morality, then, must be a universal trait: a naturally emergent phenomenon that predisposes organisms to cooperate towards the common good. But just as moral ‘goodness’ evolves naturally, so too does immorality. Naturally emergent cheaters and freeloaders are an intrinsic part of the evolution of biological systems. Translating these results to the plight of humanity, it becomes apparent that such individual traits are also naturally occurring in society. Genetically, and to a lesser extent environmentally, traits from both ends of the moral scale will always be a part of human society. This surely has implications for the plans of a futurist society relying solely on humanistic principles. Moral equilibrium is ensured, at least biologically, for better or worse. Whether we can physically change the course of natural evolution and produce a purely cooperative species is a question that can only be answered outside the realms of philosophy.

Social aptitude is something that is often at the forefront of my mind. Not because I am particularly interested in the topic, but rather because I worry about the perceptions others hold, whether I am ‘performing’ socially at an adequate level and where the next social challenge will come from. But where do these fears and observations stem from? Maslow’s hierarchy of needs places social interaction (the need to belong) as a fundamental requirement of human nature. Evolutionary psychologists correlate brain size with group size; through a process of ‘social grooming’, the development of bigger brains (as more individuals are engaged) becomes a necessary evolutionary requirement if the individual is to remain competitive. Linguists, philosophers and evolutionary psychologists all theorise that the growth of language arose from the need for efficient communication in large social groups. According to the philosopher Daniel Dennett, the use of primitive ‘warning-call’ languages that used a single word for each unique event (high-pitched cry = lion approaching) quickly exhausted the minds of our ancestors, thus encouraging the development of a versatile language where words can take on multiple meanings and be used interchangeably in syntactically acceptable ways.

The conclusive intersection reached by all of these variations on our social heritage seems to suggest that social interaction is a fundamental need of humanity. The questions I intend to explore in this article include: what does this conclusion mean for day-to-day life and where did it come from; should we be worried about how we are performing socially (and how others perform); and finally, is such a reliance on social exchange wise in a society that is moving away from such contact, at least in the traditional sense (replacing face-to-face interactions with those of a purely distant, digital form)?

The origins of our social nature might lie in ancient survival techniques. We are relatively vulnerable creatures, and following our descent from the trees, became even more so. Mad dashes between the relative safety of foliage could have acted as the first catalyst for communal behaviour. Warning calls and proto-language enabled groups to share the task of vigilance, while at the same time allowing some individuals to exploit the warning system, using it to cheat others out of food (faking danger to eliminate competition). This actually occurs in the real world with primates. An example I can only dimly recall from an Evolutionary Psychology lecture goes a little something like this. A specific species of primate (a type of monkey, I believe) can be observed in the wild faking the warning calls that signal a predator is on the way. This clears other group members away from a newly spotted food source and allows the faker to steal it from under their noses. What is more astounding is that upon seeing through this deception, the remaining group members will attack the faker and beat them to within an inch of their life! Thus the raw ingredients for human social characteristics (in a psychologically comparative sense), such as deception and basic morality/punishment of wrongdoers, seem to have their roots in the dynamics that govern large groupings of animals.

Questions that still linger on the horizon are 1) why aren’t there species at various stages of cognitive/social evolution (providing us with irrefutable proof of the missing human/primate link) and 2) what made us (or more specifically, our ancestors that diverged from the chimp lineage) so different from all other species; what acted as the catalyst for change that prompted the development of bigger brains and society? The answer to 1) is easy: we killed them. Neanderthals and earlier hominids (Australopithecus), whether due to climate change, competition with Homo sapiens or simply genes that weren’t as geared towards adaptability as ours, were all wiped out. It is interesting to note that Neanderthals’ brain size was larger than that of modern humans, possibly (though not necessarily) indicating higher intelligence, although perhaps their brains were organised in a different or less efficient way (or perhaps they were less war-like and bloodthirsty than us!).

The second question, of our uniqueness, is a little tougher to explain. I believe that the combination of random genetic mutation, physical traits and environmental change all acted as evolutionary momentum towards the creation of language and society. An analogy using the imagery of mountains and channels of water aptly describes this evolutionary process. Pressures on a species to change/adapt, whether environmental or genetic, combine to create a large peak; the height of the peak is determined by the urgency of the change. For example, the sudden onset of a new ice age combined with a hairless body would cause the formation of a large peak, as the organism would be genetically selecting (through evolution) for offspring that could survive the change (warm coats of fur being one solution). Like a flowing river, evolution follows the easiest path downhill to its ultimate destination. Again in analogy to a river, external influence can alter its course or slow its speed, sometimes with drastic results (an evolutionary dead end and extinction of the species). So quite possibly, the combination of our differing physical traits (Homo sapiens were taller and carried less muscle mass than the stocky, well-built Neanderthals) and some sort of environmental change (climate change, or the elimination of a primary food source – Homo sapiens were omnivores whilst Neanderthals are thought to have relied solely on meat) gave our species the evolutionary push down the mountain slope that outpaced our competition.

The creation of society and language is even more uncertain. The main problem is deciding whether language was a gradual process or emerged ‘all at once’. But again I digress. We now have enough background knowledge to proceed with a discussion on modern social traits and what they mean for everyday interaction.

Owing to a long history of evolutionary pressure to establish pro-social traits in the human psyche, we now have many autonomous, automatic processes that operate on a sub-conscious level. Conscious thought patterns are prone to error, creating the need for these processes to occur unconsciously and without our awareness. The nature of humanity, with the majority acting as intrinsically ‘good’ and moral beings and a minority acting in their own self-interest, shaped the formation and detail of most of these processes. On one hand, we require co-operative patterns of interaction, with pro-social attitudes such as conflict resolution (because fighting is a lot riskier than talking) and the fair division of resources. On the other hand, we also require a defence against deception (and the capability to commit it ourselves). A fine balance has to be struck between knowing when to take more than you need and sharing out for the greater good. In a society where all are treated as equal, communal behaviour rules. But like any evolutionary system, occasionally ‘cheaters’ will enter the fray: those that evolve patterns of behaviour and thinking that seek to take more and gorge themselves at the expense of others. A society full of such organisms is prone to failure, so the natural solution is to limit the number of such agents. Perhaps this formed the basis for language and self-actualisation; a defensive mechanism invented by communal individuals to defend against cheaters and test the sincerity of other communals.
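This frequency dependence can be sketched with a toy replicator-dynamics model: cheating pays while cheaters are rare (plenty of communals to exploit), but not once it becomes common. The payoff functions below are invented purely for illustration.

```python
# Toy replicator dynamics for a cheater strategy invading a communal society.
# All payoff values are arbitrary illustrations, not measured quantities.

def cheater_payoff(x):
    """Average payoff to a cheater when a fraction x of society cheats."""
    return 4.0 * (1.0 - x) - 1.0   # exploitation pays only while communals abound

def communal_payoff(x):
    """Average payoff to a communal member at cheater fraction x."""
    return 2.0 * (1.0 - x)         # cooperation's returns erode as cheating spreads

x = 0.05                           # begin with a rare strain of cheaters
for _ in range(200):
    w_cheat, w_comm = cheater_payoff(x), communal_payoff(x)
    mean_w = x * w_cheat + (1 - x) * w_comm
    x = x * w_cheat / mean_w       # discrete replicator update

# The fraction settles where the two payoffs are equal (x = 0.5 here):
# cheaters persist but are held in check, echoing the simulations earlier.
print(f"Equilibrium cheater fraction: {x:.2f}")
```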

So from a purely animalistic perspective, yes, we should be worried about our social performance. As most of this behaviour is rooted in unconscious processes that are often beyond our control (at least if we aren’t aware that they are occurring), individuals will unknowingly be testing each other at every interaction. At first thought this seems paralysingly frightening. Every interaction is a test, a mental duel, a theatrical performance with both agents vying for supremacy. Unconsciously we begin to form impressions based upon animal urges (are they sexually attractive, are they a cheater or a communal, are they a threat to my power), and often such stereotyping and categorising brings out the worst in human behaviour.

Moving forward: as most social interaction is based upon unconscious processes that stem from millennia of evolution, is it wise for us to proceed ‘as planned’ without challenging the ways in which we exchange information with other people? Especially in a society that is moving quickly towards one that favours anonymity and deception as a way of life. The internet and the information revolution have prompted a change in the way we conduct social interactions. The elimination of non-verbal communication and direct observation of physical features during socialisation is good in the sense that our animal evolutionary processes are being stunted, but bad in that this newfound anonymity makes it easier for the deviants of society to hide behind a digital veil. The world has, without a doubt, become increasingly more social as the internet revolution reaches full intensity, but is this at the expense of meaningful and truthful social interaction? On the digital savannah, truth flies out the window as the insecure and otherwise socially inept gain a vast arsenal with which to present themselves in a more favourable light. Avatars replace face-to-face communication, forcing people to exist in a pseudo-reality where one can take on any attributes one desires.

Is the new path of social interaction leading us to an increase in cognitive development, or stunting our growth and forcing us back hundreds of years in social evolution? The jury is still out; however, it is without a doubt that social situations are still highly valued exchanges between individuals. This makes them nonetheless scary, especially for those individuals who have a tendency to bring the unconscious to the forefront of their minds (myself included). Performing such an act is paralysing, as the frontal lobes take over a task that is usually dealt with on an unconscious level. Each word is evaluated for its social merit, the mind constantly simulating the reactions of others and hypothetically testing each sentence to ensure it makes sense. It is only through a relaxation of this monitoring process and a relapse towards the unconscious that conversation flows and the individual is socially effective. But as we have seen, these unconscious processes are steeped in ancient and primitive principles that have little use for a future society that aims to be inclusive, intellectualised and transcendent. However, one thing is certain: the full extent of the changes our social lives are currently experiencing must be monitored, to determine whether they are accentuating dysfunction or, more positively, changing our evolutionary course away from animalistic urges and towards genuine social interactions that value the input of both members, without subtle undertones and jockeying for higher authority.

The act of categorisation is a fundamental cognitive process that is used to attach meaning to objects. As such, it forms the basis for daily interactions both social and introspective; social in the sense that stereotyping (a form of categorisation) affects not only our thoughts, but also our behaviour when interacting with others. This process is also introspective in that the act of categorising external objects influences how that information is internalised (correctly or incorrectly stored depending on the structure of the agent’s categorical schema).

Categorisation influences our perceptions of the world in a very marked way. The main advantage this function brings is that it makes generalisations possible and useful. Without categorisation, communicating thought processes and disseminating information about our world would become a very long-winded and convoluted process. The versatility of grouping commonly featured objects together allows us to talk about things informatively while leaving out all the tedious descriptive detail. Categorisation is also one way of allowing meaning to be attached to objects, thoughts and feelings. For example, the emotion of feeling ‘sad’ includes a vast range of varying mental states, all bubbling and boiling away in a sea of unpredictability. The overall result, of course, is easily identifiable to us as ‘someone of negative affect’, but how would we accomplish this feat if we did not have access to categorisation? We would surely be paralysed by the overwhelming variation that individual differences in the expression of sadness bring. One possible function of categorisation is to work cooperatively with the sensory regions of the brain to help provide an overall picture or concept for use in working memory. Take face recognition, for example: many hundreds of fluctuating variables (shape, position, features, colour etc) are somehow compressed and averaged into something that is usable by the brain. The act of categorising facial features into a coherent whole allows not only recognition, but the activation of memories, stereotypes, future planning and emotions (among other actions).

Delving deeper, I wonder whether it is possible to describe a thing without falling back on categorisation? It seems not, as the very act of describing something seems to presuppose, if not require, the existence of categorisation. This ‘reductionist’s nightmare’ becomes apparent with a simple mental simulation. Try now to describe a common everyday thing without referring to pre-established categories. Take a humble kitchen drinking glass, for example. Straight away I have categorised the item; I could have been talking about any glass at all, yet immediately I have succeeded in creating a mental image of a glass, which was then refined by the sub-category of ‘drinking’. The first category, glass, could elicit countless mental images of everyday objects. Those images would cluster around some variation of the bell curve (although how do you arrange ideas and concepts either side of the most frequent, central idea, as in a ‘normal’ bell curve?), with the frequencies of each item starting off low and graduating up to the most common. Most likely the majority of conjured mental images will correspond to some fuzzy approximation of the everyday drinking glass. Each image will vary from mind to mind, yet the overall category is well defined and usable in terms of conveying ideas. The brain is ready to receive and store the incoming information under the category ‘glass’. Again we attempt to describe the glass without using categories; in this case we take the reductionist approach and peel back another layer of physical form. A possible avenue is to describe the molecular structure: X many billions of silica molecules arranged in formations just so, composed in turn of X silicon atoms and X oxygen atoms… and so on. The problem here is that we are still referring to categories. We are using words such as ‘atoms’, ‘molecules’ and ‘oxygen’; all are categories of physical things, inclusive of all the objects that make up the said category. They still succeed in conjuring up a generic icon in the mind’s eye.
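One way to make this ‘bell curve of mental images’ concrete is a toy prototype model, one standard way cognitive science formalises categorisation: a category is summarised by its average member, and new objects are assigned to the nearest prototype. The feature numbers below are invented purely for illustration.

```python
import math

# Hypothetical feature vectors: (height_cm, diameter_cm, transparency 0-1).
examples = {
    "drinking glass": [(12, 7, 0.9), (15, 6, 0.95), (10, 8, 0.85)],
    "coffee mug":     [(10, 9, 0.0), (9, 10, 0.05), (11, 9, 0.0)],
}

# The prototype is the category's average member: the centre of the curve.
prototypes = {
    label: tuple(sum(dim) / len(items) for dim in zip(*items))
    for label, items in examples.items()
}

def categorise(obj):
    """Assign obj to the category whose prototype it sits closest to."""
    return min(prototypes,
               key=lambda label: math.dist(obj, prototypes[label]))

print(categorise((13, 7, 0.8)))   # tall-ish and transparent -> 'drinking glass'
```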

Or a different approach could be taken: instead of trying to explain the constituent components of the item in question, its utilities are proposed. Our old mate the drinking glass would thus be described in terms of its usefulness (holds liquids), its actions (constrains, lifted to the mouth, poured), its influences on our bodies (delivers nutrients via the mouth) and even the processes that went into constructing it (Sven from Ikea, cheap Chinese sweatshop). It soon becomes obvious that no matter how hard we try to avoid the use of categorisation, it forms the basis for our thought processes. Whether it be categories of sub-components, materials and atomic structure, or categories of behaviour, actions or origin, placing everyday objects into generalised groups according to their features is what gives them meaning. Without categories, not only would the (traditional) communication of ideas be difficult, even impossible; the very essence of the stuff around us would be meaningless. In short, without categorisation, the external world loses its meaning.

But what of the negative aspects of categorisation? Perhaps the most obvious is the potential for errors; that is, incorrectly assigning something to a pigeonhole in which it doesn’t belong. Due to the fundamental (and often unconscious) manner in which categorisation affects the entire thought process, an error at this foundation level can spell disaster for the entire system. Subjective ‘errors’ in the categorising process become most apparent in social situations. I believe this is due to a low-level sub-routine that uses social interactions to refine the overall system; by observing the responses of other agents (in the form of behaviours) to the behaviour of the self (once the category has been assigned and a response elicited), the sub-routine compares and contrasts how effective and accurate the assigned category is in relation to the categories of others. In this way our individual systems of categorisation are kept in sync, thus preserving the collective sense of meaning and making communication possible. If this is unclear, take the following example. Bill is attempting to explain a novel object to Joe. Bill states the object is a ‘Kazoolagram’, but this contains no meaning for Joe at all; his categorical ‘set’ is missing this category with its attached label of meaning. The object’s properties are then described, and Joe responds by suggesting similar sets from his repository of meaning: “Well, is it anything like a Nincompoop?” Here Joe attempts to refine his mental schemas and grasps at existing examples to attach meaning to this unique object. The banter continues, with both participants gauging the accuracy of their categorisations through the behaviour of the other agent. Eventually they agree on the meaning of the object. This brings us to another question: is meaning emergent (i.e. greater than the sum of its parts) or simply a cobbled-together collage of pre-existing mental representations (limited by the extent of the agent’s prior experiences)?

It seems as though the process of categorising can be influenced by the pre-existing content of brains, especially past examples and experiences of events or objects similar to the one in question. Meaning seems to be both an emergent property and a combination of past experiences: individually, the features of a category are useless, but together, and in partnership with the agent’s existing knowledge (making the process faster if both agents have similar experiences), categories flourish into useful, meaningful tools for the processing and transmission of information.

The point of this article was to expose the extent of categorisation and make the case for its existence as an everyday, fundamental cognitive process. Sure, categorisation has its weaknesses, but it more than compensates for them with its strengths. Categorisation runs deeper than most would realise, potentially providing insight into the very way in which brains receive, process and store information. Perhaps a more accurate and efficient process may arise if humanity succeeds in modifying the essence of cognition towards better ways of classing objects and describing internal states. Maybe the direct transmission of meaning, brain to brain, will supersede categorisation and allow for instantaneous communication between agents.