
The transhumanist movement continues to gain momentum through recognition by mainstream media and an ever-burgeoning army of empiricists, free thinkers and rationalists. Recently, the Australian incarnation of 60 Minutes interviewed David Sinclair, a biologist who has identified the potentially life-extending properties of resveratrol. All this attention has helped to swell awareness of transhumanism within the general community, most notably due to the inherently appealing nature of anti-senescent interventions. But what of the neurological side of transhumanism, specifically the artificial augmentation of our natural mental ability with implantable neurocircuitry? Does research in this area create moral questions regarding its implementation, or should we be embracing technological upgrades with open arms? Is it morally wrong to enhance the brain without effort on the individual level (i.e. are such methods just plain lazy)? These are the questions I would like to investigate in this article.

An emerging transhumanist e-zine, H+ Magazine, outlines several avenues currently under exploration by researchers who aim to improve the cognitive ability of the human brain through artificial enhancement. The primary area of focus at present (from an empirical point of view) lies in memory enhancement. The Innerspace Foundation (IF) is a not-for-profit organisation attempting to lead the charge in this area, with two main prizes offered to researchers who can 1) successfully create a device which can circumvent the traditional learning process and 2) create a device which facilitates the extension of natural memory.

Pete Estep, chairman of IF, was interviewed by H+ Magazine about the foundation's vision of what a device satisfying its award criteria might look like. Pete believes the emergence of this industry will involve 'baby steps': achieving successful interfaces between biological and non-biological components. Electronic forms of learning, Pete believes, are certainly non-traditional, but remain a valid possibility and stand to revolutionise the human intellect in terms of capacity and quality of retrieval.

Fortunately, we seem to have already made progress on those 'baby steps' regarding the interface between brain and technology. Various neuroheadset products are poised to be released commercially in the coming months. For example, the EPOC headset utilises EEG technology to recognise brainwave activity that corresponds to various physical actions such as facial expressions and the intent to move a limb. With concentrated effort and training, the operator can reliably reproduce the necessary EEG pattern to activate individual commands within the headset. These commands can then be mapped to an external device, allowing various tasks to be performed remotely.
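
As a rough sketch of the kind of pipeline such a headset implies (record labelled training windows, learn to classify them, then map classifier output to device commands), consider the following. The feature extraction, command names and classifier here are hypothetical illustrations, not Emotiv's actual API.

```python
# Hypothetical sketch of an EEG-to-command pipeline of the kind the EPOC implies.
# Feature extraction, labels and the classifier are illustrative, not Emotiv's API.
import numpy as np
from sklearn.linear_model import LogisticRegression

COMMANDS = {0: "neutral", 1: "push", 2: "lift"}  # hypothetical trained commands

def band_power_features(eeg_window: np.ndarray) -> np.ndarray:
    """Crude per-channel band-power features from a (channels x samples) window."""
    spectrum = np.abs(np.fft.rfft(eeg_window, axis=1)) ** 2
    return spectrum[:, 1:30].mean(axis=1)  # average power over low-frequency bins

# Training phase: the user repeatedly produces each mental state while labelled
# windows are recorded (simulated here with random data).
rng = np.random.default_rng(0)
X_train = np.array([band_power_features(rng.normal(size=(14, 256))) for _ in range(90)])
y_train = np.repeat([0, 1, 2], 30)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

def dispatch(eeg_window: np.ndarray) -> str:
    """Classify a live EEG window and map the result to a device command."""
    label = int(clf.predict(band_power_features(eeg_window).reshape(1, -1))[0])
    return COMMANDS[label]

print(dispatch(rng.normal(size=(14, 256))))  # e.g. "neutral"
```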

Having said this, such devices are still very much taking 'baby' steps. The actual stream of consciousness has not yet been decoded; the secrets of the brain are still very much a mystery. Recognition of individual brain patterns is a superficial solution to a profound problem. Getting back to Searle's almost clichéd Chinese Room thought experiment, we seem to be merely reading the symbols and decoding them; there is no actual understanding and comprehension going on here.

Even if such a solution is possible, and a direct mind/machine interface achieved, one small part of me wonders if it really is such a good thing. I imagine such a feeling is similar to the one felt by the quintessential school teacher when handheld calculators became the norm within the educational curriculum. By condoning such neuro-shortcuts, are we simply being lazy? Are the technological upgrades promised by transhumanism removing too much of the human element?

On a broader scale, I believe these concerns are elucidated by a societal shift towards passivity. Television is the numero-uno offender, with a captive audience of billions. The invasion of neurological enhancements may only increase the exploitation of our attention, with television programs beamed directly into our brains. Rain, hail or shine, passive reception of entertainment would be accessible 24 hours a day. Likewise, augmentation of memory and circumvention of traditional learning processes may forge a society of ultimate convenience – slaves to a 'Matrix-style' mainframe salivating over their next neural-upload 'hit'.

But having said all this, the example of the humble calculator suggests that if such technological breakthroughs are used as an extension rather than a crutch, humanity may just benefit from the transhumanist revolution. I believe any technology aiming to enhance natural neurological processing power must only be used as such: a method to raise the bar of creativity and ingenuity, not simply a new avenue for bombarding the brain with more direct modes of passive entertainment. Availability must also be society-wide, in order to allow every human being to reach their true potential.

Of course, the flow-on effects of such technology on socio-economic status, intelligence, individuality, politics and practically every facet of human society are certainly unknown and unpredictable. If used with extension and enhancement as a philosophy, transhumanism can usher in a new explosion of human ingenuity. If a more superficial ethos is adopted, it may only succeed in ushering in a new dark age. It's the timeless battle between good (transcendence) and evil (exploitation, laziness). Perhaps a topic for a future article, but certainly food for thought.

The human brain and the internet share a key feature in their layout: a web-like structure of individual nodes acting in unison to transmit information between physical locations. In brains we have neurons, composed in turn of myelinated axons and dendrites. The Internet is built from similar entities, with connections such as fibre optics and ethernet cabling acting as the mode of transport for information, while computers and routers act as gateways (boosting/re-routing) and originators of such information.

How can we describe the physical structure and complexity of these two networks? Does this offer any insight into their similarities and differences? What is the plausibility of a conscious Internet? These are the questions I would like to explore in this article.

At a very basic level, both networks are organic in nature (surprisingly, in the case of the Internet); that is, they are not the product of a ubiquitous 'designer' and are given the freedom to evolve as their environment sees fit. The Internet is permitted to grow without a directed plan, with new nodes and capacity added haphazardly. The naturally evolved topology of the Internet is a distributed one; the destruction of nodes has little effect on the overall operational effectiveness of the network. Each node has multiple connections, resulting in an intrinsic redundancy where traffic is automatically re-routed to the target destination via alternate paths.

We can observe similar behaviour in the human brain. Neurological plasticity serves a function akin to the distributed nature of the Internet. Following injury to regions of the brain, adjacent areas can compensate for lost abilities by restructuring neuronal patterns. For example, the effects of injuries to the motor areas of the frontal cortex can be minimised by adjacent regions 're-learning' otherwise mundane tasks that were lost as a result of the injury. While such recoveries are entirely possible with extensive rehabilitation, two key factors determine the likelihood and efficiency of the process: the severity of the injury (percentage of brain tissue destroyed, location of injury) and, following from this, the length of recovery. These factors introduce the first discrepancy between the two networks.

Unlike the brain, the Internet is resilient to attacks on its infrastructure. Local downtime is a minor inconvenience as traffic moves around such bottlenecks by taking the next fastest path available. Destruction of multiple nodes has little effect on the overall web of information. Users may lose access to, or experience slowness in, certain areas, but compared to the remainder of possible locations (not to mention redundancies in content – simply obtain the information elsewhere) such lapses are just momentary inconveniences. But are we suffering from a lack of perspective when considering the similarities of the brain and the virtual world? Perhaps the problem is one of scale. The destruction of nodes (computers) could instead be interpreted in the brain as the removal of individual neurons. If one accepts this proposition then the differences begin to lose their lucidity.

An irrefutable difference, however, arises when one considers both the complexity and the purpose of the two networks. The brain contains some 100 billion neurons, whilst the Internet comprises a measly 1 billion users by comparison (with users roughly equating to the number of nodes, or access terminals, physically connected to the Internet). Brains are the direct product of evolution, created specifically to keep the organism alive in an unwelcoming and hostile environment. The Internet, on the other hand, is designed to accommodate a never-ending torrent of expanding human knowledge. Thus the dichotomy in purpose between these two networks is quite distinct, with the brain focusing on reactionary and automated responses to stimuli while the Internet aims to store information and process requests for its extraction by the end user.

Again we can take a step back and consider the similarities of these two networks. Looking at topology, it is apparent that the distributed nature of the Internet is similar to the structure and redundancy of the human brain. In addition, the Internet is described as a 'scale-free' or power-law network, indicating that a small percentage of highly connected nodes accounts for a very large percentage of the overall traffic flow. In effect, a targeted attack on these nodes could cripple the entire network. The brain, by comparison, appears to be organised into distinct and compartmentalised regions. Target just a few, or even one, of these collections of cells and the whole network collapses.

It would be interesting to empirically investigate the hypothesis that the brain is also a scale-free network whose connectivity follows a power law. Targeting the thalamus for destruction (a central hub through which sensory information is redirected) might have the same devastating effect on the brain as destroying the ICANN headquarters in the USA (responsible for domain name assignment).
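
As a minimal sketch of the random-failure versus targeted-attack contrast described above, the following simulation builds a scale-free graph (a common, simplified model of Internet-like topology) and compares removing random nodes with removing the best-connected hubs. The graph size and parameters are arbitrary and purely illustrative.

```python
# Sketch: random node failure vs targeted hub removal on a scale-free
# (Barabasi-Albert) graph, a simplified stand-in for Internet-like topology.
# Sizes and parameters are arbitrary; this only illustrates the robustness argument.
import random
import networkx as nx

def surviving_fraction(g: nx.Graph, n_remove: int, targeted: bool) -> float:
    """Remove n_remove nodes and return the fraction of original nodes
    remaining in the largest connected component."""
    original_size = g.number_of_nodes()
    g = g.copy()
    if targeted:
        # Remove the most highly connected nodes (the 'hubs') first.
        victims = sorted(g.degree, key=lambda kv: kv[1], reverse=True)[:n_remove]
        g.remove_nodes_from(node for node, _ in victims)
    else:
        # Remove nodes uniformly at random.
        g.remove_nodes_from(random.sample(list(g.nodes), n_remove))
    giant = max(nx.connected_components(g), key=len)
    return len(giant) / original_size

net = nx.barabasi_albert_graph(1000, 2, seed=1)
print("random failure of 50 nodes :", surviving_fraction(net, 50, targeted=False))
print("targeted removal of 50 hubs:", surviving_fraction(net, 50, targeted=True))
# Random failures barely dent the giant component, whereas removing the
# best-connected hubs fragments it noticeably more: the 'scale-free' signature.
```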

As mentioned above, the purposes of these two networks are different, yet they share the common bond of processing and transferring information. At a superficial level, the brain and the Internet are merely storage and retrieval devices, upon which the user (or directed thought process) is sent on a journey through a virtual world towards an intended target (notwithstanding the inevitable sidetracks along the way!). Delving deeper, the differences in purpose act as a deterrent when one considers the plausibility of consciousness and self-awareness.

Which brings us to the cusp of the article. Could the Internet, given sufficient complexity, become a conscious entity in the same vein as the human brain? Almost immediately the hypothesis is dashed due to its rebellion against common sense. Surely it is impossible to propose that a communications network based upon binary machines and internet protocols could ever achieve a higher plane of existence. But the answer might not be as clear cut as one would like to believe. Controversially, both networks could be said to be controlled by indeterminate processes. The brain, at its very essence, is governed by quantum unpredictability. Likewise, activity on the Internet is directed by self-aware, indeterminate beings (which in turn are the result of quantum processes). At what point does the flow of information over a sufficiently complex network result in an emergent complexity most notably characterised by a self-aware intelligence? Just as neurons react to incoming electrical pulses of information, so too do the computers of the internet pass along packets of data. Binary code is equated with action potentials; either information is transmitted or it is not.

Perhaps the most likely (and worrying) outcome in a futurist world would be the integration of an artificial self-aware intelligence with the Internet. Think Skynet from the Terminator franchise. In all likelihood such an agent would have the tools at its disposal to hijack the Internet's constituent nodes and reprogram them in such a fashion as to facilitate the growth of an even greater intelligence. The analogy here is the linking of human minds: if that were possible, the resulting intelligence would be great indeed – imagine a distributed network of humanity, each individual brain linked to thousands of others in a grand web of shared knowledge and experience.

Fortunately such a doomsday outlook is most likely constrained within the realms of science fiction. Reality tends to have a reassuring banality about it that prevents the products of human creativity from becoming something more solid and tangible. Whatever the case may be in regards to the future of artificial intelligence, the Internet will continue to grow in complexity and penetration. As end user technology improves, we take a continual step closer towards an emergent virtual consciousness, whether it be composed of ‘uploaded’ human minds or something more artificial in nature. Let’s just hope that a superior intelligence can find a use for humanity in such a future society.

A recurring theme and technological prediction of futurists is one in which human intelligence supersedes that of the previous generation through artificial enhancement. This is a popular topic on the Positive Futurist website maintained by Dick Pelletier, and one which provides food for thought. Mr Pelletier outlines a near future (2030s) where a combination of nanotechnology and insight into the inner workings of the human brain facilitates an exponential growth of intelligence. While the accuracy of such a prediction is open to debate (specifically the technological possibility of successful development within the given timeframe), if such a rosy future did come to fruition, what would be the consequences for society? Specifically, would an increase in average intelligence necessarily result in an overall improvement in quality of life? If so, which areas would be most affected (eg morality, socio-economic status)? These are the questions I would like to explore in this article.

The main argument provided by futurists is that technological advances relating to nano-scale devices will soon be realised and implemented throughout society. By utilising these tiny automatons to the largest extent possible, it is thought that both disease and aging could be eradicated by the middle of this century. This is due to the utility of nanobots, specifically their ability to carry out pre-programmed tasks in a collective and automated fashion without any conscious awareness on behalf of the host. In essence, nano devices could act as a controllable extension of the human body, giving health professionals the power to monitor and treat throughout the organism's lifespan. But the controllers of these instruments need to know what to target and how best to direct their actions; a possible sticking point in the futurists' plan. In all likelihood, however, such problems will only prove to be temporary hindrances and should be overcome through extensive testing and development phases.

Assuming that a) such technology is possible and b) it can be controlled to produce the desired results, the future looks bright for humanity. By further extending nanotechnology with cutting-edge neurological insight, it is feasible that intelligence could be artificially increased. The possibility of artificial intelligence and the development of an interface with the human mind almost ensures a future filled with rapid growth. To this end, an event aptly named the 'technological singularity' has been proposed, which describes the extension of human ability through artificial means. The singularity allows for innovation to exceed the rate of development; in short, humankind could advance (technologically) faster than the rate of input. While the plausibility of such an event is open to debate, it does sound feasible that artificial intelligence could assist us in developing new and exciting breakthroughs in science. If conscious, self-directed intelligence were to be artificially created, this may assist humanity even further; perhaps the design of specific minds would be possible (need a physics breakthrough? just create an artificial Einstein). Such an idea hinges entirely on the ability of neuroscientists to unlock the secrets of the human brain and allow the manipulation or 'tailoring' of specific abilities.

While the jury is still out on how such a feat will be made technologically possible, a rough outline of the methodologies involved in artificial augmentation could be enlightening. Already we are seeing the effects of a society increasingly driven by information systems. People want to know more in a shorter time; in other words, to increase efficiency and volume. To cope with the already torrential hordes of information available on various mediums (the internet springs to mind), humanity relies increasingly on ways to filter, absorb and understand stimuli. We are seeing not only a trend in artificial aids (search engines, database software, larger networks) but also a changing pattern in the way we scan and retain information. Internet users are now forced to make quick decisions and scan superficially at high speed to obtain information that would otherwise be lost amidst the backlog of detail. Perhaps this is one way in which humanity is guiding the course of evolution and retraining the mind's basic instincts away from more primitive methods of information gathering (perhaps it also explains our parents' ineptitude for anything related to the IT world!). This could be one of the first targets for augmentation: increasing the speed of information transfer via programmed algorithms that fuse our natural biological mechanisms of searching with the power of logical, machine-coded functions. Imagine being able to combine the biological capacity to effortlessly scan and recognise facial features with the speed of computerised programming.

How would such technology influence the structure of society today? The first assumption that must be made is the universal implementation and adoption of such technologies by society. Undoubtedly there will be certain populations who refuse for whatever reason, most likely due to a perceived conflict with their belief system. It is important to preserve and respect such individuality, even if it means that these populations will be left behind in terms of intellectual enlightenment. Critics of future societies and futurists in general argue that a schism will develop, akin to the rising disparities in wealth distribution present within today's society. In counter-argument, I would respond that an increase in intelligence would likewise cause a global rise in morality. While this relationship is entirely speculative, it is plausible to suggest that a person's level of moral goodness is at least related (if not directly) to their intelligence.

Of course, there are notable exceptions to this rule whereby intelligent people have suffered from moral ineptitude; however, an increased neurological understanding and a practical implementation of 'designer' augmentations (as they relate to improving morality) would negate the possibility of a majority 'superclass' that persecutes groups of 'naturals'. At the very worst, there may be a period of unrest at the implementation of such technology while the majority of the population catches up (in terms of perfecting the implantation/augmentation techniques and achieving the desired level of moral output). Such innovations may even act as a catalyst for developing a philosophically sound model of universal morality; something which the next generation of neurological 'upgrades' could, in turn, implement.

Perhaps we are already in the midst of our future society. Our planet’s declining environment may hasten the development of such augmentation to improve our chances of survival. Whether this process involves the discarding of our physical bodies for a more impervious, intangible machine-based life or otherwise remains to be seen. With the internet’s rising popularity and increasing complexity, a virtual ‘Matrix-esque’ world in which such programs could live might not be so far-fetched after all. Whatever the future holds, it is certainly an exciting time in which to live. Hopefully humanity can overcome the challenges of the future in a positive way and without too much disruption to our technological progress.

The monk sat meditating. Alone atop a sparsely vegetated outcrop, all external stimuli infusing psychic energy within his calm, receptive mind. Distractions merely added to his trance, assisting the meditative state to deepen and intensify. Without warning, the experience culminated unexpectedly with a fluttering of eyelids. The monk stood, content and empowered with newfound knowledge. He had achieved pure insight…

The term ‘insight’ is often attributed to such vivid descriptions of meditation and religious devotion. More specifically, religions such as Buddhism promote the concept of insight (vipassana) as a vital prerequisite for spiritual nirvana, or transcendence of the mind to a higher plane of existence. But does insight exist for the everyday folk of the world? Are the momentary flashes of inspiration and creativity part and parcel of the same phenomenon or are we missing out on something much more worthwhile? What neurological basis does this mental state have and how can its materialisation be ensured? These are the questions I would like to explore in this article.

Insight can be defined as the mental state whereby confusion and uncertainty are replaced with certainty, direction and confidence. It has many alternative meanings and contexts, ranging from a piece of obtained information to the psychological capacity to introspect objectively (as judged by some external observer – introspection is by its very nature subjective). Perhaps the most fascinating and generally applicable context is one which can be described as 'an instantaneous flash of brilliance' or 'a sudden clearing of murky intellect and intense feelings of accomplishment'. In short, insight (in the context which interests me) is that attributed to the geniuses of society, those who seemingly gather tiny shreds of information and piece them together to solve a particularly challenging problem.

Archimedes is perhaps the most widely cited example of human insight. As the story goes, Archimedes was inspired by the displacement of water in his bathtub to formulate a method for calculating the volume of an irregular object. This technique was of great empirical importance as it allowed a reliable measure of density (a concern arising from more fiscal motivations in those ancient times, such as verifying the purity of gold). The climax of the story describes a naked Archimedes running wildly through the streets, unable to control his excitement at this 'Eureka' moment. Whether the story is actually true or not has little bearing on the force of the argument; most of us have experienced such a moment at some point in our lives, best summarised as the overcoming of seemingly insurmountable odds to conquer a difficult obstacle or problem.

But where does this inspiration come from? It almost seems as though the ‘insightee’ is unaware of the mental efforts to arrive at a solution, perhaps feeling a little defeated after a day spent in vain. Insight then appears at an unexpected moment, almost as though the mind is working unconsciously and without direction, and offers a brilliant method for victory. The mind must have some unconscious ability to process and connect information regardless of our directed attention to achieve moments such as this. Seemingly unconnected pieces of information are re-routed and brought to our attention in the context of the previous problem. Thus could there be a neurobiological basis for insight? One that is able to facilitate a behind-the-scenes process?

Perhaps insight is encouraged by the physical storage and structure of neural networks. In the case of Archimedes, the solution was prompted by the mundane task of taking a bath; superficially unrelated to the problem, but with the value of its properties amplified by a common neural pathway (low bathwater, insert leg, raised bathwater: similar to volumes and matter in general). That is, the neural pathways activated by taking a bath are somehow similar to those activated by rumination on the problem at hand. Alternatively, the unconscious mind may be able to draw basic cause-and-effect conclusions which are then boosted to the forefront of our minds if they are deemed useful (ie: immediately relevant to the task being performed). Whatever the case may be, it seems that at times our unconscious minds are smarter than our conscious attention.

The real question is whether insight is an intangible state of mind (a la 'getting into the zone') that can be turned on and off (thus making it useful for extending humanity's mental capabilities), or whether it is just a mental byproduct of overcoming a challenge (a hormonal response designed to encourage such thinking in the future). Can the psychological state of insight be induced via a manipulation of the subject's neuronal composition and environmental characteristics (those conducive to achieving insight), or is it merely an evolved response that serves a (behaviourally) reinforcing purpose?

Undoubtedly the agent's environment plays a part in determining the likelihood of insight occurring. Taking into account personal preferences (does the person prefer quiet spaces for thinking?), the characteristics of the environment could hamper the induction of such a mental state if they are sufficiently irritating to the individual. Insight may also be closely linked with intelligence and, depending on your personal conception of this, neurological structure (if one purports a strictly biological basis of intelligence). If this postulate is taken at face value, we reach the conclusion that the degree of intelligence is directly related to the likelihood of insight, and perhaps also to the 'quality' of the insightful event (ie: a measure of its brilliance relative to inputs such as the level of available information and the difficulty of the problem).

But what of day-to-day insight? It seems to crop up in all sorts of situations. In this context, insight might require a grading scale as to its level of brilliance if its use is to be justified in more menial situations and circumstances. Think of that moment when you forget a particular word and, try as you might, cannot remember it for the life of you. Recall also that flash of insight where the answer is simply handed to you on a platter without any conscious effort to retrieve it. Paradoxically, it seems that the harder we try to solve the problem, the more difficult it becomes. Is this due to efficiency problems such as 'bottlenecking' of information transfer, personality traits such as performance anxiety and frustration, or some underlying and unconscious process that is able to retrieve information without conscious direction?

Whatever the case may be, our scientific knowledge on the subject is distinctly lacking, therefore an empirical inquiry into the matter is more than warranted (if it hasn't already been commissioned). Psychologically, the concept of insight could be tested experimentally by providing subjects with a problem to solve and manipulating the level of information (eg 'clues') and its relatedness to the problem (with consideration given to intelligence, perhaps via two groups, high and low intelligence). This may help to uncover whether insight is a matter of information processing or something deeper. If science can learn how to artificially induce a mental state akin to insight, the benefits for a positive-futurist society would be grand indeed.
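
As a sketch of how data from the experiment proposed above might be simulated and analysed (crossing clue quantity with clue relatedness), consider the following. Every mean, spread and sample size here is invented purely for illustration; a real study would also cross these factors with the intelligence grouping and use a full factorial analysis.

```python
# Sketch of analysing the proposed insight experiment: a 2 x 2 design crossing
# clue level (few / many) with clue relatedness (low / high), solution time as outcome.
# All means, spreads and sample sizes are invented purely for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 30  # hypothetical participants per cell

# Simulated solution times (seconds); assume related, plentiful clues help most.
conditions = {
    ("few", "low"):   rng.normal(240, 40, n),
    ("few", "high"):  rng.normal(210, 40, n),
    ("many", "low"):  rng.normal(220, 40, n),
    ("many", "high"): rng.normal(160, 40, n),
}

# One-way ANOVA across the four cells as a first pass; a full factorial ANOVA
# (e.g. via statsmodels) would separate main effects from the interaction.
f_stat, p_value = stats.f_oneway(*conditions.values())
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
for (clues, relatedness), times in conditions.items():
    print(f"{clues:>4} clues, {relatedness:>4} relatedness: mean = {times.mean():.0f} s")
```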

After returning from a year-long hiatus in the United Kingdom and continental Europe, I thought it would be prudent to share my experiences. Having caught the travel bug several years ago when visiting the UK for the first time, a year-long overseas working holiday seemed like a dream come true. What I didn't envisage were the effects of this experience on cognition, specifically feelings of displacement, disorientation and dissatisfaction. In this article I aim to examine the effects of a changing environment on the human perceptual experience as it relates to overseas travel and out-group exposure, and the psychological mechanisms underlying these cognitive fluctuations.

It seems that the human need to belong runs deeper than most would care to admit. Having discounted any possibility of 'homesickness' prior to arrival in the UK, I was surprised to find myself unwittingly (or perhaps conforming to unconscious social expectation – but we aren't psychoanalysts here!) experiencing the characteristic symptomatology of depression, including negative affect, longing for a return home and feelings concurrent with social ostracism. This struck me as odd, in that if one is aware of an impending event, surely this awareness predisposes one to a lesser effect simply through mental preparation and conscious deflection of the expected symptoms. The fact that negative feelings were still experienced despite such awareness suggests an alternative etiology for the phenomenon of homesickness. Indeed, it offers a unique insight into the human condition; at a superficial level our dependency on consistency and familiarity, and at a deeper, more fundamental level, a possible interpretation of the underlying cognitive processes involved in making sense of the world and responding to stimuli.

Taken at face value, a change in an individual's usual physical and social environment reveals the human reliance on group stability. From an evolutionary perspective, the prospect of travel to new and unfamiliar territories (and potential groups of other humans) is an altogether risky affair. On the one hand, the individual (or group) could face death or injury through anthropogenic means or from the physical environment. On the other hand, a lack of change reduces stimulation genetically (through interbreeding with biologically related group members), cognitively (reduced problem solving, mental stagnation once the initial challenges of the environment are overcome) and socially (exposure to familiar sights and sounds reduces the capacity for growth in language and, ipso facto, culture). In addition, the reduction of physical resources through consumption and degradation of the land via over-farming (and over-hunting) is another reason for moving beyond the confines of what is safe and comfortable. As the need for biological sustenance outranks all other human requirements (according to Maslow's hierarchy), it seems plausible that this is the main motivating factor behind human groups migrating and risking everything for the sake of exploring terra incognita.

The mere fact that we do, and have (as shown throughout history), uprooted our familiar ties and trundled off in search of a better existence seems to make the aforementioned argument a moot point. It is not something to be debated; it is merely something that humans do. Evolution favours travel, with the potential benefits far outweighing the risks. The promise of greener pastures on the other side is almost enough to guarantee success. The cognitive stimulation such travel brings may also improve the future chances of success through learnt experiences and the conquering of challenges, as facilitated by human ingenuity.

But what of the social considerations when travelling? Are our out-group prejudices so intense that the very notion of travel to uncharted waters causes waves of anxiety? Are we fearing the unknown, our ability to adapt and integrate, or the possibility that we may not make it out alive and survive to propagate our genes? Is personality a factor in predicting an individual's performance (in terms of adaptation to the new environment, integration with a new group and success at forging new relationships)? From personal experience, perhaps a combination of all these factors and more.

We can begin to piece together a rough working model of travel and its effects on an individual's social and emotional stability and wellbeing. A change in social and physical environment seems to activate certain evolutionary survival mechanisms, mediated by several conditions of the travel undertaken. Such conditions could include: similarity of the target country to the country of origin (in terms of culture, language, ethnic diversity, political values etc), social support for the individual (group size when travelling, facilities for making contact with group members left behind), personality characteristics of the individual (impulsiveness, extroversion vs introversion, attachment style, confidence) and cognitive ability to integrate and adapt (language skills, intelligence, social ability). Thus we have a (predicted) linear relationship whereby an increase in the degree of change (measured on a multitude of variables such as physical characteristics, social aspects, perceptual similarities) from the original environment to the target environment produces a change in the psychological distress of the individual (either increased or decreased depending on the characteristics of the mediating variables).
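
A toy formulation of that predicted relationship might look like the following, where the degree of environmental change drives distress and the mediating variables dampen (or steepen) the slope. The variable names and coefficients are entirely hypothetical, chosen only to make the shape of the model concrete.

```python
# Toy moderated-linear model of the predicted travel/distress relationship.
# Variable names and coefficients are hypothetical, not estimates from any data.

def predicted_distress(change: float, social_support: float, adaptability: float) -> float:
    """Distress rises with the degree of environmental change (0-1 scale),
    but the slope is dampened by social support and cognitive adaptability (also 0-1)."""
    baseline = 1.0
    slope = 2.0 - 0.8 * social_support - 0.6 * adaptability  # moderation of the slope
    return baseline + max(slope, 0.0) * change

# The same degree of change, with weak versus strong moderators:
print(predicted_distress(0.8, social_support=0.2, adaptability=0.3))  # ~2.3, higher distress
print(predicted_distress(0.8, social_support=0.9, adaptability=0.8))  # ~1.6, lower distress
```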

Perceptually, travel also seems to affect the salience and characteristics of experience. Here, deeper cognitive processes are activated which influence the human sensory experience at a fundamental level. The model employed here is one of stimulus-response, handed down through evolutionary means from a distant ancestor. Direct observation of perceptual distortion while travelling is apparent when visiting a unique location. Personally, I would describe the experience as an increase in arousal to a state of hyper-vigilance. Compared with subsequent visits to the same location, the original seems somehow different in a perceptual sense. Colours, smells, sounds and tastes are all vividly unique. Details are stored in memory that are ignored and discounted after the first event. In essence, the second visit to a place seems to change the initial memory. It almost seems like a different place.

While I am unsure as to whether this is experienced by anyone apart from myself, evolutionarily it makes intuitive sense. The automation of a hyper-vigilant mental state would prove invaluable when placed in a new environment. Details spring forth and are accentuated without conscious effort, thus improving the organism’s chances of survival. When applied to modern situations, however, it is not only disorientating, but also very disconcerting (at least in my experience).

Moving back to the social aspects of travel, I have found it to be simultaneously a gift and a curse. Travel has enabled an increased understanding and appreciation of different cultures, ways of life and alternative methods for getting things done. In the same vein, however, it has instilled a distinct feeling of unease and dissatisfaction with things I once held dear. Some things you simply take for granted or fail to notice and challenge. In this sense, exposure to other cultures is liberating; especially in Europe, where individuality is encouraged (mainly in the UK) and people expect more (resulting in a greater number of opportunities for those who work hard to gain rewards and recognition). The Australian way of life, unfortunately, is one that is intolerant of success and uniqueness. Stereotypical attitudes are abundant, and it is frustrating to know that there is a better way of living out there.

Perhaps this is one of the social benefits of travel: the more group members who travel, the greater the chance of shifting ways of life towards more tolerant and efficient methods. Are we headed towards a world-culture where diversity is replaced with (cultural) conformity? Is this ethically viable or warranted? Could it do more harm than good? It seems to me that there would be some positive aspects to a global conglomerate of culture. Then again, the main attraction of travel lies in the experience of the foreign and unknown. To remove that would be to remove part of the human longing for exploration and a source of cognitive, social and physical stimulation. Perhaps instead we should encourage travel in society's younger generations, exposing them to such experiences and encouraging internal change based on better ways of doing things. After all, we are the ones that will be running the country someday.

Many of us take the capacity to sense the world for granted. Sight, smell, touch, taste and hearing combine to paint an uninterrupted picture of the technicolour apparition we call reality. Such lucid representations are what we use to define objects in space, plan actions and manipulate our environment. However, reality isn’t all that it’s cracked up to be. Namely, our role in defining the universe in which we live is much greater than we think. Humanity, through the use of sensory organs and the resulting interpretation of physical events, succeeds in weaving a scientific tapestry of theory and experimentation. This textile masterpiece may be large enough to ‘cover all bases’ (in terms of explaining the underlying etiology of observations), however it might not be made of the right material. With what certainty do scientific observations carry a sufficient portion of objectivity? What role does the human mind and its modulation of sensory input have in creating reality? What constitutes objective fact and how can we be sure that science is ‘on the right track’ with its model of empirical experimentation? Most importantly, is science at the cusp of an empirical ‘dark age’ where the limitations of perception fundamentally hamper the steady march of theoretical progress? These are the questions I would like to explore in this article.

The main assumption underlying scientific methodology is that the five sensory modalities employed by the human body are, by and large, uniformly employed. That is, despite small individual fluctuations in fidelity, the performance of the human senses is mostly equal. Visual acuity and auditory perception are sources of potential variance, however the advent of certain medical technologies has circumvented and nullified most of these disadvantages (glasses and hearing aids, respectively). In some instances, such interventions may even improve the individual's sensory experience, exceeding 'normal' ranges through the use of further refined instruments. Such is the case with modern science, as the realm of classical observation becomes supplanted by the need for new, revolutionary methods designed to observe both the very big and the very small. Satellites loaded with all manner of detection equipment have become our eyes for the ultra-macro; NASA's COBE orbiter gave us the first view of early universal structure via detection of the cosmic microwave background radiation (CMB). Likewise, scanning probe microscopy (SPM) enables scientists to observe on the atomic scale, below the threshold of visible light. In effect, we have extended and supplemented our ability to perceive reality.

But are these innovations also improving the objective quality of observations, or are we being led into a false sense of security? Are we becoming comfortable with the idea that what we see constitutes what is really ‘out there’? Human senses are notoriously prone to error. In addition, machines are only as good as their creator. Put another way, artificial intelligence has not yet superseded the human ‘home grown’ alternative. Therefore, can we rely on a human-made, artificial extension of perception with which to make observations? Surely we are compounding the innate inaccuracies, introducing a successive error rate with each additional sensory enhancement. Not to mention the interpretation of such observations and the role of theory in whittling down alternatives.

Consensus cannot be reached on whether what I perceive is anything like what you perceive. Is my perception of the colour green the same as yours? Empirically and philosophically, we are not yet in a position to determine with any objectivity whether this is the case. We can examine brain structure and compare regions of functional activity, however the ability to directly extract and record aspects of meaning and consciousness is still firmly in the realms of science fiction. The best we can do is simply compare and contrast our experiences through the medium of language (which introduces its own set of limitations). As mentioned above, the human sensory experience can, at times, become lost in translation.

Specifically, the ability of our minds to disentangle the information overload that unrelentingly flows through mental channels can wane due to a variety of influences. Internally, the quality of sensory inputs is governed at a fundamental level by biological constraints. Millions of years of evolution have resulted in a vast toolkit of sensory automation. Vision, for example, has developed in such a way as to become a totally unconscious and reflexive phenomenon. The biological structure of individual retinal cells predisposes them to respond to certain types of movement, shapes and colours. Likewise, the organisation of neurons within regions of the brain, such as the primary visual cortex in the occipital lobe, processes information in pre-defined ways. In the case of vision, the vast majority of processing is done automatically, thus reducing the overall level of awareness and direct control the conscious mind has over the sensory system. The conclusion here is that we are limited by physical structure rather than by differences in conscious discrimination.

The retina acts as both the primary source of input and a first-order processor of visual information. In brief, photons are absorbed by receptors on the back wall of the eye. These incoming packets of energy are absorbed by specialised proteins (rods for light intensity, cones for colour) and trigger action potentials in attached neurons. Low-level processing is accomplished by the lateral organisation of retinal cells; ganglionic neurons are able to communicate with their neighbours and influence the likelihood of their signal transmission. Communication between cells in this manner facilitates basic feature recognition (specifically edges, via light and dark discrepancies) and motion detection.
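
The lateral organisation described above can be caricatured as a centre-surround operation: each cell is excited by its own input and inhibited by its neighbours, so uniform regions cancel out and boundaries stand out. The following sketch uses a toy image and kernel to illustrate the idea; it is not a physiological model.

```python
# Sketch of retinal-style lateral inhibition: each 'cell' is excited by its centre
# and inhibited by its neighbours, so uniform regions cancel out and edges stand out.
# The kernel and the toy image are illustrative, not a physiological model.
import numpy as np
from scipy.signal import convolve2d

# Centre-surround kernel: strong positive centre, inhibitory surround summing to zero.
kernel = np.array([
    [-1, -1, -1],
    [-1,  8, -1],
    [-1, -1, -1],
]) / 8.0

# Toy 'scene': a dark region on the left, a bright region on the right.
image = np.zeros((8, 8))
image[:, 4:] = 1.0

response = convolve2d(image, kernel, mode="same", boundary="symm")
print(np.round(response, 2))
# The response is ~0 inside both uniform regions and non-zero only along the
# boundary between them: basic edge detection before the signal leaves the retina.
```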

As with all the sensory modalities, information is then transmitted to the thalamus, a primitive brain structure that acts as a communications 'hub'; its proximity to the brain stem (mid and hind brain) ensures that reflexes are privy to visual input prior to conscious awareness. The lateral geniculate nucleus is the region of the thalamus which splits incoming visual input into three main signals (M, P and K). Interestingly, these channels stream inputs into signals with unique properties (eg exclusively colour, motion etc). In addition, the cross-lateralisation of visual input is a common feature of human brains. Left and right fields of view are diverted at the optic chiasm and processed on opposite hemispheres (the left field of view from both eyes is processed on the right side of the brain). One theory as to why this system develops is that it minimises the impact of unilateral hemispheric damage – the 'dual brain' hypothesis (each hemisphere can act as an independent agent, reconciling and supplementing reductions in function due to damage).

We seem to fall back lazily on these automated subsystems, never fully appreciating and flexing the full capabilities of our sensory apparatus. Michael Frayn, in his book 'The Human Touch', demonstrates this point aptly:

“Slowly, as you force yourself to observe and not to take for granted what seems so familiar, everything becomes much more complicated…That simple blueness that you imagined yourself seeing turns out to have been interpreted, like everything else, from the shifting, uncertain material on offer” (Frayn, 2006, p. 26)

Of course, we are all blissfully ignorant of these finer details when it comes to interpreting the sensory input gathered by our bodies. The consciousness acts ‘with what it’s got’, without a care as to the authenticity or objectivity of the observations. We can observe this first hand in a myriad of different ways; ways in which the unreal is treated as if it were real. Hallucinations are just one mechanism where the brain is fooled. While we know such things are false, to a degree (depending upon the etiology, eg schizophrenia), such visual disturbances nonetheless are able to provoke physiological and emotional reactions. In summary, the biological (and automated) component of perception very much determines how we react to, and observe, the external world. In combination with the human mind (consciousness), which introduces a whole new menagerie of cognitive baggage, a large amount of uncertainty is injected into our perceptual experience.

Expanding outwards from this biological launchpad, it seems plausible that the qualities which make up the human sensory experience should have an effect on how we define the world empirically. Scientific endeavour labours to quantify reality and strip away the superfluous extras, leaving only constitutive and fundamental elements. To accomplish this task, humanity employs empirical observation. The segue between the biological foundations of perception and the paradigm of scientific observation involves a shared sensory limitation. Classical observation was limited by 'naked' human senses. As the bulk of human knowledge grew, so too did the need to extend and improve methods of observation. Consequently, science is now perhaps realising the limitation of the human mind to digest an overwhelming plethora of information.

Currently, science is restricted by the development of technology. Progress is only maintained through the ingenuity of the human mind to solve biological disadvantages of observation. Finely tuned microscopes tap into quantum effects in order to measure individual atoms. Large radio-telescope arrays link together for an eagle’s eye view of the heavens. But as our methods and tools for observing grow in complexity, so too does the degree of abstract reasoning that is required to grasp the implications of their findings. Quantum theory is one such warning indicator.

Like a lighthouse sweeping the night sky to signal impending danger, quantum physics, or more precisely humanity's inability to agree on any one consensus which accurately models reality, could be telling us something. Perhaps we are becoming too reliant on our tools of observation, using them as a crutch in a vain attempt to avoid our biological limitations. Is this a hallmark of our detachment from observation? Quantum 'spookiness' could simply be the result of a fundamental limitation of the human mind to internally represent and perceive increasingly abstract observations. Desperately trying to consume the reams of information that result from rapid progress and intense observation, scientific paradigms become increasingly specialised and divergent, increasing the degree of inter-departmental bureaucracy. It now takes a lifetime of training to even grasp the basics of current physical theory, let alone the time taken to dissect observations and truly grasp their essence.

In a sense, science is at a crossroads. One pathway leads to an empirical dead end: humanity exhausts every possible route of explanation. The other involves either artificial augmentation (in essence, AI that can do the thinking for us) or a fundamental restructuring of how science conducts its business. Science is in danger of information overload; the limitations introduced by a generation of unrelenting technical advancement and increasingly complex tools of observation have taken their toll. Empirical progress is stalling, possibly due to a lack of understanding by those doing the observing. Science is detaching from its observations at an alarming rate and, if we aren't careful, is in danger of losing sight of what the game is all about: the quest for knowledge and understanding of the universe in which we live.

When people attempt to describe their sense of self, what are they actually incorporating into the resultant definition? Personality is perhaps the most common conception of self, with vast amounts of empirical validation. However, our sense of self runs deeper than such superficial descriptions of behavioural traits. The self is an amalgamation of all that is contained within the mind; a magnificent average of every synaptic transmission and neuronal network. Like consciousness, it is an emergent phenomenon (the sum is greater than the parts). But unlike consciousness, the self ceases to be when individual components are removed or modified. For example, consciousness is virtually unchanged (in the sense of what it defines – directed, controlled thought) by the removal of successive faculties. We can remove physical brain structures such as the amygdala and still utilise our capacities for consciousness, albeit losing a portion of the informative inputs. The self, however, is a broader term, describing the current mental state of 'what is'. It is both descriptive, providing a broad snapshot of what we are at time t, and prescriptive, in that the sense of self influences how behaviours are actioned and information is processed.

In this article I intend to firstly describe the basis of ‘traditional’ measures of the self; empirical measures of personality and cognition. Secondly I will provide a neuro-psychological outline of the various brain structures that could be biologically responsible for eliciting our perceptions of self. Finally, I wish to propose the view that our sense of self is dynamic, fluctuating daily based on experience and discuss how this could affect our preconceived notions of introspection.

Personality is perhaps one of the most measured variables in psychology. It is certainly one of the most well-known, through its portrayal in popular science as well as self-help psychology. Personality could also be said to comprise a major part of our sense of self, in that the way in which we respond to and process external stimuli (both physically and mentally) has major effects on who we are as an entity. Personality is also incredibly varied; whether due to genetics, environment or a combination of both. For this reason, psychological study of personality takes on a wide variety of forms.

The lexical hypothesis, proposed by Francis Galton in the 19th century, became the first stepping stone from which the field of personality psychometrics was launched. Galton's proposition was that the sum of human language, its vocabulary (lexicon), contains the necessary ingredients from which personality can be measured. During the 20th century, others expanded on this hypothesis and refined Galton's technique through the use of factor analysis (a mathematical model that summarises common variance into factors). Methodological and statistical criticisms of this method aside, the lexical hypothesis proved useful in classifying individuals into categories of personality. However, this model is purely descriptive; it simply summarises information, extracting no deeper meaning and providing no background theory with which to explain the etiology of such traits. Those wishing to learn more about descriptive measures of personality can find this information under the headings 'The Big Five Inventory' (OCEAN) and Hans Eysenck's Three Factor model (PEN).
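
As a minimal sketch of the factor-analytic step behind such trait models, the following generates made-up questionnaire responses from two latent traits and then recovers a two-factor structure. The item loadings, sample size and use of scikit-learn's FactorAnalysis are illustrative assumptions, not a description of how any particular inventory was actually built.

```python
# Sketch of the factor-analytic step behind trait models: reduce correlated
# questionnaire items to a small number of latent factors. The items and
# responses below are invented; real inventories use far larger samples.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_respondents = 200

# Two latent traits (say, extraversion and conscientiousness) generate six items.
latent = rng.normal(size=(n_respondents, 2))
loadings = np.array([
    [0.9, 0.0], [0.8, 0.1], [0.7, 0.0],   # items loading on factor 1
    [0.0, 0.8], [0.1, 0.9], [0.0, 0.7],   # items loading on factor 2
])
responses = latent @ loadings.T + rng.normal(scale=0.3, size=(n_respondents, 6))

fa = FactorAnalysis(n_components=2, random_state=0).fit(responses)
print(np.round(fa.components_, 2))  # recovered loadings: items cluster onto two factors
```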

Neuropsychological methods of defining personality are less reliant on statistical techniques and utilise a posteriori knowledge (as opposed to the lexical hypothesis, which relies on reasoning and deduction). Thus, such theories have a solid empirical background, with first-order experimental evidence supporting the conclusions reached. One such theory is the BIS/BAS (behavioural inhibition/activation system). Proposed by Gray (1982), the BIS/BAS conception of personality builds upon individual differences in cortical activity to arrive at the observable differences in behaviour. Such a revision of personality turns the tables on traditional methods of research in this area, moving away from superficially describing the traits to explaining the underlying causality. Experimental evidence has lent support to this model through direct observation of cortical activity (functional MRI scans). Addicts and sensation seekers are found to have high scores on behavioural activation (associated with increased pre-frontal lobe activity), while introverts score high on behavioural inhibition. This seems to match our intuitive preconceptions of these personality groupings: sensation seekers are quick to action; in short, they tend to act first and think later. Conversely, introverts act more cautiously, adhering to a policy of 'looking before they leap'. Therefore, while not encapsulating as wide a variety of individual personality factors as the 'Big Five', the BIS/BAS model and others based on neurobiological foundations seem to be tapping into a more fundamental, materialist/reductionist view of behavioural traits. The conclusion here is that directly observable events and the resulting individual differences arise ipso facto from specific regions in the brain.

Delving deeper into this neurology, the sense of self may have developed as a means to an end; the end in this case being the prediction of others' behaviour. Our sense of self and consciousness may therefore have evolved as a way of internally simulating how our social competitors think, feel and act. V. Ramachandran (M.D.), in his Edge.org essay, calls upon his neurological experience and knowledge of neuroanatomy to provide a unique insight into the physiological basis of self. Mirror neurons are thought to act as mimicking simulators of external agents, in that they show activity both while performing a task and while observing someone else performing the same task. It is argued that such neuronal conglomerates evolved due to social pressures: a method of second-guessing the possible future actions of others. The ability to direct these networks inwards was an added bonus. The human capacity for constructing a valid theory of mind also gifted us with the ability to scrutinise the self from a meta-perspective (an almost 'out-of-body' experience, a la a 'Jiminy Cricket' style conscience).

Mirror neurons also act as empathy meters, firing during moments of emotional significance. In effect, our ability to recognise the feelings of others stems from a neuronal structure that actually elicits such feelings within the self. Our sense of self, thus, is inescapably intertwined with the selves of other agents. Like it or not, biological dependence on the group has resulted in the formation of neurological triggers which fire spontaneously and without our consent. In effect, the intangible self can be influenced by other intangibles, such as emotional displays. We view the world through 'rose-coloured glasses', with an emphasis on theorising the actions of others through how we would respond in the same situation.

So far we have examined the role of personality in explaining a portion of what the term ‘self’ conveys. In addition, a biological basis for self has been introduced which suggests that both personality and the neurological capacity for introspection are both anatomically definable features of the brain. But what else are we referring to when we speak of having a sense of self? Surely we are not doing this construct justice if all that it contains is differences in behavioural disposition and anatomical structure.

Indeed, the sense of self is dynamic. Informational inputs constantly modify and update our knowledge banks, which in turn have ramifications for the self. Intelligence, emotional lability, preferences, group identity, proprioception (spatial awareness); the list is endless. Although some of these categories of self may be collapsible into higher-order factors (personality could incorporate preference and group behaviour), it is arguable that doing so would result in a loss of information. The point here is that looking only at the bigger picture may obscure the finer details that could lead to further enlightenment about what we truly mean when we discuss the self.

Are you the same person you were 10 years ago? In most cases, if not all, the answer will be no. Core traits such as temperament may remain relatively stable, yet individuals arguably change and grow over time, and their sense of self changes with them. Some people become more attuned to their sense of self than others, developing a close relationship with it through introspective analysis. Others, sadly, seem to lack this capacity for meta-cognition; thinking about thinking, asking the questions ‘why’, ‘who am I’ and ‘how did I come to be’. I believe this has implications for the growth of humanity as a species.

Is a state of societal eudaimonia sustainable in a population that has varying levels of ‘selfness’? If self is linked to the ability to simulate the minds of others, which in turn depends upon both neurological structure (leaving it open to genetic variation that may reduce or modify such capacities) and empathic responses, the answer to this question is a resounding no. Whether due to nature or nurture, society will always have individuals who are more self-aware than others and, as a result, more attentive to the mental states of others. A lack of compassion for the welfare of others, coupled with an inability to analyse the self with any semblance of drive and purpose, spells doom for a harmonious society. Individuals lacking in self will refuse, through ignorance, to grow and become socially aware.

Perhaps collectivism is the answer; forcing groups to cohabit may foster an increased appreciation for theory of mind. Yet if the basis of this process is mainly biological (as it would seem to be), such a policy would be social suicide. The answer could instead dwell in the education system. Introducing children to the mental pleasures of psychology and, at a deeper level, philosophy may lead them to recognise the importance of self-reflection. The question here is not only whether students will grasp these concepts with any enthusiasm, but also whether such traits can be taught via traditional methods. More research must be conducted into the nature of the self if we are to answer this quandary. Is self related directly to biology (we are stuck with what we have), or can it be instilled via psycho-education and a modification of environment?

Self may always remain something of a mystery due to its dynamic and varied nature. It is with hope that we look to science and encourage its attempts to pin down the details of this elusive subject. Even if this quest fails to produce a universal theory of self, perhaps it will succeed in shedding at least some light on the murky waters of self-awareness. In doing so, psychology stands to benefit from both a philosophical and a clinical perspective, increasing our knowledge of the causality underlying disorders of the self (body dysmorphia, depression and suicide, self-harm).

If you haven’t already done so, take a moment now to begin your journey of self discovery; you might just find something you never knew was there!

Most of us would like to think that we are independent agents in control of our own destiny. After all, free-will is one of the phenomena that humanity claims as uniquely its own; a fundamental part of our cognitive toolkit. Experimental evidence, in the form of neurological imaging, has been interpreted as an attack on this mental freedom. Studies highlighting the possibility of unconscious activity preceding the conscious ‘will to act’ seem almost to sink the arguments of non-determinists (libertarians). In this article I plan to outline this controversial research and offer an alternative interpretation; one which does not infringe on our ability to act independently and of our own accord. I would then like to explore some of the situations where free-will could be ‘missing in action’ and suggest that this occurs more frequently than we might expect.

A seminal investigation conducted by Libet et al. (1983) was the first to challenge, empirically, our preconceived notions of free-will. The setup consisted of an electroencephalograph (EEG, measuring electrical potentials through the scalp) connected to the subject, and a large clock face with markings denoting time periods. Subjects were required simply to flick their wrist whenever the urge took them. The researchers were particularly interested in the ‘Bereitschaftspotential’, or readiness potential (RP); a signature EEG pattern that signals the beginning of volitional movement. Put simply, the RP is a measurable spike in electrical activity from the pre-motor region of the cerebral cortex; a preparatory action that puts the wheels of movement into motion.

Results of this experiment indicated that the RP significantly preceded the subjects' reported moment of conscious awareness. That is, the brain's preparation to flick the wrist seemed to precede conscious awareness of the intention to do so. While the actual delay between RP onset and conscious registration of the intent to move was small by everyday standards, the gap of roughly half a second was more than enough to assert that a measurable difference had occurred. Libet interpreted these findings as having vast implications for free-will. He argued that since electrical activity preceded conscious awareness of the intent to move, free-will to initiate movement was non-existent (Libet did allow free-will a role in controlling movements already in progress; that is, modifying their path or acting as a final ‘veto’ that allows or disallows them).
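To see what the timing claim amounts to, here is a toy sketch comparing hypothetical readiness-potential onsets with hypothetical reports of conscious awareness. The numbers are invented purely to echo the ordering commonly reported for these experiments; they are not Libet's actual data.

```python
# Toy re-creation of the timing comparison at the heart of the Libet result.
# Values are invented for illustration, chosen only to echo the commonly
# reported ordering (RP well before movement, reported awareness later),
# with movement occurring at 0 ms.

trials = [
    # (RP onset, reported awareness), in ms relative to movement (negative = earlier)
    (-560, -210),
    (-530, -190),
    (-580, -220),
    (-540, -200),
]

# How long the RP precedes the reported urge to move, per trial.
gaps = [awareness - rp for rp, awareness in trials]
mean_gap = sum(gaps) / len(gaps)

print(f"Mean RP-to-awareness gap: {mean_gap:.0f} ms")
# A reliably positive gap means unconscious preparation precedes the reported
# urge to move -- the observation Libet read as a challenge to free-will.
```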

Many have taken the time to respond to Libet's initial experiment. Daniel Dennett (in his book Freedom Evolves) provides an apt summary of the main criticisms. The most salient rebuttal comes in the form of signal delay. Consciousness is notoriously slow in comparison to the automated mental processes that act behind the scenes. Take the sensation of pain, for example. Stimulation of the nerve endings must first reach a sufficient level for an action potential to fire, causing the axon terminal to release neurotransmitters into the synaptic cleft. The second-order neuron then receives these chemical messengers, modifying its electrical charge and triggering another action potential along its axon. Taking into account the distance this signal must travel (at anywhere from 1-10 m/s), it eventually arrives at the thalamus, the brain's sensory ‘hub’, where it is routed onwards and finally registered in consciousness. Consequently, there is a measurable gap between the external event and conscious awareness; perhaps made even larger if the signal is weak (mild pain) or the mind is distracted. Here too, electrical activity is taking place and preceding consciousness. Arguably, the same phenomenon could be occurring in the Libet experiment.
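A quick back-of-the-envelope calculation shows how much latency raw conduction alone could contribute. The path length below is an assumed figure and the velocities are simply the range quoted above.

```python
# Back-of-the-envelope conduction delay for the pain example above.
# The path length is an assumed figure; the velocities are the 1-10 m/s
# range mentioned in the text.

distance_m = 1.0                 # assumed path length from, say, foot to brain
velocities_m_per_s = [1, 5, 10]

for v in velocities_m_per_s:
    delay_ms = distance_m / v * 1000
    print(f"At {v} m/s, conduction alone takes ~{delay_ms:.0f} ms")

# Even before synaptic transmission and cortical processing are added,
# these delays are of the same order as the gap in Libet's data, which is
# why signal latency is such a salient rebuttal.
```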

Delays are inevitable whenever consciousness enters the equation. The brain is a conglomerate of specialised compartments, each communicating with its neighbours and performing its own part of the process in turn. Evolution has drafted brains that act automatically first and consciously second; consequently, the automatic takes priority over the directed. Reflexes and instincts act to save our skins long before we are even aware of the problem. Naturally, then, electrical activity in the brain can precede conscious awareness.

In the Libet experiment, the experimental design itself could be misleading. Libet seems to equate the timing of conscious registration with free-will, when in actual fact the agent has already freely decided to follow the experimenter's instructions. What I am trying to say is that free-will does not have to act as the initiator of every movement; rather, it ‘sets the stage’ for events and authorises the operation to go ahead. When told to move voluntarily, the agent's will decides either to comply or to rebel. Compliance authorises movement, but the specifics are left up to chance. Perhaps a random input (quantum indeterminacy?) provides the catalyst that combines with this initial authorisation to create the RP and the eventual movement. Conscious registration of this fact only occurs once the RP is already forming.

Looking at things from this perspective, consciousness seems to play a constant game of ‘catch-up’ with the automated processes in our brains. Our will is content to act as a global authority, leaving the more menial and mundane tasks up to our brain’s automated sub-compartments. Therefore, free-will is very much alive and kicking, albeit sometimes taking a back-seat to the unconscious.

We have begun by exploring the nature of free-will and how it links in with consciousness. But what of the unconscious instincts that seek to override our sense of direction and regress humanity back to its more animalistic, primitive ancestry? Such instincts act covertly, sneaking into action while our will is otherwise indisposed. Left unchecked, the agent who gives themselves over completely to urges and evolutionary drives could be said to be devoid of free-will, or at the very least somewhat lacking compared to more ‘aware’ individuals. Take sexual arousal, for instance. Like it or not, our bodies act on impulse, removing free-will from the equation with simple stimulus-response conditioning. Try as we might, sexual arousal (if allowed to run its course) acts immediately upon visual or physical stimulation. It is only when consciousness kicks into gear and yanks on the leash attached to the unconscious that control is regained. Eventually, with enough training, it may be possible to override these primitive responses, but the conscious effort required to sustain such a project would be mentally draining.

Society also seeks to rob us of our free-will. People are pushed and pulled by group norms, the expectations of others and the messages that bombard us daily. Rather than encouraging individualism, modern society urges us to follow trends. Advertising is crafted so that individuals may even be fooled into thinking they are arriving at decisions of their own volition (subliminal messaging), when in actual fact it is simply tapping into some basic human need for survival (food, sex, shelter and security).

Ironically, science itself could also be said to be reducing the amount of free-will we can exert. Scientific progress seeks to render the world deterministic; that is, totally predictable through increasingly accurate theories. While the jury is still out on whether ‘ultimate’ accuracy in prediction will ever be achieved (arguably, there are not enough bits of information in the universe with which to construct a computer powerful enough for the task), science is coming closer to a deterministic framework in which the paths of individual particles can be predicted. Quantum physics is merely the next hurdle to be overcome in this quest for omniscience. If the inherent randomness of quantum processes is ever fully explained, perhaps we will be in a position (at least scientifically) to model an individual's future actions from a set of initial variables.

What could this mean for the nature of free-will? If past experiments are anything to go by (Libet et al.), it will rock our sense of self to the core. Are we merely behaviouristic automatons, as the psychologist Skinner proposed? Delving deeper into the world of the quanta, will we ever be able realistically to model and predict the paths of individual particles, and thus the future course of the entire system? Perhaps the Heisenberg uncertainty principle will spare us from this bleak fate. The irreducible randomness of the quantum wave function could be the final insurmountable obstacle that neither neurological researchers nor philosophers will ever conquer.

While I am all for scientific progress and increasing the bulk of human knowledge, perhaps we are jumping the gun with free-will. Perhaps some things are better left mysterious and unexplained. A defeatist attitude if ever I saw one, but it could be justified. After all, how would you feel if you knew every action had been decided before you were even a twinkle in your father's eye? Would life even be worth living? Sure, but it would take a lot of reflection, and a personality that could either deny or reconcile the unease such a proposition brings.

They were right; ignorance really is bliss.

Compartmentalisation of consciousness

A common criticism I have come across during my philosophical wanderings is the accusation that such thinkers and dreamers cannot possibly expect their ideas ever to take hold in society. “What is the point of philosophy”, they cry, “if the very musings being proposed cannot be realistically and pragmatically implemented?” The subtle power of this argument is often overlooked; its point is more than valid. If all philosophy can do is outline an individual's thoughts in a clear and concise manner without even a hint of how to implement those ideas, then what is the point in airing them at all? Apart from the intellectual stimulation such discussion brings, of course, it seems as though the observations of philosophers are wasted.

In the modern world, the philosopher takes a backseat when it comes to government policy and the daily operation of the state. Plato painted a far rosier picture in his ideal Republic, which placed philosophers directly in the ruling class and laid great emphasis on their ability to lead effectively.

“Until philosophers rule as kings or those who are now called kings and leading men genuinely and adequately philosophise, that is, until political power and philosophy entirely coincide, while the many natures who at present pursue either one exclusively are forcibly prevented from doing so, cities will have no rest from evils,… nor, I think, will the human race.” (Republic 473c-d)

But is this really attainable? Was Plato correct in stating ‘until (my italics, TC) philosophers rule as kings’? The implication here is that philosophers currently lack certain qualities that would make them suitable for leadership. Was Plato referring to a lack of practicality, a lack of confidence in their ability to lead, or something more mundane, such as the public's intrinsic distrust of intellectualism? Certainly, looking at the qualities of today's leaders, it seems that one requires expert skills in the art of social deception and persuasion in order to succeed. When Plato speaks of “those who love the sight of truth” in his description of the ideal “philosopher kings” that would rule the republic, it seems at loggerheads with the reality of modern politics.

So, to become a successful leader in the modern world, one must be socially skilled and able to sway the opinions of others, even if you don't end up delivering. The balancing act becomes one of pleasing the majority (either through actually delivering on election promises or by ‘pulling the wool over our eyes’ until we forget about them) while upsetting only the minority. Politicians need to know how to ‘play the system’ to their advantage. They must also exude power, real or imaginary, relying on unconscious processes such as social dominance expressed through both verbal and non-verbal communication. Smear campaigns taint the reputations of adversaries, and deals are brokered with the powerful few who can fund the election campaign with a ‘win at all costs’ attitude (in return for favours once the candidate is elected).

So why do such individuals gain a place above the world's thinkers? Plato would surely be turning in his grave to know that his ideal republic remains unrealised. I intend to argue that it is their pragmatism, their ability to turn policies into realities, that makes politicians preferable to philosophers in the public eye. Politicians seem to know the best ways of pleasing everyone at once, even when the outcome is not the best course of action. They can simply snap their fingers and make a problem disappear; ‘swept under the carpet’, temporarily at least, until their term ends and the aftermath must be dealt with by another political hopeful.

Philosophers are inherently unpopular. Not because they are wrinkly old men with white beards who mumble and smoke pipes indoors, but because they tell the truth. The scary thing is, the public does not want to hear about how things should be done; they just want problems gone with the least possible inconvenience to their own lives. This is where philosophy runs into trouble.

The whole ethos of philosophy is to consider the evidence objectively and plan for every contingency. It relies on criticism and deliberation in order to arrive at the most efficient outcome possible; and even after all that, philosophers are still humble enough to admit they may be wrong. Is this what the public detests so much? Can they not bring themselves to respect a humble attitude that is open to the possibility of error and willing to make changes for the sake of growth and improvement? It seems that way; society would rather be lied to and feel safe in a false sense of security than be led by individuals who genuinely have the best interests of humanity at heart.

Of course, there is a darker side to philosophy that could destroy its chances of ever becoming a ruling class. The adoption of particular moral standpoints, for instance, is a cause for argument insofar as the majority would never arrive at a consensus allowing them to be enacted. Philosophers have a lot of work remaining if they are ever to unite under a banner of cooperation and agreement on their individual positions. Perhaps a search for universals amongst the menagerie of current philosophical paradigms is needed before a ruling body can emerge. As it currently stands, there is simply too much disagreement between individuals over the best course of action to form a governing body. At least the present system is organised into political parties whose members share a common ideology, making deliberations far more efficient than a group of philosophers fundamentally opposed not only in beliefs but also in plans of action.

Does a philosophical dictatorship offer a way out of this mess? While the concept seems totally counter to what the discipline stands for, perhaps it is the only way forward; at least in the sense that a solitary individual holds greater authoritative power over a lower council of advisors and informants. This arrangement eliminates the problems that arise from disagreement, but seems fundamentally flawed in that the distribution of power is unequal.

The struggle between the mental and the practical is not limited to the realm of politics and philosophy. An individual's sense of self seems to be split into two distinct entities: one that is intangible, rational, conscious and impractical (the thinker), while the other is its inverse, a practical incarnation of ‘you’ that deals with the unpredictabilities of the world with ease but exists mostly at some unconscious level. People are adept at planning future events using their mental capacities, although the vast majority of the time the unconscious ‘pragmatist’ takes over and manages to destroy such carefully laid plans (think of how you plan to tell your loved one you are going out for the night; it doesn't quite go as smoothly as you planned). Does this problem stem from the inherent inaccuracy of our ‘mental simulators’, which prevents every possible outcome from arising in conscious consideration prior to action? Or does our automatic, unconscious self have a much further reach than we might have hoped? If the latter is correct, the very existence of free-will could be in jeopardy (the possibility of actions arising before conscious thought; to be explored at a later date).

So what of a solution to this quandary? Thus far, it could be argued that this article simply follows in the footsteps of previous philosophy, advocating a strictly ‘thought only’ debate without any real call to action or suggestion for practical implementation. First and foremost, I believe philosophers have a lot to learn from politicians (and, quite rightly, vice versa). The notion of Plato's republic ruled by mental giants experienced in the philosophy of knowledge, ethics and meaning seems, at face value, attractive. Perhaps this is the next step for governmental systems on this planet, if it can be realised in an attainable and realistic fashion.

Perhaps we are already on our way towards Plato's goal. Education levels could be rising sufficiently to act as a catalyst for political and ideological revolution. But just as philosophers tend to forget about the realities of the world, so too are we getting ahead of ourselves. Education levels are not uniform across the globe, and even intelligence (which we cannot yet measure properly) varies greatly between individuals. Therefore, the problem remains: how do we introduce the philosophical principles of meta-knowledge, respect for truth and deliberated moral codes of conduct? Is such a feat even possible given the variety of intellects on this planet?

One thing is certain. If philosophers (and individuals alike) are ever to overcome the problems that arise from transferring ideas into reality, they must take a regular ‘reality check’ and ensure that their discourse remains applicable to society. This is not in any way, shape or form advocating the outlawing of impractical thought exercises and radical new ideas, but rather persuading more philosophers to reason about worldly concerns as well as the abstract. The public needs a new generation of leaders who guide, rather than push or sweep aside, through the troublesome times that surely lie ahead. Likewise, politicians need to start leading passionately and genuinely, with the interests of their citizens at the forefront of every decision and policy amendment. They need to wear their hearts on their sleeves, advocating not only a pragmatic, law-abiding mentality within society, but also a redesign and revitalisation of morality itself. Politicians should be wholly open to criticism, indeed encouraging it, in order to truly lead their people with confidence.

Finally, we as individuals should also take time out to think of ways in which we can give that little deliberating voice inside our heads a bit more power to act on the outside world, rather than letting it be silenced by the unconscious, animalistic and unfairly dominating automaton that so often causes more harm than good. The phrase ‘look before you leap’ takes on a whole new meaning if this point holds even a grain of truth.

The act of categorisation is a fundamental cognitive process used to attach meaning to objects. As such, it forms the basis for daily interactions both social and introspective; social in the sense that stereotyping (a form of categorisation) affects not only our thoughts but also our behaviour when interacting with others, and introspective in that the act of categorising external objects influences how that information is internalised (correctly or incorrectly stored, depending on the structure of the agent's categorical schema).

Categorisation influences our perceptions of the world in a very marked way. The main advantage it brings is that it makes generalisations possible and useful. Without categorisation, communicating thought processes and disseminating information about our world would be a long-winded and convoluted affair. The versatility of grouping objects with common features allows us to talk about things informatively while leaving out all the tedious descriptive detail. Categorisation is also one way of attaching meaning to objects, thoughts and feelings. For example, the emotion of feeling ‘sad’ covers a vast range of mental states, all bubbling and boiling away in a sea of unpredictability. The overall result, of course, is easily identifiable to us as ‘someone of negative affect’, but how would we accomplish this feat without access to categorisation? We would surely be paralysed by the overwhelming variation that individual differences in the expression of sadness bring. One possible function of categorisation is to work cooperatively with the sensory regions of the brain to provide an overall picture or concept for use in working memory. Take face recognition, for example: many hundreds of fluctuating variables (shape, position, features, colour and so on) are somehow compressed and averaged into something usable by the brain. Categorising the facial features into a coherent whole allows not only recognition, but the activation of memories, stereotypes, future planning and emotions (among other things).
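One way to picture this ‘compress and average’ idea is a prototype model: exemplars are averaged into a single representative vector and new objects are assigned to the nearest prototype. The sketch below is purely illustrative (the feature names and values are invented) and makes no claim about how the brain actually implements categorisation.

```python
# Minimal sketch of prototype-style categorisation: variable exemplars are
# "compressed and averaged" into a single representation, and a new item is
# assigned to whichever prototype it falls closest to.
# Features and numbers are invented purely for illustration.

import math

def prototype(exemplars):
    """Average a list of feature vectors into a single prototype."""
    return [sum(vals) / len(vals) for vals in zip(*exemplars)]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Hypothetical feature vectors (e.g. height, width, transparency).
drinking_glasses = [[15.0, 7.0, 0.9], [12.0, 6.5, 0.95], [14.0, 7.5, 0.85]]
coffee_mugs      = [[10.0, 9.0, 0.0], [9.5, 8.5, 0.05], [11.0, 9.5, 0.0]]

prototypes = {
    "drinking glass": prototype(drinking_glasses),
    "coffee mug": prototype(coffee_mugs),
}

# A never-before-seen object is categorised by its nearest prototype.
new_object = [13.0, 7.0, 0.8]
category = min(prototypes, key=lambda name: distance(new_object, prototypes[name]))
print(f"New object categorised as: {category}")
```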

Delving deeper, I wonder whether it is possible to describe a thing without falling back on categorisation. It seems not, as the very act of describing something seems to presuppose, if not require, the existence of categories. This ‘reductionist's nightmare’ becomes apparent with a simple mental simulation. Try now to describe a common, everyday thing without referring to pre-established categories. Take a humble kitchen drinking glass. Straight away I have categorised the item; I could have been talking about any glass at all, yet immediately I have succeeded in creating a mental image of a glass, which was then refined by the sub-category of ‘drinking’. The first category, glass, could elicit countless mental images of everyday objects. Those images would cluster around some variation of the bell curve (although how do you arrange ideas and concepts either side of the most frequent, central idea as in a ‘normal’ bell curve?), with the frequency of each item starting off low and graduating up to the most common. Most likely the majority of conjured mental images will correspond to some fuzzy approximation of the everyday drinking glass. Each image will vary from mind to mind, yet the overall category is well defined and usable for conveying ideas; the brain is ready to receive and store the incoming information under the category ‘glass’. Suppose we again attempt to describe the glass without using categories, this time taking the reductionist approach and peeling back another layer of physical form. One avenue is to describe the molecular structure: X many billions of silica molecules arranged in formations just so, composed in turn of X silicon atoms and X oxygen atoms, and so on. The problem is that we are still referring to categories. Words such as ‘atoms’, ‘molecules’ and ‘oxygen’ are all categories of physical things, inclusive of every object that makes up the category in question, and they still succeed in conjuring up a generic icon in the mind's eye.

Alternatively, a different approach could be taken: instead of explaining the constituent components of the item in question, we propose its utilities. Our old mate the drinking glass would thus be described in terms of its usefulness (holds liquids), its actions (constrains, is lifted to the mouth, is poured), its influence on our bodies (delivers nutrients via the mouth) and even the processes that went into constructing it (Sven from Ikea, cheap Chinese sweatshop). It soon becomes obvious that no matter how hard we try to avoid categorisation, it forms the basis of our thought processes. Whether they are categories of sub-components, materials and atomic structure, or categories of behaviour, action and origin, placing everyday objects into generalised groups according to their features is what gives them meaning. Without categories, not only would the (traditional) communication of ideas be difficult, even impossible; the very essence of the stuff around us would be meaningless. In short, without categorisation, the external world loses its meaning.

But what of the negative aspects of categorisation? Perhaps the most obvious is the potential for error; that is, assigning something to a pigeon-hole it shouldn't belong in. Because of the fundamental (and often unconscious) manner in which categorisation shapes the entire thought process, an error at this foundational level can spell disaster for the whole system. Subjective ‘errors’ in categorising become most apparent in social situations. I believe this is due to a low-level sub-routine that uses social interactions to refine the overall system: by observing the responses of other agents (in the form of behaviours) to the behaviour of the self (once a category has been assigned and a response elicited), the sub-routine compares and contrasts how effective and accurate the assigned category is in relation to the categories held by others. In this way our individual systems of categorisation are kept in sync, preserving the collective sense of meaning and making communication possible. If this is unclear, take the following example. Bill is attempting to explain a novel object to Joe. Bill states that the object is a ‘Kazoolagram’, but this carries no meaning for Joe at all; his categorical ‘set’ is missing this category and its attached label of meaning. The object's properties are then described, and Joe responds by suggesting similar sets from his repository of meaning: “Well, is it anything like a Nincompoop?” Here Joe attempts to refine his mental schemas, grasping at existing examples to attach meaning to this unique object. The banter continues, with both participants gauging the accuracy of their categorisations through the behaviour of the other agent, until eventually they agree on the meaning of the object. This brings us to another question: is meaning emergent (i.e. greater than the sum of its parts) or simply a cobbled-together collage of pre-existing mental representations (limited by the extent of the agent's prior experiences)?
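For what it's worth, the Bill-and-Joe banter can be caricatured as two prototypes being nudged towards one another until they are close enough to count as shared meaning. The following sketch is a deliberately crude toy, with made-up features and an arbitrary convergence rule; it is offered only to make the ‘keeping categories in sync’ idea tangible, not as a claim about how minds actually negotiate meaning.

```python
# Toy sketch of the Bill-and-Joe negotiation: two agents hold different
# prototypes for the same label and revise them towards each other as they
# exchange descriptions, until the categories are close enough to count as
# shared meaning. Entirely illustrative.

def negotiate(proto_a, proto_b, step=0.25, tolerance=0.5, max_rounds=20):
    """Move two feature-vector prototypes towards each other until they agree."""
    for round_no in range(1, max_rounds + 1):
        gap = max(abs(a - b) for a, b in zip(proto_a, proto_b))
        if gap <= tolerance:
            return round_no, proto_a, proto_b
        # Each agent revises its prototype part-way towards the other's description.
        new_a = [a + step * (b - a) for a, b in zip(proto_a, proto_b)]
        new_b = [b + step * (a - b) for a, b in zip(proto_a, proto_b)]
        proto_a, proto_b = new_a, new_b
    return max_rounds, proto_a, proto_b

bill = [8.0, 2.0, 5.0]   # Bill's prototype for the 'Kazoolagram' (hypothetical features)
joe  = [3.0, 6.0, 1.0]   # Joe's nearest existing category (the 'Nincompoop')

rounds, bill_final, joe_final = negotiate(bill, joe)
print(f"Agreement reached after {rounds} rounds of banter")
```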

It seems as though the process of categorising is influenced by the pre-existing content of our brains, especially past examples and experiences of events or objects similar to the one in question. Meaning seems to be both an emergent property and a combination of past experiences: individually, the features of a category are useless, but together, and in partnership with the agent's existing knowledge (which makes the process faster when both agents have similar experiences), categories flourish into useful, meaningful tools for the processing and transmission of information.

The point of this article was to expose the extent of categorisation and make the case for its status as an everyday, fundamental cognitive process. Sure, categorisation has its weaknesses, but it more than compensates for them with its strengths. Categorisation runs deeper than most would realise, potentially providing insight into the very way in which brains receive, process and store information. Perhaps a more accurate and efficient process may arise if humanity succeeds in steering the essence of cognition towards better ways of classing objects and describing internal states. Maybe the direct transmission of meaning from brain to brain will one day supersede categorisation and allow for instantaneous communication between agents.