
Evil is an intrinsic part of humanity, and it seems almost impossible to eradicate it from society without simultaneously removing a significant part of our human character. There will always be individuals who seek to gain advantage over others through harmful means. Evil can take on many forms, depending upon the definition one uses to encapsulate the concept. For instance, the popular definition includes elements of malicious intent or actions that are designed to cause injury or distress to others. But what of the individual who accidentally causes harm to another, or who takes silent pleasure in seeing others’ misfortune? Here we enter a grey area, the distinction between good and evil blurring ever so slightly, preventing us from making a clear judgement on the topic.

Religion deals with this human disposition towards evil in a depressingly cynical manner. Rather than suggesting ways in which the problem can be overcome, religion instead proposes that evil or “sin” is an inevitable temptation (or a part of our character into which we are born) that can only be overcome with conscious and directed effort. Invariably one will sin at some point in life, whereupon the person should ask for forgiveness from their nominated deity. Here we see a shifting of responsibility away from the individual, with the religious hypothesis leaning on such concepts as demonic possession and lapses of faith to explain the existence of evil (unwavering belief in the deity cures all manner of temptations and worldly concerns).

In its current form, religion does not offer a satisfactory explanation for the problem of evil. Humanity is relegated to the back seat in terms of moral responsibility, coerced into conformity through subservience to the Church’s supposed ideals and ways of life. If our society is to break free of these shackles and embrace a humanistic future free from bigotry and conflict, moral guidance must come from within the individual. To this end, society should consider introducing moral education for its citizens, taking a lesson from the annals of history (specifically, ancient Greece with its celebration of individual philosophical growth).

Almost counter-intuitively, some of the earliest recorded philosophies actually advocated a utopian society that was atheistic in nature, and deeply rooted in humanistic, individually managed moral and intellectual growth. One such example is the discipline of Stoicism, founded by Zeno of Citium in the early 3rd century BC. This philosophical movement was perhaps one of the first true instances of humanism, whereby personal growth was encouraged through introspection and control of destructive emotions (anger, violence, etc.). The Stoic way was to detach oneself from the material world (similar to Buddhist traditions), a tenet that is aptly summarised in the following quote:

“Freedom is secured not by the fulfilling of one’s desires, but by the removal of desire.”

Epictetus

Returning to the problem of evil, Stoicism proposed that the presence of evil in the world is an inevitable consequence of ignorance. The premise of this argument is that a universal reason, or logos, permeates reality, and evil arises when individuals act against this reason. I believe what the Stoics mean here is that a universal morality exists: a ubiquitous guideline accessible through conscious deliberation and reflective thought. When individuals act contrary to this universal standard, it is through ignorance of what the correct course of action actually is.

This Stoic ethos is personally appealing because it has a large humanistic component. Namely, all of humanity has the ability to grasp universal moral truths and overcome their ‘ignorance’ of the one true path towards moral enlightenment. Whether such truths actually exist is debatable, and the apathetic nature of Stoicism seems to flatten the overall human experience (dulled emotions, detachment from reality).

The ancient Greek notion of eudaimonia could be a more desirable philosophy by which to guide our moral lives. The basic translation of this term as ‘greatest happiness’ does not do it justice. It was first introduced by Socrates, who outlined a basic version of the concept as comprising two components: virtue and knowledge. Socrates’ virtue was thus moral knowledge of good and evil, or having the psychological tools to reach the ultimate good. Plato and then his student Aristotle expanded on this original idea of sustained happiness by adding layers of complexity. For example, Aristotle believed that human activity tends towards the experience of maximum eudaimonia, and to achieve that end it was thought that one should cultivate rationality of judgement and ‘noble’ characteristics (honour, honesty, pride, friendliness). Epicurus again modified the definition of eudaimonia to include pleasure, thus also changing the moral focus to one that maximises the wellbeing of the individual through satisfaction of desire (the argument being that pleasure equates with goodness and pain with badness, so the natural conclusion is to maximise positive feeling).

We see that the problem of evil has been dealt with in a wide variety of ways. Even in our modern world it seems that people are becoming angrier, more impatient and more destructive towards their fellow human beings. Looking at our track record thus far, it seems that the mantra of ‘fight fire with fire’ is followed by many countries when determining their foreign policy. Modern incarnations of religious moral codes (an eye for an eye) have resulted in a new wave of crusades with theistic beliefs at the forefront once again.

The wisdom of our ancient ancestors is refreshing and surprising, given that common sense suggests a positive relationship between knowledge and time (human progress increases with the passage of time). It is entirely possible that humanity has been following a false path towards moral enlightenment, and given the lack of progress on the religious front, perhaps a new approach is needed. By treating the problem of evil as one of cultural ignorance we stand to benefit at every level. The whole judicial system could be re-imagined as one where offenders are actually rehabilitated through education, rather than one that simply breeds generations of hardened criminals. Treating evil as a form of improper judgement forces our society to take moral responsibility at the individual level, resulting in real and measurable changes for the better.

The monk sat meditating. Alone atop a sparsely vegetated outcrop, all external stimuli infused psychic energy into his calm, receptive mind. Distractions merely added to his trance, helping the meditative state to deepen and intensify. Without warning, the experience culminated in a fluttering of eyelids. The monk stood, content and empowered with newfound knowledge. He had achieved pure insight…

The term ‘insight’ is often attributed to such vivid descriptions of meditation and religious devotion. More specifically, religions such as Buddhism promote the concept of insight (vipassana) as a vital prerequisite for spiritual nirvana, or transcendence of the mind to a higher plane of existence. But does insight exist for the everyday folk of the world? Are the momentary flashes of inspiration and creativity part and parcel of the same phenomenon or are we missing out on something much more worthwhile? What neurological basis does this mental state have and how can its materialisation be ensured? These are the questions I would like to explore in this article.

Insight can be defined as the mental state whereby confusion and uncertainty are replaced with certainty, direction and confidence. It has many alternative meanings and contexts of use, ranging from a piece of obtained information to the psychological capacity to introspect objectively (as judged by some external observer – introspection is by its very nature subjective). Perhaps the most fascinating and generally applicable context is one which can be described as ‘an instantaneous flash of brilliance’ or ‘a sudden clearing of murky intellect and intense feelings of accomplishment’. In short, insight (in the context in which I am interested) is the kind attributed to the geniuses of society, those who seemingly gather tiny shreds of information and piece them together to solve a particularly challenging problem.

Archimedes is perhaps the most widely cited example of human insight. As the story goes, Archimedes was inspired by the displacement of water in his bathtub to formulate a method for calculating the volume of an irregular object. This technique was of great empirical importance as it allowed a reliable measure of density (referred to as ‘purity’ in those ancient times, arising from fiscal motivations such as verifying the purity of gold). The climax of the story describes a naked Archimedes running wildly through the streets, unable to control his excitement at this ‘Eureka’ moment. Whether the story is actually true has little bearing on the force of the argument presented; most of us have experienced such a moment at some point in our lives, best summarised as the overcoming of seemingly insurmountable odds to conquer a difficult obstacle or problem.

But where does this inspiration come from? It almost seems as though the ‘insightee’ is unaware of the mental efforts to arrive at a solution, perhaps feeling a little defeated after a day spent in vain. Insight then appears at an unexpected moment, almost as though the mind is working unconsciously and without direction, and offers a brilliant method for victory. The mind must have some unconscious ability to process and connect information regardless of our directed attention to achieve moments such as this. Seemingly unconnected pieces of information are re-routed and brought to our attention in the context of the previous problem. Thus could there be a neurobiological basis for insight? One that is able to facilitate a behind-the-scenes process?

Perhaps insight is encouraged by the physical storage and structure of neural networks. In the case of Archimedes, the solution was prompted by the mundane task of taking a bath; superficially unrelated to the problem, yet sharing a common neural pathway that inflated the value of its properties (low bathwater, insert leg, raised bathwater: similar to volumes and matter in general). That is, the neural pathways activated by taking a bath are somehow similar to those activated by rumination on the problem at hand. Alternatively, the unconscious mind may be able to draw basic cause-and-effect conclusions which are then boosted to the forefront of our minds if they are deemed useful (i.e., immediately relevant to the task being performed). Whatever the case may be, it seems that at times our unconscious minds are smarter than our conscious attention.
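To make this pathway-overlap speculation concrete, here is a minimal toy sketch in Python. It assumes, purely for illustration, that concepts can be coarsely modelled as feature vectors and that ‘shared pathways’ can be approximated by cosine similarity; the features, vectors and threshold are all invented rather than drawn from any actual neuroscience.

```python
# Toy model of unconscious associative retrieval (illustrative only).
# Concepts are coarse feature vectors; an everyday experience that
# overlaps strongly enough with a shelved problem "surfaces" it.

from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Hypothetical features: [volume, displacement, water, geometry, gold]
shelved_problems = {
    "measure volume of irregular crown": [1.0, 0.8, 0.2, 0.9, 0.7],
    "compose a letter to the king":      [0.0, 0.0, 0.0, 0.1, 0.3],
}

experience = {"stepping into a full bath": [0.6, 1.0, 1.0, 0.2, 0.0]}

THRESHOLD = 0.5  # arbitrary: how much overlap forces an idea into awareness

for exp_name, exp_vec in experience.items():
    for prob_name, prob_vec in shelved_problems.items():
        overlap = cosine(exp_vec, prob_vec)
        if overlap > THRESHOLD:
            print(f"'{exp_name}' recalls '{prob_name}' (overlap {overlap:.2f})")
```

On this toy view, a ‘Eureka’ moment is simply an everyday experience whose overlap with a shelved problem happens to cross the threshold and force it into awareness.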

The real question is whether insight is an intangible state of mind (à la ‘getting into the zone’) that can be turned on and off (thus making it useful for extending humanity’s mental capabilities), or whether it is just a mental byproduct of overcoming a challenge (a hormonal response designed to encourage such thinking in the future). Can the psychological state of insight be induced via a manipulation of the subject’s neuronal composition and environmental characteristics (making them conducive to achieving insight), or is it merely an evolved response that serves a (behaviourally) reinforcing purpose?

Undoubtedly the agent’s environment plays a part in determining the likelihood of insight occurring. Taking into account personal preferences (does the person prefer quiet spaces for thinking?), the characteristics of the environment could hamper the induction of such a mental state if they are sufficiently irritating to the individual. Insight may also be closely linked with intelligence and, depending on your personal conception of this, neurological structure (if one purports a strictly biological basis of intelligence). If this postulate is taken at face value, we reach the conclusion that the degree of intelligence is directly related to the likelihood of insight, and perhaps also to the ‘quality’ of the insightful event (i.e., a measure of its brilliance relative to inputs such as the level of available information and the difficulty of the problem).

But what of day-to-day insight? It seems to crop up in all sorts of situations. In this context, insight might require a grading scale as to its level of brilliance if its use is to be justified in more menial situations and circumstances. Think of that moment when you forget a particular word, and try as you might, cannot remember it for the life of you. Recall also that flash of insight where the answer is simply handed to you on a platter without any conscious effort to retrieve it. Paradoxically, it seems that the harder we try to solve the problem, the more difficult it becomes. Is this due to efficiency problems such as ‘bottlenecking’ of information transfer, personality traits such as performance anxiety and frustration, or some underlying, unconscious process that is able to retrieve information without conscious direction?

Whatever the case may be, our scientific knowledge on the subject is distinctly lacking, so an empirical inquiry into the matter is more than warranted (if it hasn’t already been commissioned). Psychologically, the concept of insight could be tested experimentally by providing subjects with a problem to solve and manipulating the level of information available (e.g., ‘clues’) and its relatedness to the problem (with consideration given to intelligence; perhaps two groups, high and low). This may help to uncover whether insight is a matter of information processing or something deeper. If science can learn how to artificially induce a mental state akin to insight, the benefits for a positive-futurist society would be grand indeed.
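As a sketch of what such a study might look like, the following Python snippet simulates a 2 × 2 design (clue relatedness × intelligence group) and summarises solve rates per cell. The solve probabilities and group sizes are placeholders I have assumed for illustration; they are not empirical values.

```python
# Sketch of the proposed insight experiment (simulated data, not real findings).
# 2 x 2 design: clue relatedness (related/unrelated) x intelligence (high/low).
# The outcome is whether a subject solves the problem within the time limit.

import random

random.seed(42)

# Assumed solve probabilities per cell -- pure placeholders for illustration.
solve_prob = {
    ("related", "high"): 0.70,
    ("related", "low"): 0.45,
    ("unrelated", "high"): 0.35,
    ("unrelated", "low"): 0.20,
}

N = 50  # subjects per cell

def run_cell(p, n=N):
    """Simulate n subjects; each solves with probability p."""
    return sum(random.random() < p for _ in range(n)) / n

for (clues, iq), p in solve_prob.items():
    rate = run_cell(p)
    print(f"clues={clues:9s} intelligence={iq:4s} solve rate={rate:.2f}")
```

A real study would additionally need validated intelligence measures and a principled way to distinguish a sudden, insightful solution from a methodical one.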

In a previous article, I discussed the possibility of a naturally occurring morality; one that emerges from interacting biological systems and is characterised by cooperative, selfless behaviours. Nature is replete with examples of such morality, in the form of in-group favouritism, cooperation between species (symbiotic relationships) and the delicate interrelations between lower forms of life (cellular interaction). But we humans seem to have taken morality to a higher plane of existence, classifying behaviours and thoughts into a menagerie of distinct categories depending on the perceived level of good or bad done to external agents. Is morality a concept that is constant throughout the universe? If so, how could morality be defined in a philosophically ‘universal’ way, and how does it fit in with other universals? In addition, how can humans make the distinction between what is morally ‘good’ and ‘bad’? These are the questions I would like to explore in this article.

When people speak about morality, they are usually referring to concepts of good and evil. Things that help and things that hinder. A simplistic dichotomy into which behaviours and thoughts can be assigned. Humans have a long history with this kind of morality. It is closely intertwined with religion, with early scriptures and the resulting beliefs providing the means by which populations could be taught the virtues of acting in positive ways. The defining feature of religious morality finds its footing in a lack of faith in the human capacity to act for the good of the many. Religions are laced with prejudicial put-downs that seek to undermine our moral integrity. But they do touch on a grain of truth; evolution has produced a (primarily) self-centred organism. Taking the cynical view, it can be argued that all human behaviour can be reduced to purely egotistical foundations.

Thus the problem becomes not one of definition, but of plausibility (in relation to humanity’s intrinsic capacity for acting in morally acceptable ways). Is religion correct in its assumptions regarding our moral ability? Are we born into a world of deterministic sin? Theistically, it seems that any conclusion can be supported by means of unthinking faith. However, before this religiosity is dismissed out of hand, it might be prudent to consider the underlying insight it offers.

Evolution has shown that organisms are primarily interested in survival of the self (propagation of genetic material). This fits in with the religious view that humanity is fundamentally concerned with first-order, self-oriented consequences, and raises the question of whether selfish behaviour should be considered immoral. But what of moral events such as altruism, cooperation and in-group behavioural patterns? These too can be reduced to the level of self-centred egoism, with the superficial layer of supposed generosity stripped away to reveal more meagre foundations.

Morality then becomes a means to an end, that end being the fulfilment of some personal requirement. Self-initiated sacrifice (altruism) elevates one’s social standing, and provides the source of that ‘warm, fuzzy feeling’ we all know and love. Here we have dual modes of satiation, one external to the agent (increasing power, status) and one internal (an evolutionary mechanism for rewarding cooperation). Religious cynicism is again supported, in that humans seem to have great difficulty in performing authentic moral acts. Perhaps our problem here lies not in the theistic stalker, laughing gleefully at our attempts to grasp at some sort of intrinsic human goodness, but rather in our use of the word ‘authentic’. If one concedes that humans may simply lack the faculties for connotation-free morality, and instead proposes that moral behaviours be measured by their main direction of action (directed inwards, selfishly, or outwards, altruistically), we can arrive at a usable conceptualisation.

Reconvening, we now have a new operational definition of morality. Moral action is thus characterised by the focus of its attention (inward vs outward) as opposed to a polarised ‘good vs evil’, which manages to evade the controversy introduced by theism and evolutionary biology (two unlikely allies!). The resulting consequence is that we have a kind of morality which is not defined by its degree of ‘correctness’, which from any perspective is entirely relative. However, if we are to arrive at a meaningful and usable moral universal that is applicable to human society, we need to at least consider this problem of good and evil.

How can an act be defined as morally right or wrong? Considering this question alone conjures up a large degree of uncertainty and subjectivity. In the context of the golden rule (do unto others as you would have done unto yourself), we arrive at even murkier waters; what of the psychotic or sadist who prefers what society would consider abnormal treatment? In such a situation, could ‘normally’ unacceptable behaviour be construed as morally correct? If this confusion is to be avoided, it is prudent to discuss the plausibility of defining morality in terms of universals that are not dependent upon subjective interpretation.

Once again we have returned to the issue of objectively assessing an act for its moral content. Intuitively, evil acts cause harm to others and good acts result in benefits. But again we fall short of capturing the full territory of morality; specifically, acts can seem superficially evil yet arise from fundamentally good intentions. And thus we find a useful identifier (intention) by which to assess the moral worth of actions.

Unfortunately we are held back by the impervious nature of the assessing medium. Intention can only be ascertained through introspection and, to a lesser degree, psychometric testing. Intention can even be elusive to the individual, if their judgement is clouded by mental illness, biological deformity or an unconscious repression of internal causality (deferring responsibility away from the individual). With such a slippery method of assessing the authenticity and nature of the moral act, it seems unlikely that morality could ever be construed as a universal.

Universals are exactly what their name connotes; properties of the world we inhabit that are experienced across reality. That is to say, morality could be classed as a universal due to its generality among our species and its quality of superseding characterising and distinguishing features (in terms of mundane, everyday experience). If one is to class morality under the category of universals, one should modify the definition to incorporate features that are non-specific and objective. Herein lies the problem with morality; it is a highly variable phenomenon, with large fluctuations in individual perspective. From this point there are two main options available given current knowledge on the subject. Democratically, the qualities of a universal morality could be determined through majority vote. Alternatively, a select group of individuals or one definitive authority could propose and define a universal concept of morality. Either way, one is left with few options on how to proceed.

If a universal conceptualisation of morality is to be proposed, an individual perspective is the only avenue left with the tools we have at our disposal. We have already discussed the possibility of internal vs external morality (bowing to pressures that dictate human morality is indivisibly selfish, and removing the focus from good vs evil considerations). This, combined with a weighted system that emphasises not the degree of goodness, but rather the consideration of the self versus others, results in a useful measure of morality (for example, there will always be a small percentage of internal focus). But what are we using as the basis for our measurement? Intention has already proved elusive, as has objective observation of acts (moral behaviours can rely on internal reasoning for their moral worth, and some behaviours go unobserved or are ambiguous to an external agent). Discounting the possibility of a technological breakthrough enabling direct thought observation (and the ethical problems such an invasion of privacy would bring), it is difficult to see how we can proceed.
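For concreteness, the inward/outward weighting could be caricatured as a toy calculation. Everything below is assumed for the sake of illustration: the example acts, their weightings, and a 5% floor standing in for the ‘small percentage of internal focus’ mentioned above.

```python
# Toy scoring of the inward/outward measure proposed above (illustrative).
# Each act gets weights for self-directed vs other-directed benefit; the
# score is the outward share, never reaching 1.0 because some inward
# focus is assumed to always remain.

def moral_focus(inward: float, outward: float, floor: float = 0.05) -> float:
    """Return the outward-directed share of an act, with a minimum
    inward component (the 'small percentage of internal focus')."""
    inward = max(inward, floor)          # no act is treated as purely selfless
    total = inward + outward
    return outward / total if total else 0.0

acts = {
    "anonymous donation": (0.0, 1.0),    # hypothetical weightings
    "publicised donation": (0.5, 0.5),
    "hoarding resources": (0.9, 0.1),
}

for name, (i, o) in acts.items():
    print(f"{name:22s} outward focus = {moral_focus(i, o):.2f}")
```

The point of the floor is that no act scores as purely outward-directed, which keeps the measure honest about the egoism argument while still ranking acts by their direction of action.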

Perhaps it is best to simply take a leap of faith, believing in humanity’s ability to make judgements regarding moral behaviour. Instead of cynically throwing away our intrinsic abilities (which surely vary in effectiveness within the population), we should trust that at least some of us have the insight to make the call. With morality, the buck stops with the individual, a fact that most people have a hard time swallowing. Moral responsibility rests with the persons involved, and in combination with a universally expansive definition, makes for some interesting assertions of blame, not to mention a pressing need to educate the populace on the virtues of fostering introspective skills.

Labels are the expression of categories; a process that the human mind uses in order to make sense of the external world. Labels have been adopted vigorously by the medical community in order to classify patients according to their presenting symptomatology. While labels are a useful tool in such situations, they must be used appropriately if the patient is to see any improvement in their condition. Specifically, the field of psychology seems to hold an over-reliance on diagnostic labelling, making labels a prerequisite for the progression of treatment. I intend to show that the overuse of labelling is dangerous not only for the patient, but also for the field as a respected scientific discipline.

The usefulness of labels stems from their ability to classify a wide range of similar cases into one over-arching category. This enables health professionals to discuss a case with meaning (conveying efficiently to others what is wrong) and to develop appropriate treatments (by grouping remedies according to their effects). These advantages can also become weaknesses if the label is used haphazardly and inappropriately.

Medical professionals utilise the power of labelling in their diagnoses in order to gain insight into the patient’s condition. Using a graduated method, information is gathered initially through broad techniques that narrow down to the specific (and an eventual classification). For example, a patient suffering from a psychological disorder will be interviewed with a focus on history and presenting symptoms. Over a number of sessions the therapist will narrow their diagnosis to one main possibility. This then acts as the guide for future treatment and, to an extent, ultimately decides the fate of the patient. The label is statistical in nature; generalisation procedures are constantly running in the background of the therapist’s mind. Each individual symptom is compared against their knowledge and experience to see if it fits the mould of a previous case they have encountered. Obviously this can cause problems if a) the case is unique, b) underlying problems are the root cause, or c) the patient or therapist is inaccurate in the information exchanged.
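The matching process described above can be caricatured in a few lines of Python. The conditions and symptom sets are entirely invented; the sketch only illustrates that a best-fit label is a statistical guess, which is exactly where problems a) to c) creep in.

```python
# Minimal sketch of the narrowing-down process described above: each
# presenting symptom is matched against known patterns, and the best
# overall fit becomes the working diagnosis. Categories and symptoms
# are invented for illustration.

known_patterns = {
    "condition A": {"low mood", "insomnia", "fatigue"},
    "condition B": {"racing thoughts", "insomnia", "impulsivity"},
    "condition C": {"low mood", "anhedonia", "fatigue", "guilt"},
}

def working_diagnosis(presenting: set[str]) -> list[tuple[str, float]]:
    """Rank known patterns by the fraction of their symptoms observed."""
    ranked = [
        (name, len(presenting & pattern) / len(pattern))
        for name, pattern in known_patterns.items()
    ]
    return sorted(ranked, key=lambda pair: pair[1], reverse=True)

patient = {"low mood", "fatigue", "guilt"}
for name, fit in working_diagnosis(patient):
    print(f"{name}: fit {fit:.2f}")
```

Note that the best-fitting label wins even if the true case is unique, or is driven by an underlying problem that the pattern list does not contain.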

Casting aside individual differences in medical ability (interviewing technique, medical knowledge, experience), many variables still remain to influence the diagnostic process. Perhaps the main factor in obtaining an accurate classification (especially in the case of psychiatric illness) is the patient’s willingness to cooperate; the therapeutic relationship. The patient must be made to feel comfortable and at ease with the professional if accurate and meaningful information is to be exchanged. A patient who is uneasy and uncooperative will only hinder the flow of information that can be used to their benefit. Additionally, a lack of patient insight into their illness can be detrimental. If the patient lacks sufficient command over their ability to communicate clearly and articulate their thoughts and feelings with any degree of objectivity, the professional’s job will be made harder. Not only will an objective ‘truth’ have to be discerned, but the professional must also ensure that they themselves remain objective in their judgements and take steps to minimise automatic processes that could cloud the decision-making process (stereotyping, assumptions, etc.).

The tendency for the health profession to over-emphasise the importance of labels can also create problems on the patient’s road to recovery. Professionals may feel pressured to demonstrate their knowledge and aptitude, and therefore jump quickly to a diagnosis without giving it sustained thought. Of course, many other variables may also influence the diagnosis, such as time constraints, mood and external stimuli (distractors); in short, anything that may prevent a clear and rational consideration of the evidence for and against a particular conclusion. The threshold at which this becomes dangerous is quickly reached in medical settings, due to the often serious nature of illnesses and the cacophony of distracting environmental factors (emergency wards in a busy hospital).

In psychological settings the pace is more relaxed, yet the pressure to label is greater. How is this the case? I believe it arises from the professional mentality in this field (a heavy research background, opinionated therapists subscribing to paradigms they have had experience with), coupled with the highly theoretical nature of psychological treatment. Empirical and therapeutic psychology is still a relatively new field, having its origins in the late 19th century. As such, and in combination with the medium with which it deals (the intangible: consciousness), psychological training involves large amounts of theory. This introduces an element of uncertainty and opinionated debate over the ‘correct’ treatment and diagnosis. A tension may develop in the therapist as their theoretical training interacts with their therapeutic intuition and experience. The lack of one definitively ‘correct’ answer can hinder treatment and influence the initial diagnosis of the illness.

I believe that psychology suffers more than medicine from this ambiguity. Unlike medicine, which has ancient roots as a scientific discipline, psychology lacks a secure framework and free exchange between its various paradigms. Psychology has a very academic mentality, with research continually revising the stock of common knowledge. Theories are revised, expanded and forgotten based on the results of research. Paradigms clash when battles erupt over disputed research findings or the more effective course of treatment (e.g. Psychoanalysis vs Behaviourism vs Neuropsychology). Fortunately this tendency has eased in modern psychology, as the field looks to establish itself on secure footing and introduces better systems and regulation.

The problem with labelling in the field of psychology is that there is too much choice, and too little consensus between therapists. The field is dynamic, with changes to theory and treatment occurring rapidly as new evidence comes to light. Therapists need to stay on top of their game if they are to remain at the forefront of their profession. The Scientist-Practitioner model assists in this task, as it incorporates a mindset of combining best practice with the latest research; a process of self-evaluation and self-improvement that requires a life-long dedicated commitment if it is to succeed.

Due to the inherent difficulty in diagnosing mental illness successfully and the tendency for co-morbid conditions to exist alongside the target ailment, psychology as a field is in dire need of consistency. With time, the field will mature. A possible solution would be a more thorough integration with Psychiatry, the medically oriented parent of psychology. By taking the pieces that work (such as biological, materialistic and reductionist mindsets) and combining them with a strong research component that emphasises objectivity, the field as a whole can move forward. Patients will stand a better chance of receiving an accurate diagnosis as therapists move away from the specialist niches they adopt based on elective subjects and research projects taken at university. They will begin to look at the whole picture, pulling together a vast reservoir of knowledge from a myriad of paradigms.

Psychology should be united under one banner, rather than splitting into separate warring factions, each explaining the same phenomenon from a different perspective. Such variation is useful while a field is in its infancy and academic creativity flourishes. Now that at least some of the basic underlying processes of the human mind are known, perhaps each paradigm should look at working cooperatively and meshing its ideas. Just as the physical sciences search for M-theory, the theory to unite the quantum with the relativistic, so too should psychology aim to provide a framework theory that can explain the secrets of the brain using common terminology and ideology. This is not advocating the demolition of the various offshoots of psychology, but rather the removal of the restrictions currently imposed; for instance, the practice whereby an educational psychologist can only talk about X, Y and Z while an evolutionary psychologist is only qualified to comment on A, B and C. Psychology would remain a diverse discipline, but treatment and diagnosis would be far more consistent than they are currently.

Therapists need to relax their reliance on and servitude to labels, removing these crutches (it is easier to label someone than to arrive at an independent conclusion based on objective evidence). The future of psychology as an effective method of treating mental illness, a burden that is only going to increase as the world becomes busier and more stressful, depends on it.