
The topic of free-will is one of the largest problems facing modern philosophers. An increasing empirical onslaught has done little to clear these murky waters. If anything, each scientific breakthrough has resulted in greater philosophical confusion, whether due to the impractical knowledge base needed to interpret the results or to counter-intuitive outcomes (the RP signal, where brain activity precedes conscious action). My own attempts to shed some light on this matter are equally feeble, which has precipitated the creation of the present article. What is the causal nature of the universe? Is each action determined and directly predictable from a sufficiently detailed starting point, or is there a degree of inherent uncertainty? How can we reconcile the observation that free-will appears to be a valid characteristic of humanity with mounting scientific evidence to the contrary (e.g. a Grand Unified Theory)? These are the questions I would like to discuss.

‘Emergent’ seems to be the latest buzzword in popular science. While the word is appealing when describing how complexity can arise from relatively humble beginnings, it does very little to actually explain the underlying process. The two states are simply presented on a platter, the lining of which is composed of fanciful ’emergent’ conjurings. While there is an underlying science behind the process, involving dynamic systems (modelled on biological growth and movement), there does seem to be an element of hand-waving and mystique.

This state of affairs does nothing to help current philosophical floundering. Intuitively, free-will is an attractive feature of the universe. People feel comfortable knowing that they have a degree of control over the course of their life. A loss of such control could even be construed as a facilitator of mental illness (depression, bipolar disorder). Therefore, the attempts of science to develop a unified theory of complete causal prediction seem to undermine our very nature as human beings. Certainly, some would embrace the notion of a deterministic universe with open arms, happy to put uncertainty to an end. However, one would do well (from a eudaimonic point of view) to cognitively reframe anxiety about the future into an anticipation of surprise at the unknown.

While humanity is firmly divided over its preference for a predictable or uncertain universe, the problem remains that we appear to have a causally determined universe with individual freedom of choice and action. Quantum theory has undermined determinism and causality to an extent, with the phenomenon of spontaneous vacuum energy supporting the possibility of events occurring without any obvious cause. Such evidence is snapped up happily by proponents of free-will with little regard for its real-world plausibility. This is another example of philosophical hand-waving, where the real problem involves a form of question begging; that is, a circular argument whose premise requires a proof of itself in order to remain valid! For example, the following argument is often used:

  1. Assume quantum fluctuations really are indeterminate in nature (underlying causality à la ‘String Theory’ not applicable).
  2. Free-will requires indeterminacy as a physical prerequisite.
  3. Therefore, quantum fluctuations are responsible for free-will.

To give credit where it is due, the actual arguments used are more refined than the one outlined above, but the basic structure is similar. Basic premises can be outlined and postulates put forward describing the possible form of neurological free-will; however, as with most developing fields, the supporting evidence is scant at best. To make matters worse, quantum theory has shown that human intuition is often not the best guide to an explanation.

However, if we work with what we have, perhaps something useful will result. This includes informal accounts such as anecdotal evidence. Consideration of such evidence has led to the creation of two ‘maxims’ that seem to summarise the evidence presented in regards to determinism and free-will.

Maxim One: The degree of determinism within a system is reliant upon the scale of measurement; a macro form of measurement results in a predominantly deterministic outcome, while a micro form of measurement results in an outcome that is predominantly ‘free’ or unpredictable. What this is saying is that determinism and freedom can be directly reconciled and coexist within the same construct of reality. Rather than existing as two distinctly separate entities, these universal characteristics should be reconceptualised as two extremities on a sliding scale of some fundamental quality. Akin to Einstein’s general relativity, the notions of determinism and freedom are also relative to the observer. In other words, how we examine the fabric of reality (at large or small scale) results in a worldview that is either free or constrained by predictability. Specifically, quantum-scale measurements allow for an indeterministic universe, while larger-scale phenomena are increasingly easier to predict (with a corresponding decrease in the accuracy of the measurement tool). In short, determinism (or free-will) is not a physical property of the universe, but a characteristic of perception and an artifact of the measurement method used. While this maxim seems commonsensical and almost obvious, I believe the idea that both determinism and free-will are reconcilable features of this universe is a valid proposition that warrants further investigation.

Maxim Two: Indeterminacy and free-will are naturally occurring results that emerge from the complex interaction of a sufficient number of interacting deterministic systems (actual mechanisms unknown). Once again we are falling back on the explanatory scapegoat of ’emergence’; however, its use is partially justified in the light of empirical developments. For example, investigations into fractal patterns and the modelling of chaotic systems seem to justify the existence of emergent complexity. Fractals are generated from a finite set of definable equations and result in an intensely complicated geometric figure with infinite regress, the surface features undulating with each magnification (interestingly, fractal patterns are a naturally occurring feature of the physical world, arising from biological growth patterns and magnetic field lines, among other things). Chaos is a similar phenomenon: beginning from reasonably humble initial circumstances, an amalgamation of interfering variables results in an overall system that is indeterminate and unpredictable (e.g. weather patterns). Perhaps this is the mechanism of human consciousness and freedom of will; individual (and deterministic) neurons contribute en masse to an overall emergent system that is unpredictable. As a side note, such a position also supports the possibility of artificial intelligence; build something that is sufficiently complex, and ‘human-like’ consciousness and freedom will result.
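The chaotic half of this maxim can be made concrete in a few lines of Python (purely illustrative; the function name is my own). The logistic map is a completely deterministic rule, yet two starting points differing by one part in a billion bear no resemblance to one another within a few dozen steps:

```python
# Logistic map: x_{n+1} = r * x_n * (1 - x_n) -- a fully deterministic
# rule that is nevertheless unpredictable in practice at r = 4.
def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map from x0, returning the whole trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

# Two starting points that differ by one part in a billion...
a = logistic_trajectory(0.400000000)
b = logistic_trajectory(0.400000001)

# ...diverge completely within a few dozen iterations.
gap = max(abs(x - y) for x, y in zip(a, b))
print(f"largest gap between the two trajectories: {gap:.3f}")
```

No randomness enters the computation; the unpredictability is purely a matter of how finely the initial conditions can be measured, which is exactly the sliding scale between determinism and freedom described above.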

The two maxims proposed may seem quite obvious on cursory inspection; however, it can be argued that the proposal of a universe in which determinism and freedom of will form two alternative interpretations of a common, underlying reality is unique. Philosophically, the topic is difficult to investigate and discuss due to limitations on empirical knowledge and an increasing requirement for specialised technical insight into the field.

The ultimate goal of modern empiricism is to reduce reality to a strictly deterministic foundation. In keeping with this aim, experimentation hopes to arrive at physical laws of nature that are increasingly accurate and versatile in their generality. Quantum theory has since put this inexorable march on hold while futile attempts are made to circumvent the obstacle that is the uncertainty principle.

Yet perhaps there is a light at the end of the tunnel, however dim the journey may be. Science may yet produce a grand unified theory that reduces free-will to causally valid, ubiquitous determinism. More likely, as theories of free-will come closer to explaining the etiology of this entity, we will find a clear and individually applicable answer receding frustratingly into the distance. From a humanistic perspective, it is hoped that some degree of freedom will be preserved in this way. After all, the freedom to act independently and an uncertainty about the future are what make life worth living!

Teleportation is no longer banished to the realm of science fiction. It is widely accepted that what was once considered a physical impossibility is now directly achievable through quantum manipulations of individual particles. While the methods involved are still in their infancy (only very light particles have been teleported so far), we can at least begin to appreciate and think about the possibilities on the basis of plausibility. Specifically, what are the implications for personal identity if this method of transportation becomes possible on a human scale? Atomically deconstructing and reconstructing an individual at an alternate location could introduce problems with consciousness. Is this the same person, or simply an identical twin with its own thoughts, feelings and desires? These are the questions I would like to discuss in this article.

Biologically, we lose our bodies several times over during one human lifetime. Cells and tissues are replaced continually, with little thought given to the implications for self-identity. It is a phenomenon that is often overlooked, especially in relation to recent empirical developments in quantum teleportation. If we are biologically replaced with regularity, does this imply that our sense of self is likewise dynamic in nature and constantly evolving? There are reasonable arguments for both sides of this debate; maturity and daily experience do result in a varied mental environment. However, one wonders if this has more to do with innate processes such as information transfer, recollection and modification rather than purely the biological characteristics of individual cells (in relation to cell division and rejuvenation processes).

Thus it could be argued that identity is a largely conscious (in terms of seeking out information and creating internal schema of identity) and directed process. This does not totally rule out the potential for identity based upon changes to biological structure. Perhaps the effects are more subtle, modifying our identities in such a way as to facilitate maturity or even mental illness (if the duplication process is disturbed). Cell mutation (neurological tumor growth) is one such example whereby a malfunctioning biological process can result in direct and often drastic changes to identity.

However, I believe it is safe to assume that “normal” tissue regenerative processes do not result in any measurable changes to identity. What makes teleportation so different? Quantum teleportation has been used to teleport photons from one location to another and, more recently, particles with mass. The process is decidedly less romantic than science-fiction authors would have us believe; classical transmission of information is still required, and a receiving station must be established at the desired destination. What this means is that matter transportation, à la ‘Star Trek’ transporters, is still very much an unforeseeable fiction. In addition, something as complex as the human body would require incredible computing power to scan in sufficient detail, another limiting factor in its practicality. Fortunately, there are nearer-term uses for this technology, such as in the fledgling industry of quantum computers.

The process works around the limitations of the quantum uncertainty principle (which states that complementary properties of a quantum system can never both be known in exact detail) through a process known as the “Einstein-Podolsky-Rosen” effect. Einstein had real issues with quantum mechanics; he didn’t like it at all (‘spooky action at a distance’, to quote the cliché). The EPR paper was aimed at irrefutably proving the implausibility of entangled pairs of quantum particles. John Stewart Bell turned the Einstein proposition on its head when he demonstrated that entangled particles exhibit correlations too strong to be explained by chance, or by any pre-arranged (local hidden variable) agreement between the particles. The fact that entanglement does not violate the no-communication theorem is good news for our assumptions regarding reality, but more bad news for teleportation fans. Information regarding the quantum state of the teleportee must still be transmitted via conventional channels for reassembly at the other end.
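Bell’s point can be illustrated with a few lines of arithmetic. A minimal sketch, using the standard quantum prediction for spin correlations on an entangled singlet pair and the textbook angle choices that maximise the effect: the CHSH combination of four correlations exceeds the bound of 2 that any local hidden-variable account must obey.

```python
import math

# Quantum prediction for the correlation between spin measurements
# at angles a and b on a singlet pair: E(a, b) = -cos(a - b).
def E(a, b):
    return -math.cos(a - b)

# CHSH combination of four correlations. Local hidden-variable models
# are bounded by 2; the quantum prediction reaches 2*sqrt(2).
a, ap = 0.0, math.pi / 2
b, bp = math.pi / 4, 3 * math.pi / 4
S = abs(E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp))
print(f"S = {S:.4f}  (local hidden-variable bound: 2)")
```

The computed value, roughly 2.83, is what experiments following Bell’s analysis have repeatedly confirmed, ruling out the kind of pre-arranged agreement Einstein had hoped for.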

Quantum teleportation works by first distributing a pair of entangled particles to the two teleportation stations. The quantum state of the original particle at A is then scanned, with care taken not to cause too much disruption (measurement distorts the original; the harder you look, the more uncertain the result). Specifically, entangled particle 1 at A is jointly measured with the original particle, and the partial classical information this yields is transmitted at light speed to the receiver at B. Entanglement ensures that the remaining information is instantaneously reflected at B (via entangled particle 2). Utilising the principles of the EPR effect and Bell’s statistical correlations, it is then possible to reconstruct the state of the original particle A at the distant location, B. While the exact mechanism is beyond the technical capacity of philosophy, it is prudent to say that the process works by combining the entangled information held in EP2 with the classically transmitted information that was scanned out of the original particle, A.
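For a single qubit, the whole protocol is small enough to simulate exactly. The sketch below (NumPy, illustrative only; the function names are my own) carries a one-qubit state through the textbook circuit: share a Bell pair, jointly measure the original with one half of the pair, send the two classical bits, and apply the conditional corrections at the far end.

```python
import numpy as np

# Single-qubit gates as 2x2 matrices.
I = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

def op(gate, target, n=3):
    """Lift a single-qubit gate onto qubit `target` of an n-qubit register."""
    mats = [gate if q == target else I for q in range(n)]
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def cnot(control, target, n=3):
    """Build an n-qubit CNOT as a permutation matrix (qubit 0 most significant)."""
    dim = 2 ** n
    U = np.zeros((dim, dim), dtype=complex)
    for i in range(dim):
        bits = [(i >> (n - 1 - q)) & 1 for q in range(n)]
        if bits[control]:
            bits[target] ^= 1
        j = sum(b << (n - 1 - q) for q, b in enumerate(bits))
        U[j, i] = 1
    return U

def teleport(psi, rng):
    """Teleport the single-qubit state `psi` from qubit 0 to qubit 2."""
    # Qubit 0 holds psi; qubits 1 and 2 start in the Bell pair (|00>+|11>)/sqrt(2).
    bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
    state = np.kron(psi, bell)
    # Alice's joint (Bell-basis) measurement circuit: CNOT(0->1), then H on qubit 0.
    state = op(H, 0) @ cnot(0, 1) @ state
    # Measure qubits 0 and 1 -- these are the two classical bits sent to Bob.
    state = state.reshape(2, 2, 2)
    p = (np.abs(state) ** 2).sum(axis=2).ravel()
    m = rng.choice(4, p=p)
    m0, m1 = m >> 1, m & 1
    bob = state[m0, m1]
    bob = bob / np.linalg.norm(bob)
    # Bob's corrections, conditioned on the two classical bits.
    if m1:
        bob = X @ bob
    if m0:
        bob = Z @ bob
    return bob

rng = np.random.default_rng(0)
psi = np.array([0.6, 0.8j])  # an arbitrary normalised qubit state
out = teleport(psi, rng)
print(f"fidelity with the original: {abs(np.vdot(psi, out)) ** 2:.6f}")
```

Note that nothing usable arrives at B until the two classical bits do, which is why the scheme respects the no-communication theorem mentioned above.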

Casting practicality aside for the sake of philosophical discussion, if such a process became possible for a being as complex as a human, what would be the implications for consciousness and identity? Common sense tells us that if an exact replica can be created, it is in no way different from the original. One would simply ‘wake up’ at the new location within the same body and mind as the one left behind. Those who subscribe to a Cartesian view of separated body and mind would look upon teleportation with abhorrent revulsion. Surely along the way we are losing a part of what makes us uniquely human; some sort of intangible soul or essence of mind which cannot be reproduced? This leads one to similar thought experiments. What if another being somewhere in the universe were born with the exact mental characteristics of yourself? Would this predispose them to some sort of underlying and phenomenological connection? Perhaps this is supported by anecdotal evidence from empirical studies of identical twins. It is thought such individuals share a common bond, demonstrating almost telepathic abilities at times. Although it could be argued that the nature of this mechanism is probably no more mystical than a familiar acquaintance predicting how you would react in a given situation, or similarities in brain structure predisposing twins to ‘higher than average’ mental convergence events.

Quantum teleportation on conscious beings also raises serious moral implications. Is it considered murder to deconstruct the individual at point A, or is this initial crime nullified once the reassembly is completed? Is it still considered immoral if someone else appears at the receiver due to error or quantum fluctuation? Others may argue that it is no different from conventional modes of transport; human error should be dealt with as such (a necessary condition for the label of crime/immorality) and naturally occurring disasters interpreted as nothing more than random events.

While it is doubtful that we will ever see teleportation on a macro scale, we should remain mindful of the philosophical and practical implications of emerging technologies. Empirical forces are occasionally blinded to these factors when such innovations are announced to the general public. While it is an important step in society that such processes are allowed to continue, the rate at which they are appearing can be cause for alarm if they impinge upon our human rights and the preservation of individuality. There has never been a more pressing time for philosophers to think about the issues and offer their wisdom to the world.

Many of us take the capacity to sense the world for granted. Sight, smell, touch, taste and hearing combine to paint an uninterrupted picture of the technicolour apparition we call reality. Such lucid representations are what we use to define objects in space, plan actions and manipulate our environment. However, reality isn’t all that it’s cracked up to be. In particular, our role in defining the universe in which we live is much greater than we think. Humanity, through the use of sensory organs and the resulting interpretation of physical events, succeeds in weaving a scientific tapestry of theory and experimentation. This textile masterpiece may be large enough to ‘cover all bases’ (in terms of explaining the underlying etiology of observations), however it might not be made of the right material. With what certainty can we say that scientific observations carry a sufficient portion of objectivity? What role does the human mind and its modulation of sensory input have in creating reality? What constitutes objective fact, and how can we be sure that science is ‘on the right track’ with its model of empirical experimentation? Most importantly, is science at the cusp of an empirical ‘dark age’ where the limitations of perception fundamentally hamper the steady march of theoretical progress? These are the questions I would like to explore in this article.

The main assumption underlying scientific methodology is that the five sensory modalities employed by the human body are, by and large, uniform. That is, despite small individual fluctuations in fidelity, the performance of the human senses is mostly equal. Visual acuity and auditory perception are sources of potential variance, however the advent of certain medical technologies has circumvented and nullified most of these disadvantages (glasses and hearing aids, respectively). In some instances, such interventions may even improve the individual’s sensory experience, superseding ‘normal’ ranges through the use of further refined instruments. Such is the case with modern science as the realm of classical observation becomes subverted by the need for new, revolutionary methods designed to observe both the very big and the very small. Satellites loaded with all manner of detection equipment have become our eyes for the ultra-macro; NASA’s COBE satellite gave us the first view of early universal structure via detection of the cosmic microwave background radiation (CMB). Likewise, scanning probe microscopy (SPM) enabled scientists to observe on the atomic scale, below the threshold of visible light. In effect, we have extended and supplemented our ability to perceive reality.

But are these innovations also improving the objective quality of observations, or are we being led into a false sense of security? Are we becoming comfortable with the idea that what we see constitutes what is really ‘out there’? Human senses are notoriously prone to error. In addition, machines are only as good as their creator. Put another way, artificial intelligence has not yet superseded the human ‘home grown’ alternative. Therefore, can we rely on a human-made, artificial extension of perception with which to make observations? Surely we are compounding the innate inaccuracies, introducing a successive error rate with each additional sensory enhancement. Not to mention the interpretation of such observations and the role of theory in whittling down alternatives.

Consensus cannot be reached on whether what I perceive is anything like what you perceive. Is my perception of the colour green the same as yours? Empirically and philosophically, we are not yet in a position to answer this question with any objectivity. We can examine brain structure and compare regions of functional activity, however the ability to directly extract and record aspects of meaning/consciousness is still firmly in the realms of science fiction. The best we can do is simply compare and contrast our experiences through the medium of language (which introduces its own set of limitations). As aforementioned, the human sensory experience can, at times, become lost in translation.

Specifically, the ability of our minds to disentangle the information overload that unrelentingly flows through mental channels can wane due to a variety of influences. Internally, the quality of sensory inputs is governed at a fundamental level by biological constraints. Millions of years of evolution have resulted in a vast toolkit of sensory automation. Vision, for example, has developed in such a way as to become a totally unconscious and reflexive phenomenon. The biological structure of individual retinal cells predisposes them to respond to certain types of movement, shapes and colours. Likewise, the organisation of neurons within regions of the brain, such as the primary visual cortex in the occipital lobe, processes information with pre-defined mannerisms. In the case of vision, the vast majority of processing is done automatically, thus reducing the overall level of awareness and direct control the conscious mind has over the sensory system. The conclusion here is that we are limited by physical structure rather than differences in conscious discrimination.

The retina acts as both the primary source of input and a first-order processor of visual information. In brief, photons are absorbed by receptors on the back wall of the eye. These incoming packets of energy are captured by special proteins within the photoreceptors (rods for light intensity, cones for colour) and trigger action potentials in attached neurons. Low-level processing is accomplished by the lateral organisation of retinal cells; ganglion neurons are able to communicate with their neighbours and influence the likelihood of their signal transmission. Cells communicating in this manner facilitate basic feature recognition (specifically, edges and light/dark discrepancies) and motion detection.
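This neighbour-to-neighbour arrangement is easy to caricature in code. In the toy model below (purely illustrative, not a physiological simulation), each ‘cell’ subtracts a fraction of its neighbours’ activity; a plain step in brightness comes out with an exaggerated dip and peak at the boundary, the classic edge enhancement attributed to lateral inhibition.

```python
# Toy lateral inhibition over a 1-D strip of 'retinal cells': each cell's
# output is its input minus a fraction k of its neighbours' average input.
def lateral_inhibition(signal, k=0.4):
    """Apply one pass of lateral inhibition to a list of intensities."""
    out = []
    for i, x in enumerate(signal):
        left = signal[i - 1] if i > 0 else x    # clamp at the edges
        right = signal[i + 1] if i < len(signal) - 1 else x
        out.append(x - k * (left + right) / 2)
    return out

# A step from dark (1) to light (5): the response dips just before the
# boundary and overshoots just after it (Mach-band-like edge enhancement).
step = [1, 1, 1, 5, 5, 5]
print([round(v, 2) for v in lateral_inhibition(step)])
```

The uniform regions are suppressed equally, so the only structure surviving the pass is the edge itself, which is exactly the feature the text says this wiring exists to find.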

As with all the sensory modalities, information is then transmitted to the thalamus, a primitive brain structure that acts as a communications ‘hub’; its proximity to the brain stem (mid- and hindbrain) ensures that reflexes are privy to visual input prior to conscious awareness. The lateral geniculate nucleus is the region of the thalamus which splits incoming visual input into three main channels (M, P and K). Interestingly, these channels stream inputs into signals with unique properties (e.g. exclusively colour, or motion). In addition, cross-lateralisation of visual input is a common feature of human brains. Left and right fields of view are diverted at the optic chiasm and processed on opposite hemispheres (the left field of view from both eyes is processed on the right side of the brain). One theory as to why this system develops is that it minimises the impact of unilateral hemispheric damage – the ‘dual brain’ hypothesis (each hemisphere can act as an independent agent, reconciling and supplementing reductions in function due to damage).

We seem to fall back lazily on these automated subsystems with enthusiasm, never fully appreciating and flexing the full capabilities of our sensory appendages. Michael Frayn, in his book ‘The Human Touch’, demonstrates this point aptly:

“Slowly, as you force yourself to observe and not to take for granted what seems so familiar, everything becomes much more complicated…That simple blueness that you imagined yourself seeing turns out to have been interpreted, like everything else, from the shifting, uncertain material on offer” (Frayn, 2006, p. 26)

Of course, we are all blissfully ignorant of these finer details when it comes to interpreting the sensory input gathered by our bodies. The consciousness acts ‘with what it’s got’, without a care as to the authenticity or objectivity of the observations. We can observe this first hand in a myriad of different ways; ways in which the unreal is treated as if it were real. Hallucinations are just one mechanism where the brain is fooled. While we know such things are false, to a degree (depending upon the etiology, eg schizophrenia), such visual disturbances nonetheless are able to provoke physiological and emotional reactions. In summary, the biological (and automated) component of perception very much determines how we react to, and observe, the external world. In combination with the human mind (consciousness), which introduces a whole new menagerie of cognitive baggage, a large amount of uncertainty is injected into our perceptual experience.

Expanding outwards from this biological launchpad, it seems plausible that the qualities which make up the human sensory experience should have an effect on how we define the world empirically. Scientific endeavour labours to quantify reality and strip away the superfluous extras, leaving only constitutive and fundamental elements. In order to accomplish this task, humanity employs the use of empirical observation. The segue between the biological foundations of perception and the paradigm of scientific observation involves a similarity in sensory limitation. Classical observation was limited by ‘naked’ human senses. As the bulk of human knowledge grew, so too did the need to extend and improve methods of observation. Consequently, science is now possibly realising the limitations of the human mind in digesting an overwhelming plethora of information.

Currently, science is restricted by the development of technology. Progress is only maintained through the ingenuity of the human mind to solve biological disadvantages of observation. Finely tuned microscopes tap into quantum effects in order to measure individual atoms. Large radio-telescope arrays link together for an eagle’s eye view of the heavens. But as our methods and tools for observing grow in complexity, so too does the degree of abstract reasoning that is required to grasp the implications of their findings. Quantum theory is one such warning indicator.

Like a lighthouse sweeping the night sky to signal impending danger, quantum physics, or more precisely, humanity’s inability to agree on any one consensus which accurately models reality, could be telling us something. Perhaps we are becoming too reliant on our tools of observation, using them as a crutch in a vain attempt to avoid our biological limitations. Is this a hallmark of our detachment from observation? Quantum ‘spookiness’ could simply be the result of a fundamental limitation of the human mind to internally represent and perceive increasingly abstract observations. Desperately trying to consume the reams of information that result from rapid progress and intense observation, scientific paradigms become increasingly specialised and divergent, increasing the degree of inter-departmental bureaucracy. It now takes a lifetime of training to even grasp the basics of current physical theory, let alone to dissect observations and truly grasp their essence.

In a sense, science is at a crossroads. One pathway leads to an empirical dead end; humanity has exhausted every possible route of explanation. The other involves either artificial augmentation (in essence, AI that can do the thinking for us) or a fundamental restructuring of how science conducts its business. Science is in danger of information overload; the limitations introduced by a generation of unrelenting technical advancement and increasingly complex tools with which to observe have taken their toll. Empirical progress is stalling, possibly due to a lack of understanding by those doing the observing. Science is detaching from its observations at an alarming rate and, if we aren’t careful, is in danger of losing sight of what the game is all about: the quest for knowledge and understanding of the universe in which we live.

Most of us would like to think that we are independent agents in control of our destiny. After all, free-will is one of the unique phenomena that humanity can claim as its own – a fundamental part of our cognitive toolkit. Experimental evidence, in the form of neurological imaging, has been interpreted as an attack on mental freedom. Studies that highlight the possibility of unconscious activity preceding the conscious ‘will to act’ seem to almost sink the arguments from non-determinists (libertarians). In this article I plan to outline this controversial research and offer an alternative interpretation; one which does not infringe on our abilities to act independently and of our own accord. I would then like to explore some of the situations where free-will could be ‘missing in action’ and suggest that the frequency at which this occurs is larger than expected.

A seminal investigation conducted by Libet et al (1983) first challenged (empirically) our preconceived notions of free-will. The setup consisted of an electroencephalograph (EEG, measuring overall electrical potentials through the scalp) connected to the subject and a large clock with markings denoting various time periods. Subjects were required to simply flick their wrist whenever a feeling urged them to do so. The researchers were particularly interested in the “Bereitschaftspotential” or readiness potential: a signature EEG pattern of activity that signals the volitional initiation of movement. Put simply, the RP is a measurable spike in electrical activity from the pre-motor region of the cerebral cortex – a mental preparatory action that puts the wheels of movement into motion.

Results of this experiment indicated that the RP significantly preceded the subjects’ reported sensations of conscious awareness. That is, the act of wrist flicking seemed to precede conscious awareness of said act. While the actual delay between RP detection and conscious registration of intent to move was small (by our standards), the half a second gap was more than enough to assert that a measurable difference had occurred. Libet interpreted these findings as having vast implications for free-will. It was argued that since electrical activity preceded conscious awareness of the intent to move, free-will to initiate movement (Libet allowed free-will to control movements already in progress, that is, modify their path or act as a final ‘veto’ in allowing or disallowing it to occur) was non-existent.

Many have taken the time to respond to Libet’s initial experiment. Daniel Dennett (in his book Freedom Evolves) provides an apt summary of the main criticisms. The most salient rebuttal comes in the form of signal delay. Consciousness is notoriously slow in comparison to the automated mental processes that act behind the scenes. Take the sensation of pain, for example. Stimulation of the nerve endings must first reach a sufficient level for an action potential to fire, causing the axon terminal to release neurotransmitters into the synaptic cleft. The second-order neuron then receives these chemical messengers, modifying its electrical charge and firing another action potential along its myelinated axon. Now, taking into account the distance this signal must travel (at anywhere from 1-10 m/s), it will eventually arrive at the thalamus, the brain’s sensory ‘hub’, where it is routed to consciousness. Consequently, there is a measurable gap between the external event and conscious awareness; perhaps made even larger if the signal is small (low pain) or the mind is distracted. In this instance, electrical activity is also taking place and preceding consciousness. Arguably, the same phenomenon could be occurring in the Libet experiment.
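The arithmetic behind this rebuttal is worth spelling out. Assuming, purely for illustration, a signal path of roughly a metre and the 1-10 m/s conduction speeds quoted above:

```python
# Back-of-envelope conduction delay: how long a neural signal takes to
# cover a given path at a given speed (ignoring synaptic and routing delays,
# which only lengthen the total).
def conduction_delay(path_m, velocity_m_s):
    """Seconds for a signal to cover path_m metres at velocity_m_s m/s."""
    return path_m / velocity_m_s

fast = conduction_delay(1.0, 10.0)  # 0.1 s
slow = conduction_delay(1.0, 1.0)   # 1.0 s
print(f"delay range for a ~1 m path: {fast:.1f}-{slow:.1f} s")
```

A delay of 0.1-1 s comfortably brackets the half-second gap Libet reported, so slow conduction and routing alone could plausibly account for much of the measured difference.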

Delays are inevitably introduced when consciousness enters the equation. The brain is composed of a conglomerate of specialised compartments, each communicating with its neighbours and performing its own part of the process in turn. Evolution has drafted brains that act automatically first and consciously second. Consequently, the automatic gains priority over the directed. Reflexes and instincts act to save our skins long before we are even aware of the problem. Naturally, electrical activity in the brain could thus precede conscious awareness.

In the Libet experiment, the experimental design itself could be misleading. Libet seems to equate his manipulation of the timing of consciousness with free-will, when in actual fact the agent has already freely decided to follow instructions. My point is that free-will does not have to act as the initiator of every movement; rather, it ‘sets the stage’ for events and authorises the operation to go ahead. When told to move voluntarily, the agent’s will decides to either comply or rebel. Compliance causes the agent to authorise movement, but the specifics are left up to chance. Perhaps a random input generator (quantum indeterminacy?) provides the catalyst with which this initial order combines to create the RP and the eventual movement. Conscious registration of this fact only occurs once the RP is already forming.

Looking at things from this perspective, consciousness seems to play a constant game of ‘catch-up’ with the automated processes in our brains. Our will is content to act as a global authority, leaving the more menial and mundane tasks up to our brain’s automated sub-compartments. Therefore, free-will is very much alive and kicking, albeit sometimes taking a back-seat to the unconscious.

We have begun by exploring the nature of free-will and how it links in with consciousness. But what of the unconscious instincts that override our sense of direction and seek to regress humanity to its more animalistic and primitive ancestry? Such instincts act covertly, sneaking in whilst our will is otherwise indisposed. Left unchecked, an agent who gives themselves completely to urges and evolutionary drives could be said to be devoid of free-will, or at the very least somewhat lacking compared to more ‘aware’ individuals. Take sexual arousal, for instance. Like it or not, our bodies act on impulse, removing free-will from the equation through simple stimulus–response conditioning. Try as we might, sexual arousal (if allowed to follow its course) acts immediately upon visual or physical stimulation. It is only when consciousness kicks into gear and yanks on the leash attached to our unconscious that control is regained. Eventually, with enough training, it may be possible to override these primitive responses, but the conscious effort required to sustain such a project would be psychically draining.

Society also seeks to rob us of our free-will. People are pushed and pulled by group norms, the expectations of others and the messages that bombard us on a daily basis. Rather than encouraging individualism, modern society urges us to follow trends. Advertising is crafted so that the individual may even be fooled into thinking they are arriving at decisions of their own volition (subliminal messaging), when in actual fact it is simply tapping into basic human needs for survival (food, sex, shelter/security etc).

Ironically, science itself could also be said to be reducing the amount of free-will we can exert. Scientific progress seeks to make the world deterministic; that is, totally predictable through increasingly accurate theories. While the jury is still out as to whether ‘ultimate’ accuracy in prediction will ever be achieved (arguably, there are not enough bits of information in the universe with which to construct a computer powerful enough to complete such a task), science is coming closer to a deterministic framework whereby the paths of individual particles can be predicted. Quantum physics is but the next hurdle to be overcome in this quest for omniscience. If the inherent randomness that lies within quantum processes is ever fully explained, perhaps we will be in a position (at least scientifically) to model an individual’s future actions from a number of initial variables.

What could this mean for the nature of free-will? If past experiments are anything to go by (Libet et al), it will rock our sense of self to the core. Are we but behaviouristic automatons, as the psychologist Skinner proposed? Delving deeper into the world of the quanta, will we ever be able to realistically model and predict the paths of individual particles, and thus the future course of the entire system? Perhaps the Heisenberg uncertainty principle will spare us from this bleak fate. The irreducible randomness of the quantum wave function could be the final insurmountable obstacle that neurological researchers and philosophers alike will never conquer.

While I am all for scientific progress and increasing the bulk of human knowledge, perhaps we are jumping the gun with this free-will stuff. Perhaps some things are better left mysterious and unexplained. A defeatist attitude if ever I saw one, but it could be justified. After all, how would you feel if you knew every action was decided before you were even a twinkle in your father’s eye? Would life even be worth living? Sure, but it would take a lot of reflection and a personality that could either deny or reconcile the feelings of unease that such a proposition brings.

They were right; ignorance really is bliss.

Compartmentalisation of consciousness

Quantum physics is a fascinating branch of modern science that has grown in popularity. Terms such as “the uncertainty principle”, “quantum entanglement” and “probability waves” have all become commonly-used phrases in the scientific community. In the same way that Newtonian mechanics explains the world of the very big (the orbits of planets, falling apples), quantum physics aims to improve our understanding of the very small (sub-atomic scales). Once objects start interacting at these smaller scales, quantum mechanics takes over and produces some weird and wacky results. What Newton’s laws and (to an extent) Einstein’s special and general theories of relativity offer in common sense and comprehensibility, quantum physics trades away for plain weirdness.

In the wacky world of the quanta, particles appear out of nothing and vanish again in an instant. Particles separated by vast distances show characteristics of ‘entanglement’; that is, measurements taken on one particle instantaneously affect the state of its partner (seemingly violating the faster-than-light limitations of special relativity). Similarly, quantum particles exhibit tunnelling behaviours. Being probabilistic in nature, the quantum wave function for any given particle will spread out as a function of time. Occasionally, this wave (the probability of existing at a particular position) will penetrate classically insurmountable obstacles; that is, distances or barriers where the energy required to escape exceeds the particle’s kinetic energy. In effect, the particle has ‘tunnelled’ through thin air.
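
How unlikely is a tunnelling event? A rough feel comes from the standard rectangular-barrier estimate T ≈ exp(−2κL), with κ = √(2m(V−E))/ħ. The electron energy, barrier height and width below are illustrative assumptions of mine, not figures from any particular experiment:

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s
M_E = 9.1093837015e-31  # electron mass, kg
EV = 1.602176634e-19    # joules per electronvolt

def tunnelling_probability(barrier_ev: float, energy_ev: float, width_m: float) -> float:
    """Rough exp(-2*kappa*L) estimate of an electron tunnelling through
    a rectangular barrier of height barrier_ev and width width_m."""
    kappa = math.sqrt(2 * M_E * (barrier_ev - energy_ev) * EV) / HBAR
    return math.exp(-2 * kappa * width_m)

# A 1 eV electron striking a 2 eV barrier, 1 nm wide:
print(tunnelling_probability(2.0, 1.0, 1e-9))
```

The probability is tiny and falls off exponentially with barrier width, which is why tunnelling matters on sub-atomic scales but never for tennis balls and walls.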

The probabilistic nature of quantum physics introduces some worrying implications for the nature of reality. In particular, the Copenhagen interpretation (one leading view on what the quantum calculations translate into in the macroscopic world) posits that an observer is needed to collapse the wave function, creating what we see as real. Taken literally, this means that nothing exists if we aren’t watching. The falling tree in a deserted forest really does make no sound, settling the old philosophical riddle succinctly. Erwin Schrödinger, one of the pioneers of quantum theory and the man behind its wave equations, disagreed with this interpretation most vehemently. Schrödinger’s cat was the fruit of his protest; a thought experiment introducing the paradox that this interpretation brings.

Schrödinger’s thought experiment goes a little something like this. A cat, sealed off totally from the outside world and attached to a death device, will exist in a superposition of quantum states. Its probability wave will spread out over time, with the cat existing as both dead and alive at the same time. The hypothetical death device consists of a decaying radioactive source, emitting particles that are detected via a Geiger counter. The probability wave spreads in such a manner due to the underlying quantum randomness that controls the process of radioactive decay (tunnelling allows alpha particles to escape the overwhelming pull of the strong nuclear force). Thus, once a sufficient period of time has passed and the probability of the radioactive substance having emitted a particle (or not) is exactly 1/2, the cat is said to be both alive and dead.
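
The 1/2 probability falls straight out of the exponential decay law: the chance a nucleus has decayed by time t is 1 − e^(−λt), with λ = ln 2 / half-life. A minimal sketch (the function name and the one-half-life timing are mine):

```python
import math

def decay_probability(t: float, half_life: float) -> float:
    """Probability that a radioactive nucleus has decayed by time t,
    assuming standard exponential decay with the given half-life."""
    decay_constant = math.log(2) / half_life
    return 1.0 - math.exp(-decay_constant * t)

# Wait exactly one half-life and the odds are 50/50 -- the moment at
# which the thought-experiment cat is said to be both alive and dead.
print(decay_probability(1.0, 1.0))  # ~0.5
```

This is why the experiment specifies waiting a set period: the box must be opened precisely when the decayed/not-decayed odds are even.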

Schrödinger was not advocating the truth of this experiment; rather, he used it to draw attention to the paradox and the ‘can of worms’ that the Copenhagen interpretation had brought about. While the experiment may indeed be possible in the realm of quantum uncertainty, it certainly requires a definite leap of faith away from the common sense interpretation of everyday occurrences. The major premises that this argument requires us to accept are that a) probability waves exist (that is, quantum particles exist in a superposition of possible states), b) an observer is necessary to collapse the wave function and bring about reality, and c) the observer must be intelligent (namely, that there is something inherently unique about conscious beings and their quantum-collapsing ability).

Firstly, I will take a minor detour and actually lend a snippet of support to the thought experiment. The old saying ‘a watched pot never boils’ seems to make no practical sense; however, a simple rephrasing to ‘a watched quantum pot never boils’ is closer to the truth. Researchers imitated the physical process of boiling on a quantum scale by bombarding a collection of beryllium atoms with microwaves. These incoming microwaves were absorbed by the atoms, booting them up from a low to a high energy level. The researchers knew that the time period for all atoms to become excited was around 250 ms. By beaming a burst of laser light into their atomic midst, the number of atoms still in the lower ground state could be counted (excited atoms cannot absorb the incoming photons, so only the atoms in the lower, less excited state are affected). Initially they looked only once, at 125 ms, when around half the atoms should be excited. And they were! They then increased the number of observations, looking four times in 250 ms, and found something unexpected. With each successive observation, the atoms would ‘reset’ their energy levels; in effect, by increasing the number of observations, the atoms would never reach the higher state. The watched pot never boiled! (For further reading, search for “The Quantum Zeno Effect”.)

The explanation here directly supports one of Schrödinger’s main requirements for the thought experiment: quantum probability waves exist. What the researchers believe happens is that the probability wave of each atom is artificially collapsed by the act of observing. When the atoms are free from observation, the probability wave is free to spread out, increasing the likelihood of observing all the atoms similarly excited. By looking multiple times, the wave is collapsed prematurely, preventing it from spreading out to its potential equilibrium state. In effect, the intent of the observer controls the outcome. If you want half the atoms to become excited, no problem: look at time t/2. You want the pot to never boil? OK, just keep watching continuously.

However, Schrödinger’s second and third requirements, denoting the features of the observer doing the collapsing, are not so easy to support. Why are humans so arrogant as to believe there is something inherently special about us, such that we are required for the universe to exist? It simply makes no sense that outside of our measly existence, nothing is actually real until we look. The quantum constituents may be probabilistic, but the seething mass of particles that zip around and interact with one another must surely provide the means to collapse wave functions. A conscious observer is not needed for reality to have objective meaning (what about prior to the evolution of conscious beings – were we all being observed by an omnipotent being which made us all real?). The universe itself must surely be doing the observing and the collapsing, through its myriad of interacting particles.

I believe the main problem people suffer from when discussing quantum mechanics is that they try to relate it to pre-existing notions of reality. They also place human consciousness above the fact that the universe will continue to exist regardless of whether we are around to watch. This deluded geocentrism has long plagued humanity, causing major scientific stagnation throughout the ages (Aristotle et al). The implications of quantum mechanics for reality still hold many mysteries. If watching a quantum pot causes it to freeze in its initial state, what does this mean for reality and intent (and also free-will)? If quantum processes control the operation of minds, perhaps they will also prove to be the mysterious bridge that spans Cartesian mind/body duality. Perhaps the secret to consciousness is the uncertainty introduced by the quantum reality that underlies every physical process.

Stopping for lunch at the usual time, I made my way to my seat in the corner of the canteen beside the rack of journal articles. One thing I love about working for a CRO is the plethora of science-related reading available in the staff library. The August issue of New Scientist caught my eye with its intriguing title: “Spooks in space”. Now, before we get started, I would like to make one thing clear: I am not a physics guru. Mathematical representations of physical theorems not only confuse me, but I also question the usefulness of overcomplicating a subject that already carries a stereotype of requiring intellectualism and genius to fully appreciate. Therefore, while the first section will outline a (very) basic foundation of the theories, I hope that the second part is more thought-provoking and practical for discussion.

Boltzmann brains, named after the 19th-century thermodynamicist Ludwig Boltzmann, are a hypothesised phenomenon arising from the cosmological interpretation of the second law of thermodynamics (the entropy of the universe will always increase). Boltzmann’s original idea was that random thermal fluctuations may have been responsible for the creation of our universe. Delving deeper, Boltzmann proposed that our observable universe (with its low level of entropy and thus higher organisation) may be a figment of our own imagination; we may simply be the result of a ‘random fluctuation’ within another universe of higher entropy (lower organisation, higher chaos).

The Boltzmann paradox is thus: if we are the result of a random fluctuation, our likelihood of existing is much lower than that of a universe full of Boltzmann brains. In short, the billions of self-aware brains that make up humanity (remember, if we are due to random fluctuations) are less likely than a single, self-aware and conscious entity with false memories and perceptions of the world around it.
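
To see why lone brains should dominate, the comparison can be put in rough numbers. Under the standard fluctuation estimate P ∼ exp(−ΔS/k_B), the odds ratio depends only on the difference between the two entropy drops, so it can be computed in log space. Both ΔS figures below are purely illustrative assumptions of mine, not measured values:

```python
import math

# Boltzmann: the probability of a thermal fluctuation that lowers
# entropy by dS scales as exp(-dS / k_B). Entropy drops are given in
# units of k_B; the two magnitudes below are illustrative guesses.
DS_SINGLE_BRAIN = 1e50     # assumed cost of fluctuating one lone brain
DS_WHOLE_UNIVERSE = 1e104  # assumed cost of fluctuating a low-entropy universe

# Odds ratio brain:universe = exp(dS_universe - dS_brain); report log10.
log10_odds = (DS_WHOLE_UNIVERSE - DS_SINGLE_BRAIN) / math.log(10)
print(f"log10 odds favouring a lone brain: {log10_odds:.3g}")
```

Whatever the exact numbers, the exponential makes the smaller fluctuation overwhelmingly more probable, which is the heart of the paradox.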

The good news is, we probably aren’t Boltzmann brains! The argument here is that in order for Boltzmann brains to arise, the parent universe from which they form must be at a high level of entropy. The fact that we exist in a universe with low entropy (being relatively young) tends to rule out the likelihood of so many brains spontaneously arising, all with false memories and perceptions of the universe. The Boltzmann scenario is only salvageable if our portion of the observable universe is a small ‘bubble’ within a much larger one with high entropy (chaotic and prone to random fluctuation).

Boltzmann brains have vast theological implications, if correct. They may form the basis for a rationalised and scientific explanation for the existence of a god. As a devout atheist (who has gained some tolerance for religious discussion over the years) I do hold an active interest in rational theological discussion. The Boltzmann hypothesis seems to be the first plausible (although still highly unlikely) explanation for the existence of god that doesn’t involve mindless devotion and ‘leaps of faith’. Below is a post I found that outlines a basic theory, which I hope to develop further.

“Getting back to Boltzman Brains, it occurred to me that a Boltzman Brain could provide a naturalistic explanation for the existence of God.
The first cause proof of God is that there has to be a first cause to our universe. Atheists, however, always retort: “Oh yeah, then what caused God?”
So, a theist can now say that God was a spontaneously-formed Boltzman Brain formed from the formless chaos of Nothingness.
This response also rescues God from the charge that if He exists, then He is Nothingness itself; God would really be a Something rather than a Nothing if He were a Boltzman Brain.
Since there is no existence more lonely than being a disembodied, utterly alone, Boltzman Brain, God created the world and us in order to have some company. . ” – Warren Plats, link.

Thus the requirements for a Boltzmann-based god would be:

  1. A sufficiently old universe (infinite age?) to allow for the spontaneous formation of a being with self-awareness and omniscient capabilities.
  2. Methods for that being to interact with its universe or itself in order to create the target universe.
  3. A desire on the god’s behalf to create the target universe.
  4. Allowance of the god’s existence for a sufficiently long period to both formulate and enact the creation (random fluctuations in chaos can more easily remove order than create it – a cup is more likely to drop and smash than it is to jump up and reform).

Moving on from these requirements, a possible Boltzmann god may then arise from the constituents of an infinitely old universe rearranging themselves spontaneously, creating order from chaos and, in the process, giving rise to an all-knowing, all-powerful entity. As a side note, the name “Boltzmann brains” is slightly misleading; our ideas of what constitutes consciousness are often clouded by our own experiences. So far, humanity is the only fully conscious entity we know of in our observable universe, therefore we tend to describe consciousness in terms of ourselves. Boltzmann brains, and indeed other more exotic forms of alien consciousness, need not be made of the same stuff as our brains. Nerve cells, blood vessels and electrical impulses could give way to simpler substrates, such as silicon chips or even clouds of interacting atoms (hydrogen being the most abundant element in the universe). Given enough time, anything that can happen, will. In this case, a universe that has existed for an infinitely long period has a higher likelihood of producing such a conscious entity.

But the question remains of how such an entity could spark the creation of a universe suitable for lifeforms like us. Was it external manipulation, such as a conscious and purposely directed fluctuation, that gave rise to our universe? Or was it an internal rearrangement of its own constituents (e.g. the creation of a singularity); a self-directed suicide on behalf of the entity that created our reality? The latter opens up the possibility of a cyclic universe, in which everything that has come to pass will happen again. The eventual creation of a god-like Boltzmann brain serves as the catalyst which prevents the perpetual darkness that an infinitely expanding universe would bring and starts everything afresh.

I hope to revisit the topic of Boltzmann brains sometime in the future. What seemed a relatively ‘goofy’ and niche area of philosophical physics quickly spills out into a question of reality itself, the implications of Boltzmann brains as typical observers (can we really be sure that our measurements of the universe are ‘really real’?) and the usefulness of Boltzmann brains as a theological model for creationism (albeit in a distinctly more science-heavy form).