
The human brain and the Internet share a key structural feature: a web-like arrangement of individual nodes acting in unison to transmit information between physical locations. In brains the nodes are neurons, communicating via myelinated axons and dendrites. The Internet is built from analogous components: computers and routers originate, boost and re-route information, while connections such as fibre optics and Ethernet cabling act as its transport medium.

How can we describe the physical structure and complexity of these two networks? Does this offer any insight into their similarities and differences? What is the plausibility of a conscious Internet? These are the questions I would like to explore in this article.

At a very basic level, both networks are organic in nature (surprisingly, in the case of the Internet); that is, they are not the product of a single overarching 'designer' and are free to evolve as their environment sees fit. The Internet grows without a directed plan, with new nodes and capacity added haphazardly. Its naturally evolved topology is a distributed one: the destruction of nodes has little effect on the overall operational effectiveness of the network. Each node has multiple connections, resulting in an intrinsic redundancy whereby traffic is automatically re-routed to its destination via alternate paths.
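This re-routing behaviour can be sketched in a few lines. The toy example below (assuming Python and the networkx library; the mesh is invented for illustration, not real Internet topology) removes a node and shows traffic simply taking the next available path:

```python
# A minimal sketch of distributed re-routing: destroy an intermediate
# node and the network finds an alternate path around the damage.
import networkx as nx

# A small mesh in which every node has multiple connections (redundancy).
G = nx.Graph()
G.add_edges_from([
    ("A", "B"), ("A", "C"), ("B", "D"), ("C", "D"),
    ("B", "E"), ("D", "E"), ("C", "F"), ("F", "E"),
])

print(nx.shortest_path(G, "A", "E"))   # e.g. ['A', 'B', 'E']

# Destroy node B: traffic between A and E re-routes automatically.
G.remove_node("B")
print(nx.shortest_path(G, "A", "E"))   # e.g. ['A', 'C', 'D', 'E']
```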

We can observe similar behaviour in the human brain. Neurological plasticity serves a function akin to the distributed nature of the Internet. Following injury to regions of the brain, adjacent areas can compensate for lost abilities by restructuring neuronal patterns. For example, the effects of injuries to the motor areas of the frontal cortex can be mitigated as adjacent regions 're-learn' otherwise mundane tasks lost as a result of the injury. While such recoveries are entirely possible with extensive rehabilitation, two key factors determine the likelihood and efficiency of recovery: the severity of the injury (percentage of brain tissue destroyed, location of injury) and, following from this, the length of the recovery period. These factors introduce the first discrepancy between the two networks.

Unlike the brain, the Internet is resilient to attacks on its infrastructure. Local downtime is a minor inconvenience, as traffic moves around such bottlenecks by taking the next fastest path available. Destruction of multiple nodes has little effect on the overall web of information. Users may lose access to certain areas, or experience slowness, but compared with the remainder of possible locations (not to mention redundancies in content: simply obtain the information elsewhere) such lapses are momentary inconveniences. But are we suffering from a lack of perspective when comparing the brain and the virtual world? Perhaps the problem is one of scale. The destruction of nodes (computers) could instead be interpreted, in the brain, as the removal of individual neurons. If one accepts this proposition, the differences begin to blur.

An irrefutable difference, however, arises when one considers both the complexity and the purpose of the two networks. The brain contains some 100 billion neurons, whilst the Internet comprises a measly 1 billion users by comparison (with users roughly equating to the number of nodes, or access terminals, physically connected to the Internet). Brains are the direct product of evolution, created specifically to keep the organism alive in an unwelcoming and hostile environment. The Internet, on the other hand, is designed to accommodate a never-ending torrent of expanding human knowledge. The dichotomy in purpose between the two networks is thus quite pronounced: the brain focuses on reactive and automated responses to stimuli, while the Internet aims to store information and process requests for its retrieval by the end user.

Again we can take a step back and consider the similarities of these two networks. Looking at topology, it is apparent that the distributed nature of the Internet resembles the structure and redundancy of the human brain. In addition, the Internet is described as a 'scale-free' or power-law network, meaning that a small percentage of highly connected nodes accounts for a very large percentage of the overall traffic flow. In effect, a targeted attack on these hub nodes could cripple the entire network. The brain, by comparison, appears to be organised into distinct and compartmentalised regions. Target just a few, or even one, of these collections of cells and the whole network may collapse.

It would be interesting to empirically investigate the hypothesis that the brain is also a scale-free network whose connectivity follows a power law. Targeting the thalamus (a central hub through which sensory information is routed) for destruction might have the same devastating effect on the brain as destroying the infrastructure of ICANN in the USA (the body responsible for domain name assignment).
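The hub-vulnerability half of this claim is straightforward to simulate. Here is a rough sketch (again assuming networkx; the network size and attack parameters are arbitrary) comparing a targeted attack on the hubs of a scale-free network with an equal number of random failures:

```python
# Scale-free networks tend to shrug off random failures yet degrade
# badly when their few highly connected hubs are removed.
import random
import networkx as nx

def giant(G):
    """Number of nodes in the largest connected component."""
    return max(len(c) for c in nx.connected_components(G))

G = nx.barabasi_albert_graph(n=2000, m=2, seed=1)  # scale-free test network

# Targeted attack: remove the 100 best-connected hubs (5% of nodes).
attacked = G.copy()
hubs = sorted(attacked.degree, key=lambda nd: nd[1], reverse=True)[:100]
attacked.remove_nodes_from(n for n, _ in hubs)

# Random failure: remove 100 nodes chosen at random.
failed = G.copy()
failed.remove_nodes_from(random.Random(1).sample(list(failed.nodes), 100))

print("giant component after targeted attack:", giant(attacked))
print("giant component after random failure: ", giant(failed))
```

On a typical run the targeted attack leaves a markedly smaller giant component than random failure, mirroring the robust-yet-fragile character described above.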

As mentioned above, the purposes of these two networks are different, yet they share the common bond of processing and transferring information. At a superficial level, the brain and the Internet are merely storage and retrieval devices, upon which the user (or directed thought process) is sent on a journey through a virtual world towards an intended target (notwithstanding the inevitable sidetracks along the way!). Delving deeper, the differences in purpose act as a deterrent when one considers the plausibility of consciousness and self-awareness.

This brings us to the crux of the article. Could the Internet, given sufficient complexity, become a conscious entity in the same vein as the human brain? Almost immediately the hypothesis is dashed by its rebellion against common sense. Surely it is impossible to propose that a communications network built upon binary machines and Internet protocols could ever achieve a higher plane of existence. But the answer might not be as clear-cut as one would like to believe. Controversially, both networks could be said to be controlled by indeterminate processes. The brain, at its very essence, is governed by quantum unpredictability. Likewise, activity on the Internet is directed by self-aware, indeterminate beings (which are, in turn, the result of quantum processes). At what point does the flow of information over a sufficiently complex network result in an emergent complexity most notably characterised by a self-aware intelligence? Just as neurons react to incoming electrical pulses of information, so too do the computers of the Internet pass along packets of data. Binary code is equated with action potentials: either information is transmitted or it is not.
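The analogy can be made concrete with a toy model. The function below (plain Python; the weights and threshold are arbitrary, and this is a caricature rather than a biophysical model) treats a neuron as an all-or-nothing relay, much as a router either forwards a packet or does not:

```python
# A toy illustration of the action-potential/binary analogy: output is
# all-or-nothing, like a link that either forwards a packet or drops it.
def fires(inputs, weights, threshold=1.0):
    """Return 1 (spike / packet forwarded) if the weighted sum of
    incoming signals crosses the threshold, else 0 (silence / drop)."""
    potential = sum(i * w for i, w in zip(inputs, weights))
    return 1 if potential >= threshold else 0

print(fires([1, 1, 0], [0.6, 0.5, 0.9]))  # 1: threshold crossed, signal passed on
print(fires([1, 0, 0], [0.6, 0.5, 0.9]))  # 0: sub-threshold, nothing transmitted
```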

Perhaps the most likely (and worrying) outcome in a futurist world would be the integration of an artificial self-aware intelligence with the Internet. Think Skynet from the Terminator franchise. In all likelihood such an agent would have the tools at its disposal to hijack the Internet's constituent nodes and reprogram them so as to facilitate the growth of an even greater intelligence. The analogy here is with the linking of human minds: were it possible, the resulting intelligence would be great indeed. Imagine a distributed network of humanity, each individual brain linked to thousands of others in a grand web of shared knowledge and experience.

Fortunately such a doomsday outlook is most likely constrained within the realms of science fiction. Reality tends to have a reassuring banality about it that prevents the products of human creativity from becoming something more solid and tangible. Whatever the case may be in regards to the future of artificial intelligence, the Internet will continue to grow in complexity and penetration. As end user technology improves, we take a continual step closer towards an emergent virtual consciousness, whether it be composed of ‘uploaded’ human minds or something more artificial in nature. Let’s just hope that a superior intelligence can find a use for humanity in such a future society.

Many of us take the capacity to sense the world for granted. Sight, smell, touch, taste and hearing combine to paint an uninterrupted picture of the technicolour apparition we call reality. Such lucid representations are what we use to define objects in space, plan actions and manipulate our environment. However, reality isn't all that it's cracked up to be. Namely, our role in defining the universe in which we live is much greater than we tend to think. Humanity, through the use of sensory organs and the resulting interpretation of physical events, succeeds in weaving a scientific tapestry of theory and experimentation. This textile masterpiece may be large enough to 'cover all bases' (in terms of explaining the underlying etiology of observations), yet it might not be made of the right material. How certain can we be that scientific observations are sufficiently objective? What role do the human mind and its modulation of sensory input play in creating reality? What constitutes objective fact, and how can we be sure that science is 'on the right track' with its model of empirical experimentation? Most importantly, is science at the cusp of an empirical 'dark age' in which the limitations of perception fundamentally hamper the steady march of theoretical progress? These are the questions I would like to explore in this article.

The main assumption underlying scientific methodology is that the five sensory modalities employed by the human body perform, by and large, uniformly. That is, despite small individual fluctuations in fidelity, the performance of the human senses is mostly equal across observers. Visual acuity and auditory perception are sources of potential variance, however the advent of certain medical technologies has circumvented and nullified most of these disadvantages (glasses and hearing aids, respectively). In some instances, such interventions may even improve the individual's sensory experience, exceeding 'normal' ranges through the use of further refined instruments. Such is the case with modern science, as the realm of classical observation becomes subverted by the need for new, revolutionary methods designed to observe both the very big and the very small. Satellites loaded with all manner of detection equipment have become our eyes for the ultra-macro; NASA's COBE orbiter gave us the first view of early universal structure via detection of the cosmic microwave background (CMB) radiation. Likewise, scanning probe microscopy (SPM) enables scientists to observe on the atomic scale, below the threshold of visible light. In effect, we have extended and supplemented our ability to perceive reality.

But are these innovations also improving the objective quality of observations, or are we being led into a false sense of security? Are we becoming comfortable with the idea that what we see constitutes what is really ‘out there’? Human senses are notoriously prone to error. In addition, machines are only as good as their creator. Put another way, artificial intelligence has not yet superseded the human ‘home grown’ alternative. Therefore, can we rely on a human-made, artificial extension of perception with which to make observations? Surely we are compounding the innate inaccuracies, introducing a successive error rate with each additional sensory enhancement. Not to mention the interpretation of such observations and the role of theory in whittling down alternatives.

Consensus cannot be reached on whether what I perceive is anything like what you perceive. Is my perception of the colour green the same as yours? Empirically and philosophically, we are not yet in a position to answer this question with any objectivity. We can examine brain structure and compare regions of functional activity, but the ability to directly extract and record aspects of meaning or consciousness is still firmly in the realms of science fiction. The best we can do is compare and contrast our experiences through the medium of language (which introduces its own set of limitations). As noted above, the human sensory experience can, at times, become lost in translation.

Specifically, the ability of our minds to disentangle the information overload that unrelentingly flows through mental channels can wane under a variety of influences. Internally, the quality of sensory inputs is governed at a fundamental level by biological constraints. Millions of years of evolution have resulted in a vast toolkit of sensory automation. Vision, for example, has developed in such a way as to become a totally unconscious and reflexive phenomenon. The biological structure of individual retinal cells predisposes them to respond to certain types of movement, shapes and colours. Likewise, the organisation of neurons within regions of the brain, such as the primary visual cortex in the occipital lobe, processes information in pre-defined ways. In the case of vision, the vast majority of processing is done automatically, reducing the overall level of awareness and direct control the conscious mind has over the sensory system. The conclusion here is that we are limited by physical structure rather than by differences in conscious discrimination.

The retina acts as both the primary source of input and a first-order processor of visual information. In brief, photons striking the back wall of the eye are absorbed by photopigments in specialised receptor cells (rods, which respond to light intensity, and cones, which respond to colour), triggering action potentials in attached neurons. Low-level processing is accomplished by the lateral organisation of retinal cells: neighbouring neurons communicate with one another and influence the likelihood of each other's signal transmission. This lateral communication facilitates basic feature detection (specifically edges, i.e. light/dark discrepancies) and motion detection.
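A minimal sketch of this lateral inhibition (assuming Python with numpy; the stimulus and weights are invented for illustration) shows how mutual inhibition between neighbours silences uniform regions while amplifying a light/dark boundary:

```python
# Lateral inhibition as a centre-surround weighting: each cell is
# excited by its own input and inhibited by its neighbours.
import numpy as np

# A one-dimensional 'retina' viewing a dark region beside a bright one.
stimulus = np.array([1.0, 1.0, 1.0, 1.0, 5.0, 5.0, 5.0, 5.0])

kernel = np.array([-0.5, 1.0, -0.5])
response = np.convolve(stimulus, kernel, mode="same")

# Interior cells only (the array ends carry zero-padding artifacts):
print(response[1:-1])   # [ 0.  0. -2.  2.  0.  0.]
```

Uniform regions cancel to nothing; the response dips and peaks only at the boundary, signalling an edge (compare the Mach band illusion).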

As with all the sensory modalities, information is then transmitted to the thalamus, a primitive brain structure that acts as a communications 'hub'; its proximity to the brain stem (mid and hind brain) ensures that reflexes are privy to visual input prior to conscious awareness. The lateral geniculate nucleus is the region of the thalamus that splits incoming visual input into three main channels (M, P and K). Interestingly, these channels stream the input into signals with unique properties (e.g. predominantly colour, or motion). In addition, cross-lateralisation of visual input is a common feature of human brains: the left and right fields of view are diverted at the optic chiasm and processed in opposite hemispheres (the left field of view from both eyes is processed on the right side of the brain). One theory as to why this system develops is that it minimises the impact of unilateral hemispheric damage, the 'dual brain' hypothesis (each hemisphere can act as an independent agent, reconciling and supplementing reductions in function due to damage).

We seem to fall back lazily on these automated subsystems, never fully appreciating and flexing the full capabilities of our sensory appendages. Michael Frayn, in his book 'The Human Touch', demonstrates this point aptly:

“Slowly, as you force yourself to observe and not to take for granted what seems so familiar, everything becomes much more complicated…That simple blueness that you imagined yourself seeing turns out to have been interpreted, like everything else, from the shifting, uncertain material on offer” (Frayn, 2006, p. 26)

Of course, we are all blissfully ignorant of these finer details when it comes to interpreting the sensory input gathered by our bodies. Consciousness works 'with what it's got', without a care as to the authenticity or objectivity of the observations. We can observe this first hand in a myriad of ways in which the unreal is treated as if it were real. Hallucinations are just one mechanism by which the brain is fooled. While we may know, to a degree (depending upon the etiology, e.g. schizophrenia), that such things are false, these visual disturbances are nonetheless able to provoke physiological and emotional reactions. In summary, the biological (and automated) component of perception very much determines how we react to, and observe, the external world. In combination with the conscious mind, which introduces a whole new menagerie of cognitive baggage, a large amount of uncertainty is injected into our perceptual experience.

Expanding outwards from this biological launchpad, it seems plausible that the qualities which make up the human sensory experience should affect how we define the world empirically. Scientific endeavour labours to quantify reality and strip away the superfluous extras, leaving only its constitutive and fundamental elements. To accomplish this task, humanity employs empirical observation. The segue between the biological foundations of perception and the paradigm of scientific observation involves a shared sensory limitation. Classical observation was limited by 'naked' human senses. As the bulk of human knowledge grew, so too did the need to extend and improve methods of observation. Consequently, science is now possibly realising the limitation of the human mind to digest an overwhelming plethora of information.

Currently, science is restricted by the development of technology. Progress is maintained only through the ingenuity of the human mind in overcoming the biological disadvantages of observation. Finely tuned microscopes tap into quantum effects in order to measure individual atoms. Large radio-telescope arrays link together for an eagle's-eye view of the heavens. But as our methods and tools for observing grow in complexity, so too does the degree of abstract reasoning required to grasp the implications of their findings. Quantum theory is one such warning indicator.

Like a lighthouse sweeping the night sky to signal impending danger, quantum physics, or more precisely humanity's inability to agree on any one interpretation that accurately models reality, could be telling us something. Perhaps we are becoming too reliant on our tools of observation, using them as a crutch in a vain attempt to sidestep our biological limitations. Is this a hallmark of our detachment from observation? Quantum 'spookiness' could simply be the result of a fundamental limitation of the human mind to internally represent and perceive increasingly abstract observations. Desperately trying to consume the reams of information that result from rapid progress and intense observation, scientific paradigms become increasingly specialised and divergent, increasing the degree of inter-departmental bureaucracy. It now takes a lifetime of training to even grasp the basics of current physical theory, let alone the time needed to dissect observations and truly grasp their essence.

In a sense, science is at a crossroads. One pathway leads to an empirical dead end: humanity exhausts every possible route of explanation. The other involves either artificial augmentation (in essence, AI that can do the thinking for us) or a fundamental restructuring of how science conducts its business. Science is in danger of information overload; the limitations introduced by a generation of unrelenting technical advancement, and by increasingly complex tools of observation, have taken their toll. Empirical progress is stalling, possibly due to a lack of understanding on the part of those doing the observing. Science is detaching from its observations at an alarming rate and, if we aren't careful, is in danger of losing sight of what the game is all about: the quest for knowledge and understanding of the universe in which we live.