The symptoms characteristic of Alice in Wonderland syndrome are usually subtle, in the sense that they tend to affect only a minor aspect of a person’s full perceptual experience. As a consequence, everything is perceived just as before, except that, for example, all vertical lines are slanted, as happened to Paul; or time is found to slow down, the way Neo experiences it in The Matrix [1]; or one’s body is experienced as shrinking to the height of a thimble, as happened to Alice in the story and to Ms. Rembrandt in real life; and so on, in accordance with any of the numerous variations that we have encountered so far, and indeed many more. Sometimes these symptoms occur in conjunction with each other, although they mostly present as isolated perceptual distortions, leaving the rest of what we perceive intact. What is more, even when several symptoms are present, they tend to be experienced in the same sensory modality, being all visual in nature, or all somatosensory, and so on. As far as we know, based on the limited number of extant case descriptions, only 15% of all people with Alice in Wonderland syndrome report symptoms in more than one sensory modality [2]. The remaining 85% report unimodal experiences—which, as we just saw, tend to consist of a single symptom.
To understand how it is possible that only a single aspect of our perceptual experience is altered, whereas the rest remains the way it has always been, we need to turn to neurobiology and realise that the perceptual system is not like a video camera (which offers a series of snapshots representing scenes in front of the lens) or a microphone (which converts vibrations in the air into electrical signals) or even a computer (which merely stores, retrieves and manipulates data). Instead, it is a vast neural network with numerous larger and smaller circuits, connected in a hierarchical manner, which together process perceptual information at numerous disparate locations and in numerous ingenious ways. Higher cortical circuits ensure that, in the end, all that information is reassembled in such a way that the resulting percept is coherent and complete, to the extent that, in visual perception, even the blind spot is ‘smoothed over’ and, in audition (hearing or listening), words are ‘filled in’ when our reception of them is hazy. Moreover, to produce such a complete and coherent whole, these higher cortical circuits depend on the workings of lower-level circuits, all the way down to tiny stacks or clusters of individual nerve cells called cortical columns. It is said that ‘the devil is in the details’—and that certainly holds true for Alice in Wonderland syndrome.
Ironically, perhaps, initial research on those tiny cortical columns, which are located in a part of the brain called striate cortex, was carried out at almost the exact time that Todd was busy designing his syndrome of Alice in Wonderland. During the 1950s, unbeknownst to each other, Todd was working away on his clinical syndrome at Littlemore, while two scientists on the other side of the Atlantic Ocean conducted basic research on the workings of the cat’s striate cortex. These two men were David Hubel (1926–2013) and Torsten Wiesel (b. 1924), who, in 1981, went on to receive a shared Nobel Prize in Physiology or Medicine for their groundbreaking contributions to unravelling the neural machinery that underlies visual perception. Higher circuits of the perceptual system may be able to iron out some of the system’s inherent ‘flaws’ by attaching meaning to incomplete data (as with the eye’s ‘blind spot’) and by suppressing information that is deemed irrelevant (as in the case of afterimages and floaters)—however, in the end, they have to deal with the raw material supplied by lower-level circuits, such as those conceptualised by Hubel and Wiesel. In accordance with the principle of ‘garbage in, garbage out’, the raw material affects our perceptual experience as a whole and, if corrupted, alters it in ways that higher cortical circuits are unable to correct, no matter how hard they try to ‘smooth things over’ or ‘fill in the gaps’.
My thanks for your most informative message and attachment. I did not know or at least don’t remember anything about the syndrome so well described in the attachment. Even though I grew up in a mental hospital where my father was director and did some psychiatry as a young doctor, I have unfortunately in my old age (94 years in a few months) lost my interest in trying to understand or speculate about the possible neural basis of various mental states.
That Professor Wiesel, at his advanced age, no longer invested his precious time in trying to fathom the neural basis of perceptual distortions was, of course, totally understandable. Nevertheless, I could not help but ask myself what unforeseen directions his work might have taken if, during his active years, he had been aware of Todd’s approach to psychopathology and of the possibilities it offered for a clinical application of his own celebrated work on cortical columns. In what follows, I therefore seek to incorporate Torsten Wiesel’s work—and that of his long-standing scientific partner, David Hubel—into what we already know about Alice in Wonderland syndrome. So here we go, examining a line of research carried out during the 1950s, to see how it can help us understand another line of research from the 1950s, one which has only recently started to move into the spotlight.
5.1 Visual Distortions
The visual distortions described by Todd in 1955—that is, metamorphopsias—have since been reported in around 90% of all published case descriptions of Alice in Wonderland syndrome [2]. Even in the absence of prevalence rates from systematic, large-scale studies—which simply have not been carried out—these figures accord with my own clinical impression that the visual modality is indeed the one most frequently affected. It is therefore tempting to believe that metamorphopsias are also the most important symptoms of Alice in Wonderland syndrome, although, obviously, we are currently not in a position to say that. What we do know is that metamorphopsias present in many shapes and varieties. In the preceding chapters, we already met several persons suffering from widely varying types of metamorphopsia. For those who wish to see what else may go wrong in the visual modality when a person suffers from Alice in Wonderland syndrome, Table A.2 (Appendix A) provides an overview of all the metamorphopsias described in the medical literature. Although it lists more than 40 types, even that collection is not exhaustive. For instance, at my outpatient clinic in The Hague, I sometimes hear people describe the corners of a room being torn open, like those of a worn-out shoebox, and its contents being sucked through the cracks, leaving them behind in an empty room. As far as I know, there is no literature on this peculiar visual phenomenon and, to be honest, I have no clue as to what it should be called, let alone how it should be explained in terms of underlying neurobiological processes.
Fortunately, other metamorphopsias are straightforward in their presentation—even though they may puzzle those who experience them, they can often be easily connected with parts of the visual network responsible for their mediation. In many cases, these are circuits—or relatively small components of circuits—at the lower levels of the perceptual system’s hierarchical organisation. To get an idea of how these lower-level functional units affect our visual perception as a whole, let us recall our high-school biology lessons: we were taught that all visual perception starts out with light rays (or photons) falling on an object and being reflected off its surface, to then enter the eye and pass through the cornea and lens, which together converge those light rays in such a way that they project in an inverted fashion on the retina. The retina, as is well known, is a light-sensitive membrane at the back of the eye, which contains rods and cones that react to the incoming photons by producing neural impulses, or weak electrical currents. From both eyes, these neural impulses are relayed along the optic nerves (which lead straight into the brain) and meet each other at the optic chiasm, a neural intersection within the brain itself, where half of the nerve fibres from each eye cross over in such a way that information from the left part of the visual field of both eyes ends up at the right side of the brain and vice versa. From the chiasm, the impulses travel on (via a relay station in the thalamus) to a part of the brain, located way back in our head, that we know as ‘visual cortex’. When that part is activated, or so we learned a long time ago, we see the object in front of us. As we realised back then, we do not actually ‘see’ with our eyes but with our brain—more specifically, with that part of the brain that is removed as far as possible from our eyes.
I am certain that no engineer would have designed a brain this way—but, then again, we are light years away from designing anything that comes even close to the human brain.
There is a philosophical side to this account of visual perception (which I shall bypass here) that challenges the notion that neurophysiological activity in visual cortex can indeed be equated with ‘seeing’. Proponents of this critical tradition tend to offer a dualistic model that designates ‘seeing’ as an emergent mental state of brain activity—or as a parallel state, for that matter—but, at any rate, they dispute the notion that neurons firing in visual cortex (‘matter’) can be equated with the event we call seeing (a ‘mental state’). Irrespective of whether or not we should consider that line of thought worth pursuing, there is another problem with the traditional account of visual perception that requires our attention, namely, the somewhat simplistic way in which it describes the route taken by visual information from the eyes towards those areas that make us aware of its contents (whether or not through that extra little hop into the ‘mind’).
The part of the perceptual network traditionally endowed with making us aware of complex visual scenes is called visual association cortex. However, what we never learned in high school is that visual association cortex depends on numerous signals from the visual network as a whole, including signals from specialised groups of neurons, which are thought to be similar to the cortical columns described by Hubel and Wiesel on the basis of their famous animal experiments.
Incidentally, the notion of cortical columns was not Hubel and Wiesel’s own invention, but was introduced by another American neuroscientist, Vernon Mountcastle (1918–2015). Upon hearing about his work, Hubel and Wiesel decided to search for similar microstructures in a part of the brain that hardly anyone seemed interested in at the time, that is, visual cortex. They did so by making single-cell recordings of neurons inside the striate cortex of cats. It was a purely experimental endeavour, initiated at Johns Hopkins University in 1958 and continued for many years at Harvard, where the two of them explored these parts of the cat brain much in the way seamen in Columbus’ time crossed the ocean—without a clear idea as to where they were headed. The bodies of the cells they studied, all the way in the back of the cat’s brain, had a diameter of 4–100 μm, that is, four-thousandths to one-tenth of a millimetre, which is almost too small to picture, especially if you use an ordinary ruler for reference. In their attempts to activate those tiny cells, they first taught themselves the daunting technique of inserting microelectrodes into the brains of live cats. They did that by drilling a 2–3-mm hole through the skull and the underlying dura mater (the thick membrane enveloping the brains of mammals) and then sticking a small tube through it. After waxing the space between skull and tube, they inserted a tungsten electrode through the tube and manipulated it (without anything as fancy as X-ray guidance) into the cat’s striate cortex. If that was not challenging enough, they subsequently started shining tiny beams of light into the cat’s eyes, thus projecting circular spots of light onto its retina, as well as black spots against a light background, red lines through a slit, and so on in numerous variations, hoping that one of those light stimuli would call forth a response from the single, tiny cell connected to their microelectrode.
The break came one long day in which we held onto one cell for hour after hour. To find a region of retina from which our spots gave any hint of responses took many hours, but we finally found a place that gave vague hints of responses. We worked away, in shifts. Suddenly, just as we inserted one of our glass slides into the ophthalmoscope, the cell seemed to come to life and began to fire impulses like a machine gun. It took a while to discover that the firing had nothing to do with the small opaque spot—the cell was responding to the fine moving shadow cast by the edge of the glass slide as we inserted it into the slot. It took still more time and groping around to discover that the cell gave responses only when the faint line was swept slowly forward in a certain range of orientations. Even changing the stimulus orientation by a few degrees made the responses much weaker, and an orientation at right angles to the optimum produced no responses at all [3].
That is how the two of them got a single cortical neuron to respond to the cat’s perception of lines at a vertical orientation. It was a major accomplishment, and a major discovery, which formed the basis for their now classic paper, published in 1959 in the Journal of Physiology [4]. In it, Hubel and Wiesel described what they called ‘orientation columns’. Like Mountcastle before them, they thought of these tiny units as vertically organised stacks of cortical neurons—literally stacks of nerve cells that reacted by turning either ‘on’ or ‘off’ in response to visual stimuli of lines under a certain angle. After this first cortical column, they described similar ones that responded to black lines but not to white ones, ones that did the reverse or responded to both, ones that responded to red lines but not to black or white ones, ones involved in stereo vision, and so on—a seemingly endless variety of tiny functional units of visual information processing, detailed in the flurry of papers that followed. We now also know of cortical columns specialised for colour, movement direction, spatial frequency and myriad other variables. Contrary to what was initially thought, many of these tiny functional units turned out to react to more than one particular stimulus. Moreover, many of them do not appear to consist of literal stacks of cells, but rather of clusters of cells without any well-circumscribed borders. And yet the central notion of very small, yet highly specialised functional units has remained, along with the notion that these are found at the lower levels of the hierarchically organised visual network [5].
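For readers who like to think in code, the behaviour Hubel and Wiesel observed—a brisk response at the preferred angle, a weaker one a few degrees away, and none at all at right angles—can be caricatured as a bell-shaped tuning curve. The following Python sketch is a toy model of my own devising, not a description of their data; the Gaussian shape and the bandwidth value are illustrative assumptions only.

```python
import math

def orientation_response(stimulus_deg, preferred_deg, bandwidth_deg=15.0):
    """Toy Gaussian tuning curve for an orientation-selective unit.

    Orientation is circular with a period of 180 degrees: a line at
    0 degrees and a line at 180 degrees are the same stimulus.
    The bandwidth is an arbitrary illustrative choice.
    """
    # Smallest angular difference on the 180-degree orientation circle
    diff = abs(stimulus_deg - preferred_deg) % 180.0
    diff = min(diff, 180.0 - diff)
    return math.exp(-(diff ** 2) / (2 * bandwidth_deg ** 2))

# A unit preferring vertical lines (90 degrees): strong response at 90,
# weaker a few degrees off, essentially none for a horizontal line.
for angle in (90, 95, 110, 0):
    print(angle, round(orientation_response(angle, preferred_deg=90.0), 3))
```

The point of the sketch is merely that selectivity is graded, just as Hubel and Wiesel found when tilting their faint line away from the cell’s optimum orientation.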
In the intervening decades, studies have indicated that such tiny units also exist in human visual cortex [6]. The reason why neuroscientists are still not fully certain about their exact nature, and even doubt whether, in humans, they have the shape of vertical stacks of cells (as proposed by Hubel and Wiesel), is that post-mortem studies of the human brain consistently fail to show any anatomically segregated cellular columns with a vertical orientation, and thus fail to provide visual confirmation of their existence. As a consequence, we have had to rely on the study of live human brains to find out whether they exist. To do so, neuroscientists have projected light stimuli with lines and gratings onto the retinas of test persons and sought to measure neural activity in the backs of their heads. For obvious reasons, they did not do so by inserting tungsten electrodes into the brains of their test persons, the way Hubel and Wiesel had done with their cats, but rather by inviting the participants to lie down inside an MRI scanner and then measuring changes in the activity of their brains concomitant with the visual stimuli projected onto their retinas. Even though the spatial resolution of today’s MRI scanners is too coarse to get functional units of such a minute size to register individually, under these circumstances the brain does respond selectively to retinal stimulation with lines and gratings of different orientations. What is more, functional MRI studies indicate that small-scale units that respond to lines of different angles lie intermingled in striate cortex, just the way Hubel and Wiesel had reported in cats and other mammals [7]. That is how we know that their findings are just as relevant to human visual perception and that (even though we may still have to establish the exact details of the way things work in humans) we, too, depend on the proper functioning of numerous discrete cortical neuron populations for the processing of visual stimuli.
Thus, because of the legacy of Hubel and Wiesel, and the tentative confirmation of their findings by imaging studies in humans, I was able to tell Paul and his mother that the reason why he saw everything as slanted was probably that something was wrong with his orientation columns. More specifically, my guess was that, in his case, the orientation columns for vertical lines had ceased to switch on when they should, and that, in their stead, those for oblique lines had started to switch on, making him see all vertical lines as slanted. Because the size of cortical columns prevents them from showing up individually on MRI scans and, moreover, any focal electrophysiological activity that might deprive them of their function is too subtle to register on an EEG, it was no wonder that these auxiliary investigations had shown nothing out of the ordinary in Paul’s case. Nevertheless, my guess was that the orientation columns for oblique lines had taken over the function of those for vertical lines, probably because they were no longer being switched off by lateral inhibition. Lateral inhibition is a process by which neighbouring neuron populations with different functions cancel out their next-door neighbours when they themselves are activated. It is a tried and tested mechanism, conserved throughout evolution, that allows for maximum signal strength and minimal interference. In Paul’s case, the orientation columns representing vertical lines seemed to have lost their ability to switch on and, thereby, also their ability to suppress the spurious turning-on of adjacent columns, which represented oblique lines.
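To make the mechanism of lateral inhibition concrete, here is a deliberately simplified sketch in Python of three neighbouring ‘columns’ inhibiting one another. The drive values, the inhibition strength and the update rule are all assumptions chosen purely for illustration; this is a cartoon of lateral inhibition, not a physiological model of Paul’s striate cortex.

```python
def settle(drive, inhibition=0.6, steps=20):
    """Cartoon of lateral inhibition among neighbouring units.

    Each unit's activity is its feed-forward drive minus inhibition
    from its immediate neighbours, iterated until the network settles.
    All parameter values are illustrative assumptions.
    """
    act = list(drive)
    for _ in range(steps):
        new = []
        for i, d in enumerate(drive):
            # Sum the current activity of the left and right neighbours
            neighbours = act[i - 1] if i > 0 else 0.0
            neighbours += act[i + 1] if i < len(act) - 1 else 0.0
            # Activity cannot go below zero
            new.append(max(0.0, d - inhibition * neighbours))
        act = new
    return [round(a, 2) for a in act]

# Three columns, tuned to (say) 80, 90 (vertical) and 100 degree lines.
# Healthy case: the strongly driven vertical column suppresses its neighbours.
print(settle([0.2, 1.0, 0.2]))
# Paul's case: the vertical column fails to switch on, so its oblique
# neighbours are no longer inhibited and fire unopposed.
print(settle([0.2, 0.0, 0.2]))
```

In the first run, the middle unit silences the flanking ones, as it should; in the second, with the middle unit switched off, the flanking units keep firing—a rough analogue of oblique columns taking over when the vertical ones fall silent.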
Incidentally, orientation columns are not the only functional units to be found at this very low level of organisation of the brain’s visual system. Some of the cortical columns described by Hubel and Wiesel have a function in stereo vision, which is a prerequisite for depth vision. As you will remember, Alice, at some point, encounters the gardeners with their flat, oblong appearance, to be joined later by the soldiers and the King and Queen of Hearts, who are all depicted as live playing cards. When in real life we see all things as flat, the way Alice perceived these characters, we speak of loss of stereoscopic vision, a symptom we encountered before. This type of metamorphopsia can be caused not only by the loss of function of one eye, but also by the brain itself, when it ceases to create the illusion of three-dimensionality, a function that we are so accustomed to that we normally do not even think about it. The cortical columns responsible for this are the ones devoted to stereopsis, located in striate cortex, or V1, along with cells with a similar function in area V3/V3A, identified by Hubel and Wiesel in their later work with monkeys and, from then on, referred to as binocular depth cells [8].
However, the human visual system is not merely a collection of tiny stacks of neurons switching on or off in response to specific input signals. While such tiny ‘stacks’ (or whatever their actual shape may be in humans) are certainly part of it, and an important part for that matter, we should not forget that the system as a whole is a hierarchical network and that, ‘higher up’ in the network, much larger neuron populations are active—some of which, however, also have specialised functions. An example is visual area 4, or V4 for short, a relatively large structure located in the ventral occipital lobe, which used to be called the brain’s ‘colour centre’. Even though we now know that other cortical areas also play a role in the processing of colour (ranging from V1 to V3), area V4 nonetheless plays a major role in it. Philosophers have asked themselves for millennia whether colours exist ‘out there’ in the world where we see them, or whether they might perhaps be created by ourselves. When we look at a field full of bright red poppies bathing in the sunlight, is the colour red inherently present in their petals, or do those petals somehow trigger us to see the colour red? Plato (c. 427–347 BC), and even the Presocratics (none of whom had any notion of the workings of the brain, let alone of the visual network), had elaborate theories on how colour is created by ‘the fire emanating from our eyes’ and ‘the fire emanating from objects’, thus already debating the naïve point of view that colours simply exist ‘out there’ in the outside world, ready to be registered by merely looking at them [9].
The interest of these ancient philosophers in the origin of colours may well have been fuelled by people with colour blindness, who may fail to distinguish between red and green, for example—or by the writings of Homer, whose description of the sea as ‘wine-dark’ and other peculiar remarks on colour have led historians to believe that he may have been colour-blind as well [10]. During the 17th Century, the debate on colour was given a facelift by posing the question whether colours are ‘primary’ or ‘secondary’ qualities of objects, that is, whether they are inherently present in objects themselves, or whether they are produced by those objects in an observer [11]. During the 19th Century, Nietzsche, in his characteristically bold style, took that question to a whole new level by offering:
We observe all things through our human head, and cannot cut that head off; and yet the question remains what would be left of the world if you cut it off anyway [12].
It is not difficult to see what that question implies for colours: Would we still be seeing colours if we were able to look at the world without having to use our head and the perceptual system contained therein? The question is unanswerable, of course, since we cannot perceive anything without engaging the perceptual system. But at least we now know that, without area V4, we would only be seeing things in black and white, whether or not colours actually exist ‘out there’.1 What the philosopher with a hammer must have suspected, well over a century ago, is something that empirical science has taught us in the meantime, namely, that the subjective experience of colour does indeed depend on the brain. Without V4, or with V4 temporarily switched off, we suffer from achromatopsia, a type of metamorphopsia characterised by a lack of colour [13].
Being much larger than the orientation columns, and occupying a place much higher up in the visual network, V4 not only subserves the perception of colour but also plays a role in the processing of shape, texture, brightness, orientation, curvature, stereopsis and motion. Regarding motion, however, another part of the brain is of even greater importance. That part is the middle temporal visual area, also known as V5, another relatively large neuron population, located in extrastriate cortex. You may think that the seeing of movement is simply a matter of looking at some object moving through your visual field; however, we need V5 to actively add the factor of movement for us; otherwise, we are incapable of visually perceiving it. That may sound rather abstract—but let us see what happens when this mysterious structure stops doing what it normally does so effortlessly that you and I do not even think about it as we go about our lives.
Some time ago, a man in his 50s, Mr. Janssen, had been referred to me because he had been inactive for some 30 years. Not just inactive in the sense of hanging around with friends in bars or occupying himself with hobbies and chores when he should have been pursuing a career, but really, almost totally inactive. This is something we see occasionally in people suffering from chronic psychosis; one can imagine that it may also affect those suffering from chronic depression or untreated catatonia, although, even in those cases, 30 years would be an unusually long period of time for someone to be that inactive. Because of his extreme lethargy, for more than half of his life, Mr. Janssen had been treated on-and-off with antidepressants, antipsychotics and numerous other types of medication, without any result other than that he became even more lethargic due to the side effects. The physician who referred him to me had taken over duties from his predecessor and had been keen enough to distil from the man’s story of chronic inactivity that there was something wrong with his visual perception. He was unable to tell what it was, however—and neither could Mr. Janssen himself. Nevertheless, this new physician insisted that I see him for a consultation, in the hope that I would be able to pinpoint what had been wrong with him all those years.
When I saw Mr. Janssen, together with one of my residents, Maaike van Gent, we made the acquaintance of a remarkably lively man, a bit short of stature, but looking quite youthful in his sporty clothing, who did not strike us at all as someone suffering from chronic psychosis or depression. Nevertheless, he confirmed that he had not been able to do much more than lie on the couch ever since his early twenties. He used no substances worth mentioning and said that he would certainly like to work if he could, but that he could not. When we asked him about his visual perception and showed him the PowerPoint presentation with images of metamorphopsias, he recognised none of them, except for the one that shows a ballerina with a series of copies of herself trailing behind her, photographed with the aid of a stroboscope. What he recognised specifically was how she was depicted as if moving by leaps. As Maaike guessed correctly (and you may also have guessed by now), it turned out that he had suffered for the past 30 years from akinetopsia, a very rare type of metamorphopsia characterised by the inability to perceive movement [14]. Thus, instead of seeing people walk from A to B, Mr. Janssen saw them popping up in different places as they passed through his visual field, as if they temporarily ceased to exist and were then recreated a fraction of a second later in a different location; you will get the idea if you have ever seen people dancing in a disco under stroboscopic light.
It might seem a minor inconvenience in comparison with the hearing of derogatory voices, or with the debilitating effects of a major depression, but in everyday life we are so dependent on V5 and its function in the perception of movement (whether in traffic or in the relatively safe environment of our homes) that it is indeed almost impossible for us to function properly without it.2 Come to think of it, with this condition we cannot even watch TV—something that even elderly people with advanced dementia can usually still do.
To further illustrate what V5 does, let us have a look at the logical opposite of akinetopsia, a condition called Riddoch’s phenomenon, characterised by a combination of cortical blindness and an intact ability to see movement. This curious phenomenon was first described in 1917 by George Riddoch (1888–1947), a Scottish neurologist. Before becoming a neurologist, Riddoch had served during the Great War as a temporary captain in the Royal Army Medical Corps, in which capacity he had come into contact with soldiers who had been shot in the back of the head and had miraculously survived, even though some of them had lost their entire visual cortex. This had rendered them cortically blind, meaning that they were unable to see anything, even though their eyes were still functioning properly and the neural impulses generated by their retinas were still being relayed all the way to the back of their heads—where, however, they encountered a gaping hole where visual cortex should have been. What Riddoch discovered was that some of these cortically blind soldiers were indeed blind to light, colours and shapes, but not to movement. It may be hard to imagine, but what these unfortunate soldiers saw was pure movement—without any objects or scenes being moved. As recounted by those who had fought side-by-side with these cortically blind men, some of them had allegedly still been able to aim correctly as soon as enemy soldiers came storming in the direction of their trenches. This only makes sense if we realise that V5 is not located right at the very back of the head, but slightly more to the front, and that it may thus stay intact when visual cortex is destroyed.
Over the years, many people have told me that they see stationary objects moving in their direction (or, alternatively, moving away from them) or that they see movement in the walls or in other inert surfaces. In such instances, V5 may have spuriously turned ‘on’ when it should not have. The same probably holds true for a phenomenon I myself am familiar with, which is the seeing of movement in the corner of an eye. Sometimes, while standing in the kitchen or sitting on the couch, I have the sensation that one of our cats is moving through the periphery of my field of vision. However, when I turn to look, nine times out of ten, there is no cat. When it happens again, and I focus on what I actually see, I always realise that all I perceive is movement. I merely think of a cat because they walk through our house all the time, but that is not what I see in such instances. The reason this happens is spurious activity in the rods, the photoreceptors located in the periphery of our retina, which may become so sensitive to movement—especially after 40 years of age—that they sometimes erroneously relay signals to V5 when nothing is actually moving.
As these examples indicate, metamorphopsias are distortions of highly distinct aspects of our visual perception, caused by dysfunction of equally distinct constituents of the brain’s visual network. However, we need to remember that those structures function in the wider context of the perceptual system as a whole. That certainly holds true for the fusiform gyrus, another bilateral structure, located at the base of the brain, which we already met in the context of Dodgson’s inability to distinguish individual faces (i.e. prosopagnosia). The same bilateral structure was the one we had initially considered responsible for generating the dragon faces perceived by Mrs. van Nuys—although our MRI scans did not confirm that. The fusiform gyrus has an important function in the identification, recognition and representation of faces, in conjunction with the face-representing network as a whole; when it dysfunctions, our perception of faces may be altered in such a way that single aspects (such as eyebrows, eyes or mouths) are grotesquely deformed in each and every face that we perceive or, alternatively, in our own face when we see it in the mirror. The latter type of prosopometamorphopsia, as this type of visual distortion is called, was what Willem suffered from. Looking in the mirror, he sometimes saw his face being indented by a brick-like shape, with one eye protruding over the edge of its socket and gradually slithering down his cheek. As he never observed this bizarre deformation in other people, we may conclude that the perceptual system apparently makes use of different mechanisms, in all likelihood sustained by different parts of the fusiform gyrus, to represent one’s own face versus the faces of other people. As we saw, Ms. Rembrandt had similar experiences, perceiving her own face in the mirror as large and bloated and furrowed, whereas other people’s faces remained unaltered. Mrs. van Nuys, on the other hand, used to see other people’s faces take on the likeness of a dragon. What made her case so special, and indeed so rare, is that the deformation was not limited to a single facial feature such as an eye or a nose, but that all aspects of people’s faces were replaced by the facial features of dragons.
These examples confirm that human visual perception does indeed depend on the orchestrated action of numerous smaller and larger neuron populations with specialised functions for the encoding of different aspects of our visual input. Some of those functions involve the representation of the tiniest imaginable units of visual perception, such as lines of a certain orientation, or light–dark contrasts, whereas others involve extremely complex configurations such as human faces. In order to render an accurate representation of scenes in the outside world, the higher cortical centres of the brain need to be able to rely on the input from all those numerous units to piece those scenes together. The examples given above may serve to give us an impression of what may go wrong when those units at the lower (and sometimes higher) levels of organisation fail to do what they should do and—while we are at it—may fill us with awe that things do not go wrong more often in our everyday lives.
A final point that I wish to touch upon for now is something that was hinted at in the story of Mrs. van Nuys, but not yet fully elucidated. As we saw, the transformation of human faces into dragon faces that she experienced was always preceded by a symptom-free interval of one to several minutes. Such a latency period is in fact reported by many people suffering from visual distortions. It is such a striking and consistent feature that even the early pioneers in the area of visual perception did not fail to notice it [15–17]. Speculating about its cause, they came up with the name cerebral asthenopia to convey the idea of a heightened fatigability of otherwise normally functioning neuron populations. Since, in such cases, the perceptual system is apparently capable of representing faces and other visual input without distortion (for however brief an interval), the underlying neural mechanism must be structurally intact. The fact that, after several seconds or minutes, these flawless representations give way to distorted ones means that the neuron populations involved cease to exert their normal function. Given that we do not know exactly what causes this to happen, ‘heightened fatigability’ seems a pretty good name for the neural mechanism that leads up to metamorphopsias. Since no one knows which biochemical process (or processes) may underlie that central mechanism, this is a topic that certainly needs exploring in future studies.
5.2 Distortions in Other Sensory Modalities
Although other types of distortion appear to be considerably rarer, they are just as spectacular as the visual symptoms of Alice in Wonderland syndrome. Moreover, the way they are mediated is at least as fascinating. To get an idea of how it is possible that people may sometimes experience their body as becoming larger or smaller than it is, or see their hands change into witches’ claws, or suddenly have the sensation that their skull is open to the air, allowing the wind to sweep over their unprotected brain (as one of my patients once described to me in great colour and detail), we need to realise that we are never in direct contact with our body.
Yes, you just read that: we are never in direct contact with our body.
You may think that you know your body pretty well, but you don’t.
How could that be? After all, we look at ourselves numerous times per day, glancing in the mirror in the bathroom, in the hallway, in the dressing room, in our car, at school, at work; and that is not even to mention all those other places where we see ourselves visually represented nowadays: in the reflecting surfaces of shop windows, in photographs, on film, on the closed-circuit TVs of surveillance cameras, in selfies on our phone, and so on, perhaps in more places than many of us feel comfortable with. And yet all that imagery does not bring us into direct contact with our bodies. Not even when we use the simplest device of all, the mirror.
Mirrors go a long way back, to 8000 years ago, when the first ones were probably made in what is now called Turkey. They were genuine works of art, shiny and slightly convex, made out of a volcanic glass called obsidian [18]. Obviously, the common people had no idea that they even existed. The few mirrors that were so skilfully manufactured were locked away inside palaces, to be used only by emperors and sultans, and to be showcased by them in front of envious guests. Long after that came mirrors of polished bronze and copper, whereas metal-coated glass mirrors have only been available since somewhere between the first and third centuries AD—initially, again, only to the extremely wealthy, but today, obviously, to anyone with a bit of money or the resourcefulness to pick one out of a dumpster. For us, today, surrounded as we are by mirrors and other reflecting surfaces, it is astonishing to realise that for millennia, the majority of people hardly ever got to see their own face except, perhaps, when they occasionally looked down into a pool of water or found some other shiny surface that might reflect their features, such as a knife or a polished piece of marble; even then, they could hardly have been expected to get a proper, undistorted look. Since nowadays most of us cannot manage to get through a single day successfully avoiding a representation of our own face, we tend to think that we have a fairly accurate idea of what we look like. However, as I said, looking at ourselves in a mirror does not bring us into direct contact with our bodies. In fact, the story about Willem, who sometimes saw his face as if bashed in by a brick, should have already made us suspicious of what we ourselves are looking at when we look into a mirror. If there was such a huge difference between the face that Willem saw and the one he felt with his fingertips, what exactly did he look at—and what exactly do we all look at, when we look into a mirror?
It may sound obvious, but when we look into a mirror, all we see is light reflected back at us, giving us an impression of colours and shapes and movement. As we know by now, we do not even see all that with our eyes. Our eyes help us to converge the light rays onto the retina and to thus generate neural impulses that travel all the way to the back of our brain. Eventually, it is the brain that actively creates an image of what our body is supposed to look like. Even though it does so on the basis of the light reflected back to us through the mirror, it creates that image on its own, the way it creates images of everything that we observe. Since we already know that the visual system is not like a video camera, and that it has quite a few degrees of freedom, you will realise that it does not necessarily represent our bodily features one-on-one.
Knowing that, you will probably also realise why so many people have difficulty, when merely looking into a mirror, in assessing whether they have gained or lost some weight. I myself love my daily rounds through the park in our neighbourhood—nothing spectacular, just a bit of cardio to stay reasonably in shape—and, having done so steadfastly for several years, I must have lost some 20 pounds in the process. Nevertheless, when I look into the mirror, I do not really have the impression that I am any leaner than before. It is only when I meet people who have not seen me for a while that I hear them say, ‘Hey, you look so much thinner … have you been running or something?’
So, our brains create a visual image of what our bodies look like. But then again, we also feel a certain way about our bodies, don’t we? We can feel whether we are fat or not—by touching our belly with our hands, for example, and squeezing those tender love handles between our fingertips—but even without such tactile information, your body just feels a certain way, from the inside, on the basis of which you probably have a fairly accurate sense of how tall you are, how fat you are, and what your body must be like on the whole. That was probably how people—before mirrors became a household product—nevertheless had some idea of what they looked like. However, just as the visual system conjures up a visual image of the body that is not necessarily reliable, the somatosensory system does something similar with bodily sensations. On the basis of numerous impulses from within the body, and from stimuli that interact with any of its outer surfaces, it creates a representation of the body, which is called the ‘body image’. The body image (also referred to as body schema and image de soi in the older literature) is an active product of the brain, which determines how we experience our bodies. When we direct our attention at our bodies, what we get, instead, is the latest version of our body image; however, this image may not always easily catch up with changes to our actual body and may, occasionally, deceive us completely.
We have all heard about people who lost a limb in an accident and, subsequently, went on to experience their missing arm or leg as if it were still there: the so-called ‘phantom limb’—simply being there or hurting or itching or conveying the sensation that it is bent or twisted in an uncomfortable position [19]. Apparently, even a dramatic change such as the loss of a limb may give the body image a hard time catching up. However, when this system goes truly off the rails, it can do so spectacularly. We already know how dysfunction of cortical columns (and sometimes larger neuron populations) in visual cortex can change the world-as-we-see-it, subtly yet profoundly. Something similar can happen to the somatosensory system, which is responsible for conjuring up our body image. Did you ever wonder why so many women (along with a few men) who suffer from anorexia nervosa experience themselves as grossly overweight, whereas it is clear for anyone who cares to look beyond the camouflage of their oversized clothing that they are all skin and bones? And what should we make of those who suffer from a rare condition called ‘supernumerary phantom limb’, who have the sensation of having an extra arm or leg [20]? Obviously, those people are perfectly aware that they were born with just two arms and two legs and could not have grown an extra limb in the meantime; however, they might be so convinced of their predicament that they plead with a surgeon to remove the extra limb—or, more likely, with a series of surgeons, since they may be persistent at this, and surgeons do not tend to apply a scalpel to things they cannot see.
I myself was once consulted by a former teacher in his fifties, called Mr. Müller, who had the continuous sensation that the muscles and vertebrae in his back were moving—to such an extent that they seemed to be relocating themselves—and who, moreover, had the recurring sensation of a wing-shaped piece of flesh fluttering on top of his right shoulder. He referred to that piece of flesh, which he could see and feel, as ‘a gill’ (since it reminded him of the gills of fishes). When he came to see me, he had experienced these highly disturbing sensations for 7 years and had not been able to work ever since the fluttering had started. After I had examined his back, his shoulders and his neck, and had told him that there was nothing out of the ordinary to be seen, I went on to explain that, instead, his body image appeared to be playing tricks on him. I took my time drawing a brain and elaborating on the somatosensory areas and, when I was finally finished, he nodded understandingly. He was a smart man who had lectured in front of highly gifted children, so he had no trouble following my gist. However, since I was not entirely sure that I had convinced him so quickly, I then suggested, purely hypothetically, that there might be a surgeon on duty in our psychiatric hospital whom we could call and ask to remove this fluttering ‘gill’ of his. Although I had made it abundantly clear to him that this was only a hypothetical proposal, he immediately consented, full of hope. It pained me to see the spark of hope in his eyes, and I was also shocked to see how utterly convinced this intelligent man was of the reality of his condition.
Incidentally, Willem, the student whom we met several times before, also experienced bodily distortions. Once, as he told me, he had seen his left hand grow to about twice its size, whereas the right one had retained its normal proportions. Looking on in amazement at the long, pointy fingers he appeared to have grown on his left hand (with an old, weathered aspect, like that of furrowed branches), he was even more amazed when an additional pair of fingers started to grow out of the same hand, at an angle of 90° to his actual fingers. In Willem’s case, this lasted only a few minutes. Being accustomed to all the perceptual distortions he experienced so often, and having a unique insight into his own situation, he never believed that his hand had actually been deformed this way. And yet that was exactly what his body image told him—and, thus, what he perceived.
Taking things to an entirely different level, people may even experience their bodies to be altered so profoundly that they are convinced that they have changed into something different. Thus, they may believe that they are no longer human and have changed into a dog, a wolf or some other animal. The medical literature indicates that this is extremely rare, since no more than 60 cases have been published on this enigmatic condition over the past century and a half; nevertheless, in my psychiatric hospital alone, we identified 8 new cases over the past 6 years [21]. For example, we had a young man in our hospital who was convinced that his arms had become extremely hairy (which I failed to see), who felt his jaw and teeth becoming larger and harder, and who at night had the sensation that his toenails scratched the bed sheets as if his feet had turned into paws [22]. Being an intelligent, well-educated young man, he had searched the Internet and had come to the conclusion that he suffered from lycanthropy, the transformation of a man into a wolf as described in ancient mythology, even though this was at odds with the laws of natural science (or so he said himself). Another young man, admitted to our secluded nursing ward, would howl and bark uncontrollably, sniff at door posts, urinate in the corner of his room and on the balcony and, when asked about his behaviour, insist that nothing was the matter with him. Only after years of treatment did he confide to one of our nurses that he had been convinced all the time that he was a dog. Similarly, in the nursing home of our hospital, we had a near-blind elderly gentleman who was convinced that his dog was with him, even though this loyal companion of his had died several years before. He would make gestures as if he were stroking the dog and smile when he had the impression that it was licking his hand.
However, whenever he became ill due to some recurring infection, he would climb out of bed at night and be found by the nurses, lying on the rug in front of his bed, licking his forearm as if it were a paw, behaving overall as if he himself had turned into his beloved dog. Yet another patient was convinced that he was a bird, spreading his arms from time to time as if they were wings, and asking disappointedly, upon hearing that the social security services had discontinued his allowance, ‘Didn’t I lay enough eggs then?’ Similarly, we had a man admitted who thought he was a cat and hissed at the nurses whenever he did not get what he wanted—as well as several other patients who thought they were wolves or dogs.
As mentioned, the literature suggests that such fundamentally disturbing changes to the body image are extremely rare. However, I find it hard to believe that people suffering from clinical zoanthropy (as this condition is called) would have a preference for converging on the city of The Hague and, for that reason alone, end up in our hospital. As a consequence, my guess would be that the condition is in fact much more common than traditionally assumed, and that there must be numerous other persons out there who experience such profound changes to their body image that even their sense of personal identity is affected by it.
Such cases of clinical zoanthropy—and of phantom limb and anorexia nervosa—lie on a continuum with Alice in Wonderland syndrome. However, we would be stretching the definition of Alice in Wonderland syndrome beyond its rightful limits if we said that they all belong under the same diagnostic heading. I am presenting them here only to illustrate my point that we are not in direct contact with our bodies and that, even though the body image tends to represent the features of our physical bodies reasonably well under normal circumstances, things can go seriously wrong when it stops doing so. Some of the patients whom we have met—Willem, Mr. Müller, Mr. Salvatore, Mrs. van Nuys and Ms. Rembrandt—can certainly testify to this.
Nevertheless, the question remains as to how it is possible that our body image gets so messed up. Since it is shaped by countless impulses from within the body and from the outside world and, therefore, by information about virtually everything that we experience (consciously or not), it is easy to grasp that the underlying system must be a widely disseminated network in our brain—and that it must be larger than the visual network that we examined before. After all, it processes not only visual information but also tactile and somatosensory information and, indeed, information from all our other sensory modalities—shaped, moreover, by psychological factors such as memory and expectancy, and even by social factors such as aesthetics and ideas about style and fashion. Essential to integrating all that information are the parietal lobes and, according to several studies, an area called the temporo-parieto-occipital junction, which is the tripoint of association cortex that includes portions of the temporal, parietal and occipital lobes. What exactly goes wrong in that vast system when bodily distortions arise differs from person to person and cannot always be pinpointed in individual cases. However, as we shall see, we do know of numerous conditions that may cause it to go wrong.
However, before we discuss those conditions, let us first examine one more type of non-visual distortion. Although Todd’s original paper does not mention distortions in the sensory modalities for audition, taste or smell, one would expect that these might also be affected in the context of Alice in Wonderland syndrome. And perhaps they are, sometimes, considering Hamed’s case description of a man diagnosed with Alice in Wonderland syndrome: this person experienced visual and somesthetic distortions and, in addition, hyperacusis and hypoacusis, saying about the latter, ‘I hear people’s voices loud and close or faint and far’ [23]. Since there is hardly any other literature on such instances of auditory distortion whereby voices or other sounds become either muted or heightened (let alone on distortions of smell or taste in the context of Alice in Wonderland syndrome), the topic I would like to address next is the group of time distortions. After that, as promised, we will have a look at the causes underlying all those perceptual distortions.
5.2.1 Time Distortions
The neural mechanisms subserving our experience of time have not been studied as extensively as those underlying somatosensory perception, let alone those underlying visual perception [24]. Nonetheless, we know that the brain and the body as a whole are like a giant clock repair shop, filled with clockworks ticking away, some of them in synchrony with each other, others at a pace of their own. Those rhythms are set by numerous internal and external cues, ranging from daylight availability, feeding patterns and changes in oxygen need, to surges of hormone secretion, and the ebb and flow of intracellular calcium levels. In the meantime, many of those rhythms also attune themselves to each other. Although the net result is a unified sense of time, it is as yet unknown whether a separate structure or receptor exists inside the brain that serves as the ‘end organ’ for all these biological clockworks [25]. However, within the network as a whole, we can certainly point out several processes that are of more importance than others.
The whole idea of time perception stands and falls with the notion that time needs to be cut up into relatively short intervals to enable us to keep track of it. While floating in an isolation tank filled with salt water at body temperature, in the absence of any light, sound or tactile stimuli whatsoever, keeping track of time is almost impossible. What we need is something by which to mark the passage of time. That something is rhythmicity. Some rhythms are given to us physiologically: our heart rate, for example, provides a natural rhythm that the brain uses to estimate the passage of time. Another (albeit much slower) rhythm is the diurnal pattern of night and day, and an even slower one is the cycle of lunar months. Other rhythms are of our own making. Clocks and other chronographs were invented specifically for that purpose, but simply counting may also do the trick, for example, when a musician tries to sustain a chord for a particular duration, when dancers dance in silence or when members of a synchronised-swimming team make their moves under water, where they cannot hear the music played above.
Thus, the rhythmicity needed for the creation of psychological time stems from the outside world, as well as from within the body. What is more, it is found at all levels of the body’s hierarchical organisation. When we go all the way down to the lowest level of the network for time perception, we find individual genes, in individual body cells, which produce proteins at a fixed rate and, thus, at a rhythm of their own. Since that rhythm differs for different types of cells, if we could make them audible, what we would get from the different tissues and organs made up of those cells would be a cacophony of rhythms. At the highest rung of the brain’s network for time perception stands the suprachiasmatic nucleus, a neuron population so called because it is located right above the optic chiasm, the crossroads of the two optic nerves that I mentioned before. Like gene expression at the cellular level, the suprachiasmatic nucleus has a steadfast rhythm of its own. However, being much larger, and being positioned at the top of the chain, it is also much more influential, subjecting numerous lower-level mechanisms throughout the brain, and throughout the body as a whole, to its dominant pace. Because of that, the suprachiasmatic nucleus is also called the ‘internal master clock’.
The internal master clock is linked directly to the pineal gland, which secretes the hormone melatonin, a regulator of many phasic processes, including sleep onset, sleep architecture and sleep–wake transitions. The internal master clock also regulates surges in the secretion of other hormones and, moreover, promotes metabolic changes, fluctuations in thermoregulation and changes in autonomic nervous function, which, in turn, regulate processes ranging from activity of the adrenal glands to the secretion of gastric acid. Obviously, all those numerous processes need to be attuned to each other in a coherent way in order to keep the body functioning properly. The internal master clock plays an important role in accomplishing that. And yet the structure itself is not an atomic clock, ticking away at the same pace whatever the circumstances. It does take cues from the external world, such as changes in environmental blue light and in physical activity, even though it has a strong tendency to stick to its own fixed pace, thereby ensuring that behavioural patterns and intrinsic physiological rhythms are optimally synchronised with each other and with the 24-hour light–dark cycle.
When our experience of time is altered, we speak of time distortions, and these phenomena, too, come in several varieties. Looking back, it may seem like only yesterday that we were young, whereas it seems to take forever until next week’s film arrives in the cinema (and that, I know from personal experience, is something not only kids complain about). Time appears to fly as long as we enjoy ourselves, and it seems to drag when we are understimulated. Obviously, in such cases, it is not chronological time that stretches and squeezes but psychological time.
Time distortions such as these are based on high-level judgements about duration, which primarily depend on memory, expectation and other psychological factors. That also holds true for Alice’s paradoxical experience of spending a day’s worth of adventures in Wonderland, which, in the context of the story’s ‘reality’, lasted no longer than an hour. By contrast, the phenomenon of psychological time suddenly speeding up or slowing down, as experienced by my patient when he walked towards the bus stop, is probably rooted in mechanisms at an intermediate level of the time-perception network. It is controlled neither by the internal master clock nor by DNA expression at the cellular level, but rather by a process at a level in between. It probably involves neuron populations that are also activated when we are about to crash our car into a tailback or—on a lighter note—are immersed in the action of attempting to score a winning goal in soccer. We all know situations like these, where we are so acutely aware of what is happening, be it in a calamity or in some other high-pressure situation, that we experience everything in astonishing detail, as if it were filmed with a high-speed camera and played back to us in slow motion. Such changes in the apparent speed of time are called instances of protracted duration (for an overview of the various types of time distortion, see Appendix A, Table A.3). Physiologically, protracted duration is intricately linked to the amygdala, an almond-shaped structure in the deeper, evolutionarily older part of the brain that is involved in threat detection and hence in generating the sensation of fear and preparing the organism for action. Under frightening circumstances, the amygdala is believed to contribute to the formation of memories that are phenomenologically richer, and thus ‘denser’, than ordinary memories, which we therefore tend to recall afterwards as being stretched out over a longer period of time [26].
That is one hypothesis, at least, and it raises the question of whether protracted duration is something we experience only in retrospect, or whether we also experience it in the moment itself. I am inclined to think the latter, although more research is needed to cast proper light on the matter.
So, within the context of the widely disseminated network for time perception, the amygdala is believed to play a key role in the mediation of protracted duration. Whether that also holds true for the opposite phenomenon (i.e. the quick-motion phenomenon, in which everything appears to rush by at an amazing speed) is as yet unknown. However, in addition to the amygdala, numerous other parts of the network for time perception are reported to be of importance.3 Since this area of research has only recently started to gain wider attention, many other parts of the network will probably be added to this list in the future, and different types of time distortion will probably be explained in terms of different states of the network as a whole. Moreover, it is feasible that there may prove to be a structure comparable to visual association cortex, which provides us with a unified sense of time on the basis of all those clocks ticking away inside and outside our body.4
5.3 Underlying Disorders
Now that we have looked at the visual, somatosensory and temporal sensory modalities, and have a general idea of the various levels at which they can be compromised when a person has Alice in Wonderland syndrome, it is time to address the question of how these lower-level and sometimes higher-level circuits can be erratically switched on and off. How is it possible that cortical columns sometimes selectively stop working? Or all of a sudden show activity when they shouldn’t? And what kind of process is capable of altering our body image in such a way that extra fingers appear to be growing out of our hand? Or of altering the function of the brain’s network for time perception? After all, we do not merely want to know which parts of the perceptual network are responsible for causing the symptoms characteristic of Alice in Wonderland syndrome; we also wish to know how they can become so disabled that they stop doing what they should. There are numerous reasons for that to happen, some of which may sound rather ominous, especially if you have first-hand experience with perceptual distortions yourself. However, before we go on to the more serious conditions, let us see what the British neurologist MacDonald Critchley (1900–1997) had so wisely observed, even before Todd published his paper on Alice in Wonderland syndrome. Having encountered numerous patients with visual distortions, Critchley wrote:
Metamorphopsia is by no means confined to patients with organic focal disease of the brain. Indeed, most cases occur in quite different circumstances. Some of them are met with in normal, though sensitive, aesthetic and introspective individuals [29].



