
Monday, 19 November 2012

Time: facts of physics and fibs of the brain?

Time is not what it seems...
Time is money, time is ticking, time is of the essence, time is many things. But what is time really?

Since Einstein's theories of special and general relativity, physicists understand time very differently from our every-day experience of time. 

I explore time as physicists understand it with Emmanuel Olaiya, a particle physicist at the Rutherford Appleton Laboratory in Oxfordshire in the UK.

I also speak with Craig Callender, a philosopher of physics at the University of California, San Diego, about his quest to find out why we are so hung up on a past, present and future with time flowing neatly between them.

You can find the feature on the FQXi website here. Just scroll down to the second feature. 

Tuesday, 25 October 2011

Music to Deaf Ears


Here is a link to the radio documentary I worked on over the summer.

What do deaf people hear? How far are scientists in their understanding of the auditory system? And how close are they in their ability to restore hearing? These are some of the questions the documentary ‘Music to Deaf Ears’ addresses. The piece aims to take the listener on a journey through the science, experience and sound of different types and gradations of hearing loss to allow listeners with normal hearing a glimpse into the world of a deaf person.

Thursday, 19 May 2011

Mobilising the immobile

DARPA's dream?
Would you be happy to stay on board a plane if you were told that the pilot was flying it from his armchair using only his thoughts? This may sound like science fiction, but the Defense Advanced Research Projects Agency (DARPA) in the US has already spent millions of dollars on research into the interaction between the human brain and machines, and mind-controlled aeroplanes are high on its wish list, together with battle robots.

While armies of robots may still be well into the future, immense progress has been made in research that could have a tremendous impact on the lives of people who suffer from paralysis. Whether caused by spinal cord injury after an accident or by diseases such as stroke, paralysis can be, at its most extreme, virtually complete, leading to locked-in syndrome. Perhaps the most famous case of paralysis after spinal cord injury is that of the late actor Christopher Reeve, who became paralysed from the neck down after a fall from a horse. Over the past couple of decades, research into paralysis has focused on the development of a brain-machine interface that allows people to control muscles and even prosthetic limbs using thought alone.

Wilder Penfield
To understand the magnitude of this development, we have to go back into history. Wilder Penfield laid an important part of the foundation when he was working at McGill University in Canada in the 1950s. Penfield, who studied neuropathology at Merton College in Oxford, became well known for the way in which he treated epilepsy patients by destroying the neurons in the foci of their seizures. Before each surgery, he would electrically stimulate the brain of his conscious patients to localise brain function. This allowed him to guide his surgery more precisely and prevent damage to other cognitive and motor functions. It also enabled him to draw a complete map of the motor cortex, or motor homunculus. The sizes of the body parts on the cortical map correspond to the complexity of the movements that those parts perform. Our ability to perform the fine-scale movements required to speak or to use tools is clearly reflected in Penfield's map.
Penfield's motor homunculus

Further exploration into motor functioning led Eberhard Fetz and colleagues to discover that monkeys could voluntarily control their neural activity through operant training, as long as they were given biofeedback. The monkeys, which had electrodes implanted in their brains, watched a meter that displayed their own neuronal activity. Whenever the needle of the meter pointed to the right, the monkey received a reward. Once the monkeys had learned that there was a link between the meter and the reward, they were able to adjust their neural firing patterns to earn more rewards (Fetz, 1969; Fetz and Baker, 1973). At first the neuronal activity was still accompanied by active movement of the monkeys' limbs, but over time, while the monkeys still voluntarily controlled the neuronal activity, the overt movements extinguished (Fetz and Baker, 1973).

In the early 1980s, researchers slowly began to adopt the idea that physiological activity in the mammalian central nervous system is created by ensembles of neurons spread across the brain, not just by single neurons (Gerstein and Aertsen, 1985). Donald Hebb had already suggested this in 1949, but research in this area was delayed by the persistent idea that single-unit recordings would crack the neural code, because it was believed that only a few specific neurons controlled a particular movement. Since there are over 100 billion neurons in our brain, finding those specific neurons would be like looking for a needle in a haystack. The discovery that ensembles of neurons were responsible for movements thus made the search for target areas a lot easier.

By using different imaging techniques to look at this widespread brain activity, Jeannerod and colleagues made an interesting discovery. They recorded activity in the brains of participants as they imagined making a specific movement and compared it with the activity observed when the participants actually made the movement. They found a striking resemblance between the two, leading to the conclusion that imagining a movement may not be so neurally different from actually performing it. This result was a significant step for brain-machine interface research (Jeannerod, 1995; Jeannerod and Frak, 1999).

In parallel, Nicolelis and Chapin managed to train rats to earn rewards by activating lever-press-related activity in their brains. First, they trained thirsty rats to press a lever to get a drink of water. As the rats pressed the lever, the researchers recorded the patterns of activity from 46 neurons in their brains, which they used in the next stage of the experiment. The lever was then disconnected from the water supply, so that a press no longer produced water. The rats went on pressing, however, and the scientists gave them water whenever their brains produced the command for pressing the lever. After a while, the rats stopped pressing the lever altogether but kept producing the 'press lever' command in their brains. The external machine that delivered the reward was now operated directly by the 'press lever' command in the rat's brain (Nicolelis et al., 1998).
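The closed loop in this experiment, record a 'press lever' template, then deliver the reward whenever the ensemble activity matches it, can be sketched in a few lines. The simulated firing rates and the correlation-based matching rule below are illustrative assumptions, not the actual algorithm used in the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# 46 recorded neurons, as in the rat study (rates here are simulated).
N_NEURONS = 46

# Stage 1: build a "press lever" template from ensemble firing rates
# recorded while the rat physically pressed the lever.
press_pattern = rng.uniform(5, 40, N_NEURONS)               # mean rates (Hz)
training_trials = press_pattern + rng.normal(0, 2, (100, N_NEURONS))
template = training_trials.mean(axis=0)

def matches_template(activity, template, threshold=0.9):
    """Reward criterion (an assumed rule): correlation between the
    current ensemble activity and the stored 'press lever' template."""
    r = np.corrcoef(activity, template)[0, 1]
    return bool(r >= threshold)

# Stage 2: closed loop. Water is delivered whenever the brain produces
# the command, whether or not the lever actually moves.
command = press_pattern + rng.normal(0, 2, N_NEURONS)   # rat "thinks" press
unrelated = rng.uniform(5, 40, N_NEURONS)               # unrelated activity

print(matches_template(command, template))    # True: reward delivered
print(matches_template(unrelated, template))  # False: no reward
```

The key point the sketch illustrates is that once the reward is contingent on the neural pattern itself, the overt lever press becomes redundant, which is exactly why the rats eventually stopped pressing.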

The next step was to attempt something similar in primates and to make the movements more intricate. Using a method similar to the one used in the rats, Nicolelis and his colleagues enabled a monkey to swing an artificial arm from left to right using thought alone. Then it was time to let the monkeys make more complex movements. They first learned to use a joystick to drag a cursor onto a target on a computer screen while the researchers recorded their patterns of neural activity as before. Soon, the monkeys learned that the command for 'drag cursor' resulted in a reward, and they stopped actively dragging the cursor onto the target. This was swiftly followed by an ability to reach and grab, trained in similar ways (Nicolelis et al., 2003).
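Underlying all of these experiments is a decoder that translates recorded ensemble activity into a movement command. One common and simple choice (shown here as an illustration; not necessarily the exact algorithm used in these papers) is a linear map fitted by least squares during a calibration phase while the subject moves, and then applied to neural activity alone:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative sizes, not taken from the papers: 50 neurons, 2-D cursor velocity.
n_neurons, n_samples = 50, 500

# Hidden "true" tuning: each neuron's firing rate depends linearly on velocity.
true_tuning = rng.normal(0, 1, (n_neurons, 2))
velocity = rng.normal(0, 1, (n_samples, 2))          # calibration movements
rates = velocity @ true_tuning.T + rng.normal(0, 0.5, (n_samples, n_neurons))

# Calibration: fit a linear decoder by least squares (firing rates -> velocity).
decoder, *_ = np.linalg.lstsq(rates, velocity, rcond=None)

# Closed loop: decode the intended velocity from new neural activity alone,
# with no joystick movement involved.
new_rates = np.array([1.0, -0.5]) @ true_tuning.T    # intend velocity (1.0, -0.5)
decoded = new_rates @ decoder
print(decoded)   # close to the intended (1.0, -0.5)
```

Because the decoder needs only the firing rates, the physical movement used during calibration can be dropped entirely, mirroring how the monkeys stopped moving the joystick once the neural command alone earned the reward.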

Thus, scientists are now able to create communication pathways between the brain and an external machine (e.g. a computer cursor or an artificial limb). The first human trials have already shown that paralysed patients can directly control computer cursors (Hochberg et al., 2006). However, they have also raised issues about the safety and reliability of the intracranial electrodes that need to be addressed before clinical trials can extend to larger patient populations.

Sensory feedback is essential
Another limitation of current-generation brain-machine interfaces is the lack of sensory feedback. Successfully sipping from a cup of tea requires more than just reaching and grabbing movements: the touch sensors that send signals from our hand to our brain allow us to pick up the cup without knocking it over. An artificial extremity has no such sensors, so information needs to be fed back from the external device to the brain. The only feedback currently available is visual, which gives insufficient control over artificial limbs. A recent article in the Journal of Neuroscience showed that adding sensory feedback would greatly increase control, and this is the next challenge researchers are focusing on (Suminski et al., 2010; Nicolelis and Lebedev, 2009).

Research is also currently focusing on the striking finding that the brains of the monkeys that participate in these studies become structurally adapted to the external devices. Different areas within the motor cortex of these monkeys now seem to represent the robot, as if the robot were a part of their own body. If further research proves this to be right, the implications could be profound: it would allow a patient to perceive the prosthetic device as an actual part of their body (Nicolelis and Lebedev, 2009).

A global team of neurophysiologists, computer scientists, engineers, roboticists, neurologists and neurosurgeons is now working together in the "Walk Again Project" to do no less than develop a generation of neuroprosthetic devices that can restore full-body mobility in patients with severe paralysis. Thus, while DARPA is dreaming of creating armies of robots, its money has been well spent on research that could mobilise the immobile in the not too distant future.

References:
Fetz EE. (1969). Science 163, 955-958.
Fetz EE, Baker MA. (1973).  Journal of Neurophysiology 36, 179-204.
Gerstein GL, Aertsen AM. (1985). Journal of Neurophysiology 54, 1513-1528.
Jeannerod M. (1995). Neuropsychologia 33, 1419-1432.
Jeannerod M, Frak V. (1999). Current Opinion in Neurobiology 9, 735-739.
Nicolelis MAL et al. (1998). Nature Neuroscience 1, 621–630.
Nicolelis MAL et al. (2003). Proceedings of the National Academy of Sciences 100, 11041-11046.
Hochberg LR  et al. (2006). Nature 442, 164-171.
Suminski AJ et al. (2010). Journal of Neuroscience 30, 16777-16787.
Nicolelis MAL, Lebedev, MA. (2009). Nature Reviews Neuroscience 10, 530-540.


This article appeared in the Trinity Term issue of 'Phenotype' in a slightly modified form. Phenotype is the termly science magazine published by the Oxford University Biochemical Society. Here you can read this issue of the magazine in its entirety.

Wednesday, 2 June 2010

Illuminating the brain's bright future

In the 1920s Felix the Cat had a brilliant idea and a light bulb appeared over his head; thus was created the signature of an epiphany. But recent advances in neuroscience leave you wondering whether in the future we will be more familiar with light bulbs actually driving our thoughts and inspiration rather than just being a visual metaphor. Gero Miesenböck, currently Waynflete Professor of Physiology at Oxford University, has been pioneering work that uses light to control brain cells, a field known as optogenetics.

Our brain consists of approximately 100 billion neurons that, as Miesenböck lyrically describes, form “an intricate tapestry”. To understand how neuronal signalling drives our behaviour, he says, we need to tease apart the disparate contributions that each of the different populations of neurons make to our behaviour. Nobel laureate Francis Crick remarked in a famous article in 1979 that one thing scientists have dreamed about is a tool that would allow them to selectively activate or turn off certain groups of cells while leaving others unaffected. Twenty years later, he suggested how this might be achieved: with light and molecular engineering. And this is precisely what optogenetics does.

To understand this technique we have to go back to the 1990s, when the German biologist Peter Hegemann discovered that green algae, commonly found in ponds, respond to light by wagging their tails. This behaviour was intriguing because algae are unicellular creatures without eyes. Hegemann found that when light photons hit the protein coils packed into the alga's cell membrane, a chemical reaction creates a tiny gap in the membrane, causing an ionic current to flow and the alga's tail to wag. The protein that enables this reaction to light is called channelrhodopsin and is comparable to the rhodopsins found in our own eyes.

Meanwhile, Miesenböck and his colleagues, working in New York and later at Yale, wondered whether they could exploit a similar mechanism to control brain cells. They took light sensitive proteins like the photoreceptors of our eyes, transplanted them into neurons and, by simply shining a light on them, the team was able to activate the modified neurons, a first step towards neuronal control.

To exploit the full power of this method, however, the researchers needed a way to excite or inhibit only selected populations of cells, and with genetic engineering they were able to achieve this. By harnessing the cunning of viruses or by creating genetically modified mice and flies, it was possible to make expression of the rhodopsin-encoding gene specific to particular neurons, meaning that only those neurons would become active when illuminated.

The road to success for optogenetics was not easy. The first difficult step was to show that the rhodopsin-containing photoreceptors of flies could be transplanted into other cells in culture and activated with a flash of light. Once they succeeded in doing this, the second, even more complicated challenge was to move from changing neuronal activity in a cell culture to changing the behaviour of a living being, in Miesenböck's case the fruit fly. The promise first became clear when Susana Lima, Miesenböck's PhD student at the time, showed him the first baby steps taken by a fruit fly on the command of light. Within five years, they had learned how to remote-control a fly.

The technique is now so advanced there is a large volume of work looking at how brain cells control behaviour. Last year in Cell, Miesenböck and his team exposed the learning mechanisms of a fly by creating false memories (1). They placed a fly in a narrow chamber, half of which smelled of an old tennis shoe, the other half of sweet fruit. By observing how much time the fly spent on either side, the researchers were able to work out which was the fly’s preferred smell. When this location was later paired with a memorable, aversive signal – a painful electrical shock – the fly learned to avoid this location and spend more time on the opposite side of the chamber. From previous research, Miesenböck knew which neurons were involved in learning to associate the shock with an odour and could therefore directly target this system with optogenetics. By activating these cells with light when the fly was in the location of its preferred smell, Miesenböck’s team was able to provoke identical avoidance behaviour even though no electric shock was given. Thus, the fly learned from an experience it never had.

Might we be able to use this technique to control our minds in the future? Miesenböck thinks that it will be a while before optogenetics can be used in humans: “You would have to express a foreign gene in a targeted fashion and this is where the show-stopper currently lies”. While using this technique in humans may be a long way off, he does believe that optogenetic research in flies might nonetheless directly aid our understanding of the human brain because biology is generally conserved. “Nature rarely invents the wheel twice”.

For now, Miesenböck thinks the field should focus on blurring the boundaries between work in whole organisms and fine-scale research in cell cultures. They could make use of the fact that tissue in a cell culture can be treated as if it was still part of a functioning brain by activating the cells with flashes of light – a use of optogenetics that is currently underappreciated. “There will be room for brain-free neurobiology, where optogenetics provides the interface to allow researchers to really talk to and feed artificial information into neuronal systems”.

Miesenböck also advocates using light “to enable scientists to drive nervous systems outside their normal operating limits, because this is often where mechanisms reveal themselves”. Miesenböck’s team used this approach to investigate the origin of sex differences in flies. While male and female fly brains are very similar, they nonetheless display sex-specific courting behaviours. The gene that controls male courting behaviour is expressed in a very small number of neurons in the abdominal ganglia of the fly. By specifically targeting these cells with optogenetics and shining light onto this circuitry, Miesenböck’s team was able to produce male courting behaviour in all the flies, even the females (2). Thus, they were able to show that females possess a bisexual brain containing a motor programme necessary for male courtship behaviour, but do not activate it because the neuronal commands required for the behaviour are absent.

With the ability to dissect neuronal functioning in the healthy brain, optogenetics might also hold potential to help understand the exact mechanisms that cause neurological and psychiatric diseases such as depression and schizophrenia and even help treat them. For example, Karl Deisseroth and his team at Stanford University in California published a study in Science last year that used optogenetics in rats to investigate directly how deep brain stimulation might alleviate symptoms of Parkinson’s Disease, something that had previously been poorly understood (3).

Thus, despite the difficulties in applying the method to humans, Miesenböck is hopeful: "With optogenetics we can really identify the players that are responsible for particular behaviour and that may give us knowledge for targets of more conventional treatment. Then conventional treatment can become more effective and cleaner."

References:
1. Claridge-Chang et al., 2009. Writing memories with light-addressable reinforcement circuitry. Cell 139:405-415.
2. Dylan Clyne & Miesenböck, 2008. Sex-Specific Control and Tuning of the Pattern Generator for Courtship Song in Drosophila. Cell 133:354-363.
3. Gradinaru et al., 2009. Optical deconstruction of parkinsonian neural circuitry. Science 324:354-359.

This article appeared in 'Phenotype'. Here you can read the magazine in its entirety.

A shorter, related post can be found on my blog here.

Friday, 16 April 2010

Freud's full circle?

While Freud's theories have had an enormous impact on psychiatry - psychoanalysis today still uses methods similar to the ones Freud developed at the beginning of the 20th century - they have long been engulfed in controversy. Freud's psychoanalytical thinking focused on understanding human behaviour by gaining access to the unconscious mind. In a typical session on Freud's sofa you might talk about your dreams and fantasies, letting your mind wander and speaking without controlling your thoughts. Freud would listen to you, absorbing your thoughts and interpreting them, unravelling the unconscious conflicts that caused the symptoms for which you came to the session. Unveiling and subsequently dealing with these unconscious conflicts would cure the original symptoms of your mental instability.

One of the major criticisms of Freud lies in the lack of experimental scrutiny surrounding his methods of baring the unconscious. Such lack of experimental evidence was, and still is, seen as unscientific. In the 1960s and 70s, however, the idea of the unconscious re-emerged and became of particular interest to neuropsychologists trying to gain an understanding of seemingly unconscious processes in split-brain patients and in disorders such as Alien Hand Syndrome. In split-brain patients, all the fibres connecting the two sides of the brain were surgically cut to alleviate severe symptoms of epilepsy, so that there were no longer any direct routes for communication between the two halves of the brain. While this undoubtedly helped reduce the severity of symptoms, the procedure also had some other interesting effects. In a series of experiments that went on to win him a Nobel Prize, Roger Sperry showed that each hemisphere could seemingly have its own simultaneous system of volition. For instance, when he showed a split-brain patient a picture on the left side of a computer screen, which would be processed by the right side of the brain, the side that usually does not contain the language areas, they would tell him they had not seen anything. Yet when he then asked them to select an object from several alternatives with their left hand (the one controlled by the right hemisphere), they would choose the object that had been presented to them just a second before, even though they could not express why they had picked that exact object.

While complete sectioning of the corpus callosum is no longer generally performed, similarly bizarre "unconscious" desires also manifest themselves in patients with brain damage affecting this region. For instance, in patients with Alien Hand Syndrome one hand does something completely different from, and independent of, the other. Perhaps the most famous example was Dr. Strangelove, who had to keep one hand under control with the other. Another compelling example is that of a woman who was determined to smoke a cigarette, but whenever one hand had put the cigarette in her mouth, the other would grab it and throw it away.

In fact, as Emeritus Professor of Neuropsychology at Oxford Larry Weiskrantz has pointed out, a curious facet of many clinical syndromes caused by brain damage is that, while these patients may lose particular conscious faculties such as being able to recall past events or identify people by their faces, they still retain "unconscious" abilities to do exactly these things. A patient with prosopagnosia may not consciously be able to recognise faces as a result of damage to the temporal lobe, a region in the lower part of the brain particularly important for memory, but will still be able to show changes in arousal when seeing someone familiar.

Today, with the ability to look inside the human brain while someone is 'thinking', we can observe the processes that go on inside, even the unconscious ones. With such brain imaging techniques, neuroeconomists have already started to gain insight into unconscious thought processing by showing that when we make economic decisions, for instance buying something on eBay, we depend much less on conscious, rational deliberation and much more on subconscious gut feeling and emotion. Professor John-Dylan Haynes at the Bernstein Center for Computational Neuroscience in Berlin made perhaps an even more intriguing discovery: by looking at a person's pattern of brain activity with functional neuroimaging, he was able to predict what they were going to do, and when, nearly 10 seconds before they actually did it.

In an article published in Brain this week, Robin Carhart-Harris and Karl Friston argue that with the aid of these brain-imaging techniques, Freudian concepts might now be tested experimentally. Until recently, one of the most common ways to analyse brain-imaging data was to compare networks of brain activation during a specific task directly with networks of activation during periods when the brain was assumed to be at rest. Over the past ten years, however, research pioneered by Marcus Raichle has looked into what actually goes on in the brain during these periods of rest. Surprisingly, he and his colleagues noticed that the patterns of activity during rest periods were remarkably consistent, which led him and other researchers to suggest the existence of a "default" network. According to Carhart-Harris and Friston, this default network might represent intrinsic internal thought remarkably consistent with the unconscious thought processes of Freud's later theories. Many of the key principles of Freud's theory, they argue, such as 'the ego' (our conscious self) and 'the id' (our unconscious self), echo our current knowledge of how the brain functions at a global level (i.e. the set of brain areas active during conscious processing differs from the set active during unconscious processing).

Could it be that, after his initial success and subsequent fall from grace, Freud has now come full circle? Appropriately, it turns out that even Freud himself had originally attempted a not dissimilar scientific approach in the Project for a Scientific Psychology, written in 1895. In his neurophysiological theory he suggested that the transfer of energy between neurons in the brain caused unconscious processes, but in the years that followed he decided that neuronal processing as understood at the time was much too complex for such an interpretation. Instead, as a result of his analyses of dreams, he proposed that the unconscious was the product of highly condensed, symbolic thoughts (the primary processes) and the conscious a highly rational and logical way of thinking (the secondary processes). That neuroscientists are, consciously or unconsciously, currently returning to these ideas would likely have amused Freud.

This post also appeared on Cherwell's Matters Scientific

Thursday, 4 February 2010

The name Samantha tastes like bubblegum

The number 4 is a bright acid-yellow and 5 is crayola-blue. Together they should make 8, which is a bright green, but instead they really make 9, which is wet-dirt-brown. It has never made sense to me. Algebra is what makes X turn brown, too. Letters least of all should be brought into that mess.

If you are like me, this will undoubtedly not make any sense to you. However, to approximately one in twenty people it may, at least to a certain degree, seem familiar, even if they disagree vehemently on the exact pairings of colours and numbers. The quotation above is the writing of a sixteen-year-old girl with synaesthesia. Synaesthesia (syn meaning 'together' and aisthesis, 'sensation') is a neurological condition in which one sensation instantly and involuntarily accompanies a sensation of another type. This can happen between any of the senses: days of the week may have their own particular colours, G major a particular smell, and a triangle a specific taste.

While the concept of synaesthesia is not new – ancient Greek philosophers already investigated the link between colour and music, and Newton suggested that colours and sounds may have similar frequencies – it was not until the 1980s that scientists started investigating synaesthesia in earnest.

Those without synaesthesia may wonder whether it is simply a set of made-up or delusional associations. However, one of the characteristics of synaesthesia is that the pairings between sensations remain stable over time, often lifelong. Moreover, these sensations seem to have an organic basis: the patterns of activity in a synaesthete's brain reflect both the triggering and the paired sensation, as if the brain 'really' perceives both types of stimulation. For example, while anyone's auditory cortex will be activated when listening to music, a sound-colour synaesthete will also activate the visual cortex, reflecting the colours simultaneously experienced in their mind. Further investigation into synaesthetes' brains has led to the discovery of 'hyperconnectivity': many more pathways exist between the cortical regions that process different sorts of sensory information than in a typical brain, perhaps allowing more opportunity for different sorts of sensory information to cross over.

Behavioural studies in babies have shown that our brain is hyperconnected at birth and that, as part of the maturation process, we lose this hyperconnectivity in the first few months to years of our lives. Although a specific gene has not yet been discovered, synaesthesia tends to run in families, and this has led researchers to believe that a genetic abnormality prevents the brain from completing cortical maturation, thus leaving the brain hyperconnected.

Even though synaesthesia is a neurological condition, it is difficult to claim that people suffer from it. Many highly creative people, such as Nabokov, Kandinsky and Messiaen, were synaesthetes, and for most it is just as innate as the colour of their eyes or the size of their feet. This 'normalness' of the condition may well be why estimates of its prevalence in the general population range from 1 in 20,000 to 1 in 20 people (the latter being the more likely estimate).

Although this may still leave the vast majority of us without such abilities, it is often overlooked that we all possess some synaesthetic tendencies. For instance, Professor Charles Spence of the University of Oxford showed in an experiment conducted at Heston Blumenthal's award-winning restaurant that sounds play a particularly important role in our perception of food: a bacon-and-egg ice cream was perceived as tasting more strongly of bacon when it was accompanied by the sizzling sound of bacon being fried than when no such sound was present, the result of sensations crossing over. Similarly, many of us automatically associate shapes and sounds (does "Kiki" sound sharp or "Bouba" rounded?) or even perceive certain names as sexier than others. To explore your own synaesthetic tendencies, test yourself here.

As someone who does not have the slightest hint of a synaesthetic mind, I can only wish I could, if only for one day, return to my infant state and re-see the days of the week in vivid colour, let Kandinsky's paintings make music in my mind, and finally find out what circles really taste like ...

This post was written for and published on Cherwell's Matters Scientific

Tuesday, 18 August 2009

Rewire your brain after a long period of stress

We all know that stress can be very bad for your health, with post-traumatic stress disorder being one of its severest expressions. Stress causes the release of cortisol into our bloodstream. Cortisol is a hormone involved in many important functions in the body, such as regulating our metabolism, blood pressure and blood sugar levels, and it plays an important role in immune function.

Cortisol is often dubbed the stress hormone because it serves an important role in the 'fight or flight' response to potentially threatening situations. Small increases in cortisol can have very positive effects in situations where split-second decisions have to be made in order to survive. It can give you that extra bit of alertness to dodge a punch or prevent a car crash; it temporarily heightens your memory, briefly increases immunity and even makes you less sensitive to pain.

Unfortunately, when cortisol levels in your blood are constantly high as a result of ongoing exposure to stress, they can lead to many physical problems such as blood sugar imbalance, a decrease in bone density and muscle tissue, high blood pressure, a weakened immune system, and even impairments in cognitive abilities such as memory and perception.



Scientists from the University of Minho in Portugal showed that the plasticity of the brains of rats under constant stress diminished significantly. Four weeks of stress made the rats less flexible at solving problems in order to get their rewards. They would persist with one option rather than exploring other ways of getting their rewards, unlike their unstressed controls, who would try to reach their goal in any way possible.

There is some hope, though. After four leisurely weeks the rats started behaving more like the controls again, and their brains seemed to rewire too. You can read more about this research on the New York Times website in the Science section.

Saturday, 6 June 2009

The reptilian brain: the new left brain, right brain?

Over the past couple of weeks I have heard a number of different people on the radio – a scientist in one case, a psychologist in another – talk about our intuitive brain, our reptilian brain, our emotional brain, as if it were a separate entity within our brain. Slightly surprised by this idea, I decided to look at it in a bit more detail. Surprised, because my conception of the brain, as someone who has studied it, is that of an integrated, unified entity in which different regions work together to perceive the world, make sense of it and act in it appropriately. The idea of a special system dealing with our emotions and intuitions seems counterintuitive in a system where every single part works together with other parts in order to think, act, memorize and speak.

We like to believe that the decisions we make, the actions we take and, at least sometimes, the thoughts we have are based on a rational weighing up of pros and cons. Equally, I think most of us would admit that at times we do things because our gut feeling tells us to. This idea is not a new one; even Aristotle suggested that a logical decision can be overturned by mere appetite for pleasure or anger. But even if gut feeling exists alongside reason, does this mean we have a separate set of brain regions to deal with our different states of mind?

According to a number of researchers, it does. Several prominent neuroeconomists, such as Douglas Bernheim and Antonio Rangel, have proposed models that describe the brain as operating in a “cold” deliberative mode or a “hot” emotional mode, depending on the situation in which a decision is being made. Back in the 1960s, based on the anatomical structure of our brains, Paul MacLean proposed the influential theory that we have a ‘triune’ brain consisting of three parts, each formed at a different time in evolution. In essence, he too argued that our brains contain ancient reptilian fight-or-flight mechanisms, animal instincts and emotions, and a newer, thoughtful cortex to offset these other urges.

While some of these principles are generally accepted, the existence of entirely separate systems underlying emotional, intuitive impulses on the one hand and rational, considered behaviour on the other is more controversial. Nonetheless, the notion of a direct anatomical basis for separate intuitive and rational systems seems to have caught the public imagination. A quick Google search on “reptilian brain and decision” brings up numerous self-help and business-based writings about how to tame your reptilian brain, live with your emotional urges and stop it buying your Starbucks lattes.

It doesn't seem too long ago to me that another dichotomous idea from neuroscience – that the two halves, or hemispheres, of our brain have highly specific functions – was providing this market with its metaphors. Again, the science suggested (not without challenge) that the left half of the brain was the dominant linguistic side, cold and calculating and dealing with details (the cognitive side), whereas the right was the imaginative but suppressed side that dealt with the global processing of information and with emotions (the intuitive side). And again, this spawned a large industry of self-help and business books on everything from how to unshackle your right hemisphere in order to become more imaginative to how to get in touch with the opposite sex.

Looking at the persistence with which ideas of a separation between intuition and reason have popped up in the past and present, is it likely that these theories will ever fade? Or does our hunger to become a better person – more creative or more logical – make us embrace the idea of separate systems because we feel they give us a (false?) opportunity to enhance certain qualities in ourselves and become the person we want to be?

I wrote this article for 'Matters Scientific', the science blog of Cherwell, the Oxford University student newspaper.

Wednesday, 29 April 2009

The autistic gene

When my nephew was nine years old he was obsessed with taps. Whenever he saw a tap, he had to turn it on. He has always been very good at drawing; he likes to draw even the smallest details you and I would miss. And if you want to know what day of the week 23 March 2050 falls on, just ask him and you will get the answer within seconds. He is a grown man now. He does not have many friends and prefers being on his own. He has autism.

This week a study in Nature revealed evidence for a link between a specific gene and the development of autism.

Autism is a so-called neurodevelopmental disorder, in which the normal growth patterns of the central nervous system and the brain are altered. This abnormal development often results in learning disabilities and in social and emotional problems.

Over the years, there has been a lot of speculation as to why the brain and nervous system of autistic people develop differently from the norm. This has led to outrageous claims about a lack of oxygen at birth, and to masses of people refusing to give their children the MMR vaccine, which protects against measles, mumps and rubella, because of a supposed link between autism and the vaccine for which no one ever found any convincing evidence.

More controlled research has recently led to the discovery that the abnormal development seen in autism particularly affects frontal regions of the brain. These regions help us make plans and decisions, and they also support our interactions with other people. Another important discovery about the structure of the autistic brain is that autistic people have fewer communication pathways within the frontal lobes, and between the frontal lobes and other brain regions.

The article in Nature describes how, in a group of 10,000 subjects consisting of autistic patients and their families, common genetic variants on chromosomal region 5p14.1 were identified. This region has been tentatively linked with autism before, and this study confirms its importance.

Thus, this study not only points to a gene involved in autism; it also suggests that the class of genes to which this specific gene belongs is important for the normal development of structure and communication pathways in the healthy human brain.

Having a gene to hold accountable for autism does pose some ethical questions. With genetic screening it is possible to look at the genetic make-up of an unborn child, and you could, based on the results, decide whether to proceed with the pregnancy, or that, maybe, it's better for everybody not to have the child ... But given that the development of autism does not depend solely on genetic expression, and there is a good chance your child will be completely healthy, do you really want to make these decisions? And do we really want to rid the world of the Lewis Carrolls, Glenn Goulds, Beethovens, Vincent van Goghs and Wittgensteins, all of whom are suspected to have had autistic traits?