NeuroTalk S2E4 Gail Mandel

Each week the Stanford Neurosciences Institute (SNI) invites a prominent scientist to come to campus and share their most recent work with the Stanford community. Each week, as part of the NeuWrite West podcast NeuroTalk, we engage the SNI speaker in an informal interview/conversation. This week, we talk to Gail Mandel about her long and winding journey into neuroscience, what makes a neuron a neuron, how astrocytes contribute to neurological disorders, and more!

Dr. Mandel is a Senior Scientist at the Vollum Institute and a Professor in the Department of Biochemistry and Molecular Biology at the Oregon Health and Science University, as well as an HHMI investigator.



Listening options:

Our conversation with Professor Mandel can be streamed or downloaded here:

NeuroTalk S2E4 Gail Mandel

You can also subscribe to NeuroTalk through iTunes by searching for "Neuwritewest" in the iTunes store and subscribing to our channel.

NeuroTalk S2E3 Penguins & Pajamas

This week on NeuroTalk, we bring you a special report about a scientific sleepover hosted by the California Academy of Sciences called Penguins & Pajamas! Stanford postdocs from a variety of disciplines presented on their research; we bring you stories from the event and speak with Mary Cavanagh and Antoine de Morree of the Stanford Postdoc Association. Below, you'll also find full interviews with many of the postdocs at the event.


David Zhang talks about the science behind cloning, and the ongoing efforts to clone a woolly mammoth.

Felice Kelly and Fiona Strouts talk about how live bacteria and yeasts transform simple ingredients into more complex flavors.

Gazi Yildirim talks about the Quake Catcher Network, the world's largest, low-cost strong-motion seismic network.


Learn more about the Quake Catcher Network here: http://qcn.stanford.edu/

Jenny Lumb describes the science of hula-hooping!

Jolyn Gisselberg

Merav Vonshak talks about the worldwide domination of invasive ants and consequences for biodiversity.

Rico Rojas talks about cholera, climate change, and the ecological relationships between humans and their pathogens.

Zeeshan Maan talks about translating research from the bench to the bedside.

Urvi Vyas talks about conducting brain surgery without incisions, using focused ultrasound.

Viola Caretti talks about a novel approach to studying brain cancer by using light-activated neuronal stimulation.

Stefano Bonetti explains how magnetism can be used to levitate a train.

Avi Adhikari talks about the neurobiology underlying anxiety.

All pictures by Mark Padolina and Luqia Hou.

You can find more information about the Stanford Postdoc Association on their website: http://www.stanford.edu/group/supd/

or their Facebook page: https://www.facebook.com/StanfordUniversityPostdoctoralAssociation

You can find more information about Penguins & Pajamas on the California Academy of Sciences website: http://www.calacademy.org/events/sleepovers/

For more information about Stanford's involvement in Penguins & Pajamas, and other events, you can also contact Mary Cavanagh directly at museumpostdocs@gmail.com

NeuroTalk S2E2: Diana Bautista


Each week the Stanford Neurosciences Institute (SNI) invites a prominent scientist to come to campus and share their most recent work with the Stanford community. Each week, as part of the NeuWrite West podcast NeuroTalk, we engage the SNI speaker in an informal interview/conversation, with the aim of gaining better insight into the speaker's personality and providing a platform for the kinds of stories that interest us but are often left out of more formal papers or presentations. This week, we talk to Diana Bautista about the difference between itch and pain, the curious organ of the star-nosed mole, and more! Dr. Bautista is an assistant professor of molecular and cellular biology at the University of California, Berkeley.



Other listening options:

Our conversation with Professor Bautista can be streamed or downloaded here:

NeuroTalk S2E2 Diana Bautista

You can also subscribe to NeuroTalk through iTunes by searching for "Neuwritewest" in the iTunes store and subscribing to our channel.

Please let us know if you have any trouble accessing the podcast.

Thanks, and enjoy!

On behalf of NeuWrite West,
Erica Seigneur, Forrest Collman, and Mark Padolina

Are you there, God? It’s me, dopamine neuron


Dopamine neurons are some of the most studied, most sensationalized neurons out there. Lately, though, they’ve been going through a bit of an identity crisis. What is a dopamine neuron? Some interesting recent twists in dopamine research have definitively debunked the myth that dopamine neurons are all of a kind – and you should question any study that treats them as such.


BRAIN Initiative Interim Report: A Reader's Guide


Weighing in at 58 pages, the Interim Report of the BRAIN Working Group (online version, here) is a detailed document that identifies and discusses eight research areas that were determined by the working group (with help from expert consultants, aka additional neuroscientists) to be high priority areas for the 2014 fiscal year. So what are these high priority research areas? How closely do they hew to ongoing research areas long acknowledged as important by the neuroscience community? How much do they rely on recruiting non-neuroscientists to research teams? How clearly do these areas address the Presidential mandate of the BRAIN Initiative? Will these goals help us to elucidate the importance of the Initiative, both in our minds and in the minds of the general public?

What follows are my impressions of the critical points contained within each of the eight sections that make up the body of the Interim Report.


NeuroTalk S2E1: Yun Zhang


Welcome to the new year of school, and a new year of NeuroTalk! In the first episode of our second season, our guest is Yun Zhang, an associate professor of biology at Harvard University. We speak with Professor Zhang about growing up in science, and studying learning and behavior in C. elegans!

Note to listeners: we had some connectivity issues while conducting the interview, so the audio quality suffers in places.



You can also stream or download this NeuroTalk here: 

NeuroTalk S2E1 Yun Zhang

Season 1 of NeuroTalk is still available for your listening pleasure here:

NeuroTalk Archive


Astra Bryant

Astra Bryant is a graduate of the Stanford Neuroscience PhD program in the labs of Drs. Eric Knudsen and John Huguenard. She used in vitro slice electrophysiology to study the cellular and synaptic mechanisms linking cholinergic signaling and gamma oscillations – two processes critical for the control of gaze and attention, which are disrupted in many psychiatric disorders. She is a senior editor and the webmaster of the NeuWrite West Neuroblog.

Thinking outside the gene

Our DNA contains the code that builds the bodies we call ourselves. These days, we are used to hearing about genes: phrases of DNA, read out by cellular machinery to construct the components of our bodies. We are used to the idea that mutations in our genes, changes or mistakes in the code, can make people sick. But the code written into our DNA is not as static or inflexible as we might imagine, and it is not only your genetic sequence that affects your physical traits (your phenotype). Cells have layer upon layer of processes that control when and how much a gene is expressed, introducing complexity at multiple levels. This complexity exists not only (as it often seems) to frustrate scientists, but to confer the redundancy, flexibility and robustness that allow development and survival to continue in the face of environmental change. One group at Columbia University is now looking at the role played by these extra levels of regulation in age-related memory loss. The reason some people experience memory loss in old age and others don't may have nothing to do with which genes you have. Rather, the difference may lie in how and when your cells express those genes.

We have a storage problem. At the risk of repeating a decades-old factoid, the DNA contained within a single cell is around 2 metres long, while the average diameter of a human cell is 10 micrometres, giving a shortfall of space on the order of two hundred thousand. Somehow all that DNA has to fit inside the cell, and histone proteins are the contortionists that make that possible. By winding DNA around itself, then around histone proteins, then winding those around each other, then winding that again a few more times, our cells can cram in all the DNA necessary to code for everything that makes us human.

But now we have a new problem. If the code we need is in the middle of a tangled mess of other code, wrapped around bulky proteins that are then crammed together even further, how can that code be accessed? This is where epigenetics comes in. Epigenetics is a rather vague term used to describe a whole host of strategies used by cells to regulate the expression of genes. But why do cells have to regulate gene expression at all? And how does that relate to the problem of genetic storage?

Every single cell in your body* contains all of your genetic information. In other words, a single cell in your skin (or anywhere else, for that matter) contains all the information necessary to make any other cell in the body and, theoretically, could be reprogrammed to become any other cell. But a skin cell has no use for, say, the proteins used to send nervous impulses, and it can exploit this position of limited need to tackle the storage problem. Cells don't need to access the entire genetic code all of the time. There are things in there, for example, that are used when we're developing in the womb but have no function once we're out in the wide world. These genes, then, can be archived: set aside to be passed on to our offspring for their in utero development. By selecting which genes are buried away and which are kept close to the surface, ready to be decoded, the cell can perform efficiently and still house the entire human genome.
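(For the numerically inclined, here is the back-of-envelope arithmetic behind that two-hundred-thousand-fold figure; this minimal sketch just divides the two round numbers quoted above.)

```python
# Back-of-envelope check of the storage shortfall described above,
# using the round figures from the text: ~2 m of DNA per cell and
# a ~10 micrometre cell diameter.
dna_length_m = 2.0          # DNA per human cell, in metres
cell_diameter_m = 10e-6     # typical human cell diameter, in metres

shortfall = dna_length_m / cell_diameter_m
print(f"DNA is roughly {shortfall:,.0f} times longer than the cell is wide.")
# -> DNA is roughly 200,000 times longer than the cell is wide.
```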

[Figure: Epigenetic mechanisms. From http://www.beginbeforebirth.org/the-science/epigenetics]

The amount of control imparted by epigenetic mechanisms is only just beginning to be appreciated. Perhaps from fear of a return to Lamarckism, there was long a reluctance in the scientific community to attribute heritable changes to anything other than mutations in DNA. However, we now know that differences in phenotype can be the result of processes other than changes in genetic sequence. These epigenetic mechanisms have been shown not only to influence an organism's phenotype, but also to have the capacity to be inherited by offspring. That is to say, two organisms can have different phenotypes not because their genetic sequences are different, but because their parents regulated the expression of those genes in different ways. One highly visible example of this is the Agouti mouse, in which the coat colour of the offspring can be influenced by what the mother is exposed to during pregnancy. Expose the mother to bisphenol A (BPA) and her offspring are more likely to be yellow. Without BPA, they come out brown [1].

[Figure: Agouti mice. Modified from reference 1]

In this recent paper on memory loss [2], the authors wanted to find out what causes age-related memory loss and how it differs, if indeed it does differ, from Alzheimer's disease. Previous studies have suggested that Alzheimer's primarily affects the entorhinal cortex, a region of the hippocampal formation. In contrast, normal ageing (which is also associated with memory loss) involves changes in a neighbouring structure, the dentate gyrus [3]. With this in mind, the authors took post-mortem brain tissue from healthy people to look for differences between the entorhinal cortex and the dentate gyrus. They looked for changes in gene expression associated with age by measuring how much of each gene was being expressed in each brain region and matching expression level to the age of the donor. One difference they saw was in the dentate gyrus, which showed a large, age-related decrease in the expression of a histone-binding protein called RbAp48. Histones, remember, provide a scaffold for DNA and help to determine which genes are accessible and which are archived. This finding suggested that age-related memory loss may not be the result of a person having a defective gene, but rather the result of incorrect genetic archiving.

As is usual in this kind of study, they turned to a mouse model to look at this protein in more detail. By breeding mice unable to make RbAp48, they were able to show that the protein is necessary for normal memory: mice lacking RbAp48 performed worse on memory tests (navigating a maze or recognising an unfamiliar object). As mice get older, their memory appears to deteriorate on tests like these, and mice lacking RbAp48 experienced this deterioration at a younger age than mice with normal levels of RbAp48.

In the human brains, the decrease in RbAp48 wasn't seen in the area associated with Alzheimer's disease, suggesting that age-related memory loss has a distinct starting point and is not just an early sign of Alzheimer's. This could have important consequences for future diagnostics.
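(To make the expression-versus-age analysis concrete, here is a minimal sketch of that kind of comparison. To be clear, this is my own illustration, not the authors' actual pipeline, and the ages and expression values below are invented.)

```python
# Illustrative sketch of correlating one gene's expression with donor age.
# NOT the study's actual pipeline; all numbers below are invented.
from scipy.stats import linregress

ages = [33, 41, 49, 57, 62, 68, 74, 81, 88]        # donor ages (years)
expression = [9.1, 8.7, 8.2, 7.6, 7.4, 6.8,
              6.1, 5.9, 5.2]                       # hypothetical RbAp48 levels

fit = linregress(ages, expression)
print(f"slope = {fit.slope:.3f} units/year, r = {fit.rvalue:.2f}, p = {fit.pvalue:.4f}")
# A significantly negative slope would indicate an age-related decline,
# as the authors reported for RbAp48 in the dentate gyrus.
```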

The more we learn about epigenetics, the more obvious it becomes that there is more to go wrong than we thought. You not only need the right genes; you need the right control mechanisms in place to make sure you have the right amount of each gene product in each cell at all times throughout life. At the same time, we know that most people manage this, reflecting the amazing robustness of the system. Increasing our understanding of these control mechanisms has implications for treatment too. By looking at the underlying cause of a disease, we can treat it more effectively. This has been standard practice for decades in infection research, but may be applied more to other diseases in the future. For example, two patients presenting with fever and breathing difficulties will be tested for pneumonia. One may have a fungal infection and the other a bacterial infection. These need to be treated very differently, but only knowledge of the underlying cause can tell us how to treat each patient. Similarly, treatment may be very different for someone lacking a gene completely compared with someone who has the gene but in an inaccessible place. Both patients would have the same symptoms, but an analysis of the underlying causes could completely change the nature of the treatment. It is this sort of personalised diagnosis that could help provide the right treatment for each patient, which would not only help the patient recover more quickly, but could also reduce the amount of money wasted on ineffective treatments.

*There are a few notable exceptions. Red blood cells have no nucleus and so contain no genomic DNA. Egg and sperm cells carry half as much DNA as the rest of your cells, to make sure an embryo has the correct amount after fusion.

Jargon box

Histone: a type of protein used as a scaffold for DNA. DNA molecules wind themselves around histones to reduce the amount of space needed to house the genome.

Phenotype: the observable characteristics of an organism, from visible traits (e.g. hair colour) to cellular traits (e.g. cell shape or structure).

References

1)    Dolinoy et al. Maternal nutrient supplementation counteracts bisphenol A-induced DNA hypomethylation in early development. Proc Natl Acad Sci U S A. (2007) 104 (32): 13056–13061. Link. OPEN ACCESS!

2)    Pavlopoulos et al. Molecular Mechanism for Age-Related Memory Loss: The Histone-Binding Protein RbAp48. Science Translational Medicine (2013) 5 (200): 200ra115. Link.

3)    Small et al. A pathophysiological framework of hippocampal dysfunction in ageing and disease. Nat. Rev. Neurosci. (2011) 12: 585–601. Link. OPEN ACCESS!

How To Train Your Brain (Part II)

Can playing a game improve your cognitive abilities, or maintain them as you age? We learned from Erica Seigneur's post on August 15 that the evidence in the neuroscience literature is inconclusive. But a new paper in the September 5 issue of Nature claims a breakthrough (1). Dr. Joaquin Anguera and colleagues at UCSF trained older adults to multi-task with a custom-made video game called NeuroRacer and reported big improvements not just in multi-tasking but also in working memory and sustained attention.

How are their experiments different from those that reported no effect of brain-training games? Anguera and colleagues focused narrowly on improving multi-tasking in older adults to or above the level of multi-tasking ability found in younger adults. They designed NeuroRacer to have participants simultaneously drive a virtual car and respond to signs flashing on the computer screen. Both the driving and the responding to signs had many levels of difficulty. For each participant, the authors picked a difficulty level of driving and of responding that the participant could manage with 80% accuracy. They defined multi-tasking ability as the difference in accuracy between only responding to signs and responding to signs while driving, with a smaller difference indicating greater ability.

After these preparations, they measured baseline multi-tasking ability for participants aged 20 to 79 and found a linear decline with age. Then they trained a different group of participants, aged 60 to 85, with NeuroRacer for one hour three times a week for four weeks, adapting the difficulty levels as participants got better at the game. An active control group, also aged 60 to 85, played a version of NeuroRacer that alternated between driving and responding to signs without multi-tasking, but was led to believe that it was also training in multi-tasking. A passive control group from the same age range did not play NeuroRacer at all.

At the end of four weeks of training, both the experimental and the active control groups could multi-task better than passive controls, and the experimental group was better than active controls. About six months after training, the experimental group had lost some multi-tasking ability but was still better not only than both control groups but also than a group of 20-year-olds playing NeuroRacer for the first time. On the basis of these results, Anguera and colleagues declared success in using NeuroRacer to improve multi-tasking in older adults.
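(Here is that multi-tasking measure in miniature, as described above: the accuracy drop when driving is added. This is just an illustration of the definition; the accuracy numbers are invented, and the paper's own metric may be normalized differently.)

```python
# Toy illustration of the multi-tasking measure described above.
# Accuracy numbers are invented for illustration.
def multitasking_cost(signs_only_acc, signs_while_driving_acc):
    """Accuracy drop when driving is added; smaller = better multi-tasking."""
    return signs_only_acc - signs_while_driving_acc

young = multitasking_cost(signs_only_acc=0.80, signs_while_driving_acc=0.72)
older = multitasking_cost(signs_only_acc=0.80, signs_while_driving_acc=0.55)
print(f"young adult cost: {young:.0%}; older adult cost: {older:.0%}")
# -> young adult cost: 8%; older adult cost: 25%
```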

But did the participants actually improve their cognitive abilities, or just get really good at NeuroRacer? To address that, Anguera and colleagues put the participants they trained through more tests. They stuck electrodes to their scalps and measured electrical signals from the brain, called theta waves, that have been correlated with multi-tasking, sustained attention, working memory, and general cognitive control, which I interpret to mean a healthy brain. They asked participants to complete another video-game-based test called the Test of Variables of Attention (TOVA), which is commonly used to diagnose ADHD (2). From the results, they declared improved sustained attention. Though they also claimed improvements in working memory, they offered only the briefest description of their method for testing it in Supplementary Figure 12, and it wasn't sufficient for me to judge its merits. However, their measurements of theta waves are also supposed to support this claim. In all, Anguera and colleagues went to great lengths to demonstrate a general cognitive benefit of NeuroRacer for older adults.

But Anguera and colleagues themselves cite a Nature paper from 2010 by Dr. Adrian M. Owen and others (3) that tested many more participants with brain-training games similar to commercially available ones and reported no evidence of general cognitive benefit from their use. What's going on here? Anguera and colleagues point out that, unlike Owen and co-workers, who tested people from the general population, they trained members of a specific sub-population, older adults, on something in which they had a measurable impairment, i.e. multi-tasking. They also stressed that because NeuroRacer adapts its difficulty to the abilities of each user, it provides a consistent challenge and more effective training. Anguera's supervisor, Dr. Adam Gazzaley, co-founded Akili Interactive Labs to commercialize the concept behind NeuroRacer, so perhaps in a few years we will be able to test it for ourselves (4). In the meantime, let's set aside the question of benefit from video games and just appreciate how much fun they are.

Sources

1. Anguera J A et al. (2013). "Video game training enhances cognitive control in older adults." Nature. 501: 97-101. Paywall.
2. http://www.tovatest.com
3. Owen A M et al. (2010). "Putting brain training to the test." Nature. 465: 775–778.
4. http://www.ucsf.edu/news/2013/09/108616/training-older-brain-3-d-video-game-enhances-cognitive-control

Thomas Südhof and Richard Scheller receive 2013 Lasker Basic Medical Research Award

This morning, the Lasker Foundation announced the recipients of the 2013 Lasker Basic Medical Research Award. The prize went to Stanford professor Thomas Südhof and former Stanford professor (and current Executive VP of Genentech) Richard Scheller, for their work on the molecular machinery underlying the rapid release of neurotransmitters. Specifically celebrated are their discoveries of VAMP/synaptobrevin, synaptotagmin, syntaxin, and many additional components of the synaptic release machinery.

The Lasker Foundation concluded that:

By systematically exposing and analyzing the proteins involved in neurotransmitter release, Südhof and Scheller have transformed our description of the process from a rough outline to a series of nuanced molecular transactions. Their work has revealed the elaborate orchestrations that lie at the crux of our most simple and sophisticated neurobiological activities. (1)

The Lasker Basic Medical Research Award is given to scientists "whose fundamental investigations have provided techniques, information, or concepts contributing to the elimination of major causes of disability and death" (1). For more on the Awards, visit the Lasker Foundation Award Overview webpage.

For more on the groundbreaking work for which Südhof and Scheller received their award, visit the Lasker Foundation Award Description webpage.

An interview with Südhof and Scheller is also available for your viewing pleasure. Video of the award presentation and acceptance speeches will be available at the Lasker Foundation website after 2 p.m. on Friday, September 20, 2013.

Many congratulations to Drs. Südhof and Scheller!

Other Lasker Awards announced today were:

The Lasker-DeBakey Clinical Medical Research Award, given to Graeme Clark, Ingeborg Hochmair and Blake Wilson, for their development of the modern cochlear implant.

The Lasker-Bloomberg Public Service Award, given to Bill and Melinda Gates, for the work achieved through their foundation.


Sources

1. Evelyn Strauss, Albert Lasker Basic Medical Research Award Description.


Thinking about Thinking

Like most neuroscientists, I’ve often thought about consciousness. I’ve worried about free will. And then I’ve gotten goosebumps and given up when I realized that I was consciously, willfully thinking about how consciousness and free will are illusions. Michael Graziano of Princeton University, however, has doubled down and tried to formulate a coherent theory of consciousness. He calls it “Attention Schema Theory.” While it’s far from the only theory of consciousness out there, it’s intriguing enough to me to be worth further consideration here.

Before I describe Attention Schema Theory, let’s do a little preliminary thinking about thinking. Very little actual data exists that tells us much about the nature of consciousness – it is a hard problem (or, as philosopher David Chalmers put it, the hard problem) – but we do have a few things to work with.

Everyone feels that he or she is conscious.

When I write about consciousness, every single reader knows intuitively what I mean. To be reading and understanding this blog post, you've got to be conscious. Many philosophers, though, argue that we can only know for sure about our own consciousness. You could all be "zombies," automatons programmed cleverly to reply to my statements in an apparently conscious way.

We are predisposed to assume that others are conscious.

Despite the existence of the zombie theory, in everyday life most people assume other people are conscious. In fact, we go much further. We also often ascribe consciousness to animals (plausible in the case of animals with a reasonably complicated nervous system) as well as teddy bears, cartoon characters, and things with googly eyes stuck to them (even though we know, intellectually, that’s implausible). We even sometimes ascribe consciousness to computers – yelling at them when they break, pleading with them to do what we want, and being tricked into thinking they are human in (admittedly constrained) Turing tests. What about our own consciousness is so inclined to presume consciousness in others? Why do we tend to equate eyes and facial expressions with real emotions?

"Heavy on the Nose" via eyebombing.com

Consciousness is inaccurate.

Many discussions of consciousness focus on its definition as a state of awareness. Awareness, though, can be tricky. If I see a cat and think “a cat!” then I’m having a conscious experience of a cat, for sure. But if I see a crumpled rag and think, just for a moment, “a cat!” then did I just have a conscious experience of a cat? Basically, yes. Our consciousness is easily duped by illusions, which reveal that consciousness involves assumptions made by our brain that can be independent of sensory experience. The delusions suffered by some psychiatric patients offer a stark example. In the article that inspired this post, Michael Graziano describes a patient who knew he had a squirrel in his head, despite the fact he was aware it was an illogical belief and claimed no sensory experience of the squirrel. Another example of the inaccuracy of consciousness is one we can all experience. It’s called the “phi phenomenon.” Two dots flash on a screen sequentially. If the right timing is used, it appears that there’s only one dot, which moves from one place to the other. In other words, although the dot did not, in reality, slide smoothly across the screen from one place to the other, our consciousness inaccurately perceives motion. Daniel Dennett uses the phi example in his exposition of his “Multiple Drafts” model of consciousness.

Consciousness can be manipulated.

Self-awareness is not always what it seems. Humans are programmed to search for patterns and meaning, and we are naturally inclined to attribute causation to correlated events even when no such relationship exists. We are suggestible. We can become even more suggestible and less autonomous when hypnotized. In numerous psychology studies, researchers have described various ways of reliably manipulating participants' choices (for example, using subtle peer pressure). Most of the time, the participants are not even aware of the manipulation and insist they are acting of their own free will. In addition to being a state of awareness, consciousness is conceived of as a feeling of selfhood, a sense of individuality that separates you from the rest of the world and allows you to find meaning in the words "me" and "you." However, this feeling of selfhood can also be manipulated. Expert meditators such as Buddhist monks have trained themselves to erase this feeling of selfhood in order to experience a feeling of "oneness" while meditating. Brain scans of meditating monks don't provide a lot of detail on the mechanisms underlying "oneness," but they do suggest that the monks have learned to significantly alter their brain activity while meditating. A feeling of oneness can also be thrust upon you: Jill Bolte Taylor, the neuroscientist author of "My Stroke of Insight," describes a feeling of oneness and loss of physical boundaries as her massive stroke progressed. Hallucinogenic drugs such as LSD can also provoke feelings of oneness. Out-of-body experiences fall into the same category: they can often be induced by meditation, drugs, near-death experiences, or direct brain stimulation of the temporoparietal junction. Damage to the temporoparietal junction on one side of the brain results in "hemispatial neglect," in which a person essentially ignores the opposite side of their body and may even deny that that side of the body is part of their self.

-----------------------

Now, let's get back to Attention Schema Theory. What is this theory, and how does it help fit some of our observations about consciousness together? Is it a testable theory? Can it help drive consciousness research forward?

At the heart of Attention Schema Theory is an evolutionary hypothesis. It assumes that consciousness is a real thing, physically represented in the brain, which evolved according to selective pressure. The first nervous systems were probably extremely simple, something like the jellyfish "nerve net" of today. They were built to transduce an external stimulus into a signal within the organism that could be used to effect an adaptive action. The more information an organism could extract from its environment, the greater an advantage it had in surviving and reproducing in that environment, so many sophisticated sensory modalities developed.

But there's lots of information in the world: lights, sounds, smells, and more coming at you from every angle all the time. It can be overwhelming and distracting. How do you know which bits of information are actually important to your survival and which can be ignored? The theory is that some kind of top-down control network formed that enhanced the most salient signals (think: a sudden crashing sound, the smell of food, anything that you previously learned means a predator is nearby). From this control network came attention.

Attention allows you to focus on what's important, but how do you know what's important? Slowly, attention increased in sophistication. It went, for example, from always assuming the smell of food is attention-worthy to being able to decide whether it's attention-worthy by modeling your own internal state of hunger. If you're not hungry, it's not worth paying attention to food cues; finding shelter or a mate might be more important. According to Graziano, this internal model of attention is what constitutes self-awareness. Consciousness evolved so that you can relate information about yourself to the world around you in order to make intelligent decisions. But since consciousness is just a shorthand summary of an extremely complex array of signals, a little pocket-reference version of the self, it involves simplifications and assumptions that make it slightly inaccurate.
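(To make that progression concrete, here is a toy sketch of the two stages just described: bottom-up enhancement of the most salient signal, then re-weighting by an internal state model. This is entirely my own illustrative construction, not Graziano's model; the stimuli and numbers are invented.)

```python
# Toy sketch of the progression described above (my own construction,
# not Graziano's model): attention selects the most salient stimulus,
# and an internal state model (here, hunger) re-weights food cues.
stimuli = {"crashing sound": 0.9, "smell of food": 0.7, "distant light": 0.2}

def attend(stimuli, hunger):
    """Return the stimulus that wins attention after state re-weighting."""
    weighted = {name: salience * hunger if "food" in name else salience
                for name, salience in stimuli.items()}
    return max(weighted, key=weighted.get)

print(attend(stimuli, hunger=2.0))  # hungry: 'smell of food' (0.7 * 2.0 = 1.4)
print(attend(stimuli, hunger=0.1))  # sated: 'crashing sound' wins (0.9)
```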

[Figure: Attention Schema Theory at a glance, from selective signal enhancement to consciousness. Via the Graziano Lab website]

Consciousness isn’t quantal. Basic self-awareness is only the beginning. What about being able to visualize alternative realities? What about logical reasoning abilities? What about self-reflection and self-doubt? Graziano does not address all of the aspects of consciousness that exist or how they might have evolved, but he does go on to talk a bit about how consciousness informs complex social behaviors. If you’re living in a society, it helps to be able to model what other people are thinking and feeling in order to interact with them productively. To do this, you have to understand consciousness in an abstract way. You have to understand that your consciousness is only your perspective, not an objective account of reality, and that adds an additional level of insight and self-reflection into the equation.  It’s worth noting that there is a specific disorder in which this aspect of consciousness is impaired: autism.

-----------------------

Most of the appeal of Attention Schema Theory, to me, lies in its placement of consciousness as a fully integrated function of the brain. It doesn't suppose any epiphenomenal aura that happens to be layered on top of normal brain function but serves no real purpose. Instead, it says that consciousness is used in decision-making. It presents an evolutionary schema of why we might be conscious and also why we tend to attribute consciousness (especially emotions) to others. It explains, somewhat, why consciousness is inaccurate and malleable: it's not built to represent everything about the real world faithfully; it's just meant to be a handy reference schematic.

Attention Schema Theory isn’t entirely satisfying, though. It’s the outline of an interesting line of reasoning but not a complete thought. No actual brain mechanisms or areas are identified or even hypothesized. How is consciousness computed in the brain? I agree with Daniel Dennett that there’s no “Cartesian theater,” but there must be some identifiable principle of human brain circuit organization that allows consciousness. To move any theory of consciousness forward scientifically, we need a concrete hypothesis. But we don’t just need a hypothesis: we need a testable hypothesis. Without a way of experimentally measuring consciousness, the scientific method cannot be applied. Currently, our concept of consciousness stems only from our own self-reporting and, as mentioned above, the only consciousness you can really truly be sure of is your own.

Given the suppositions of Attention Schema Theory, though, there may be some proxies of consciousness we can study that would help us flesh out our understanding and piece together reasonable hypotheses. First, attention. Attention is by no means consciousness (I can tune a radio to a certain frequency but that doesn’t mean it’s conscious), but if consciousness evolved from attention then they should share some common mechanisms. Many neuroscientists already study attention, but they may not have considered their research findings in light of Attention Schema Theory. Perhaps there are already some principles of how brain circuits support selective attention that could be adapted and incorporated into Graziano’s schema. If consciousness really evolved from attention, then there should exist some “missing links,” organisms that display (or displayed) transitional states of consciousness somewhere between rudimentary top-down mechanisms for directing attention and the capacity for existential crises. Can we describe these links?

Second, theory of mind. Theory of mind is our ability to understand that other minds exist that may have different perspectives than our own. Having theory of mind should require a sophisticated version of consciousness, but the absence of theory of mind does not imply a lack of consciousness. You don’t need to be aware of others’ minds to be aware of your own. Most children with autism fail tests of theory of mind, but are still clearly conscious beings. Still, theory of mind and consciousness should be related if Graziano is right, and we know a few things about theory of mind. Functional imaging studies point towards the importance of the anterior paracingulate cortex as well as a few other brain areas in understanding the mental states of others. “Mirror neurons,” neurons that respond both when you perform an action and when you watch someone else perform that same action, have been discovered in the premotor cortex of monkeys, and some have argued that monkeys and chimpanzees have theory of mind.  If they do, then we’d at least have a potential animal model to pursue further neurophysiological research (though the ethics of such research could be thorny). There is very little evidence to support theory of mind in lower mammals such as rats. However, in that case comparative anatomy studies of theory of mind-related brain areas identified by functional imaging could be informative. We already know of one interesting mostly hominid-specific class of neurons that exist in suggestive cortical areas (such as anterior cingulate cortex, dorsolateral prefrontal cortex, and frontoinsular cortex): spindle neurons, also known as von Economo neurons (actually, these neurons can also be found in whales and elephants!). Unfortunately, we have no idea what these neurons do, yet. Further studies of von Economo neurons could tell us about the mechanisms underlying theory of mind and, by extension, consciousness. Maybe.

[Figure: Location of von Economo neurons. Via Neuron Bank]

I’ll be curious to see where Graziano goes with his Attention Schema Theory. It is, at the very least, a bold attempt at answering a question that has vexed humanity through the ages. I wonder, though, whether the question can ever be answered. Perhaps you are now inspired to go out and do some awesome research. I, for one, am getting goosebumps again, so I think I’ll take a break.