Ask a Neuroscientist! - What is the synaptic firing rate of the human brain?


A couple of days ago, we received an email from a high school student named Joseph. Joseph, having spent some time trawling the net and his library, found himself with no answer to the question, "How many synaptic fires [sic] are there (in a human brain) per second?"

An edited version of my response to the question appears below. In my response, I break down the neural complexity that makes answering Joseph's question extremely difficult. I then totally ignore that complexity in order to produce two firing rate ranges: neuronal and synaptic.

Do you have an additional mathematical solution to Joseph's question? A philosophical objection to the idea of quantifying the human brain in terms of synaptic firing rates? The comments section is where it usually is. 

What is the synaptic firing rate of the human brain?

I'd like to be able to provide a single number, but in reality the brain is complex enough that it's difficult to come up with one figure that describes the firing rate of the entire brain.

But let's explore the question a bit.

First, I'm going to simplify the question by looking at neuronal firing rates, rather than synaptic firing rates. So from that starting point, if we wanted to be really simplistic about the whole thing, we could estimate a firing rate. Such a calculation might go something like this:

Current estimates of the number of neurons in the human brain are around 86 billion (see Bradley Voytek's discussion of how scientists extrapolate this number, and also this article on the estimate). Multiply 86 billion by the "average firing rate" of an individual neuron.

Now, the firing rate of an individual neuron can vary quite a bit, and the firing rates of different types of neurons are also extremely variable. This variance depends in part on the intrinsic properties of the neurons (some are tuned to fire very, very quickly - 200+ Hz - whereas some neuron types prefer to fire more slowly, below the 10 Hz range). A lot of the variance also depends on what the brain is doing. For example, neurons in the visual system may be practically silent in the dark (or during sleep), but will fire very fast when visual information is coursing into the nervous system from the eyes. And the exact rate of firing in visual neurons will depend on the properties of the visual stimulus (how bright it is, how fast it moves, what color it is). Similarly, neurons in your hippocampus, a brain structure important for memory and spatial navigation, may fire quickly as you walk around your room, but may be relatively quiet as you sit in front of a computer reading this email.

All this variability is what makes it so hard to estimate the firing rate of a human brain at any given second.

But, if you press me for a back-of-the-envelope calculation, I'd say the best way to estimate the firing rate of a neuron is to come up with a potential range. Now, there has probably been a bunch of research on the distribution of firing rates within various cell populations, and quite frankly, I'd only really believe a rate in the context of a particular activity you are interested in (rates can change dramatically between passive sitting and active participation in a task). But generally, the range for a "typical" neuron probably runs from <1 Hz (1 spike per second) to ~200 Hz (200 spikes per second).

To ruthlessly simplify, treating all 86 billion neurons in the human brain as copies of a single "typical" neuron, and ignoring all of the glorious cellular specificity that characterizes the brain, we're left with a range of 86 billion to 17.2 trillion action potentials per second.

Let's go back to the question of synaptic firing rates. Even though an action potential produced in a neuron is not guaranteed to produce release of neurotransmitter at a synapse, let's ignore that point and assume the opposite. I've seen people quote a minimum number of synapses as 100 trillion (although I'm not clear where that number came from). So, let's do our math again: 100 trillion synapses, each with an independent firing rate range of <1 Hz to ~200 Hz. That gives a range of 100 trillion to 20 quadrillion synaptic events per second.
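Since we're just doing arithmetic, here is the whole back-of-the-envelope calculation as a few lines of Python (my own toy sketch of the simplification above, nothing more):

```python
# Toy sketch of the back-of-the-envelope ranges discussed above.
NEURONS = 86e9                   # ~86 billion neurons
SYNAPSES = 100e12                # ~100 trillion synapses (the oft-quoted minimum)
RATE_LO, RATE_HI = 1.0, 200.0    # "typical" firing-rate range, in Hz

def events_per_second(units):
    """Events per second if every unit fires at the same 'typical' rate."""
    return units * RATE_LO, units * RATE_HI

print("action potentials/s: %.3g to %.3g" % events_per_second(NEURONS))   # 8.6e10 to 1.72e13
print("synaptic events/s:   %.3g to %.3g" % events_per_second(SYNAPSES))  # 1e14 to 2e16
```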

Again, and I really cannot stress this enough, these numbers do not reflect what actually goes on in a human brain in any given second. The actual firing rate depends so much on what the brain is doing at that moment that back-of-the-envelope calculations such as the ones I just wrote down are (in my opinion) absolutely meaningless. But for what it's worth, there they are. And if these numbers at least give us a range, you can imagine the sheer computational power that would be required to record all the neurons in the human brain.

Ask a Question!

If you have a question for one of our neuroscientist contributors, email Astra Bryant at stanfordneuro@gmail.com, or leave your question in the comment box below.


Astra Bryant

Astra Bryant is a graduate of the Stanford Neuroscience PhD program in the labs of Drs. Eric Knudsen and John Huguenard. She used in vitro slice electrophysiology to study the cellular and synaptic mechanisms linking cholinergic signaling and gamma oscillations – two processes critical for the control of gaze and attention, which are disrupted in many psychiatric disorders. She is a senior editor and the webmaster of the NeuWrite West Neuroblog.

Perineuronal Nets, aka Golgi vs Cajal (Round 2)


I'm going to tell you a story about the perineuronal net. The what, now? I hear (some of) you cry.

If neurons and glia are the plants of our brains, the extracellular matrix is the trellis upon which those plants grow and intertwine. The perineuronal net is a specialized portion of the extracellular matrix, surrounding (primarily) the soma and proximal dendrites of parvalbumin-positive interneurons. The mesh-like structure of the perineuronal net, its holes accommodating synaptic contacts onto the embedded neurons, appears critical for forming and stabilizing synapses.

The appearance of the perineuronal net coincides with the closing of critical periods; enzymatic breakdown of the perineuronal net can reinstate ocular dominance plasticity in adult animals. (This flavor of plasticity can usually only be triggered in juveniles.) Also, degrading the perineuronal net allows extinction training to fully eliminate fear conditioning in adult animals, a feat usually only possible in juvenile animals (adults respond to extinction training with only a temporary inhibition of fear responses) (1). Thus, many independent studies implicate the perineuronal net as a negative regulator of plasticity. The net stabilizes synapses, preventing unwanted change within established brain circuits.

Interestingly, the perineuronal net appears damaged following status epilepticus, a prolonged seizure event that commonly triggers epileptogenesis, and is followed by axonal sprouting and enhanced synapse numbers within the hippocampus (1). The loss of the perineuronal net may establish a permissive environment for the widespread synaptic reorganization that occurs during temporal lobe epileptogenesis.

So to summarize: the perineuronal net, and its parent structure, the extracellular matrix, may be important for establishing the synaptic stability that maintains the delicate interconnections of the nervous system.

So, the story.

Epic Science Battles of History: The Aftermath

The first thing you need to understand is that it took until the 1980s for the perineuronal net to be accepted as an interesting, important, and in fact existing, structure, despite the first published description appearing almost 100 years earlier, in 1898.

Why the long delay?

Turns out the perineuronal net was a victim of the fallout of probably the most famous science fight in neuroscience.

That's right, this story involves Santiago Ramon y Cajal and Camillo Golgi.

Cajal and Golgi are, of course, well known for their decades-long disagreement over whether the nervous system is a continuous network (the reticular theory, Golgi's view) or composed of distinct cells (the neuronal doctrine, the correct answer). The controversy between the reticularists and the proponents of the neuronal doctrine would have been ongoing in 1898, when Golgi presented his observations of the perineuronal net, "a continuous envelop that enwraps the body of all the nerve cells extending to the protoplasmic prolongements up to second and third order arborizations" (2). Contemporaries of Golgi followed up on his initial observations (most notably: Donaggio, who described the filamentous pattern within the net; Bethe, who differentiated between perineuronal nets and the more diffuse extracellular matrix; and Held, who proposed shared components, and a glial origin, for the perineuronal and diffuse nets) (2).

But this initial period of study was brought to an abrupt halt by the entrance of Ramon y Cajal onto the scene.

Cajal was of the opinion that the perineuronal nets observed by Golgi (and everyone else) were nothing more than a coagulation artifact produced during the staining process that bears Golgi's name.

According to Carlo Besta (neurologist and psychiatrist), Cajal's victory on the subject of the neuronal doctrine made his word automatically superior to that of Golgi, Held, Bethe and Donaggio. "It has been sufficient that Cajal claimed that [perineuronal] and diffuse nets were an artifact ... and most of the scientific world took no further interest in the matter" (3).

Although a few individuals continued to investigate the perineuronal net (including G.B. Belloni, who observed structural changes in both perineuronal and diffuse nets in humans suffering from dementia, gliosis and psychiatric diseases), in general, research stagnated until the 1980s, when the advent of better staining techniques made it possible to reveal perineuronal nets as real structures, and not mere artifacts of an imperfect staining technique (2).

The modern study of perineuronal nets is still in its infancy (as is, for that matter, all of neuroscience), but as the previous section attests, several studies have hinted at a role in restricting plasticity and maintaining a stable environment for neuronal (and glial) function.

Which brings me to the final note of this post, which also happens to be the trigger of my recent readings into the history of the perineuronal net.

A Modern "Prophecy"

Back in June, a lab mate of mine passed around a PNAS article with a provocative title, and an attention-grabbing author.

The article? "Very long-term memories may be stored in the pattern of holes in the perineuronal net." By (Nobel Prize Winner) Roger Tsien.

So, I'm not going to do a full analysis of Tsien's article, which reads more like an R01 grant application than anything else. His basic thesis is based on the assumption that long-term memory storage within the human brain necessarily involves a long-term molecular substrate. Tsien identifies the molecules that make up the perineuronal net as likely candidates for the molecules that encode our long-term memories. He steps beyond the more common supposition that the perineuronal net is a permissive structure for synaptic stability, claiming that "very long-term memories are stored in the pattern and size of holes in the PNN [perineuronal net]…" (4). Lest we confuse his proposal with the more common understanding of the function of the perineuronal net, Tsien writes: "reviews on the PNN propose permissive, supportive roles… analogous to the importance of insulation on the wiring inside a computer: essential for function but not where bytes are dynamically stored." (4) Tsien maintains that the perineuronal net is the storage device for long-term memories, the location where "bytes are dynamically stored."

Tsien's hypothesis, which he compares to Watson and Crick's theory of DNA, is severely lacking in experimental evidence. Hence the PNAS article, in which Tsien describes experiments he believes will test his hypothesis. Having read the article abstract-to-bibliography multiple times, I remain unconvinced that the proposed experiments would be sufficient to support Tsien's theory. Will the experiments prove insightful? Does the perineuronal net directly encode bytes of long-term memory? We may have to wait another 100 years to find these answers, as Tsien seems to have no plans to actually conduct the experiments he proposes. Instead, he hopes that other scientists will use his PNAS paper as a roadmap for future experiments. Extending his Watson and Crick metaphor, he calls for the Rosalind Franklins of the world to supply him with the experimental data his hypothesis demands; "Perhaps, in a few years, at least one prophecy can be vindicated" (4). As someone who has, uh, heard of Franklin, I wonder if Tsien realizes what a raw deal he is proposing for his fellow scientists.

Sources

  1. McRae and Porter (2012). The perineuronal net component of the extracellular matrix in plasticity and epilepsy. Neurochemistry International 61: 963-972.

  2. Vitellaro-Zuccarello, De Biasi, Spreafico (1998). One hundred years of Golgi's "perineuronal net": history of a denied structure. Ital J Neurol Sci 19: 249-253.

  3. Besta C (1928). Dati sul reticolo periferico della cellula nervosa, sulla rete interstiziale diffusa e sulla loro probabile derivazione da particolari elementi cellulari [Data on the peripheral reticulum of the nerve cell, on the diffuse interstitial net, and on their probable derivation from particular cellular elements]. Boll Soc It Biol Sper 3: 966-973.

  4. Tsien (2013). Very long-term memories may be stored in the pattern of holes in the perineuronal net. PNAS 110(30): 12456-12461.

 



One-track mind

I promised myself so faithfully that I would write about neuroscience. Or at least neural-immune interactions. But it’s tricky to think about other topics when something exciting happens in vaccine research and there’s this public outlet just sitting there asking for science news… So here follows a blog post about vaccines with the crowbarred excuse that cerebral malaria is a thing that happens.

Vaccines work. In terms of public health benefits, I’d put vaccination up there with antibiotics and soap. This graphic from Leon Farrant [1] earlier in the year gives us a clear idea of just how effective the vaccination programmes of the 20th century have been. But we don’t hear much these days about new vaccines coming onto the market. One reason for this is that contemporary vaccine researchers have been left some tough nuts to crack. The viruses, bacteria and parasites still causing significant disease have evolved myriad ways to evade not only the natural immune system, but also traditional vaccination strategies. Gone are the days when scientists like Maurice Hilleman could develop multiple vaccines in a single career but, with a perseverance verging on tenacity, modern vaccine researchers are beginning to chip away at these remaining problems.

Last week saw the publication of a new vaccine trial performed by Robert Seder’s group at the National Institute of Allergy and Infectious Disease, describing an important step in the development of a malaria vaccine.

 

Malaria is one of the toughest nuts around. This parasite has a long evolutionary history with humanity, and so has intimate infective knowledge of the human body. Its lifecycle is not only complicated, but involves several different stages (analogous to eggs, larvae and adults), all of which look different and live in different bodily tissues, not to mention different hosts.

[Figure: the malaria parasite's lifecycle]

Just look at how complicated this little beast is! This kind of infection makes life very difficult for the immune system. Whether in response to an infection or a vaccine, white blood cells rely on the fact that infectious agents look very different from humans, and that the same infectious agent infecting for a second time will look the same as it did the first time. Like a master of disguise changing outfits and donning false moustaches, parasites can change their surface proteins every few days, leaving the immune system far behind in its attempts at recognition. Not only that, but they’ll squirrel themselves away inside different cells to avoid even coming into contact with white blood cells.

So how do we get ahead of this sneaky beast? Seder’s group think they have the answer [2]. Or at least one potential answer. Instead of trying to remove the parasite once it has a foothold in the body, their PfSPZ vaccine targets sporozoites – that is, the parasite as it looks when it’s first injected into the bloodstream by a mosquito. PfSPZ stands for Plasmodium falciparum sporozoite. P. falciparum is one of the most common strains of malaria and is used in malaria research because (in a very rare move) the CDC allow the deliberate infection of humans with this strain in a model known as controlled human malaria infection (CHMI). Here, scientists get to test malarial vaccines in healthy humans by deliberately infecting them with malaria after vaccination and waiting to see if they get sick. To prevent full-blown disease, subjects are treated with approved anti-malarials after the test whether they get sick or not, but they usually do.

Up to now, malaria vaccines have proved broadly unsuccessful. The only known method providing robust protection to date involved leaving the task of immunisation to the mosquitoes. In that study [3], conducted by the US military, scientists took mosquitoes infected with malaria, exposed them to radiation to render the parasite uninfectious and then let them bite human subjects. Over 1000 times. After 1000+ bites, over the course of up to 10 years, people were protected from subsequent CHMI.

In light of the obvious difficulties and objections to this, Seder and his colleagues have now refined the original technique. It still involves mosquitoes infected with malaria, and those mosquitoes are still irradiated to knock out the parasite. But now, harmless parasites from several thousand mosquitoes can be isolated, purified and injected into people in a faster, more controlled, and less uncomfortable way. After one injection (rather than 1000 mosquito bites) the subjects are protected from subsequent CHMI. One of the things that makes this new study stand out is that the vaccine tested fully protected everyone who was given the highest dose. That is to say that the vaccine protects everyone, even when they’re deliberately injected with infectious malarial parasites.

This is exciting news for anyone in the vaccine field, but it still comes with some caveats. The participants in this trial were infected with malaria just 3 weeks after their last immunisation, so we have no idea how long protection will last. The immunisation itself is also an issue, because it only works if you inject it directly into the bloodstream (intravenously). Most vaccines are given into the muscle or skin, which is a relatively unskilled procedure. Intravenous injection requires more expertise and comes with more risk, so rolling out an intravenous vaccine to areas with poor infrastructure and limited skilled medical professionals will be tough. Having said all that, and taking into account yet more caveats given by the authors themselves, this really is exciting news. Malaria kills 0.5-1 million people every year, 86% of whom are children under 5 [4]. This vaccine may have its limitations, but it’s worth a shot.

 

References:

[1] http://www.behance.net/gallery/Vaccine-Infographic/2878481

[2] http://www.sciencemag.org/content/early/2013/08/07/science.1241800.full

[3] http://jid.oxfordjournals.org/content/185/8/1155.full (open access)

[4] http://www.who.int/malaria/publications/world_malaria_report_2012/wmr2012_no_profiles.pdf

Why most neuroscience findings are false, part II: The correspondents strike back.


In my May post for this blog, I wrote about a piece by Stanford professor Dr. John Ioannidis and his colleagues, detailing why, as they put it, "small sample size undermines the reliability of neuroscience." [See previous blog post: Why Most Published Neuroscience Findings are False] As you might imagine, Ioannidis's piece ruffled some feathers. In this month's issue of Nature Reviews Neuroscience, the rest of the neuroscience community has its rejoinder.

Here is a brief play-by-play.

Neuroscience needs a theory.

First up: John Ashton of the University of Otago, New Zealand. He argues that increasing the sample size in neuroscience is not the most important problem facing the analysis and interpretation of our experiments. In fact, he says, increasing the sample size just encourages hunting around for ever-smaller and ever-less-meaningful effects. With enough samples, any effect, no matter how small, will eventually pass for statistically significant. Instead, he believes neuroscientists should focus on experiments that directly test a theoretical model. We should conduct experiments that have clear, readily falsifiable hypotheses and some predictable effect size (based on the theoretical model). Continuing to chase after smaller and smaller effects, without linking them to a larger framework, he argues, will cause neuroscience research to degenerate into "mere stamp collecting" (a phrase he borrows from Ernest Rutherford, who believed that "all science is either physics or stamp collecting").
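Ashton's statistical point is easy to demonstrate. Here is a minimal simulation (my own illustration, not anything from the commentary), in which a difference of one-twentieth of a standard deviation becomes "significant" once the samples get large enough:

```python
# A trivial effect (0.05 standard deviations) passes p < 0.05 once n is big enough.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
tiny_effect = 0.05
for n in (100, 10_000, 1_000_000):
    a = rng.normal(0.0, 1.0, n)            # group 1
    b = rng.normal(tiny_effect, 1.0, n)    # group 2, shifted by a tiny amount
    t, p = stats.ttest_ind(a, b)
    print(f"n = {n:>9,}   p = {p:.3g}")
# p shrinks as n grows, even though the effect stays just as tiny.
```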

Ioannidis and company reply by first agreeing that having a theoretical framework and a good estimate of effect size would be great, but noting that these ideals are not always attainable. They also state that sometimes very small effects are meaningful, as in genome-wide association studies, and that larger sample sizes will provide better estimates of those effect sizes.

“Surely God loves the 0.06 nearly as much as the 0.05”

Next up: Peter Bacchetti of the University of California, San Francisco. Like Ashton, Bacchetti believes that small sample size is not the real problem in neuroscience research. He identifies yet another issue in our research practices, however, arguing that the real problem is a blind adherence to the standard of p = 0.05. Dichotomizing experimental findings into successful and unsuccessful bins (read: publishable and basically unpublishable bins) based on this arbitrary cutoff leads to publication bias, misinterpretation of the state of the field, and difficulty generating meaningful meta-analyses (not to mention the terrible incentive placed on scientists to cherry-pick the data, experiments, animals, and analyses that "work").
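To see why that cutoff distorts the literature, consider a quick simulation (again my own sketch, not from the correspondence): many small studies of the same modest effect, of which only the "significant" ones get published.

```python
# If only p < 0.05 results are published, published effect sizes are inflated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
true_effect, n, n_studies = 0.2, 20, 10_000
published = []
for _ in range(n_studies):
    a = rng.normal(0.0, 1.0, n)              # control group
    b = rng.normal(true_effect, 1.0, n)      # treated group, modest true effect
    if stats.ttest_ind(a, b).pvalue < 0.05:  # the arbitrary cutoff
        published.append(b.mean() - a.mean())

print(f"true effect: {true_effect}")
print(f"mean published effect: {np.mean(published):.2f} "
      f"({len(published)} of {n_studies} studies 'worked')")
```

The studies that clear the cutoff report an average effect far larger than the truth: the winner's curse, in a dozen lines.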

Ioannidis and colleagues essentially agree, saying that a more reasonable publication model would involve publishing all experiments’ effect sizes with confidence intervals, rather than just p-values. Since this "would require a major restructuring of the incentives for publishing papers" and "has not happened," however, Ioannidis and company argue that we should instead fix a tractable research/analysis problem and do our experiments with more reasonable sample sizes.

Mo samples mo problems.

Finally: Philip Quinlan of the University of York, UK. Quinlan cites a paper titled "Ten ironic rules for non-statistical reviewers" to make the argument that small sample size studies really aren't so bad after all. Besides, he says, experiments that require a large sample size are just hunting for very small effects.

Ioannidis and company essentially dismiss Dr. Quinlan entirely. They respond that underpowered studies will necessarily miss any effect that is not truly huge. Larger studies allow a more precise estimation of effect size, which is useful whether the effect is large or small. Finally, what constitutes a "meaningful" effect size is often not known in advance; such an assessment depends entirely on the question and the data already at hand.

There you have it, folks! If you have any of your own correspondence, feel free to post it in the comments section.

The Nature Reviews Neuroscience Commentaries

Commentary by John Ashton

Commentary by Peter Bacchetti

Commentary by Philip Quinlan

Response by Button et al.

Art Exhibit Extravaganza 2013: a postdoc appreciation week event

An email soliciting submissions for an upcoming visual arts exhibit, in celebration of Postdoc Appreciation Week, was recently sent to the NeuWrite West mailing list. The event sounds like fun, and I know we've got some talented postdocs out there, so I've reposted the message in its entirety below.


Dear Postdocs,

The SUPD Art Committee is organizing a Visual Art Exhibit for the postdoc community, “Art Exhibit Extravaganza 2013”. Postdocs (plus significant others) from all Stanford Schools and affiliated institutions are encouraged to apply.

This is a unique opportunity for you to share your artistic talent with the postdoc community at large and to expand your horizons.

The SUPD Art Exhibit Extravaganza will keep you entertained during Postdoc Appreciation Week (September 16-20) at the Lorry I. Lockey Stem Cell Research Building. It will feature paintings/drawings, photography/industrial design, sculpture/pottery and mixed-media works. There is no specific theme, so be creative!

Attached is the flyer for the event. If you are interested in submitting your artwork, please click on the link below and fill out the online submission form. The deadline for submission is Friday, August 23rd. Email the SUPD Art Committee at SUPDART@gmail.com if you have any questions.

What are you waiting for? Apply now… Online submission link

Sincerely, SUPD Art Committee

Ermelinda Porpiglia, Jun Yan, Luqia Hou, Ramon Whitson, Van Dang, Viola Caretti, Antoine de Morree, Catherine Gordon



Ask a Neuroscientist: How to Train Your Brain

In this edition of Ask a Neuroscientist, we’ll answer two questions that address a similar principle: can you train to have a better brain?

The first question comes from Allyson Thomley, who writes:

“I am an elementary science teacher seeking to reach a better understanding of how the brain works. As a novice, it has been difficult to sort out the pseudoscience from valid, data-supported information. Sadly, there is a great deal of misinformation circulating amongst teachers who are genuinely trying to incorporate brain research into their practice.

One such claim that I have come across more frequently has to do with exercises that 'cross the midline.' It is suggested that by engaging in activities in which the right arm or leg is crossed over to the left side, connections between the right and left hemispheres of the brain are strengthened. Any grains of truth here?”

This idea appears to have originated with (or at least to be most heavily propagated by) Paul and Gail Dennison and their commercial learning program, Brain Gym. They call their program “educational kinesiology,” and claim that engaging in activities that “recall the movements naturally done during the first years of life when learning to coordinate the eyes, ears, hands, and whole body” can dramatically improve concentration and focus, memory, academics, physical coordination, relationships, self-responsibility, organization skills, and attitude.

Those are quite extraordinary claims, and as the saying goes, extraordinary claims require extraordinary evidence, of which they provide little to none. In fact, there are no peer-reviewed, controlled studies testing whether or not these exercises do anything at all. All of the papers they use to support their claims are self-published in the journal The Brain Gym Global Observer. On their website, they address why there are no peer-reviewed articles supporting their claims, explaining that because a scientific study would require that some students receive the Brain Gym training (the experimental group) and some receive no training or a different kind of training (the control group), it would be unethical to deprive some students of the Brain Gym training.

Any study like this would only last a few weeks or a few months at most, so this excuse is pretty weak, and it is a huge red flag with regard to the validity of their claims. That being said, we can’t completely rule out the general idea that engaging in crossing-the-midline exercises has a positive effect on learning, because the idea has not been rigorously tested.

The underlying science – that performing an activity that simultaneously engages both cerebral hemispheres can improve cognition – does appear to be true. The best-studied example of this is musicians who began training during early childhood. Neurons on either side of the cortex send axons across the midline, which then make synapses with neurons on the other side. The axons are covered in a white substance called myelin, which acts as an insulator, protecting the electrical communication between neurons from leakage and increasing the speed at which the signal can travel down the axon. This collection of axons crossing the midline is called the corpus callosum, and research has shown that the corpus callosum is larger in early-trained musicians compared to late-trained musicians and nonmusicians, especially if the training began before the age of 7.

The hypothesis is that because musical training involves the coordination of multiple modalities – i.e. taking visual and auditory input (reading and listening to music, respectively) and coordinating it with motor output (playing the instrument) – the connections between these brain areas become stronger and tighter, resulting in better sensorimotor integration. And indeed, early-trained musicians have better spatial and verbal memory, attention, and mathematics skills, and perform better on other tasks involving the integration of multiple sensory and motor inputs. You can find a nice review on the topic here: The Musician's Brain.

So, while the Brain Gym technique does not seem like a good candidate, encouraging your students to learn an instrument could go a long way in improving their cognitive functions. Unfortunately, adults who learn an instrument do not see the same improvements.

Our second question comes from Kelly Bertei, who asks:

“Does playing games to improve working memory work? If so, since my brain is only so big, would other parts of my brain reduce in functioning to accommodate for increases in working memory?”

The literature on this is very mixed – some reports show that these games can lead to increased working memory and other measures of cognitive function, whereas other studies show no difference in performance.

For example, in a paper published earlier this year in the online journal PLOS ONE, researcher Rui Nouchi and colleagues asked 34 volunteers to play either the brain training game Brain Age (which the authors created and profit from, it should be noted) or Tetris. They played for 15 minutes a day, 5 days a week, for 4 weeks. The participants were tested on cognitive performance before and after the training period. Interestingly, both groups performed better after the training than before, and the Brain Age group showed greater improvements in executive function, working memory, and processing speed compared to the Tetris group, while the Tetris group showed greater improvements in attention and visuo-spatial ability.

So these results seem to support the idea that brain training exercises can improve some aspects of cognitive function. However, another paper published in 2013 in the journal Computers in Human Behavior (which is a real journal, and actually looks pretty awesome) showed no improvement in cognitive function after 3 weeks of training. In this study, volunteers were asked to play either Brain Age, Dr. Kawashima’s Brain Training (a game they designed themselves), Phage Wars (an online strategy game), or no game at all. They were tested on cognitive performance before training, immediately after training, and a week after training had ceased. Most of the groups showed no significant difference in performance, positive or negative, across all time points, the one exception being the Phage Wars group, who performed significantly worse in the follow-up test than they did immediately after the training period.

Those are only two papers; there are many more out there, some showing that these brain training games do improve cognition, and some showing that they do not. Basically, science hasn’t figured this one out yet.

Lest you think there is nothing you can do to make your brain work better, there is one activity that has been shown to improve working and long-term memory, improve mood, stave off dementia in old age, and, in general, make your brain and body happy – cardiovascular exercise. Exercise triggers a molecular cascade in the brain that ultimately results in an increase in synaptic plasticity, that is, the ability of a synapse to strengthen or weaken in response to stimuli. This, in turn, is believed to improve learning, memory, and other forms of cognition.

Exercise also results in an increase in the birth of new neurons in a part of the brain important for learning and memory called the hippocampus. Which brings me to the second part of your question: whether improving memory would result in a decrease in function of another brain area. Cardiovascular exercise does in fact increase the volume of the hippocampus by about 2%, and it is reasonable to assume that this would draw resources away from another brain area. But as we saw with the early-trained musicians, enlarging one brain structure could result in better functioning of neighboring regions as the new neurons make more connections. It’s unknown what the limits of this are, though, and as far as I can tell, no one has gone looking for deficits in other brain regions following the increase in hippocampus size, so it’s definitely possible.

Now let’s all go for a run!

If you have a question for one of our neuroscientist contributors, email Astra Bryant at stanfordneuro@gmail.com, or leave your question in the comment box below.

Studying Sleep the High-Throughput Way


“Sleep remains one of the least understood phenomena in biology,” reads the first sentence of a recent review in Cell (1). Though humans spend a third of their life sleeping, neurobiologists don’t really understand how or why we sleep. The scientific method proceeds from a hypothesis, and formulating a hypothesis requires some initial information, which is not really there for sleep. What can scientists do to gather this initial information?

A favorite approach of molecular biologists over the past forty years has been the high-throughput screen, a fancy term for trying a bunch of things and seeing if they affect the process of interest. Until recently, it was impractical in neurobiology because it was too hard to collect and analyze the required large amounts of data. However, advances in computing power and the availability of a certain device called the Zebrabox, which I’ll explain later, made it possible for Jason Rihel and colleagues to apply the high-throughput screen to the neurobiology of sleep (2). To paraphrase from that paper’s abstract, the authors set out to find new drugs that affect sleep and to discover whether any known proteins have a previously unknown effect on sleep. I am not a sleep expert or even a neurobiologist, but my background in systems biology has taught me a thing or two about high-throughput screens. Below I will explain what makes a good high-throughput screen, what Rihel and colleagues accomplished, and what they could have done better.

A good high-throughput screen generates hypotheses. It fills the initial void of knowledge with information that can be used to perform more targeted experiments. To perform a screen, biologists set up an experiment that reduces what they are studying to some measurement, change one thing in their experiment and make the measurement, change another thing and make the measurement, and repeat this hundreds or even thousands of times. To keep themselves from going crazy, they try to set up a simple measurement, so that repeating it ad nauseam is not too tedious, time-consuming, or labor- and resource-intensive. However, the measurement shouldn’t be so simple that it no longer relates back to what is being studied.

A classic example of a smashingly successful screen is the work of Lee Hartwell and colleagues on cell division cycle mutants in budding yeast in the 1970s (3). They were studying how cells divide. A yeast cell assumes a sequence of distinctive shapes as it divides, so they reduced cell division to whether a cell has a normal or abnormal shape. The things that they were changing were genes. By mutagenizing yeast, examining the shape of the resulting cell, and then mapping the mutant locus, they discovered many genes that affect cell shape. One of their hits, named cdc28, was revealed by subsequent targeted experiments to be the master regulator of cell division in all eukaryotic model organisms, and it is intensively studied to this day. Without the screen for yeast mutants of cell shape, no one might ever have connected this particular gene with the cell division cycle.

What did Rihel and colleagues do in their screen? First, they defined sleep simply enough to make it amenable to screening. Until a decade ago, the scientific definition of sleep was an altered state of electrical activity in the brain, as measured by sticking electrodes onto the scalp (1, 4, 5). Sticking electrodes to scalps is fine for making a handful of measurements, but doing it enough times for a screen is impractical, especially since the animals involved - primates, rats, mice, or birds - are relatively large and expensive to maintain. Rihel and colleagues used the inexpensive and easy-to-maintain zebrafish. They defined sleep as lack of movement, following a push begun in the 1980s to define sleep as a behavior (1). Since they were looking for new drugs, the things they changed were chemicals added to the aquarium water.

Detecting lack of movement may seem like a simple measurement to make, but it’s not. Back in Lee Hartwell’s day, some poor grad student would have actually been watching the zebrafish, or movies of zebrafish, for inactivity. Luckily, technology has progressed, and Rihel and colleagues were able to buy a big blue box called the Zebrabox, sold by a company named Viewpoint. The Zebrabox is equipped with a 96-well-plate holder, a video camera, and custom video processing software called Videotrack. Rihel and colleagues placed their more than 50,000 zebrafish larvae into 96-well plates, added one of over 5000 chemicals to each well, popped the plates inside the Zebrabox, recorded movies, and analyzed these movies for lack of motion. The chemicals that made zebrafish move more or less were considered hits. The targets of the chemicals, gleaned from annotations in databases and manual literature searches, were by extension implicated in sleep regulation.
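The paper's actual statistics are richer than this, but the hit-calling logic is simple enough to sketch (a hypothetical illustration; the function name and the z-score cutoff are mine, not the study's):

```python
# Hypothetical hit-calling for a Zebrabox-style screen: flag compounds whose
# wells move much more or much less than control wells.
import numpy as np

def call_hits(activity_by_compound, control_activity, z_cutoff=3.0):
    """activity_by_compound maps compound name -> per-larva activity scores."""
    mu = np.mean(control_activity)
    sigma = np.std(control_activity)
    hits = {}
    for name, scores in activity_by_compound.items():
        z = (np.mean(scores) - mu) / sigma
        if abs(z) > z_cutoff:
            hits[name] = z   # sign says whether fish moved more (+) or less (-)
    return hits
```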

The Zebrabox

Video: http://www.youtube.com/watch?v=ot48aM8Isvk

Larval zebrafish locomotor activity assay. (A) At four days post fertilization (dpf), an individual zebrafish larva is pipetted into each well of a 96-well plate with small molecules. Automated analysis software tracks the movement of each larva for 3 days. Each compound is tested on 10 larvae. (B) Locomotor activity of a representative larva. The rest and wake dynamics were recorded, including the number and duration of rest bouts (i.e. a continuous minute of inactivity (7)), the timing of the first rest bout following a light transition (rest latency), the average waking activity (average activity excluding rest bouts), and the average total activity. Together, these measurements generate a behavioral fingerprint for each compound. (Rihel et al., 2010)
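For concreteness, here is a minimal sketch of the measurements the caption describes, assuming an activity trace sampled once per second (0 = no movement); the function and field names are mine, not Videotrack's:

```python
import numpy as np

def fingerprint(activity, bout_len=60):
    """Rest/wake measures for one larva, from a once-per-second activity trace."""
    act = np.asarray(activity, dtype=float)
    resting = np.zeros(act.size, dtype=bool)
    bouts, start = [], None
    for t in range(act.size + 1):
        moving = t < act.size and act[t] > 0
        if not moving and t < act.size:
            if start is None:
                start = t                      # an inactivity run begins
        else:
            if start is not None and t - start >= bout_len:
                bouts.append((start, t))       # a continuous minute+ of inactivity
                resting[start:t] = True
            start = None
    return {
        "n_rest_bouts": len(bouts),
        "rest_latency_s": bouts[0][0] if bouts else None,  # time to first rest bout
        "waking_activity": act[~resting].mean() if (~resting).any() else 0.0,
        "total_activity": act.mean(),
    }
```

Run per larva and averaged per compound, a handful of numbers like these is the "behavioral fingerprint" that the screen compares across thousands of wells.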

So, how does the screen performed by Rihel and colleagues do in terms of generating hypotheses? Some chemicals they tested, including nicotine and mefloquine (see my previous post for more on this drug), make zebrafish move differently. However, their results have little credibility, because they do not justify why lack of motion in zebrafish is definitely sleep, and not tiredness or death. Also, it is debatable how good the drug target annotations are. Some of the more surprising new sleep regulators, like inflammatory cytokines, may be genuine. Or they may just be the only annotated targets of their drugs, while the effect of those drugs on sleep is due to some side effect unrelated to the annotated target. I hope that the hits are all genuine and that this work leads to new insights into sleep. But tellingly, a review of recent sleep literature (1) focused on how Rihel and colleagues confirmed what was already known, rather than on how they may have discovered something new.

Part of the problem with credibility has to do with their black-box, or rather blue-box, approach. They put zebrafish into the Zebrabox and out came all their data for the Science paper. Without knowing how the video processing software Videotrack works, and without a positive control of a drug that is known to make zebrafish move a lot and a negative control of a drug that is known to sedate them, I can only trust that Videotrack gave Rihel and colleagues the result that they claim. Automation can make previously impossible experiments possible, but if the results are ambiguous and untrustworthy, it’s of little value.

In summary, Rihel and colleagues applied high-throughput screening, responsible for groundbreaking discoveries in other areas of biology, to sleep. Notably, they were able to work with a complicated measurement of zebrafish behavior because an automated device handled the measurement and analysis. But that automated device also robbed their results of credibility. Thus, their paper makes me wish for someone to do a similar screen, but in a more transparent way and with a more precise experimental definition of sleep. Then we could make some hypotheses and kick-start the scientific method in the study of sleep.

 Sources
  1. Sehgal A and E Mignot. (2011). “Genetics of Sleep and Sleep Disorders.” Cell. 146:194-207. Paywall.
  2. Rihel J et al. (2010). “Zebrafish Behavioral Profiling Links Drugs to Biological Targets and Rest/Wake Regulation.” Science. 327:348-351. Paywall.
  3. Hartwell LH et al. (1973). “Genetic Control of the Cell Division Cycle in Yeast: V. Genetic Analysis of cdc Mutants.” Genetics. 74: 267-286. Open access.
  4. Zimmerman JE et al. (2008). “Conservation of sleep: insights from non-mammalian model systems.” Trends in Neurosciences. 31:371-376. Paywall.
  5. http://en.wikipedia.org/wiki/Electroencephalography

 

Arl-8: The clasp on a fully-loaded synaptic spring.

A series of papers from Kang Shen's lab, which I have recently joined, sheds light on a fundamental step in the process of transporting pre-synapse-forming proteins down the axon and forming functional synapses at exactly the right locations. Here, I'll be reviewing the first paper in this series, by Klassen and colleagues, from 2010.

Neurons communicate with each other by sending electrical signals down axons and across synapses to target neurons. These synapses, along and at the ends of axons, can be extremely far away from the cell body, where most of the proteins are created. So transporting synaptic proteins down the axon and forming synapses at the correct locations are two formidable challenges for the developing neuron. A 2010 paper from Kang Shen's lab shows that these two processes appear to be intricately linked. The paper provides key evidence that instead of pre-synaptic proteins being transported in separate pieces and assembled from scratch onsite into a functioning synapse, all the major protein components of the pre-synapse are transported together, ready for quick and easy assembly upon arrival. The paper shows that Arl-8 is the clasp that restrains the spring-loaded capacity of these pre-synaptic cargos in transit, preventing them from jumping off the transport train and assembling functional synapses prematurely.

Each neuron in the brain connects to only a small subset of the other neurons in the brain, and the selection of the appropriate target neurons is crucial to forming a well-functioning brain. After an axon has reached its target destination, it will connect with other neurons in the target area by forming two types of synapses. The axon can form terminal synapses, the synapses at the very ends of axon branches, or it can form en passant synapses, the bud-like bright spots along the axon. Both types are depicted below in this image of an axon targeting the monkey visual cortex.

[Figure: an axon targeting the monkey visual cortex, with terminal synapses at the branch tips and en passant synapses along the axon]

 

In both cases, there are two fundamental challenges that the neuron needs to solve. 1) Neurons somehow need to transport all these synapse-forming proteins from the cell body down the axon to the pre-synaptic specializations in the target area, either to terminal synapses or en passant synapses.  2) Once in the target area, the synapse-forming proteins somehow need to form the right number of en passant and terminal synapses, and in the right locations too!  What a fantastically complicated cell biology problem! Yet, somehow, amazingly, evolution has come up with mechanisms to enable these synapses to form in their correct locations.

Now, how to go about deciphering these mechanisms of axon transport and synapse formation? In the mammalian brain, most neurons are very complicated; they send their axons very long distances through an absolute forest of dense axon bundles, only to arrive at a destination, composed of cell bodies and their dendritic trees, that is just as dense and complex. However, many of the same cell biological mechanisms at work in the mammalian brain are also present in the brains of much simpler creatures, such as the tiny worm C. elegans, which has only 302 neurons in total. One of these, the DA9 motoneuron, makes exactly 25 en passant synapses along its single, unbranched axon as it courses along the dorsal nerve cord (shown in Panel A, below), and it is an excellent model for studying these questions of synapse assembly.

[Figure: the DA9 motoneuron (Panel A) and the distribution of Rab-3 clusters along its axon in normal (Panel B) and mutant (Panel C) worms]

Now, in order to begin to understand the mechanisms of pre-synaptic axonal transport and formation, one needs to be able to examine the roles of individual proteins in these processes. By disrupting one protein building block of the synapse-formation process at a time, we can see what role each of these protein actors plays in the exquisite biological “production” of assembling a pre-synapse. Klassen and Shen therefore created a bunch of mutant worms by chemically inducing mutations. They then examined the pattern of distribution of a specific pre-synaptic protein called Rab-3 in the DA9 axon of these mutants, and if there were any irregularities in the Rab-3 distribution, they would examine the worm’s genome to find the mutant gene whose defect was responsible for the irregular distribution.

Rab-3 is a protein that closely associates with the small bubbles of membrane at the pre-synaptic specialization that contain packets of neurotransmitter, called synaptic vesicles, and it helps release these vesicles so their neurotransmitter can travel across the synapse. Rab-3 is present in small amounts all along the axon, but large, bright clusters of Rab-3, visible in white, occur at sites where synaptic vesicles accumulate. Such vesicle accumulation indicates the presence of a presumptive pre-synaptic site.

By keeping track of where Rab-3 clusters formed, Klassen and Shen could examine different mutants to see where the pattern of synaptic vesicle clusters differed from the evenly spaced 25 clusters that normally form along the middle of the axon. Using this strategy, they were able to isolate a mutant worm in which the Rab-3-marked vesicle clusters formed too close to the cell body (Panel C, bottom) and did not get far enough down the axon to form the 25 synapses that normal worms make (Panel B, middle). This mutant had a defective gene encoding a small protein called Arl-8.

In the axons of Arl-8 mutants, the Rab-3 clusters were located very close to the cell body of the neuron, and far fewer of these clusters were found toward the middle and end of the axon. It was as if all the synaptic vesicle proteins, marked by the presence of Rab-3, jumped off the transport train far too early along the axon in the Arl-8 mutant worms. Thus, Arl-8 seems necessary to prevent premature aggregation of Rab-3 and to ensure proper transport of synaptic-vesicle-associated proteins.

Now, the evidence discussed so far implicates Arl-8 in preventing the aggregation of one of the two major classes of pre-synaptic proteins, the vesicle-associated proteins. As explained earlier, Rab-3 is a member of the class of pre-synaptic proteins called synaptic vesicle proteins, which carry out the primary function of the pre-synapse. Yet there is an entirely different class of proteins, called active zone proteins, which are a bunch of sticky organizing proteins that collectively form the structural backbone of the pre-synapse. One can think of the active zone proteins as the engineers and construction workers who assemble and provide structural support for a missile battery, while the synaptic vesicle release proteins are an entirely different group: the soldiers who operate the missile machinery, helping set off the fuse.

Previously, it was thought that the two different kinds of proteins were transported down the axon separately from one another. All the proteins in the active zone group would be transported together in the same vesicles, and all of the vesicle-associated proteins, like Rab-3, would be transported together in a completely separate class of vesicles. In other words, people thought that the engineers travelled to the site of missile assembly in one railroad car, and that the soldiers travelled to the missile site in a totally separate railroad car. However, a second finding concerning Arl-8 challenged this theory that the two sets of proteins are transported separately and then jointly assembled at the synapse.

The second finding is that when one of these sticky active zone proteins is mutated in addition to Arl-8, the Rab-3 aggregates in the DA9 axons show a less severe defect in distribution along the axon. In fact, these double mutants had a pattern of Rab-3 puncta that looked closer to that of normal worms, with little bright dots spread out along the entire axon. Klassen and Shen inferred that these sticky active zone proteins were causing Rab-3 and other synaptic-vesicle-associated proteins to aggregate and cluster together. The fact that mutations in these sticky, clustering active zone proteins result in fewer Rab-3 puncta getting ‘caught’ early in the axon suggests that both classes of proteins are actually transported together.

This finding provided evidence that instead of the active zone backbone proteins travelling in one type of vesicle and the vesicle-associated proteins travelling separately in another, with an entire synapse assembled once all the packaged material arrived onsite, both types of cargo in fact travel together. Or, to return to our analogy, the engineers, the soldiers, and all the components of a mobile missile assembly are transported to the target site together. In essence, all the components of the pre-synapse are transported together, ready for quick assembly and deployment when they reach the correct spot.

The reason that this idea of co-transport is cool is that it totally changes the way we might think about how pre-synapses are set up. Instead of building a pre-synapse from scratch each time the neuron wants to form a connection to another neuron, and having to set up all the right active zone and neurotransmitter vesicle release proteins onsite, there might be partially pre-assembled synaptic machinery that's transported down the axon. And then, when these transported vesicles get to the right place, they are immediately ready to spring into action and form a fully functioning synapse. Arl-8, then, is the clasp that prevents these co-transported pre-synaptic proteins from spontaneously aggregating and springing into action to form a synapse too early along the length of the axon.

An immediately exciting future question suggested by this research is how Arl-8 might interact with different proteins that set up synapses in particular locations. Perhaps there are proteins that inhibit the action of Arl-8, in effect releasing this clasp on synapse formation? Perhaps there are also proteins that counterbalance Arl-8 and actively promote the clustering of pre-synaptic proteins and the formation of synapses? Is Arl-8 part of some master switch that is modulated to set up pre-synapses at specific locations in the brain? If so, then discovering other proteins that interact with Arl-8 could give us clues into questions like how an axon from the part of the brain that responds to vision knows to form lots of synapses onto face-processing neurons, and not onto other, irrelevant neurons located close by.

Linky and the Brain: Podcast Edition


Sometimes, doing science is mind-numbingly boring. Slicing your 20th acute brain slice. Re-sectioning your 5th visual cortex. Counting your millionth cfos-positive neuron.

Last week, I started listening to podcasts while patching neurons. In the process I rediscovered a science-themed delight that I'd like to share with you all.

(Warning: the podcast I am about to recommend may preclude the fine motor control required to successfully patch a 10 µm diameter neuron.)


From BBC Radio 4, The Infinite Monkey Cage is a panel show about science featuring the talented and charming duo of Professor Brian Cox (physicist, nature program host) and Robin Ince (comedian). These hosts are joined by a rotating panel of diverse guests, including both scientists and comedians.

The episode that inspired me to type this post, "What is Death?", features comedian Katy Brand (who has a degree in theology from Oxford), biochemist Nick Lane and forensic anthropologist Sue Black. As a preview of the witty banter you can expect from an episode of The Infinite Monkey Cage, here is some dialogue I have transcribed from that episode.

During a prolonged discussion regarding the definition of "alive", and in reference to an even longer discussion of whether strawberries are alive or dead:

Nick Lane: If you put a plastic bag over your head, you’ll be dead in about a minute and a half.

Brian Cox: But I put strawberries in bags all the time.

During a discussion on reproductive fitness:

Are pandas living in a Beckett play?

Introducing an upcoming episode, recorded at the Glastonbury Festival:

Robin Ince: We are going to be discussing quantum theory on the Many Worlds stage to an audience who will already be approaching a point of questioning their own existence, so therefore we will be using, in front of their cider-drenched eyes, imaginary numbers in an infinite dimensional phase space, and question the possibility of free will in a probabilistic universe. And that’s before we lead them into the Mexican Wave.

Brian Cox: Or the Mexican Particle.

Currently, series eight of The Infinite Monkey Cage is ongoing. Episodes from the current and previous series can be downloaded from iTunes, or can be listened to via web browser at the BBC Radio 4 website.

As of this moment, 42 episodes are available. Episode titles/subjects include: "Does Size Matter", "Oceans: The Last Great Unexplored Frontier", "Science Mavericks", "Is Cosmology Really a Science", "So You Want to Be An Astronaut", and "I'm a Chemist Get Me Out of Here".



Open Channel: Advice to Pre-Quals Grad Students

Hey folks. This week, I thought we could tap into the accumulated experience of those Neuroblog readers who have made an excellent life decision, and entered graduate school. In honor of my first Neuroscience Superfriends* seminar, let’s discuss the many potential answers to the following question:

What advice would you give to first-year students in a Neuroscience PhD program?

I’ve had a response to this question ready and waiting for the past 2 years. Here it is, verbatim from communications with the talented graduate student who introduced me before my seminar.

Over the next few years, your science is going to fail. A lot. It'll suck. Like, soul-crushing levels of suckitude. Lest you take all that failure to heart, get yourself an external control. Pick an activity, one you aren't already extremely good at. An activity where there is a reasonable chance that as you continue to do said activity, you'll get better at it. Practice that activity. Watch yourself get better. Remind yourself that you are capable of learning; that working at something will make you better at it. Then, when your experiments fail for the 50 billionth time in your fourth year, you can remind yourself that you aren't a complete fuck up. Science just sucks, sometimes.

So that’s my bit of advice.

Senior graduate students (or not-so-senior students), what is your advice? What do you all think is the one bit of advice you'd give to the newly minted Graduate Student in Neurosciences?

The comment section is open. Send in your thoughts!

 

*A seminar series wherein senior-level graduate students give 30-minute talks to the Stanford Neuroscience PhD community.

