Studying Sleep the High-Throughput Way


“Sleep remains one of the least understood phenomena in biology,” reads the first sentence of a recent review in Cell (1). Though humans spend a third of their lives sleeping, neurobiologists don’t really understand how or why we sleep. The scientific method proceeds from a hypothesis, and formulating a hypothesis requires some initial information, which for sleep largely does not exist. What can scientists do to gather this initial information? A favorite approach of molecular biologists over the past forty years has been the high-throughput screen, a fancy term for trying a bunch of things and seeing if they affect the process of interest. Until recently, this approach was impractical in neurobiology because it was too hard to collect and analyze the required large amounts of data. However, advances in computing power and the availability of a device called the Zebrabox, which I’ll explain later, made it possible for Jason Rihel and colleagues to apply the high-throughput screen to the neurobiology of sleep (2). To paraphrase that paper’s abstract, the authors set out to find new drugs that affect sleep and to discover whether any known proteins have a previously unknown effect on sleep. I am not a sleep expert or even a neurobiologist, but my background in systems biology has taught me a thing or two about high-throughput screens. Below I will explain what makes a good high-throughput screen, what Rihel and colleagues accomplished, and what they could have done better.

A good high-throughput screen generates hypotheses. It fills the initial void of knowledge with information that can be used to perform more targeted experiments. To perform a screen, biologists set up an experiment that reduces what they are studying to some measurement, change one thing in the experiment and make the measurement, change another thing and make the measurement, and repeat this hundreds or even thousands of times.
To keep themselves from going crazy, they try to set up a simple measurement, so that repeating it ad nauseam is not too tedious, time-consuming, or labor- and resource-intensive. However, the measurement shouldn’t be so simple that it no longer relates back to what is being studied. A classic example of a smashingly successful screen is the work of Lee Hartwell and colleagues on cell division cycle mutants in budding yeast in the 1970s (3). They were studying how cells divide. A yeast cell assumes a sequence of distinctive shapes as it divides, so they reduced cell division to whether a cell has a normal or abnormal shape. The things they changed were genes. By mutagenizing yeast, examining the shapes of the resulting cells, and then mapping the mutant loci, they discovered many genes that affect cell shape. One of their hits, named cdc28, was revealed by subsequent targeted experiments to encode the master regulator of cell division in all eukaryotic model organisms and is intensively studied to this day. Without the screen for yeast shape mutants, no one might ever have connected this particular gene with the cell division cycle.

What did Rihel and colleagues do in their screen? First, they defined sleep simply enough to make it amenable to screening. Until a decade ago, the scientific definition of sleep was an altered state of electrical activity in the brain, as measured by sticking electrodes onto the scalp (1, 4, 5). Sticking electrodes to scalps is fine for making a handful of measurements, but doing it enough times for a screen is impractical, especially since the animals involved, e.g. monkeys, rats, mice, or birds, are relatively large and expensive to maintain. Rihel and colleagues used the inexpensive and easy-to-maintain zebrafish. They defined sleep as lack of movement, following a push begun in the 1980s to define sleep as a behavior (1). Since they were looking for new drugs, the things they changed were chemicals added to the aquarium water. Detecting lack of movement may seem like a simple measurement to make, but it’s not. Back in Lee Hartwell’s day, some poor grad student would have actually watched the zebrafish, or movies of zebrafish, for inactivity. Luckily, technology has progressed, and Rihel and colleagues were able to buy a big blue box called the Zebrabox, sold by a company named Viewpoint. The Zebrabox is equipped with a 96-well-plate holder, a video camera, and custom video-processing software called Videotrack. Rihel and colleagues placed their more than 50,000 zebrafish larvae into 96-well plates, added one of over 5,000 chemicals to each well, popped the plates inside the Zebrabox, recorded movies, and analyzed those movies for lack of motion. Chemicals that made zebrafish move more or less were considered hits. The targets of these chemicals, gleaned from database annotations and manual literature searches, were by extension implicated in sleep regulation.

The Zebrabox

[youtube]http://www.youtube.com/watch?v=ot48aM8Isvk[/youtube]

Larval zebrafish locomotor activity assay (A) At four days post fertilization (dpf), an individual zebrafish larva is pipetted into each well of a 96-well plate with small molecules. Automated analysis software tracks the movement of each larva for 3 days. Each compound is tested on 10 larvae. (B) Locomotor activity of a representative larva. The rest and wake dynamics were recorded, including the number and duration of rest bouts (i.e. a continuous minute of inactivity, (7)), the timing of the first rest bout following a light transition (rest latency), the average waking activity (average activity excluding rest bouts), and the average total activity. Together, these measurements generate a behavioral fingerprint for each compound. (Rihel et al, 2010)
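Videotrack itself is a black box, but the caption's definitions are concrete enough to sketch in code. Below is a minimal Python sketch (my own illustration, not the authors' actual analysis), assuming the input is one movement count per minute and that the trace begins at a light transition:

```python
def behavioral_fingerprint(activity):
    """Summarize a per-minute activity trace (movement counts) for one larva.

    A "rest bout" is one or more consecutive minutes with zero activity,
    following the caption's definition of rest as a continuous minute of
    inactivity. The trace is assumed to start at a light transition, so
    rest latency is simply the index of the first rest minute.
    """
    # Find runs of consecutive zero-activity minutes (rest bouts).
    bouts = []  # bout lengths, in minutes
    run = 0
    for a in activity:
        if a == 0:
            run += 1
        else:
            if run:
                bouts.append(run)
            run = 0
    if run:
        bouts.append(run)

    waking = [a for a in activity if a > 0]  # minutes outside rest bouts
    return {
        "n_rest_bouts": len(bouts),
        "mean_bout_length": sum(bouts) / len(bouts) if bouts else 0.0,
        # Minutes until the first zero-activity minute (rest latency).
        "rest_latency": next((i for i, a in enumerate(activity) if a == 0),
                             None),
        "waking_activity": sum(waking) / len(waking) if waking else 0.0,
        "total_activity": sum(activity) / len(activity),
    }
```

Collecting these numbers for the ten larvae exposed to each compound, and averaging, would yield something like the "behavioral fingerprint" the caption describes.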

So, how does the screen performed by Rihel and colleagues do in terms of generating hypotheses? Some chemicals they tested, including nicotine and mefloquine (see my previous post for more on this drug), made zebrafish move differently. However, their results have little credibility because the authors do not justify why lack of motion in zebrafish is necessarily sleep, rather than fatigue or death. It is also debatable how good the drug-target annotations are. Some of the more surprising new sleep regulators, like inflammatory cytokines, may be genuine. Or they may simply be the only annotated targets of their drugs, while a drug's effect on sleep is due to some side effect unrelated to the annotated target. I hope that the hits are all genuine and that this work leads to new insights into sleep. But tellingly, a review of recent sleep literature (1) focused on how Rihel and colleagues confirmed what was already known, rather than on what they may have discovered.

Part of the problem with credibility has to do with their black-box, or rather blue-box, approach. They put zebrafish into the Zebrabox, and out came all the data for their Science paper. Without knowing how the video-processing software Videotrack works, and without a positive control (a drug known to make zebrafish move a lot) and a negative control (a drug known to sedate them), I can only trust that Videotrack gave Rihel and colleagues the results they claim. Automation can make previously impossible experiments possible, but if the results are ambiguous and untrustworthy, they are of little value.

In summary, Rihel and colleagues applied high-throughput screening, which has been responsible for groundbreaking discoveries in other areas of biology, to sleep. Notably, they were able to use a complicated measurement of zebrafish behavior because an automated device performed both the measurement and the analysis. But that same automated device robbed their results of credibility. Their paper makes me wish for someone to do a similar screen, but more transparently and with a more precise experimental definition of sleep. Then we could form some hypotheses and kick-start the scientific method in the study of sleep.

 Sources
  1. Sehgal A and E Mignot. (2011). “Genetics of Sleep and Sleep Disorders.” Cell. 146:194-207. Paywall.
  2. Rihel J et al. (2010). “Zebrafish Behavioral Profiling Links Drugs to Biological Targets and Rest/Wake Regulation.” Science. 327:348-351. Paywall.
  3. Hartwell LH et al. (1973). “Genetic Control of the Cell Division Cycle in Yeast: V. Genetic Analysis of cdc Mutants.” Genetics. 74: 267-286. Open access.
  4. Zimmerman JE et al. (2008). “Conservation of sleep: insights from non-mammalian model systems.” Trends in Neurosciences. 31:371-376. Paywall.
  5. http://en.wikipedia.org/wiki/Electroencephalography

 

Arl-8: The clasp on a fully-loaded synaptic spring.

A series of papers from Kang Shen's lab, which I have recently joined, sheds light on a fundamental step in transporting presynapse-forming proteins down the axon and forming functional synapses at exactly the right locations. Here, I’ll review the first paper in this series, by Klassen and colleagues, from 2010.

Neurons communicate with each other by sending electrical signals down axons and across synapses to target neurons. These synapses, along and at the ends of axons, can be extremely far from the cell body, where most proteins are made. Transporting synaptic proteins down the axon and forming synapses at the correct locations are thus two formidable challenges for the developing neuron. A 2010 paper from Kang Shen’s lab shows that these two processes appear to be intricately linked. The paper provides key evidence that, instead of pre-synaptic proteins being transported in separate pieces and assembled from scratch on-site into a functioning synapse, all the major protein components of the pre-synapse are transported together, ready for quick and easy assembly upon arrival. It shows that Arl-8 is the clasp that keeps a lid on the loaded-spring-like capacity of these pre-synaptic cargoes in transit, preventing them from jumping off the transport train and assembling functional synapses prematurely.

Each neuron in the brain connects to only a small subset of the other neurons, and selecting the appropriate target neurons is crucial to forming a well-functioning brain. After an axon has reached its target destination, it connects with other neurons in the target area by forming two types of synapses: terminal synapses, at the very ends of axons (depicted below at the tips of the axon branches), and en passant synapses, the bud-like bright spots along the axon. Both are shown below in this image of an axon targeting the monkey visual cortex.

DL_Fig1

 

In both cases, there are two fundamental challenges that the neuron needs to solve. 1) Neurons somehow need to transport all these synapse-forming proteins from the cell body down the axon to the pre-synaptic specializations in the target area, either to terminal synapses or en passant synapses.  2) Once in the target area, the synapse-forming proteins somehow need to form the right number of en passant and terminal synapses, and in the right locations too!  What a fantastically complicated cell biology problem! Yet, somehow, amazingly, evolution has come up with mechanisms to enable these synapses to form in their correct locations.

Now, how to go about deciphering these mechanisms of axonal transport and synapse formation? In the mammalian brain, most neurons are very complicated; they send their axons over long distances through an absolute forest of dense axon bundles, only to arrive at a destination composed of cell bodies and dendritic trees that are just as dense and complex. However, many of the cell biological mechanisms at work in the mammalian brain are also present in the brains of much simpler creatures, such as the tiny worm C. elegans, which has only 302 neurons. One of these, the DA9 motoneuron, makes about 25 en passant synapses along its single, unbranched axon as it courses along the dorsal nerve cord (shown in Panel A, below), and is an excellent model for studying these questions of synapse assembly.

DL_Fig2

Now, in order to begin to understand the mechanisms of pre-synaptic axonal transport and assembly, one needs to examine the roles of individual proteins in these processes. By disrupting the protein building blocks of synapse formation one at a time, we can see what role each protein actor plays in the exquisite biological “production” of assembling a pre-synapse. Klassen and Shen therefore created a collection of mutant worms by chemically inducing random mutations. They then examined the distribution of a specific pre-synaptic protein, Rab-3, in the DA9 axon of each mutant; whenever the Rab-3 distribution was irregular, they searched the worm’s genome for the mutant gene responsible.

Rab-3 is a protein that closely associates with synaptic vesicles, the small bubbles of membrane at the pre-synaptic specialization that contain packets of neurotransmitter, and helps release these vesicles so their contents can travel across the synapse. Rab-3 is present in small amounts all along the axon, but large, bright clusters of Rab-3, visible in white, occur at sites where synaptic vesicles accumulate. Such vesicle accumulation indicates a presumptive pre-synaptic site.

By keeping track of where Rab-3 clusters formed, Klassen and Shen could screen mutants for vesicle-cluster patterns that differed from the roughly 25 evenly spaced clusters that normally form along the middle of the axon. Using this strategy, they isolated a mutant worm in which the Rab-3-marked vesicle clusters formed too close to the cell body (Panel C, bottom) and did not get far enough down the axon to form the synapses seen in normal worms (Panel B, middle). This mutant had a defect in a gene encoding a small protein called Arl-8.
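The paper quantifies this shift with fluorescence imaging. As a toy illustration of the idea (my own sketch, not the authors' analysis), one could normalize each punctum's position along the axon and ask what fraction lies near the cell body; the cutoff and example numbers below are invented:

```python
def proximal_shift(puncta_positions, proximal_cutoff=0.33):
    """Fraction of Rab-3 puncta lying in the proximal part of the axon.

    puncta_positions: positions of fluorescent puncta normalized to axon
    length (0 = cell body, 1 = axon tip). Both the normalization and the
    cutoff are illustrative choices, not the paper's actual metric.
    """
    if not puncta_positions:
        return 0.0
    proximal = [p for p in puncta_positions if p <= proximal_cutoff]
    return len(proximal) / len(puncta_positions)

# A wild-type-like pattern: puncta spread along the middle of the axon.
wild_type = [0.30, 0.35, 0.42, 0.50, 0.58, 0.66, 0.74]
# An arl-8-mutant-like pattern: puncta bunched near the cell body.
mutant = [0.05, 0.08, 0.10, 0.12, 0.15, 0.40]

print(proximal_shift(wild_type))  # small fraction proximal
print(proximal_shift(mutant))     # most puncta proximal
```

A mutant with a high proximal fraction relative to wild type would flag exactly the phenotype described above: vesicle clusters piling up before they reach the synaptic region.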

In the axons of Arl-8 mutants, Rab-3 clusters were located very close to the neuron's cell body, and far fewer clusters were found toward the middle and end of the axon. It was as if all the synaptic vesicle proteins, marked by the presence of Rab-3, jumped off the transport train far too early along the axon. Thus, Arl-8 seems necessary to prevent premature aggregation of Rab-3 and to ensure proper transport of synaptic-vesicle-associated proteins.

The evidence discussed so far implicates Arl-8 in preventing the aggregation of one of the two major classes of pre-synaptic proteins, the vesicle-associated proteins. As explained earlier, Rab-3 belongs to the class of synaptic vesicle proteins, which carry out the primary function of the pre-synapse: releasing neurotransmitter. Yet there is an entirely different class, the active zone proteins, a set of sticky organizing proteins that collectively form the structural backbone of the pre-synapse. One can think of the active zone proteins as the engineers and construction workers who assemble and structurally support a missile battery, while the synaptic vesicle release proteins are an entirely different group: the soldiers who operate the missile machinery and set off the fuse.

Previously, it was thought that these two kinds of proteins were transported down the axon separately: all the active zone proteins would travel together in one class of transport vesicles, and all the vesicle-associated proteins, like Rab-3, would travel together in a completely separate class. In other words, people thought the engineers traveled to the missile assembly site in one railroad car, while the soldiers traveled there in a totally separate car. However, a second finding concerning Arl-8 challenged this theory of separate transport followed by joint assembly at the synapse.

The second finding is that when one of these sticky active zone proteins is mutated in addition to Arl-8, the Rab-3 aggregates in the DA9 axon show a much less severe defect in distribution along the axon. In fact, these double mutants had a pattern of Rab-3 puncta closer to that of normal worms, with little bright dots spread out along the entire axon. Klassen and Shen inferred that the sticky active zone proteins were causing Rab-3 and other synaptic-vesicle-associated proteins to aggregate and cluster together. The fact that mutating these sticky, clustering active zone proteins results in fewer Rab-3 puncta getting ‘caught’ early in the axon suggests that both classes of proteins are actually transported together.

This finding provided evidence that, rather than the active zone backbone proteins traveling in one type of vesicle and the vesicle-associated proteins in another, to be assembled into a synapse only once all the packaged material arrived on-site, both types of cargo are in fact transported together. Or, to return to our analogy, the engineers, the soldiers, and all the components of a mobile missile battery are shipped to the target site together. In essence, all the components of the pre-synapse travel together, ready for quick assembly and deployment when they reach the correct spot.

The reason that this idea of co-transport is cool is that it totally changes the way we might think about how pre-synapses are set up. Instead of building a pre-synapse from scratch each time the neuron wants to form a connection to another neuron, and having to set up all the right active zone and neurotransmitter vesicle release proteins onsite, there might be partially pre-assembled synaptic machinery that’s transported down the axon. And then, when these transported vesicles get to the right place, they are immediately ready to spring into action and form a fully functioning synapse. Arl-8 then is the clasp that prevents these pre-synaptic proteins that are all being transported together from spontaneously aggregating and springing into action to form a synapse too early down the length of the axon.

An exciting question suggested by this research is how Arl-8 might interact with the proteins that set up synapses in particular locations. Perhaps there are proteins that inhibit Arl-8, in effect releasing this clasp on synapse formation? Perhaps there are also proteins that counterbalance Arl-8 and actively promote the clustering of pre-synaptic proteins and the formation of synapses? Is Arl-8 part of some master switch that is modulated to set up pre-synapses at specific locations in the brain? If so, discovering proteins that interact with Arl-8 could give us clues to questions like how an axon from a visual part of the brain knows to form lots of synapses onto face-processing neurons, and not onto other, irrelevant neurons nearby.

Linky and the Brain: Podcast Edition


Sometimes, doing science is mind-numbingly boring. Slicing your 20th acute brain slice. Re-sectioning your 5th visual cortex. Counting your millionth c-Fos-positive neuron.

Last week, I started listening to podcasts while patching neurons. In the process I rediscovered a science-themed delight that I'd like to share with you all.

(Warning: the podcast I am about to recommend may preclude the fine motor control required to successfully patch a 10 µm diameter neuron.)


From BBC Radio 4, The Infinite Monkey Cage is a panel show about science featuring the talented and charming duo of Professor Brian Cox (physicist, nature program host), and Robin Ince (comedian). These hosts are joined by a rotating panel of diverse guests, including both scientists and comedians.

The episode that inspired me to type this post, "What is Death?", features comedian Katy Brand (who has a degree in theology from Oxford), biochemist Nick Lane and forensic anthropologist Sue Black. As a preview of the witty banter you can expect from an episode of The Infinite Monkey Cage, here is some dialogue I have transcribed from that first episode.

During a prolonged discussion regarding the definition of "alive", and in reference to an even longer discussion of whether strawberries are alive or dead:

Nick Lane: If you put a plastic bag over your head, you’ll be dead in about a minute and a half.

Brian Cox: But I put strawberries in bags all the time.

During a discussion on reproductive fitness:

Are pandas living in a Beckett play?

Introducing an upcoming episode, recorded at the Glastonbury Festival:

Robin Ince: We are going to be discussing quantum theory on the Many Worlds stage to an audience who will already be approaching a point of questioning their own existence, so therefore we will be using, in front of their cider-drenched eyes, imaginary numbers in an infinite dimensional phase space, and question the possibility of free will in a probabilistic universe. And that’s before we lead them into the Mexican Wave.

Brian Cox: Or the Mexican Particle.

Currently, series eight of The Infinite Monkey Cage is ongoing. Episodes from the current and previous series can be downloaded from iTunes, or can be listened to via web browser at the BBC Radio 4 website.

As of this moment, 42 episodes are available. Episode titles/subjects include: "Does Size Matter", "Oceans: The Last Great Unexplored Frontier", "Science Mavericks", "Is Cosmology Really a Science", "So You Want to Be An Astronaut", and "I'm a Chemist Get Me Out of Here".


Astra Bryant

Astra Bryant is a graduate of the Stanford Neuroscience PhD program in the labs of Drs. Eric Knudsen and John Huguenard. She used in vitro slice electrophysiology to study the cellular and synaptic mechanisms linking cholinergic signaling and gamma oscillations – two processes critical for the control of gaze and attention, which are disrupted in many psychiatric disorders. She is a senior editor and the webmaster of the NeuWrite West Neuroblog.

Open Channel: Advice to Pre-Quals Grad Students

Hey folks. This week, I thought we could tap into the accumulated experience of those Neuroblog readers who have made an excellent life decision, and entered graduate school. In honor of my first Neuroscience Superfriends* seminar, let’s discuss the many potential answers to the following question:

What advice would you give to first-year students in a Neuroscience PhD program?

I’ve had a response to this question ready and waiting for the past 2 years. Here it is, verbatim from communications with the talented graduate student who introduced me before my seminar.

Over the next few years, your science is going to fail. A lot. It'll suck. Like, soul-crushing levels of suckitude. Lest you take all that failure to heart, get yourself an external control. Pick an activity, one you aren't already extremely good at. An activity where there is a reasonable chance that as you continue to do said activity, you'll get better at it. Practice that activity. Watch yourself get better. Remind yourself that you are capable of learning; that working at something will make you better at it. Then, when your experiments fail for the 50 billionth time in your fourth year, you can remind yourself that you aren't a complete fuck up. Science just sucks, sometimes.

So that’s my bit of advice.

Senior graduate students (or not so senior students), what is your advice? What do you all think is the one bit of advice you’d give to the newly minted Graduate Student in Neurosciences?

The comment section is open. Send in your thoughts!

 

*A seminar series wherein senior-level graduate students give 30 minute talks to the Stanford Neuroscience PhD community.


You work on stem cells, right? OK, here's your space suit.


I’m really only half joking when I say I want to be an astronaut when I grow up. When experiments aren’t going well, my friends and I discuss the various ways in which we could convince the International Space Station (ISS) that we’re Star Fleet material. Since the relaxation of military and flight time requirements, we’ve been looking forward to stepping off this chaos of hard clay to wander darkling in the eternal space. Romance aside, the effects of space travel on health and the unique physiological and psychological conditions of long-range space travel really do interest a growing number of research scientists, myself included. How exciting, then, to hear from my friend Rishi that CASIS (the Center for the Advancement of Science in Space) have issued a request for proposals looking into the effects of microgravity on stem cells. I’m a biologist; surely I work on stem cells. Well, not quite. I work on the immune system, the cells of which do indeed derive from stem cells, but my interests lie much further down the developmental line. I study how a fully mature immune system works to protect the body against infection and how vaccines use the same machinery to protect against diseases before we encounter them. To me, this has obvious applications for interplanetary space exploration.

 

An artist's impression of the author as an astronaut

 

A recent publication in the Journal of Clinical Immunology (1) shows altered immune function following space flight. Levels of inflammation-driving molecules in the blood go up, but specific responses to viruses by T cells go down. Is this a Big Deal? Let’s assume that people aren’t allowed into space if they have a serious illness, and that the vacuum of space is clean enough that disease-causing bugs are unlikely to get on board a spaceship and make everyone evolve backwards or suddenly challenge the entire crew to a duel. Does it matter if the immune system is depressed in space? Well, one of the observations highlighted in the paper is that virus-specific responses are reduced. This might not be a problem if we weren’t all riddled with viruses that are kept in check only by constant immune surveillance. Almost everyone is infected with JC virus, Cytomegalovirus, Epstein-Barr virus, Varicella-Zoster virus, and several other common herpesviruses. Diseases like shingles are associated with a drop in T cell responses and can be severely debilitating. Surely we need someone doing research up there to see how immune responses to these long-term resident viruses change. An outbreak of shingles during a long voyage may not make compelling television, but it could severely compromise an astronaut’s ability to function far from access to antiviral drugs and pain medication. Clearly (if perhaps tinged with a little bias), research into the immune system is much more directly applicable to survival in space than stem cell research, which is still very much in its early stages.

As Commander Chris Hadfield showed so wonderfully, there are many and varied experiments that can be conducted in space that will have an impact on future astronauts. Part of me is tempted to speculate as to why CASIS have limited their remit to stem cells. Is it just because they’re cool and in the news a lot these days? Have the people controlling the purse strings been sweet-talked by a lobbyist with stem cell leanings? Or was this really the result of a long and in-depth study section on the biological priorities of extraterrestrial research? The cynic in me is clamouring for a rant about the whims of science policy but, since I know so little about the process, I should probably save my energies for more productive tasks. Like thinking of ways to convince CASIS that I work on stem cells.

References

1) Crucian B et al. (2013). “Immune system dysregulation occurs during short duration spaceflight on board the space shuttle.” Journal of Clinical Immunology. 33(2):456-465.

 

Editor's Note: The author wishes me to express that she would prefer that CASIS used the correct spelling of 'Centre' in its name. Unfortunately, there was not time for the Center to make this official name change prior to this article going to press. Watch this space for updates.

Splitting the Column: new data reveals an overlooked wrinkle of cortical organization


I want to let you in on a little secret: we neuroscientists are actually quite jealous of the physicists. They may lament that their unified theory of everything hasn’t turned up yet, but they’re sitting pretty with a bevy of universal laws, forces, constants, and equations that do a bang-up job of explaining the universe. We neuroscientists are still hunting for some all-encompassing laws and principles that would explain brain function at a larger scale than the operation of single neurons (on which, I must say, we’ve done a pretty awesome job).


Linky and the Brain: mice, allergies, chickens and pee (of mice, for science)


Hi folks! The theme of my Linky and the Brain this week is: model organisms, a love/hate relationship.

First off, I will direct you to Kelly Zalocusky’s hot-off-the-presses Neuroblog post, Of Mice and Men: On the Validity of Animal Models of Psychiatric Disease. Kelly discusses the difference between homology and analogy, and how the distinction between the two may be at the heart of why translating between a model organism and a human disease state is so difficult. My newest goal for the Neuroblog is to get some conversations going in the comments – can we all meet up at Kelly’s post and chat about evaluating our model systems in terms of homology/analogy? I, for one, think about this question often (read: whenever I’m writing a grant). I’d love to learn where other folks fall on the axis from “strive to be as close to the human disease state as possible, to increase the chances of translation” to “not-strictly-homologous is fine, as long as we are learning basic things about the brain that will reasonably contribute to our greater understanding of how brains work”.

On the subject of model organisms, and more accurately dissatisfaction with model organisms: I read, with great interest, an article in the New York Times about researchers who, during the course of their research, develop allergies to their model organisms or to common lab substances. Now, this concept is hardly new to me, or likely to anyone who has ever worked in a rodent lab. (I work closely with a postdoc who is so allergic to rodents that he wears Teflon-coated gloves when handling them, to prevent contact and scratches.) What I found so interesting was that the New York Times published the article in the first place: a trend piece about the travails of the common researcher. Awesome.

To read about a researcher who became allergic to cicadas, plus other stories, check out Allergies in the Time of Research, by Hillary Rosner.

One more feature regarding model organisms, this time my own. From the blog Last Word on Nothing, a delightful post describing the glory that is The Art of Chicken Sexing. Go for a description of a process so mysterious that even the practitioners themselves can’t describe what they are looking for. Stay for the quotes from folks for whom the process of chicken sexing has become either an addiction or a compulsion (or both).

And lastly, a hysterically funny post by scicurious, over at Scientopia, entitled Mopey Mice Pee Their Feelings. It is all about a novel method for evaluating anxiety and depression in rodents. The method: urine tracking. Some of my favorite quotes from the post:

To introduce the concept of rodent urination:

"If you've held a lot of mice, you've been peed on a lot. Everywhere you put a mouse, that mouse WILL pee. It's part of the game and one of the things you get used to (probably one of the things we should warn new grad students about, too. "Congratulations! Be prepared to be peed on!")."

To describe a figure of the urination pattern of a "control" mouse:

"… a mouse I would have nicknamed "the dragger". He doesn't sprinkle it around, he DRAGS it around, and he is letting that lady know that he is HERE and READY."

And finally:

"It's like mousey feelings on paper."

 


Astra Bryant

Astra Bryant is a graduate of the Stanford Neuroscience PhD program in the labs of Drs. Eric Knudsen and John Huguenard. She used in vitro slice electrophysiology to study the cellular and synaptic mechanisms linking cholinergic signaling and gamma oscillations – two processes critical for the control of gaze and attention, which are disrupted in many psychiatric disorders. She is a senior editor and the webmaster of the NeuWrite West Neuroblog.

Of mice and men: on the validity of animal models of psychiatric disease

HomologyCover1.png

As biomedical researchers, we use animal models as a compromise. We hope to understand human disorders and improve human health, but the experiments we do are often too risky for human subjects. One largely unspoken concern about this compromise is the degree to which these animals’ behaviors accurately model the disorder in question. What do we even mean when we say that a particular rodent behavior “models” a human syndrome? And why is it that, very often, treatments that work in animal models fail once they reach the clinical setting (1)?

There is an extensive literature in psychology on the various ways to assess the validity of tests and models (2), and the biomedical research community would do well to consider this long philosophical struggle. But as a behavioral ecologist and ethologist, I see one potential gold-standard question for animal models that is rarely, if ever, discussed. Are apparent similarities between the human and the animal behavior driven by homology, or are they analogies, driven by convergent evolution?

Analogy vs. Homology

As I see it, one major flaw in the design of animal models is in mistaking analogy for homology. That is, neuroscientists often study an animal’s behavior because it resembles an interesting human behavior. Take, for example, mouse models of obsessive-compulsive disorder. The goal is not to understand why some mice groom too much, but instead to understand why some humans wash their hands too much. Mouse grooming is an analogy for hand washing. These studies are only useful, then, if mouse grooming and human hand-washing rely on the same neural circuitry. For these studies to be meaningful, the two behaviors must be homologous.

What does it mean to be homologous?

Image credit: http://askabiologist.asu.edu/

Homology means evolved from the same ancestral structure or behavior. If, for example, you wanted to understand the structure of bat wings, but could not get the permits to study bats, you could reasonably study bird wings as a model. You could also study human arms, or even whale flippers. The only reason such studies would be useful is that bat wings, bird wings, human arms, and whale flippers have very similar, evolutionarily homologous, structures (see figure). Even though whale flippers are not used for flight (“And the rest, after a sudden wet thud, was silence…”), their structure can tell you a lot about how bat wings are likely put together.

An analogous behavior or structure, on the other hand, is one that looks similar across species but likely occurs for different reasons or through entirely different mechanisms. A bat wing and a butterfly wing are analogous—while they look similar, and evolved to promote the same behavior, they are evolutionarily and structurally distinct. Attempting to learn about the skeletal structure of bats’ wings by studying butterflies would be a largely fruitless endeavor.

The difficulty, of course, in studying psychiatric disease is that most psychiatric diseases are defined by a cluster of symptoms—not by an underlying physiological process. For the researcher, this means that it is challenging to know whether you are studying the right physiological process at all. If a particular assay, based originally on analogy, repeatedly fails to translate in clinical trials—for example, if social behavior assays in mouse autism, or over-grooming in mouse OCD, or refusing to swim in mouse depression repeatedly let clinicians down—perhaps we, as a community, should consider this potential reason why.

Sources

  1. http://www.plosmedicine.org/article/info%3Adoi%2F10.1371%2Fjournal.pmed.1000245
  2. Messick, S. (1989). Meaning and Values in Test Validation: The Science and Ethics of Assessment. Educational Researcher, 18, 5-11.

 

Bizarre Side Effects

Why would a drug designed to prevent and treat malaria, a parasitic infection of the blood and liver, also affect the central nervous system? The drug in question is mefloquine, marketed as Lariam, and I first learned about its bizarre side effects, including amnesia, psychosis, and hallucinations, while listening to “Contents Unknown,” an episode of the radio program This American Life. The episode intrigued me because it told the story of David MacLean, who was taking mefloquine while on a Fulbright scholarship in India and one day found himself in a train station in a different city from where he lived with no memory of who he was or how he got there. I went looking for primary scientific literature on how mefloquine affects both the malarial parasite and the human brain, and here is what I found.

Before exploring the side effects of mefloquine, let’s tackle a more basic question: why do drugs have side effects at all? The answer lies in how drugs are discovered. In a perfect world, scientists would know so much about a disease that they could design a precisely targeted drug, highly effective against the cause of the disease and benign for the patient. That is a major goal of biomedical research and highly desirable for malaria, which every year afflicts half a billion people (1) and kills one million children in Africa alone (2), but much more of this research is still needed. For many diseases, sadly including malaria, our knowledge is too limited to allow rational design of drugs. Most drugs are discovered either accidentally, like the first anti-malarial agent quinine (3), or by trial and error: taking some chemical compound, using it against the disease in an animal model, and seeing if the animal gets better. A chemical compound therefore often becomes a drug not because we understand how it works against the disease but because we have observed it to work. For a drug discovered in this way, we do not know whether its desired effect is its only effect until we try it out.

Mefloquine is an example of the trial-and-error approach. In the 1960s and 1970s, the Walter Reed Army Institute of Research tested over 300,000 chemical compounds for their ability to kill Plasmodium falciparum and Plasmodium vivax, the two most common malarial parasites, in owl monkeys, the best animal model at the time (4). Mefloquine showed the most promise and went on to clinical trials in humans that are meant, among other things, to test for side effects. In the case of mefloquine, these initial clinical trials showed no serious side effects, but they were conducted in vulnerable populations unable to give full consent, namely male prisoners, military personnel, and residents of developing countries, and may have been biased (5). More recent epidemiological research has shown that side effects in the central nervous system severe enough to require hospitalization occur in 1:10,000 patients taking mefloquine for malaria prevention and in 1:200 to 1:1200 patients using mefloquine for malaria treatment (6). Though mefloquine is still widely available, medical practitioners have an increased appreciation of its side effects, and it is now the drug of last resort, rather than of choice, for the U.S. military (5).
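To put those reported rates side by side, here is a quick back-of-the-envelope sketch. The cohort size of 100,000 is hypothetical, chosen only to make the numbers concrete; the 1:10,000 and 1:200 to 1:1200 rates are the ones cited above (6).

```python
# Back-of-the-envelope comparison of reported rates of severe CNS side
# effects from mefloquine (severe enough to require hospitalization).
# The patient counts are hypothetical; only the rates come from the text.

def expected_cases(rate_one_in, n_patients):
    """Expected number of severe CNS events given a 1-in-N rate."""
    return n_patients / rate_one_in

n = 100_000  # hypothetical cohort size

prevention = expected_cases(10_000, n)    # 1:10,000 for prophylaxis
treatment_low = expected_cases(1_200, n)  # best case for treatment, 1:1200
treatment_high = expected_cases(200, n)   # worst case for treatment, 1:200

print(f"Prevention: ~{prevention:.0f} events per {n:,} users")
print(f"Treatment:  ~{treatment_low:.0f} to {treatment_high:.0f} events per {n:,} patients")
```

In other words, using mefloquine to treat malaria (higher doses) carries a risk roughly one to two orders of magnitude greater than using it for prevention, which helps explain why the side effects took so long to surface in trials.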

How does mefloquine act to kill Plasmodium parasites? This turned out to be a hard question to answer. Mefloquine is thought to inhibit growth of Plasmodium inside human red blood cells (7). Plasmodium parasites have a complicated life cycle, proceeding from the saliva of mosquitoes into human liver cells and red blood cells. One possible target of mefloquine is the food vacuole, a sort of microscopic stomach inside Plasmodium cells where they digest nutrients obtained from the cytoplasm of red blood cells, because the food vacuole changes shape in response to mefloquine treatment (8). I was unable to find any publications that identified the targets of mefloquine more specifically. This may be because laboratory experiments on Plasmodium species are difficult. My classmate Hao Li, a graduate student in the lab of Professor Matt Bogyo at Stanford, works with Plasmodium falciparum and has often told me how laborious culturing it in blood cells is, yielding precious little material for experimentation. And that’s the easiest parasite species to cultivate. The only way to obtain Plasmodium vivax for experiments is to let it infect and reproduce inside mice or monkeys (1).

Research on how mefloquine may cause its central nervous system side effects in humans was somewhat easier to find, though it is far from conclusive. Mefloquine doesn’t dissolve well in water but sticks quite well to the outside of blood cells and brain cells (5). Post-mortem examinations of both mice and humans exposed to mefloquine have found it to accumulate in the limbic system, a region of the brain responsible for emotions and memory (5) (for a bit more background on the limbic system, see my post “Linguistic Disconnect between the Brain and Emotions”). There it may block connexins (5), the proteins that assemble into gap junctions: channels that link the cytoplasm of one neuron directly to the cytoplasm of its neighbor. Gap junctions are a critical pathway for direct cell-to-cell communication in both neurons (5) and glia (10). These channels pass both electrical current (in the form of charged ions) and intracellular signaling molecules, playing a role in synchronizing neuronal activity as well as in metabolic coupling and chemical signaling (11).
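To build intuition for why gap junctions synchronize neurons, and what happens when they are blocked, here is a toy simulation. It is not from any of the cited papers, and all parameter values are illustrative rather than physiological: two passive cells start at different membrane potentials, and a gap-junction conductance pulls them together far faster than their leak currents alone would.

```python
# Toy model: two passive "neurons" joined by a gap junction, modeled as a
# simple conductance between their membrane potentials. Illustrative only.

def simulate(g_gap, steps=2000, dt=0.01):
    """Euler integration of two leaky cells coupled by conductance g_gap.

    Returns the final voltage difference between the two cells (mV).
    """
    v1, v2 = -50.0, -70.0          # different starting potentials (mV)
    e_leak, g_leak = -60.0, 0.1    # shared resting potential and leak
    for _ in range(steps):
        i_gap = g_gap * (v2 - v1)  # current through the gap junction
        v1 += dt * (g_leak * (e_leak - v1) + i_gap)
        v2 += dt * (g_leak * (e_leak - v2) - i_gap)
    return abs(v1 - v2)

coupled = simulate(g_gap=0.5)  # intact gap junction
blocked = simulate(g_gap=0.0)  # junction blocked, as mefloquine may do
print(coupled < blocked)       # the coupled pair ends up far closer together
```

The coupled pair converges almost completely, while the blocked pair is still millivolts apart, which is the sense in which blocking gap junctions could desynchronize the circuits that coordinate emotion and memory.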

The blocking effect of mefloquine is strong enough and specific enough to have been used to study the signaling behavior of connexins (9). The blockage of connexins may impede the ability of the brain to control emotional impulses and may interfere with memory formation (5). However, the picture may be more complicated because I also found publications claiming blockage of a different class of proteins, called 5-HT3 receptors (7), and an effect on the ability of rat neurons to control their internal concentration of calcium ions, which is essential for neuronal signaling (6). Hopefully, future research will elucidate the relative importance of these various effects.

Current understanding of both the desired effect and the side effects of mefloquine is incomplete, and more research is needed. But mefloquine is also a cautionary tale about pitfalls in drug development that can make a compound with dangerous side effects look perfectly safe. It is an illustration of how much more we still need to understand about the human body and its parasites to be able to effectively treat malaria without driving anyone insane.

 

Sources

  1. Carlton JM et al. “Comparative genomics of the neglected human malaria parasite Plasmodium vivax.” Nature. 455:757, 2008. Paywall.
  2. Gardner MJ et al. “Genome sequence of the human malaria parasite Plasmodium falciparum.” Nature. 419:498, 2002. Paywall.
  3. http://www.cdc.gov/malaria/about/history/
  4. Maugh TH. “Malaria drugs: new ones are available, but little used.” Science. 196:415, 1977. Paywall.
  5. Ritchie EC et al. “Psychiatric Side Effects of Mefloquine: Applications to Forensic Psychiatry.” The Journal of the American Academy of Psychiatry and the Law. 41:224, 2013. Paywall.
  6. Dow GS et al. “The acute neurotoxicity of mefloquine may be mediated through a disruption of calcium homeostasis and ER function in vitro.” Malaria Journal. 2:14, 2003. Open access.
  7. Thompson AJ et al. “The antimalarial drugs quinine, chloroquine and mefloquine are antagonists at 5-HT3 receptors.” British Journal of Pharmacology. 151:666, 2007. Paywall.
  8. Jacobs GH et al. “An ultrastructural study of the effects of mefloquine on malaria parasites.” Am J Trop Med Hyg. 36:9, 1987. Paywall.
  9. Cruikshank SJ et al. “Potent block of Cx36 and Cx50 gap junction channels by mefloquine.” PNAS. 101:12364, 2004.
  10. WIREs Membrane Transport and Signaling. 2:133, 2013. doi:10.1002/wmts.87.
  11. Bennett MVL and Zukin RS. “Electrical coupling and neuronal synchronization in the mammalian brain.” Neuron. 41:495, 2004.