[Edited 5/29/13: It was pointed out in the comments that, given the length of this post, readers may get the incorrect impression that I am suggesting that a certain published study contains incorrect data. This is emphatically not the case. To paraphrase a bolded section below: "I am absolutely confident in the original finding; no reader should entertain Ioannidis-inspired thoughts of statistical unreliability. The issue here is one of significant differences in [my] experimental conditions [versus those in the original study]."]
I realized something scary in lab last week.
For the past month, I've been running a set of experiments that test whether gamma oscillations in local field potentials (a type of brain rhythm associated with attention and gaze control) are influenced by cholinergic signaling (which is also associated with attention and with the enhancement of sensory information processing).
A crash course in gamma oscillations and acetylcholine, as relevant for my research: both gamma oscillations and cholinergic signaling are associated with the process of attention, and both appear to play critical roles in the attentional enhancement of sensory information processing. Numerous brain disorders, such as ADHD, Alzheimer's disease and schizophrenia, involve a cluster of symptoms that includes disruptions in both gamma oscillations and cholinergic signaling. At the most general, elevator-pitch-y level, my research asks whether it is a coincidence that disruptions in gamma oscillations and cholinergic signaling so often occur together. Does cholinergic feedback modulate gamma oscillations? Are deficits in one system driving deficits in the other?
A study carried out and published by two talented postdocs in my lab showed that concurrent blockade of muscarinic and nicotinic acetylcholine receptors is sufficient to reduce the duration and the power of gamma oscillations generated by a midbrain gamma oscillator (Goddard et al. 2012). I'm not going to go into further detail about these experiments, the motivations behind them, or the relevance of the midbrain gamma oscillator for attentional processing.
(Interested parties should either check out the original paper, or ambush me in the halls of the Fairchild Building. We've got a poster of this work on display. Also the study's authors. They like cookies.)
Instead of going into detail, I will just quote the discussion section of the article, which set up the series of experiments that currently comprise the core of my thesis research.
Acetylcholine receptors (ACh-Rs) regulate the overall excitability of the midbrain oscillator. Blockade of ACh-Rs reduces the duration and power of the oscillations without affecting their periodicity. ... A combination of these [aforementioned] pre- and postsynaptic effects likely explains the decrease in oscillation power and duration that we observed after ACh-R blockade (Figure 3D). However, more investigation is required to understand how acetylcholine modulates the various elements utilized by the midbrain oscillator.
So. Scary realization.
The original finding - that blocking all (a.k.a. both muscarinic and nicotinic) acetylcholine receptors causes a decrease in the power and duration of gamma oscillations? I can't replicate it.
Reproducibility is a problem in science. The problem isn't really that studies are published and then not replicated (either by the same lab or by another group). The problem is that we just don't know how often studies can't be replicated by some poor graduate student laboring in obscurity. Peer-reviewed publications like novel research findings. By and large, they aren't interested in publishing replication studies (or negative findings, under which category I would place failure-to-replicate reports). A relatively recent example of the successful publication of failure-to-replicate studies is the kerfuffle involving the purported arsenic-DNA-incorporating bacterium GFAJ-1 (subject not familiar? Check out my coverage of the subject from back in 2011).
But let's not put all the blame on the journals - scientists hold the same priorities (novel research > redoing old studies). Personally, I much prefer to do experiments that have never been done before, asking questions no one else has yet answered. Replication studies are, to be honest, a bit boring. Yes, some scientists have called for a concerted effort to replicate high-profile scientific research. The Reproducibility Initiative, a collaboration between Science Exchange, PLOS ONE, figshare, and Mendeley, launched in 2012, offers to independently replicate scientific studies. If funding becomes available. Also in 2012, a group of cancer researchers reported on their attempt to replicate the findings of 53 landmark publications. They successfully replicated 6. If our goal, as scientists, is to produce real research findings, one component of our strategy to achieve that goal surely should be to double-check our findings. This point is especially true if we remember the warnings of John Ioannidis: statistical reliability in neuroscience research is shockingly low. To quote Kelly Zalocusky, my local expert on all things Ioannidis:
If our intuitions about our research are true, fellow graduate students, then fully 70% of published positive findings are “false positives”. This result furthermore assumes no bias, perfect use of statistics, and a complete lack of “many groups” effect. (The “many groups” effect means that many groups might work on the same question. 19 out of 20 find nothing, and the 1 “lucky” group that finds something actually publishes). Meaning—this estimate is likely to be hugely optimistic.
-- excerpted from Why Most Published Neuroscience Findings are False
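Zalocusky's 70% figure drops straight out of Ioannidis's positive predictive value (PPV) formula. Here is a back-of-the-envelope sketch; the power and pre-study-odds values below are illustrative assumptions chosen to land in that ballpark (power near the ~20% median estimated for neuroscience studies), not numbers taken from her post:

```python
# Ioannidis's positive predictive value, assuming no bias and no
# "many groups" effect:  PPV = (power * R) / (power * R + alpha),
# where R is the pre-study odds that a tested hypothesis is true.

def ppv(power, alpha, prior_odds):
    """Fraction of claimed positive findings that are actually true."""
    return (power * prior_odds) / (power * prior_odds + alpha)

# Assumed, illustrative values: power = 0.2, alpha = 0.05,
# pre-study odds of 1:10 that the hypothesis is true.
p = ppv(power=0.2, alpha=0.05, prior_odds=0.1)
print(f"PPV = {p:.2f}")  # ~0.29, i.e. roughly 70% of positives are false
```

Note that this already assumes everything went right statistically; bias and multiple groups chasing the same effect only push the PPV lower.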
Back to my research. I am faced with a published finding: that pharmacologically blocking both nicotinic and muscarinic acetylcholine receptors reduces the power and duration of gamma oscillations by around 50%. Last week, I decided that, given the differences in the recording setup (and the experimenter) between my research and the published findings, I should confirm that blocking both nicotinic and muscarinic acetylcholine receptors significantly diminishes gamma oscillation power and duration.
Four times, I’ve applied the acetylcholine receptor-blocking drugs to a slice of brain tissue containing a neural network that generates gamma oscillations. Four times, I’ve diminished the power, but not the duration of those gamma oscillations. Four times, I’ve failed to replicate a published finding.
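(For readers wondering what "power" and "duration" mean operationally: both come from spectral analysis of the recorded field potential. Below is a minimal sketch of one standard way to quantify them, run on synthetic data with an assumed 30-80 Hz gamma band; it is illustrative, not my actual analysis pipeline.)

```python
import numpy as np
from scipy import signal

fs = 1000                                   # sampling rate, Hz
t = np.arange(0, 2.0, 1 / fs)               # 2 s of "recording"

# Synthetic LFP: background noise plus a 45 Hz gamma burst from 0.5-1.5 s
rng = np.random.default_rng(0)
lfp = 0.2 * rng.standard_normal(t.size)
burst = (t >= 0.5) & (t < 1.5)
lfp[burst] += np.sin(2 * np.pi * 45 * t[burst])

# Gamma power: integrate the Welch power spectral density over 30-80 Hz
f, psd = signal.welch(lfp, fs=fs, nperseg=512)
in_band = (f >= 30) & (f <= 80)
gamma_power = psd[in_band].sum() * (f[1] - f[0])

# Oscillation duration: time the band-passed amplitude envelope spends
# above a threshold (half the peak envelope here - an arbitrary choice)
sos = signal.butter(4, [30, 80], btype="bandpass", fs=fs, output="sos")
envelope = np.abs(signal.hilbert(signal.sosfiltfilt(sos, lfp)))
duration = np.sum(envelope > 0.5 * envelope.max()) / fs

print(f"gamma power ~ {gamma_power:.3f}, duration ~ {duration:.2f} s")
```

With measures like these, a drug wash-on can shrink the power (the envelope gets smaller everywhere) without shrinking the duration (the envelope stays above threshold just as long) - exactly the dissociation I keep seeing.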
I’m not happy about this.
What are my options?
First off, I should emphatically state that I am absolutely confident in the original finding; no reader should entertain Ioannidis-inspired thoughts of statistical unreliability. The issue here is one of significant differences in experimental conditions. In addition, an n of 4 is still too small to draw any major conclusions. It is quite possible that further replications of the experiment will reveal the expected decrease in oscillation duration. But these findings may be accurate; I need a research strategy to deal with this unexpected result.
So. Given unlimited time, I’d like to chase down the precise experimental variables that are contributing to the differences between my results and the previous set. Off the top of my head, these could include differences in: the physiology rig (submerged chamber versus interface chamber), the pharmacology (alpha-bungarotoxin, mecamylamine and atropine versus DHβE and atropine), and the ionic concentrations in my extracellular solution (1.5 mM magnesium versus 2 mM magnesium). Identifying which constellation of variables is the critical contributor could take 1 week of experiments, or it could take 6 months. From the perspective of “I want to graduate within the next 2 years”, 1 week is acceptable; 6 months is not.
One option I have seriously considered is to not diagnose the problem: accept the result, and justify it by enumerating all the differences in the experimental conditions. Focus on the robust result - that cholinergic modulation contributes to the power of gamma oscillations. Discover the neural mechanisms underlying this phenomenon (a good 6 months of work), and lay aside the question of the conditions under which blocking acetylcholine receptors is sufficient to drive a substantial reduction in gamma oscillation duration. As a methodical scientist, I find this option fundamentally unsatisfying. But it is a measured approach to time management.
So what am I going to do?
After a week of alternately glaring at my data plots and staring off into space, I’ve come to the realization that I cannot, in good conscience, just ignore the possibility that some subtle, easily adjusted experimental variable is responsible for the absence of half of the published effect of acetylcholine receptor antagonists on gamma oscillations. It is worth my time to explore adjustments to some of the more malleable experimental conditions (namely the magnesium concentration and the specific drugs I use). On the flip side, I have decided that it is not worth testing the hypothesis that the type of physiology rig is to blame. Relearning a rig I have not used in over 5 years, one that is being used by other researchers - such a tangent could engulf my summer months, and that does not seem, to me, the best use of my time. Especially since, if the rig turns out to be the key variable, it is very unlikely, for multiple reasons, that I would conduct all my planned pharmacology experiments on the interface rig. So I’d be back to where I am now, in terms of tractable research options.
On Wednesday, I have rig time. Likely I’ll be adjusting the external magnesium concentration from 1.5 mM to 2 mM. Then I’ll wash on the nicotinic and muscarinic acetylcholine receptor antagonists I have been using, and see what happens. Hopefully, I’ll see a decrease in oscillation duration to accompany the decrease in gamma power I know I will see. Maybe I won’t. In any case, I’ll be ordering the original drug (DHβE, to replace the alpha-bungarotoxin and mecamylamine). In another week, that drug will be delivered to my desk, and I’ll test whether it is capable of halving the duration of my gamma oscillations.
And if neither of these modifications produces the elusive changes in duration?
Then I’m moving on. I’m focusing on the changes in gamma power. I’m failing to replicate a finding due to unknown methodological differences. I’m succeeding in examining the role of cholinergic modulation in a critical aspect of the generation of gamma oscillations in the midbrain network. And I’m finishing my PhD sooner rather than later.
Aside: For an additional perspective on when and how graduate students should replicate previous research, read To Replicate or Not to Replicate? by Michael Price, staff writer for Science Careers.