A prison is perhaps the easiest place to see the power of bad incentives. And yet in many other places in our society, we find otherwise normal men and women caught in the same trap and busily making life for everyone much less good than it could be. Elected officials ignore long-term problems because they must pander to the short-term interests of voters. People working for insurance companies rely on technicalities to deny desperately ill patients the care they need. CEOs and investment bankers run extraordinary risks—both for their businesses and for the economy as a whole—because they reap the rewards of success without suffering the penalties of failure. Lawyers continue to prosecute people they know to be innocent (and defend those they know to be guilty) because their careers depend upon winning cases. Our government fights a war on drugs that creates the very problem of black market profits and violence that it pretends to solve….
We need systems that are wiser than we are. We need institutions and cultural norms that make us better than we tend to be. It seems to me that the greatest challenge we now face is to build them.
MARCO IACOBONI
Neuroscientist; professor of psychiatry & biobehavioral sciences, David Geffen School of Medicine, UCLA; author, Mirroring People: The Science of Empathy and How We Connect with Others
We should be worried about science publishing. When I say “science publishing,” I am really thinking about the peer-reviewed life-science and biomedical literature. We should be worried about it because it seems that the only publishable data in the life-science and biomedical literature are novel findings. That’s a serious problem, because one of the crucial aspects of science is reproducibility of results. The problem in life science is that if you replicate an experiment and its results, no one wants to publish your replication data. “We know that already” is the typical response. Even when your experiment is not really a replication but merely resembles a previously published one, and your results are not identical to the previously published ones but close enough, nobody wants to see your study published unless you find a way of discussing your data in a new light. Only experiments that produce results opposite to those of previously published studies are likely to be published; there, the lack of replication is precisely what makes the experiment interesting.
The other big problem is that experiments producing negative findings, or “null results”—that is, experiments that fail to demonstrate any experimental effect—are also difficult to publish, unless they show a failure to replicate a previously published important finding.
These two practices combined make it very difficult to figure out, on the basis of the literature alone, which results are solid and replicable and which are not. And that’s clearly a problem.
Some have argued that to fix this problem we should publish all our negative results and publish positive results only after replicating them ourselves. I think that’s a great idea, although I don’t see the life-science and biomedical community embracing it anytime soon. But let me give you some practical examples as to why things are messed up in the life-science and biomedical literature and how they could be fixed.
One of the most exciting recent developments in human neuroscience is what’s called noninvasive neuromodulation. It consists of a number of techniques using either magnetic fields or low currents to stimulate the human brain painlessly and with no, or negligible, side effects. One of these techniques has already been approved by the Food and Drug Administration to treat depression. Other potential uses include reducing seizures in epileptic patients, improving recovery of function after brain damage, and, in principle, even improving cognitive capacities in healthy subjects.
In my lab, we are doing a number of experiments using neuromodulation, including two studies in which we stimulate two specific brain sites in the frontal lobe to improve empathy and reduce social prejudice. Every experiment has a rationale, which is obviously based on previous studies and on theories inspired by those studies. Our experiment on empathy is based mostly on our own previous work on mirror neurons and empathy. Having done a number of those studies ourselves, we are pretty confident about the background on which we base the rationale for our experiment. The experiment on social prejudice, however, is inspired by a clever paper recently published by another group that also used neuromodulation of the frontal lobe. The cognitive task used in that study shares similarities with the cognitive mechanisms of social prejudice. However, here is the catch: We know about that published paper (because it was published), but we have no idea whether a number of other groups attempted something similar and failed to get any effect, simply because negative findings don’t get published. Nor can we know how replicable the study that inspired our experiment is, because replication studies don’t get published either. In other words, we have many more unknowns than we would like.
Publishing replications and negative findings would make it much easier to know what is empirically solid and what is not. If twenty labs perform the same experiment, and eighteen get no experimental effects, while the remaining two get contrasting effects, and all these studies are published, then you know, simply from reading the literature, that there isn’t much to be pursued in that line of research. But if fourteen labs get the same effect, three get no effect, and three get the opposite effect, it is likely that the effect demonstrated by the fourteen labs is much more solid than the effects demonstrated by the six other labs.
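A rough back-of-the-envelope calculation shows why those two patterns carry such different weight. The sketch below (an illustration of the arithmetic only, with the 5 percent false-positive rate per lab as an assumed figure, not a number from any of the studies discussed here) asks how likely each pattern of results would be if there were no real effect at all:

    from math import comb

    def binom_tail(k: int, n: int, p: float) -> float:
        """P(X >= k) for X ~ Binomial(n, p): the chance that k or more
        of n independent labs report a positive result when each lab
        has probability p of a false positive."""
        return sum(comb(n, i) * p**i * (1 - p) ** (n - i)
                   for i in range(k, n + 1))

    FALSE_POSITIVE_RATE = 0.05  # assumed per-lab false-positive rate

    # 2 of 20 labs report an effect (and disagree on its direction):
    # quite probable even with no real effect.
    print(binom_tail(2, 20, FALSE_POSITIVE_RATE))   # ~0.26

    # 14 of 20 labs report the same effect: essentially impossible
    # by chance alone, so the effect is very likely real.
    print(binom_tail(14, 20, FALSE_POSITIVE_RATE))  # ~2e-14

The point of the exercise is simply that such tallies can be made only when replications and null results actually reach the literature.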
Given the current publishing system, reaching such conclusions is complicated. One way of doing it is to pool experiments that share a number of features. For instance, our group and others have investigated mirror neurons in autism and concluded that mirror neuron activity is reduced in autism, while some other groups have failed to demonstrate it. The studies showing mirror neuron impairment in autism largely outnumber the studies failing to show it, so in this instance it is reasonable to draw solid conclusions from the scientific literature. In many other instances, however, as in the example of neuromodulation of the frontal lobe and social prejudice, there is much uncertainty, because of the selectivity regarding what gets published and what doesn’t.
The simplest way to fix the problem is to evaluate whether or not a study should be published solely on the basis of the soundness of its experimental design, data collection, and analysis. If the experiment is well done, it should be published whether or not it is a replication, no matter what kind of results it shows. A minority in the life-science and biomedical community is finally voicing this alternative to the current dominant practices in scientific publishing. If this minority eventually becomes a majority, we will at last have a scientific literature that can be evaluated in quantitative terms (x number of studies show this, while y number of studies show that) rather than in qualitative terms (this study shows x, but that study shows y).
This approach will make it even more difficult for irrational claims (denial of evolution or of climate change are the most dramatic examples) to pretend to be “scientific.” It will also limit the number of controversies in life science to those issues that are truly unclear, saving all of us the time we now spend arguing about questions that should long ago have been settled by the empirical data.
ERIC R. WEINSTEIN