Last year, Harvard College senior Kelsey Ichikawa (shown in the photo above) interviewed the Martinos Center’s Bruce Rosen and Bruce Fischl for a general audience article about functional MRI, which she was writing for a science journalism course. Earlier this year, the article won the Harvard Bowdoin Prize for Essay in the Natural Sciences. Ultimately, she hopes to publish a version of the essay in a magazine. Following are excerpts from the article, including sections about the origins of the technique and some of its more controversial applications.
Chances are, you have heard that "your brain lights up" in response to certain stimuli. That metaphor comes from the technology of functional magnetic resonance imaging (fMRI). But how exactly do neuroscientists "see" the brain and its activity? How do those visual representations shape how we describe and imagine the mind? The full version of this narrative essay follows a human brain from the MRI machine to the final image in a scientific publication, tracing the steps of data collection, processing, and statistical analysis that ultimately allow us to visualize signal amidst the noise in fMRI. Through original research and interviews, I explore the scientific and societal complexities of neuroimages, including challenges with the BOLD signal as a proxy, non-standardized statistical procedures, and consequences of neuroimaging hype in legal and commercial settings. In writing this essay, I drew on the expertise of many scientists, including several at the Martinos Center, and my own experiences conducting fMRI research at Harvard's Center for Brain Science.
fMRI’s Origins
fMRI is not the first technique to alter our visual imaginations about the brain. When EEG (electroencephalography) came on the scene in the 1930s, scientists hailed it as the pivotal technological development that would allow them to elucidate the physicality of thought and cognition. Perhaps most indicative of the hype surrounding this new brain imaging technology was a famous public experiment in 1951, in which several scientists strapped an EEG cap onto Albert Einstein's head and told him to think about his theory of relativity. This public, performative act seemed to suggest that wavy lines on a page could capture high-level physics in the brain of a genius [2].
“New technology will surface and suddenly everyone will think that teaches us how the brain works because now we have these new things we can measure. But usually a few years into that technology, you realize, oh, this is actually just what we already knew, but using a new language to describe it,” Justin Baker said. “Things tend to move in circles where we don’t necessarily make a ton of fundamental progress in my view, at least with respect to the human brain.”
Ultimately, EEG did not deliver on those optimistic promises, opening a ripe opportunity for fMRI to take center stage in the 1990s.
In 1990, Seiji Ogawa, then at AT&T Bell Laboratories, published the first paper about the existence of the BOLD signal [14]. Then, on a May evening in 1991 at Massachusetts General Hospital, postdoctoral fellow Kenneth Kwong performed an experiment that produced what Martinos Center Director Bruce Rosen called "a eureka moment." Earlier, Jack Belliveau, Kwong's colleague at the Center, had demonstrated that MRI could track blood flow in the brain, but his method required injecting a chemical to create differences in magnetic properties of tissues. Kwong sought a means of revealing brain activity using only the internal properties of the body, without any need to inject potentially risky substances. The first video of the BOLD signal, a grainy black-and-white film less than 20 seconds long, showed the lower half of a gray oval—the brain—flashing on and off in time with the subject's visual stimulus [15]. Inspired by Kwong's video at a conference in San Francisco that August, Peter Bandettini at the Medical College of Wisconsin later published another paper showing the efficacy of fMRI techniques [1].
With these three foundational proof-of-concept experiments, fMRI was off to the races. But has it truly revolutionized our insights into our minds, our thoughts, and ourselves?
Dr. Bruce Rosen, Director of the Martinos Center for Biomedical Imaging, thinks so. It took a barrage of emails to get in touch with him, but when we finally talked on the phone, he hardly needed prompting to share his thoughts. “The field has held up remarkably well over the years. The fact that patients are being operated on based on fMRI findings, that our statistical methods continue to refine and get better so we can see subtler things—that’s fantastic,” he said. “The fact that in looking for subtler things we are occasionally going to go astray, I would just say, that’s science.”
Bruce Fischl, one of the early pioneers of fMRI analysis algorithms, also seemed optimistic about fMRI's progress. He has salt-and-pepper hair, a single earring, and a skeptical tone of voice whenever he replies with "okay." "Imaging has gotten astonishingly better," he said. "If we look back at the images that we got in the mid to late 90s, they were awful by today's standards; it took fifteen minutes to get what we would today call medium resolution. Today we can get those images in four minutes, at way better quality. The improvement in image quality and the decrease in acquisition time have really made MRI clinically relevant."
Statistical Analysis
However, many aspects of the fMRI analysis pipeline still need to be standardized and carefully evaluated. The sheer size of neuroimaging data has made statistical analysis one of the thorniest steps. The multiple comparisons problem, in which running so many statistical tests all but guarantees some false positive readouts, presents numerous potential pitfalls.
This became eminently clear in 2009, when an fMRI scan detected something fishy in a dead salmon. Craig Bennett, then a postdoctoral researcher at the University of California, Santa Barbara, wanted to test how far he could push the envelope with fMRI analysis. He slid a single Atlantic salmon into an MRI scanner, showed it pictures of people in emotional situations, and then followed typical pre-processing and statistical analysis procedures. Lo and behold, the dead fish's brain exhibited increased activity for emotional images—implying a sensitive, if not alive, salmon [17]. Even in a dead salmon's brain, the MRI scanner detected enough noise that some voxels exhibited statistically significant correlations. By failing to correct for multiple comparisons, Bennett and his colleagues demonstrated that they could "discover" illusory brain activity.
Neuroimagers have to correct for multiple comparisons by establishing stringent thresholds for statistical significance.
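The scale of the problem is easy to reproduce. The sketch below is my own illustration in Python, not code from any of the studies discussed, and its numbers (20,000 voxels, a simple on/off task) are arbitrary choices for demonstration. It correlates thousands of pure-noise "voxels" with a toy task: roughly five percent of them pass an uncorrected p < 0.05 threshold, while a Bonferroni correction, which divides the threshold by the number of tests, flags essentially none.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Pure noise "voxels": no task-related signal at all, like a dead salmon.
n_voxels, n_timepoints = 20_000, 120
noise = rng.standard_normal((n_voxels, n_timepoints))

# A toy on/off block design: 10 timepoints of stimulus, 10 of rest, repeated.
task = np.tile([1.0] * 10 + [0.0] * 10, n_timepoints // 20)

# Correlate every voxel's time series with the task; keep uncorrected p-values.
p_values = np.array([stats.pearsonr(voxel, task)[1] for voxel in noise])

# Uncorrected threshold: by chance alone, about 5% of noise voxels "light up."
print("uncorrected 'active' voxels:", int(np.sum(p_values < 0.05)))

# Bonferroni correction: divide the threshold by the number of tests.
print("Bonferroni 'active' voxels:", int(np.sum(p_values < 0.05 / n_voxels)))
```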
“It’s hard because we don’t know what the right directions are,” Fischl told me over coffee. “As a researcher, you’re left with a choice: Are you going to live with missing stuff? Or are you going to live with showing stuff that’s not real?”
Fischl is talking about how statistical thresholds have to strike a balance between a scientist's two deepest fears: false positives (misidentifying noise as signal) and false negatives (losing the signal amidst the noise). Therein lies the rub: the field has not yet settled on a best-practice solution to multiple comparisons because people cannot agree on the right balance of strictness. Moreover, all of the possible corrective procedures have important weaknesses, such as assuming that the voxels are independent of each other even though they definitely are not.
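To make that trade-off concrete, here is another hypothetical sketch (again my own illustration, with made-up numbers: 10,000 voxels, 200 of which carry a weak true signal). An uncorrected threshold catches most of the truly active voxels but also hundreds of false ones; Bonferroni all but eliminates the false alarms at the cost of missing real signal; a false discovery rate procedure such as Benjamini-Hochberg lands somewhere in between. None of these is the field's settled answer; they simply trace out the choice Fischl describes.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

n_voxels, n_timepoints, n_true = 10_000, 120, 200
task = np.tile([1.0] * 10 + [0.0] * 10, n_timepoints // 20)

# Mostly noise, plus a weak task-locked signal in the first 200 voxels.
data = rng.standard_normal((n_voxels, n_timepoints))
data[:n_true] += 0.8 * task
truly_active = np.zeros(n_voxels, dtype=bool)
truly_active[:n_true] = True

# Uncorrected p-value for each voxel's correlation with the task.
p = np.array([stats.pearsonr(voxel, task)[1] for voxel in data])

def benjamini_hochberg(p_values, q=0.05):
    """Benjamini-Hochberg step-up procedure: controls the false discovery rate."""
    m = len(p_values)
    ranked = np.sort(p_values)
    below = ranked <= q * np.arange(1, m + 1) / m
    cutoff = ranked[below].max() if below.any() else 0.0
    return p_values <= cutoff

def report(name, detected):
    hits = np.sum(detected & truly_active)            # real signal recovered
    false_alarms = np.sum(detected & ~truly_active)   # noise mistaken for signal
    print(f"{name}: {hits} true voxels found, {false_alarms} false alarms")

report("uncorrected p < 0.05", p < 0.05)
report("Bonferroni          ", p < 0.05 / n_voxels)
report("Benjamini-Hochberg  ", benjamini_hochberg(p))
```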
In 2016, another paper rocked the neuroimaging world. Anders Eklund, Thomas E. Nichols, and Hans Knutsson published an empirical investigation of the most common fMRI analysis software packages [5]. They found that the statistical assumptions built into those packages (and, in one case, an outright software bug) greatly inflated the chance of false positive results, in some cases to over 70% instead of the 5% error rate most researchers assume. This revelation called into question previously published studies—findings about brain correlates of personality, neural representations of knowledge, even the neural signatures of decision-making. None of them were free from suspicion.
The first time I read the Bennett and Eklund papers, I sat in a chilled stupor. Was the research I was doing, was the research I was reading, were they even real? Okay, I told myself. There’s always the option of returning to the zebra finches or fruit flies. But model organisms are paltry alternatives for someone like me who wants to study human social processes in the brain. Mice won’t help me understand poverty’s impact on the brain’s cognitive abilities.
Rosen offered a more tempered take on these developments. “The salmon paper highlighted that you can do stupid things if you don’t know what you’re doing,” he said. “It showed that you could do analysis in a way that seems reasonable and get this really dumb result. Am I surprised that you get a dumb result? Hardly. fMRI detects a remarkably big signal, but it’s still a signal of 1% change from baseline. Is it easy to screw things up so that you see changes of 1%? Pretty easy.”
As for the Eklund paper, he conceded, “It was an excellent point of statistics. I see the underlying math as being very solid. It’s actually a point we understood, but there are no doubt lots of people that didn’t understand.” But he thinks the paper’s implications were overblown. “The impact of that paper was actually pretty modest in terms of the number of results that were invalidated, that were important results. And whether we were significantly misleading people or sending doctors astray or anything like that, was negligible as best as I could tell. But the paper got a lot of press, and then suddenly, now fMRI has a black eye.”
The Images Circulate Outside of the Lab
That black eye has not halted optimistic speculations about the technology. In an April 2019 Wall Street Journal article, "The Machines That Will Read Your Mind," Dr. Jerry Kaplan wrote: "With improved imaging technology, it may become possible to 'eavesdrop' on a person's internal dialogue" [9]. Kaplan highlights the convergence of machine learning advances and fMRI data, which may allow neuroscientists to "decode" the content of thoughts from brain data. His article explores the possibility of using neuroimaging data to detect lies, judge guilt in legal settings, determine when someone is truly in pain from a disease, and surveil brain activity. "Someday it may be possible to learn to some level of precision whether your spouse really loves you, finds you attractive or is having an affair."
Is it just me, or does this sound like the premise to a Black Mirror episode?
The interest in deception and the brain is not new. Since 2008, companies like No Lie MRI and Cephos have raced to develop the research needed to use fMRI for lie detection, offering their services to legal defendants looking to validate their alibis [12]. Most scientists and legal scholars agree that the technology is not ready for applications in law, and several courts have denied requests to use fMRI evidence in arguments [6]. But as recently as 2016, Dr. Robert Huizenga, an investor in No Lie MRI, was still promoting the company on Dr. Oz's show. He touted fMRI as the "first unbiased, scientifically-backed way to differentiate a lie from truth telling" [3]. Unbiased. Hm.
As a practicing neuroimager, Rosen understands how fMRI images take on a special persuasiveness. "The clarity of the images leads to the implication of something more than what the image is," he said. "You see a spot on the brain and you feel like, Oh, this is so clear. And then when you look at the underlying data you realize, eh, it's a pretty small signal that the statistics probably suggest. It's more a probabilistic than real result. Whereas when you see the bright spot, it doesn't seem probable at all, because there it is, right? Inarguable."
It is precisely this power of brain images’ projected scientific authority that led anthropologist of science Joseph Dumit to comment on the “undue risk in courtrooms that brain images will not be seen as prejudiced, stylized representations of correlation, but rather as straightforward, objective photographs” [4].
In attempts to mitigate sentences, lawyers have adduced brain images as evidence of pathologies in criminal defendants [11]. Arguments often take the following form: the defendant has a neurological or psychiatric disorder that impairs cognitive and moral reasoning. This image reveals the brain abnormalities associated with the psychopathology.
One of the best-known historical examples of this is John W. Hinckley, Jr.'s insanity defense. In 1981, Hinckley attempted to assassinate President Ronald Reagan. During his trial, the defense team argued that Hinckley was schizophrenic and provided CT (computerized tomography) scans of Hinckley's brain to illustrate abnormalities in its folds and shape. Although the judge initially denied the request to show the images to the jury, he later changed his mind and permitted it, justifying his decision by the need to view all evidence related to the insanity plea [8]. Ultimately, Hinckley was found not guilty by reason of insanity, and the case made a strong point about the kinds of expertise, and the mediums of that expertise, that American law trusted to validate mental illness. CT scans use a different technology from MRI, but the case's legacy pertains to fMRI as well. For instance, in 2016, neuroscientist Ruben Gur used anatomical and functional data from MRI scans to argue to a jury that Steven Northington was intellectually disabled and therefore should not receive capital punishment [10].
In court, neuroimaging evidence can literally become a matter of life and death. At the same time, this kind of evidence reifies conceptions of distinct human kinds: the mad and the sane, the pathological and the healthy, with these categories borne out in shining pictures of brain activity. In doing so, it privileges biological conceptions of personhood over other more holistic notions of a human life.
One concerning upshot of this is the medicalization of deviance, which can ultimately motivate biological intervention to eliminate behaviors considered non-normative or wrong. That is, fMRI often purports to show that a brain is “broken,” and a broken brain demands to be fixed. This can lead to ethically fraught initiatives like recent efforts to electrically stimulate prisoners’ brains in order to reduce aggression [13]. Here, the issue of interpersonal violence, a social issue influenced by many structural and cultural factors, comes to be located in the brain at the level of the individual. This is in no small part because of the tight hold that brain images have on our imagination of the mechanisms structuring human behavior, especially stigmatized dispositions like mental illness and criminality.
—–
Kelsey Ichikawa graduated from Harvard College in 2020 with a joint concentration in Neurobiology and Philosophy and a secondary field in History of Science. She completed her joint senior thesis with the Harvard Intergroup Neuroscience Lab, advised by Mina Cikara and Susanna Siegel. She can be reached at ks.ichikawa@gmail.com.
Works Cited
- Bandettini, Peter A. “The Birth of Functional MRI at the Medical College of Wisconsin.” In FMRI: From Nuclear Spins to Brain Functions, edited by Kamil Uludag, Kamil Ugurbil, and Lawrence Berliner, 11–18. Biological Magnetic Resonance. Boston, MA: Springer US, 2015.
- Borck, Cornelius. “Recording the Brain at Work: The Visible, the Readable, and the Invisible in Electroencephalography.” Journal of the History of the Neurosciences 17, no. 3 (July 16, 2008): 367–79.
- Calderone, Julia. “There Are Some Big Problems with Brain-Scan Lie Detectors.” Business Insider, April 19, 2016. Accessed April 7, 2020.
- Dumit, Joseph. “Objective Brains, Prejudicial Images.” Science in Context 12, no. 1 (1999): 173–201.
- Eklund, Anders, Thomas E. Nichols, and Hans Knutsson. “Cluster Failure: Why FMRI Inferences for Spatial Extent Have Inflated False-Positive Rates.” Proceedings of the National Academy of Sciences 113, no. 28 (July 12, 2016): 7900–7905.
- Farah, Martha J., J. Benjamin Hutchinson, Elizabeth A. Phelps, and Anthony D. Wagner. “Functional MRI-Based Lie Detection: Scientific and Societal Challenges.” Nature Reviews Neuroscience 15, no. 2 (February 2014): 123–31.
- Gewin, Virginia. “Turning Point: Craig Bennett.” Nature 490, no. 7420 (October 2012): 437.
- Taylor, Stuart, Jr. “Cat Scans Said to Show Shrunken Hinckley Brain.” The New York Times, June 2, 1982, sec. U.S.
- Kaplan, Jerry. “The Machines That Will Read Your Mind.” Wall Street Journal, April 5, 2019, sec. Life.
- Kelkar, Kamala. “Can a Brain Scan Uncover Your Morals?” The Guardian, January 17, 2016, sec. Science.
- Kulynych, Jennifer. “Brain, Mind, and Criminal Behavior: Neuroimages as Scientific Evidence.” Jurimetrics 36, no. 3 (1996): 235–44.
- Miller, Greg. “Truthiness? No Lie MRI Hits the Legal System.” Science, March 17, 2009.
- Molero-Chamizo, Andrés, Raquel Martín Riquel, Juan Antonio Moriana, Michael A. Nitsche, and Guadalupe N. Rivera-Urbina. “Bilateral Prefrontal Cortex Anodal TDCS Effects on Self-Reported Aggressiveness in Imprisoned Violent Offenders.” Neuroscience 397 (January 15, 2019): 31–40.
- Ogawa, S, T M Lee, A R Kay, and D W Tank. “Brain Magnetic Resonance Imaging with Contrast Dependent on Blood Oxygenation.” Proceedings of the National Academy of Sciences of the United States of America 87, no. 24 (December 1990): 9868–72.
- Boas, Gary. “On Introducing Noninvasive FMRI: A Conversation With Ken Kwong.” fMRI25, November 29, 2016.