I’ve written before about how the brains of blind people can show us which aspects of our mental life are strictly tied to certain senses and which are, let’s call them, “sensory-flexible.” Imagining how a person might navigate their world in the absence of visual input, I think, can stir up interesting new ideas about how and why people use each of their senses. But blindfolding yourself isn’t even a close approximation of the lifetime of experiences and neural changes that come with blindness. Imagination can only take us so far.
I am increasingly disenchanted with brain scanning studies that claim to prove the obvious: “Humans like fun, brain scans show!” Much of the time, you don’t need an expensive brain scanner to tell you these things, and it’s frustrating to see team after team of researchers use these big toys to confirm what we already know. In the case of blindness, however, I see a truly important use of brain scanning technologies. Esoteric philosophical thought experiments can come to life when you compare blind and sighted brains.
In these experiments, the brains of sighted individuals can be thought of as being “exposed to” a really overwhelming treatment, visual input, while the brains of blind individuals are not. Which is an interesting flipping of the script: we’re not comparing blind individuals to “normal controls.” Vision is a manipulation, making sighted people our experimental group and blind people our control group. By comparing the two, we can begin to answer questions about what brain functions are “innate” or “hard-wired” and which are shaped by experience (i.e., visual experience).
How do we tell the difference? Remember those “sensory-flexible” functions I mentioned? Well, if a brain function happens the same way regardless of whether it’s fed information through visual, auditory, or even tactile pathways, then there’s probably something about that function that never needed a particular sensory input to help it wire up in the first place. The trouble is, humans overwhelmingly rely on visual input, making it difficult to tell “visual” brain regions apart from others. That’s where the blind brain comes in.
I want to focus on a study out last year in the Journal of Neuroscience, by Johns Hopkins researchers Connor Lane, Shipra Kanjlia, Akira Omaki, & Marina Bedny. This is one of a bunch of interesting studies coming out in this line of work, but I chose this one because it appeals to my inner grammar nerd. Lane & pals explored the role of the “unemployed” visual cortex of people who had been blind since birth. It’s been known for a while now that this brain region lights up instead of languishing unused. It seems to help out with a variety of things, from reading Braille to maintaining information in memory. We know this from studies that use brain scanning techniques like fMRI (functional magnetic resonance imaging) to see the brain at work. We also know that if we disrupt activity in the blind visual cortex using a brain-zapping technique like TMS (transcranial magnetic stimulation), these superpowers seem to go away. Correlation AND causation, neat!
Sensory substitution devices, which translate visual information into “soundscapes,” can be used to present words and pictures to blind people in the scanner, feeding the brain information through a previously unavailable sensory code. Eerily, the brain areas specialized for processing visual information about “what and where” in the brain are still there in the blind brain. Same goes for areas specialized for even more specific things, like words and body parts. These results show that the brain’s weird specializations and selectivities aren’t loyal to any one sensory input. They suggest that there’s something about the way we operate as humans that would’ve resulted in similar neural architecture whether or not we relied on vision. What that something is, however, is up for debate.
Lane & colleagues looked at the way the blind visual cortex processes language. A previous study had found that the response here was greater when sentences had a scrambled word order than when they didn’t. This raised the question of whether these responses were driven by something truly useful, like making sense of words, or simply by the surprise, novelty, complexity, or general weirdness of these scrambled sentences. Another study showed that making a memory task more difficult didn’t drive up the response here, meaning it seems unlikely that the response to scrambled sentences was driven by the difficulty people had in making sense of the words. But Lane & colleagues wanted to get more specific.
They studied something called syntactic movement, a concept proposed by Noam Chomsky. I didn’t know what this was, but it turns out I am certainly guilty of its overuse. Lane gives an example of a sentence without movement: “The creator of the gritty HBO crime series admires that the actress often improvises her lines.” And one with movement: “The actress that the creator of the gritty HBO crime series admires often improvises her lines.” See the difference? The latter sentence taxes the ol’ memory banks in a way that the former doesn’t. You have to hang onto the fact that we’re talking about the actress while I interrupt what I’m about to tell you about her with this little factoid about her having the admiration of the series creator. Now, if you are the blind visual cortex, do you light up for this because it’s difficult, or because you’re helping in some vision-like way to extract meaning from the complicated sentence?
Sidebar to any of my students that may be reading: start taking notes now. I want to see good experimental controls in your final grant proposals. Remember: beat your reviewers to the punch, anticipate their worries and do some cool magic tricks (rhetorical power posing, cleverly matched control groups, and reassuring graphs) to placate them. In other words, if you’re lying awake at night scared of a lurking variable and think your reviewers are definitely going to come for you about it, take the fall. Act hurt. Get indignant. And then get up. Look sickening. And make them eat it. Watch and learn.
In fMRI, we’re often comparing one thing to another thing. This subtractive approach has its limits, but can be very useful for dispatching lurking variables like difficulty. Here, the researchers compared the brain responses during listening and comprehension of sentences to two other responses: while people worked on verbal sequence memory (i.e., remembering a series of words not in a sentence) and while they worked on math problems. They also went a step further, asking whether the brain response increased with the difficulty of sentence comprehension and of math problems. So they’re first establishing that there’s some truly language-specific component of these responses by comparing them to the responses to a similarly language-y, but structurally different task (verbal sequence memory) and to a structurally similar, but non-language-y task (math). And then they further test for language-specific involvement by asking whether the visual cortex is sensitive to a change in difficulty for just language, or for either language or math. If it knows what’s hard and easy, that’s a good sign it’s doing the heavy lifting for that computation. And if it knows this for only one or the other, then this heavy lifting is specific to that task, not a general grunt of difficulty.
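To make that logic concrete, here’s a toy sketch of the two kinds of contrast. Every number below is invented for illustration; this is not the authors’ data or analysis pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-trial ROI responses (arbitrary units); all invented
# purely to illustrate the logic of the contrasts.
sentences = rng.normal(1.0, 0.2, 40)    # sentence comprehension
word_lists = rng.normal(0.4, 0.2, 40)   # verbal sequence memory
math_task = rng.normal(0.3, 0.2, 40)    # math problems

# Subtractive contrast: does the region respond more to sentences than
# to a language-y control (word lists) and a non-language-y one (math)?
lang_specific = sentences.mean() - max(word_lists.mean(), math_task.mean())

# Parametric contrast: does the response scale with difficulty for
# language but not for math? (difficulty rated 1-5; slope via least squares)
difficulty = np.tile(np.arange(1, 6), 8)
hard_sentences = sentences + 0.1 * difficulty   # scales with difficulty
hard_math = math_task + 0.0 * difficulty        # flat

def slope(y):
    # Best-fit line; the first polyfit coefficient is the slope
    return np.polyfit(difficulty, y, 1)[0]

sentence_slope, math_slope = slope(hard_sentences), slope(hard_math)
```

A positive `lang_specific` plays the role of the subtraction, and a sentence-difficulty slope that beats the math-difficulty slope plays the role of the parametric test.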
They found, first, that only in their blind subjects did the visual cortex respond more to sentences than to non-words and to math problems. They also found that, in blind people, sentences with syntactic movement elicited even greater responses. Meanwhile, when presented with easy and hard math problems, this area didn’t ramp up its activity for the hard ones. And if you’re wondering if maybe they just didn’t make the problems hard enough, check out the prefrontal cortex, where the brain responses definitely scaled up for harder problems.
This helps to rule out the hand-waving explanation that would say that these responses are just a general response to difficulty. So it does seem like the blind visual cortex takes on specific roles in language processing that the sighted visual cortex does not. And it does seem to really be helping the blind individuals get the sentence comprehension job done: the more they activated their “visual” cortex, the better they understood the sentences! The authors dangle a promise of a TMS experiment in front of the reader–I will bet you a set of tickets to Dolly’s upcoming tour that zapping this brain region knocks sentence comprehension way down, in a causal confirmation of this intriguing correlation.
What does this mean? Well, it’s pretty weird that this brain area that we think of as a glorified piece of camera film seems to be capable of participating in information processing as uniquely, complexly hierarchical and rule-based as human language (remember, these words were heard, not read). Prominent/vocal brain-development researchers Steven Pinker and Mark Hauser have proposed that the perisylvian language areas, a family of brain regions clustered around the auditory cortex, are where the real heavy lifting occurs during syntactic movement. The blind participants did also activate these regions, so it’s not like they used a different part of their brain entirely. But at the same time, their brains differed from the brains of sighted individuals in other ways. For one thing, they were less lopsided in how they process language. Normally, the brain’s left hemisphere becomes specialized for language while the specialization for abilities like visually analyzing faces gets pushed over into the right hemisphere. Or, erm, it’s unclear who shoved who first, actually. Regardless, in the blind subjects, Lane and colleagues found evidence that this left-sided language specialization was reduced. Absent this turf war between pushy roommates language and visuospatial analysis, brain organization sort of evens out.
In any case, it seems kind of ludicrous to think that the visual cortex could just switch teams so easily. Since we think of language as uniquely human, while vision is something that even many very lowly animals have, what, then, do we do with this idea that different brain areas are somehow “suited” to their jobs? In other words, what made the visual cortex visual in the first place? And how did language sneak in there when (sorry) no one was looking?
Well, it could be that language is cultural, and that there aren’t brain networks that are innately specialized for it. On the other hand, the authors argue, it could be that evolution wired us for language. We know that the perisylvian language areas retain their function in both deafness and blindness. It could be the case that you only need these to acquire language–maybe these areas contain the seeds of linguistic processing, which later spread to other areas, and in blindness, this includes visual areas.
But wait, they say, what about critical periods for language acquisition? Isn’t it true that certain things need to occur at certain points in development, or they’ll never occur? This is why learning languages is supposed to be harder as you get older, and it’s also why a cat with its eyes sutured shut during certain developmental stages will never be able to process visual information in the same way. We know that, in general, only blind people who lost their vision before age 9 will activate their visual cortex. After that, it’s thought that too much visual processing software has been installed, and you can no longer teach the old dog new tricks, to make a metaphor cocktail. So maybe, the authors write, if you’re blind, the connections from language areas to the visual cortex don’t get pruned and sculpted by experience during development. Maybe it’s these residual connections that get language information into the visual cortex. Alternatively, maybe the visual cortex picks up new tricks from all the other aforementioned specialized “visual” areas (those responding to words, faces, places, body parts, etc.) that stay specialized even when words are in Braille or faces are converted using vision-to-audio devices.
Wherever these responses came from, there are likely to be other specializations within the blind visual cortex. The authors note that locating sound sources in space, mentally rotating a tactile object, and discriminating between two sounds or tactile objects have also been known to light up the blind visual cortex. But it’s in comparing these types of tasks, subtracting off any thought process too general to give us new information, and asking whether these responses ramp up with difficulty, that we can start to tease out what the common, non-visual computations underlying all of these processes are. What is it that you’re doing when you follow the syntactic wiggle-worms I call sentences? Does it feel similar to something you’re doing when your eyes dart around, analyzing a complex visual scene? Similar to anticipating the chorus of a song? Similar to the feel of physically wrestling with a wiggle worm? Somehow you turn sentences into stories and stories into lives and identities. This storytelling ability is part of what makes us human, and weirdly/miraculously, it’s something that springs from the same brain parts, whether or not you rely on vision.
I had a paper accepted yesterday! This is my first first-authored paper, which means I saw this thing through from start to finish. And lived to tell the tale. I can hardly believe it myself.
This project began when I was a wee first-year graduate student some years ago (I will tell you flat-out that I am 30 years of age, loud and proud, but etiquette dictates that you NEVER ask a scientist how old their data are). I had just spent two years at the National Institute of Mental Health (NIMH) learning to scan brains, and I hoped to earn a spot in Rich Ivry’s lab at Berkeley by showing off my new skills. So when Rich, who would eventually become my PhD advisor, told me that he had a project in mind for me, I said great, I can totally do that in a ten-week lab rotation (HAHAHAHAHA I was so young and stupid).
The brain scanning technique I use is called functional magnetic resonance imaging, fMRI for short. fMRI is used to create pretty pictures of the brain in action, “lighting up” to reveal hotbeds of activity. But it’s not tracking the activity of brain cells, or neurons. It’s tracking blood flow, which is and isn’t a good proxy for neuronal activity, and I’ll tell you why.
The scanner is essentially a giant magnet, and the pretty pictures are made possible by the iron in your blood. Recently-active neurons receive fresh shipments of oxygen bound by hemoglobin, and the hemoglobin (heme = iron) changes its conformation (and thus, its magnetism) depending on whether it’s carrying oxygen or not. Follow the blood, the thinking goes, and it will lead you to active neurons.
Except when it doesn’t. Unfortunately, your blood is pumped up by your heart, which has a pesky tendency to beat faster or slower in response to THE SAME KINDS OF STUFF PEOPLE ARE TRYING TO STUDY. Scary pictures, math problems, ethical dilemmas, and even small movements all cause your heart rate to go way up or way down, so if you’re trying to learn how the brain responds to these things by looking for subtle changes in blood flow, well, godspeed to you.
Before there were fancy, expensive brain scanners, psychophysiologists in labs would hook people up to heart rate monitors, measure pupil dilation, monitor the small changes in sweatiness known as the galvanic skin response, and track the rate and depth of breaths, all to get clues about what’s going on in your head. This is, for instance, the basis of the polygraph, or lie detector test: the name just means “many graphs” (I assume), and by the way there’s a great book on the weirdo who conned everyone into thinking this was a good idea. I mean, in a way, it was: it’s a hell of a lot cheaper than fMRI-based lie detection and just as crappy.
I’ve said it before and I’ll say it again: Don’t bother studying the brain, the heart tells you everything. For example, when you’re anticipating something, it slows down just the right amount to allow more blood to build up–it’s thought that this happens so that when the time comes, you get a bigger pump of blood through your body. If you randomize the timing of events so that they can’t be anticipated, the heart learns the average and slows down that much. This gives me the creeps, but it also means that, like Peter Pan trying to dissociate from his shadow, researchers will have a hell of a time telling the difference between brain activity and “brain activity.”
I came into this problem where a fellow grad student and extremely wise mentor, John Schlerf, left off. John had previously had a dark night of the soul when, expecting to find a big fat “error” signal in the brain’s error processing center, the cerebellum, he instead found zilch. That is, until he noticed that errors caused the heart to more or less literally skip a beat, effectively canceling out that fresh supply of blood he was counting on detecting. So, using statistics and magic, he “corrected for” this change in heart rate and boom. Beautiful error signal, just where he knew it would be.
When I joined the lab, John and Rich wanted to do something very principled: to take a step back and ask how pervasive a problem this was likely to be. If errors could be masked in this way, what about other kinds of brain processing? My job was to study the brain’s response to simple, stripped-down arm movements. Poignantly, this kind of simple arm movement is what was used, in the early days of fMRI, to create a sort of template response to help predict brain activity. This template, known as the hemodynamic response function, can be thought of as a description of a person suspected of a crime. Say you believe a brain area is involved in some process, like movement, or memory, or reasoning. That area should give its location away every time that process occurs by “fitting the description.” Neurons fire, and the fMRI signal, known as the blood oxygen level dependent (or BOLD) signal, should go up in this sluggish, wavelike way. And wherever you see this happen in the brain, you color-code the area and say it “lit up.”
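That “sluggish, wavelike” template can be sketched in a few lines. This is a standard double-gamma approximation of the canonical HRF, not the exact function from any particular software package, and the onsets below are made up:

```python
import numpy as np
from math import gamma

def gamma_pdf(t, shape):
    # Gamma density with unit scale; t is time in seconds
    return t ** (shape - 1) * np.exp(-t) / gamma(shape)

def canonical_hrf(t):
    # Double-gamma shape: a peak around 5 s, then a smaller, slower
    # undershoot. This is the "description of the suspect."
    return gamma_pdf(t, 6) - gamma_pdf(t, 16) / 6.0

t = np.arange(0, 30, 0.5)   # 30 s window sampled every 0.5 s
hrf = canonical_hrf(t)

# Predicted BOLD signal: put a 1 at every (hypothetical) movement onset
# and convolve with the template
onsets = np.zeros(120)      # 60 s of data at 0.5 s resolution
onsets[[10, 50, 90]] = 1    # three made-up movements
predicted = np.convolve(onsets, hrf)[: len(onsets)]
```

Wherever the real signal correlates with `predicted`, that’s where you color-code the brain and say it “lit up.”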
But what if that’s all just a bunch of blood being pumped up by the heart? Rich and John suspected that, if you removed the parts of the BOLD signal that fit a different description, based on recorded heartbeats, you might be left with, well, nothing at all, in a worst case scenario. This would have meant that all of fMRI was in serious trouble. And let’s be honest: I sort of selfishly wanted to be the person who showed this and published it more than I wanted all of the fMRI studies that had ever been done to be “real.” Don’t worry: it’s not all that bad, and I didn’t get to be the supervillain, the Hemodynamic Angel of Death, after all, as we shall soon see.
At the time, I knew this project would be a good way for me to become acquainted with the brain imaging community at Berkeley, learning to use a new scanner and new software packages. Because this was a methods project, I’d even have to dig deep into the guts of my code, hacking away at software written by, let’s be real, a total madman (I won’t name names but those in the fMRI community will feel my pain as I cursed myself for not sticking with AFNI, an NIMH-based package). This was wildly intimidating to me, but I knew I’d learn a lot and feel basically just real butch about my science. The goal wasn’t to figure out how the brain works but rather to figure out how we can best figure out how the brain works. It was not what I’d come for: I just wanted to make pretty pictures. But I also knew, from my time at NIMH, that methods projects were important, and to ignore these kinds of issues as an fMRI researcher is to consign oneself to reading very expensive tea leaves.
So! What did we do, and what did we learn? Well, first, we had people make some simple arm movements in the scanner. The rule was: Every time the crosshair turns green, you move.
We recorded their heart rate and breathing while they were in the scanner. Note that heart rate isn’t something you have a lot of control over, whereas breathing sort of is. You tend not to think about your breathing, but when we averaged together everyone’s breathing data, some people were rock-steady while others were more erratic, and so the effects this had on the BOLD signal were kind of a mixed bag. Heart rate, on the other hand, reliably soared after each movement:
This graph should scare you, because it looks so very much like the thing we’re trying to detect: the hemodynamic response, mentioned earlier.
We looked at two regions of interest, or ROIs, in the brain: the primary motor cortex (also known as M1) and the cerebellum.
See that nice, clean edge on the cerebellum ROI? It stops right before spilling out into the visual cortex above it, and that’s no accident. That’s months of hand-editing, a task I later outsourced to my undergrad minions, hoping it eventually took on roughly the same meditative quality as a mandala. Sidebar: I just recently learned that by going in and zapping blood vessels and other misidentified chunks of tissue, we were becoming intimately acquainted with the very same distinction (vessel or tissue?) that had, a decade earlier, caused Ben Carson to botch a high-profile separation of conjoined twins. Such a rich and storied legacy, that.
ANYWAY. Using statistics and magic, you take the files that mark every time your participant moved, you look ahead in time by creating a series of lagged files, and you pull out the BOLD signal from your regions of interest at each of those times to make a graph that, hopefully, looks like the canonical HRF and is a faithful representation of what happens in motor areas when you move.
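Minus the statistics and magic, the recipe above looks roughly like this. The time course and movement onsets are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

n_scans = 200
onset_scans = np.arange(10, 190, 20)   # hypothetical movement times (scan index)

# Fake ROI time course: a bump peaking two scans after each movement,
# plus noise. (Everything here is invented for illustration.)
bump = np.array([0.0, 0.5, 1.0, 0.7, 0.3, 0.1])
bold = rng.normal(0, 0.1, n_scans)
for onset in onset_scans:
    bold[onset : onset + len(bump)] += bump

# The "lagged files" trick: for each lag, average the BOLD signal that
# many scans after every movement. The averages trace out the response.
n_lags = 6
response = np.array([bold[onset_scans + lag].mean() for lag in range(n_lags)])
```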
Phew. Looks a lot like what we expected. So far so good. Then, you say WAIT, there’s ALL THIS OTHER CRAP WE RECORDED, like heart rate and respiration and whether the movements came just before or just after a heartbeat or breath and it’s all here! Let’s just throw that in and see what happens.
Once your statistics and magic account for all that other crap, guess what. It’s not THAT different. fMRI is saved. You can all go home, and go back to fighting with crappy code and bashing your heads against your keyboards.
Now, this isn’t the whole story, or the most recent version of it (these images were taken from old talks, because I’m not totally sure if I’m allowed to plagiarize my own figures before they’re even published. So, final results may vary slightly, but not much). And no, you shouldn’t really give up on the study of the brain just because it’s cheaper to study the heart (although there were times when I felt I should, and in my scannerless future, it’s definitely an appealing notion). All this says is that, on our hunts for brain activation, using the current description of the suspect should work out OK.
But we do painstakingly show that monitoring and correcting for changes in heart rate and respiration, the way we did, can really clean up your data–EVEN THOUGH we didn’t prove that fMRI is a sham and it’s all just heart rate getting in the way. We did a bunch of other stuff and made some really hideously complicated flowcharts that show exactly how much good each of our statistical corrections did–definitely worth looking into if you plan on scanning any brains attached to hearts.
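At its core, this kind of correction is nuisance regression: fit the recorded physiology to the BOLD signal and keep the residuals. Here’s a minimal sketch with fake data; our actual pipeline involved more than this (cardiac phase, respiration, and so on):

```python
import numpy as np

rng = np.random.default_rng(2)

n = 300
heart_rate = rng.normal(70, 5, n)        # fake beats-per-minute trace
neural = np.sin(np.linspace(0, 20, n))   # pretend "true" neural signal
# Fake BOLD: neural signal plus a heart-rate-driven component plus noise
bold = neural + 0.05 * (heart_rate - 70) + rng.normal(0, 0.1, n)

# Nuisance regression: fit an intercept plus the recorded heart rate to
# the BOLD signal, then keep the residuals, i.e., the part of the signal
# the physiology can't explain.
X = np.column_stack([np.ones(n), heart_rate - heart_rate.mean()])
beta, *_ = np.linalg.lstsq(X, bold, rcond=None)
cleaned = bold - X @ beta

# The cleaned signal should track the underlying signal better
corr_before = np.corrcoef(bold, neural)[0, 1]
corr_after = np.corrcoef(cleaned, neural)[0, 1]
```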
Shout out to the help and patience of John, Rich, and also Ben and Rick at the Brain Imaging Center (Ben’s blog, PractiCal fMRI, is fantastic–truly, he is doing God’s Work for fMRI researchers everywhere, and Rick heroically fixed our extremely expensive robot arm after I BROKE IT in a highly traumatic incident I was sure would cost me my spot in the lab. Extra shout-out to John for breaking the news to Rich for me, and to Rich, for taking me in anyway). And even though this took me YEARS (yes, if you’ve read this far, you get to know at least part of the secret) from data collection to publication (in my defense, this was not my only project), I’m so glad my baby is out in the world now. I didn’t know what I was doing when I started, but I do now. I mean, as much as anyone does.
“Happiness depends not on how well things are going…but whether they are going better than expected.”
Opinion papers are fun. It’s like going to a bar with a scientist and asking what they really think. They might be wrong, but there are very few people far out enough on that ledge to even be spitballing here, so they slap the “opinion” label on and start riffing away. Here, in this churn of conjecture, where bits of evidence are sized up like puzzle pieces, is where new hypotheses are born. Here is the engine that makes science go.
I’ve been dying to write about a recent opinion paper in Trends in Cognitive Sciences. The paper, called Mood as Representation of Momentum, is by Eran Eldar, Robb Rutledge, Raymond Dolan, and Yael Niv, a group from University College London and Princeton. In it, they attempt to bring together two bodies of research that don’t usually interact much. The first area of research focuses on the causes of moods and what happens when they go awry in disorders like anxiety, depression, and bipolar disorder. The second focuses on reinforcement learning (learning from rewards, which can roughly be thought of as trial-and-error learning) and decision making.
Moods, for our purposes, are similar to emotions, but longer-lasting and less specific. Your mood can be an “up” state or “down” state, depending on whether you are happy (in a good mood) or sad (in a bad mood). Your moods then make you more or less likely to experience more specific emotions: for instance, a bad mood can make it easier for you to become angry or frustrated or both.
Moods can be affected by all sorts of things: music, social interaction, self-reflection, sunshine, or even simply viewing the facial expressions of others. In labs, psychologists and economists use pictures of emotional faces, monetary rewards, or even pictures of the outcomes of sporting events to try to manipulate mood. Using smartphone apps, alarms for set reflection times, daily mood journals, brain scans, and more, scientists are attempting to understand what causes moods and what they are for.
The authors write: “The upshot of this research is that mood induced by a stimulus can affect judgments about other, potentially unrelated, stimuli. Indeed, this property may have given mood its reputation as a rich fountain for irrational behavior.”
Irrational, indeed. I may have blood coming out of my wherever when I go to bat for my right to mood, but I am not alone. The authors believe that moods are actually evolutionarily advantageous, that far from being irrational or counterproductive, moods serve a purpose. This argument is a tough sell: moods are typically associated with mood swings, explosive tempers, and generally being a bitch. Coincidentally, many rationality fetishists have a pesky misogyny problem.
To me, this gender-related distaste for moodiness reeks of generation after generation of men taught to bury their feelings. Feelings bad. Boys don’t cry. To aspire to be master of the universe is to aspire to an unattainable objectivity, to become some sort of stoic thetan, freed from the sway of irrational emotional forces.
Toxic masculinity remains a top rant for me. So I was pretty dang thrilled to see badass Princeton computational modeler Yael Niv arguing that we have evolved to be moody because moods make us optimal learners. That’s right: mood is a feat of evolutionary engineering, a Goddess-given engine of practical efficiency for all people of all genders. Finally, evo psych in the service of something I can get behind. Niv, whose work I know best among the authors, is an expert in feelings, if anyone can be called that. Her work creates and tweaks actual formulae for happiness. Equations. Plug and chug and Happiness = X. How’s that for rational?
Let me back up for a moment. I’ll get to the formula for happiness momentarily. But first, I want to talk about a shirt I once saw. It was emblazoned with a molecular structure and the slogan “Dopamine: technically the only thing you like.” This, while clever, is not technically true. You also like opioids (like morphine and your body’s natural equivalent, endorphins) and arguably serotonin (the chemical behind the chemical imbalance that is depression, and a common target of antidepressant medications), along with who knows what other mysterious neurotransmitters we have yet to understand. Dopamine, however, is more like the only thing (that we know of) that you want. It drives craving.
When you receive an unexpected reward, you get a burst of dopamine in an area of the brain known as the ventral striatum. A pet peeve of mine is when people show brain responses to cocaine and then do a side-by-side of whatever it is they’re arguing acts similarly. Sugar. Video games. Gambling. I myself spent most of grad school programming an Atari-like shuffleboard game which, though primitive, robustly “lit up” the ventral striatum just like cocaine. This doesn’t mean I’ve created a cohort of shuffleboard addicts. All it means is that this mechanism, this burst of dopamine that signals to you that your expectations have been exceeded, is a very general mechanism.
Now, mind you, this reward must be unexpected in order for it to change your future decision-making behavior. If all you get is the reward you expected, your expectations go unchallenged and you aren’t learning anything, per se. In fact, studies have shown that when monkeys expect a reward and then do not receive one, their dopamine neurons skip a beat, ceasing their firing as though in indignation. Activation in the dopamine-rich ventral striatum has also been measured in humans using functional magnetic resonance imaging (fMRI) in response to all kinds of pleasurable stimuli. And if you give people a pharmacological boost in dopamine, they report greater happiness from rewards than they normally would. Dopamine, in addition to feeling good, makes you want more dopamine. And, helpfully, it’s critical for teaching you how to get it.
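The learning rule behind all this fits in a few lines. Here’s a minimal delta-rule sketch (the learning rate is chosen arbitrarily):

```python
# The gap between the reward you got and the reward you expected is the
# dopamine-like teaching signal; it only moves your expectation when
# it's nonzero.
def update_expectation(expected, reward, learning_rate=0.1):
    prediction_error = reward - expected
    return expected + learning_rate * prediction_error, prediction_error

expected = 0.0
# A surprising reward produces a big positive prediction error...
expected, pe1 = update_expectation(expected, reward=1.0)
# ...but once the reward is fully expected, the error (and with it the
# learning) shrinks toward zero.
for _ in range(100):
    expected, pe = update_expectation(expected, reward=1.0)
```

This is also why the expected reward teaches you nothing: once `expected` has caught up, the prediction error, and the dopamine burst it stands in for, flatlines.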
In labs, this is sometimes studied by having people play a game in which they can either win or lose money. Scientists are interested in the role rewards play in sculpting three main things: people’s subjective reports of happiness, the brain response to future rewards, and the impact of these rewards on subsequent decisions. Based on these three measurements, computational modelers can design algorithms that accurately predict people’s feelings of happiness, brain responses, and decision-making behavior.
For example: people report greater feelings of happiness after winning. These rewards also lead to future rewards having a bigger impact on their subsequent decisions–they may, for instance, feel themselves on a hot streak and take bigger risks. Conversely, losing money reduces feelings of happiness. It also reduces the impact future rewards have on their choices, and furthermore, it measurably dampens the brain response to these rewards. Negative events throw a bucket of cold water on us, making us pessimistic.
These patterns are exacerbated for people who are less emotionally stable, suggesting that the study of how people learn from rewards may offer clues to the origins of mood disorders. They may also help explain why these disorders cause people to make the decisions they do. For instance, when people are asked to choose between a sure bet and a risky gamble, with varied gains and losses, their decisions help train an algorithm to produce a model of happiness. These algorithms calculate happiness as a function of their choices (sure bets or risky gambles, or in other words, choosing or avoiding risk), the expected payoff of the gamble, and the difference between the actual and expected payoff. Throw in some weighting variables and constants, like a “forgetting factor” that determines the relative influence of more recent events and events further in the past, and you’ve got an actual formula for happiness.
Scientists have used these types of formulae to make inferences from data acquired in smartphone-based field studies. These studies found that, despite what you’ve heard about the power of positive thinking, it’s not so much your expectations that determine happiness and learning. Far more important is the surprise you experience about the outcomes of your decisions. These surprises are also known as prediction errors: the difference, or error, between the outcome you predicted and the outcome you got. Happiness can be calculated as a running average of recent reward prediction errors, with some prediction errors weighted more heavily than others. And wouldn’t you know it: once you’ve modeled happiness quantitatively, you can search for activity fitting the model in fMRI scans. The search pays off: you find it right there in that needy ratcheter-upper of need, the ventral striatum.
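That “running average of recent reward prediction errors” fits in a few lines. What follows is only the general shape of such a model–the forgetting factor, weight, and baseline are placeholders, not the published parameters:

```python
# Happiness as a decay-weighted average of recent reward prediction
# errors. A sketch of the general idea; all parameters are illustrative.

def happiness(prediction_errors, forgetting=0.6, weight=1.0, baseline=0.0):
    """Recent surprises count more; older ones fade geometrically."""
    total = baseline
    n = len(prediction_errors)
    for j, pe in enumerate(prediction_errors):
        # the most recent error gets weight 1, older ones get
        # forgetting, forgetting**2, and so on
        total += weight * (forgetting ** (n - 1 - j)) * pe
    return total

# A fresh good surprise outweighs an older bad one of the same size.
recent_win = happiness([-1.0, 1.0])   # old loss, fresh win
recent_loss = happiness([1.0, -1.0])  # old win, fresh loss
```

The forgetting factor is doing the work here: set it near 1 and ancient history weighs on you; set it near 0 and only the latest surprise matters.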
Mood, as we said, can be biased by all sorts of things. Just seeing frowny faces can bias your perception of subsequent rewards. More seriously, being depressed can mean that future rewards have less of an impact on your choices. You become de-sensitized to the meaning of these rewards. You don’t see their value, because the part of you that assigns it has been blunted. Critically, though, it’s not necessarily because your learning is impaired. This is a motivational issue, not accessible to the realm of rational appeal.
Anxiety, like depression, enhances responses to aversive stimuli: you respond to events as if they are worse than they really are. And just as depression manifests as a greater sensitivity to negative outcomes, positive mood manifests as enhanced risk-taking, in lab settings as well as in financial markets. A positive mood biases upward the perceived likelihood of future positive outcomes–in other words, you see everything as coming up roses. Repeated positive prediction errors, or good surprises, can “invigorate reward-seeking behavior.” Dopamine craves dopamine. Good surprises have a way of making you believe that there are lots more good things to come.
That’s because reinforcement learning is all about tracking which states were rewarding and making choices that will get you back there. Think of choosing a “good” slot machine, or, in the wild, of animals seeking out the trees that bear the most fruit. Scientists believe they are on to something in using slot machines to test people’s decision making, because when people play this type of game, their behavior converges on near-optimal strategies.
Mood, the authors argue, smooths out inefficiencies in reward learning. Going back to the example of the animals searching for food in the trees, they write: “Increased rainfall or sunshine may cause fruit to become more abundant in all trees simultaneously. In this situation, it makes little sense to update expectations for each tree independently.” Mood, in the landscape of learning how to reliably reap rewards, is the rising tide that lifts all ships. You don’t want to be constantly surprised to find fruit–this would not be advantageous. Instead, you want to infer that something bigger is going on. So your mood helps you to ratchet up your expectations more quickly than you normally would by allowing all the happy surprises to have an even bigger impact on how much you learn from subsequent happy surprises.
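Here’s a toy version of that idea: mood as a running average of recent surprises that, in turn, inflates or deflates how big the next surprise feels. Every parameter and variable name below is an illustrative assumption, not the authors’ actual model:

```python
# Mood-as-momentum, in miniature. Mood tracks recent prediction errors
# and biases how rewards are perceived. All parameters are made up.

def simulate(rewards, alpha=0.3, mood_rate=0.2, mood_bias=0.5):
    expectation, mood = 0.0, 0.0
    for r in rewards:
        perceived = r * (1 + mood_bias * mood)  # good mood inflates rewards
        pe = perceived - expectation            # the surprise
        expectation += alpha * pe               # learn from the surprise
        mood += mood_rate * (pe - mood)         # mood = recent-surprise average
    return expectation

# When fruit becomes abundant everywhere at once, the mood term lets
# expectations ratchet up faster than plain surprise-by-surprise learning.
plain = simulate([1.0] * 10, mood_bias=0.0)
moody = simulate([1.0] * 10, mood_bias=0.5)
```

Run it and the moody learner’s expectation catches up to (and briefly overshoots) the new bounty faster than the mood-free one–which is exactly the rising-tide effect, and also a hint at how the same mechanism can oscillate when miscalibrated.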
Mood gives you a way of calibrating your learning apparatus to account for multiple factors in your environment, without having to account precisely for each one and its impact individually. This form of generalization, far from being sloppy, actually improves the efficiency of learning when multiple reward sources are interdependent–which is pretty much the norm. It’s rare that a good thing happens in your life that has nothing to do with anything else you did, except maybe winning the lottery. Unless you are that powerball winner, it would be irrational to ignore the connections and treat these things as independent.
The authors write: “Indeed, such interdependencies may be the rule rather than the exception, for both animals and humans, because success in acquiring skills, material resources, social status, and even mating partners can be tightly correlated.” I’ve watched enough of the Real Housewives of Orange County to tell you that this is not necessarily true, nor would having it all necessarily lead to an improved mood.
But still, I wonder: Is the otherwise-iffy idea that women are better planners and multitaskers rooted in a real perception of this mood-driven increase in efficiency? Is this form of mathematically generalizing over all currently relevant sources of reward what gives us our supposedly innate abilities to plan? Do men do themselves a disservice by faking such an even keel, and do women suffer disproportionately from anxiety and depression due to waves of dopaminergic dysfunction?
Just wild speculation going on here, by the way. Really going off book. I’m just saying, ignoring or suppressing moods and emotions is probably not the best practice. That’s the only real flame war I am down to start here. Moods are useful! What the authors argue is that if you infer a positive momentum from an increase in the availability of fruit in your orchard, you may be on to something: spring is coming. Same goes for the negative momentum as we head into winter. Better hibernate. By adjusting your expectations as quickly as you need to in order to catch up with the rate of rewards, you save yourself from perpetual shock and perpetual disappointment.
As a quadriplegic man I read about said, “you can get used to anything.”
Did everyone see Inside Out? If not, look out: spoiler. Do you remember when Sadness touched all the memories? And we learned that Riley’s Disgust, Fear, Anger, and Sadness were just as important in guiding her through life as her Joy? Well, so, too, are all sources of information useful in learning how to navigate our environments. Given a good enough probabilistic model of the environment, plus some Bayesian magic, you can come up with a learning algorithm that is optimal for that environment.
You want this algorithm to be able to account for environmental factors that are general enough to affect multiple states, or situations, instead of treating all the states as independent. Sure, you might over-generalize sometimes: maybe the increase in fruit is local to the trees in this particular valley. But you weight how recent and how local the changes are to try to account for this as much as you can. You assume that neighboring states have been changed in similar ways to the ones you’re currently being surprised by. Even if you’re not integrating over multiple states, or multiple sources of reward, this generalization can occur over time, too, allowing you to infer momentum from your running average of how many good surprises you seem to be getting lately.
If emotional reactions have an appropriate intensity and duration, then mood is helping you out. Good and bad moods should only stick around as long as there’s still a change in momentum registering–that is, as long as you’re being surprised and having to adjust your expectations. But once your expectations are updated and seem to be in line with your new normal, your happiness levels should reach a more neutral place. Similarly, if you keep encountering bad outcomes, you will get in a bad mood but your expectations will level off appropriately eventually (You can get used to anything). The authors point out that happiness levels return to baseline even after winning the lottery, which is maybe why they say money can’t buy you happiness.
But what happens if you have a mood disorder? These can be serious. Excessive happiness or sadness would lead to behaviors that are maladaptive. If you learn less from negative surprises than you do from positive ones, you develop an overly optimistic expectation, which means you’re slammed harder by the negative surprises. High expectations lead to low mood.
A mood that keeps pace appropriately with changes in the environment acts as a homeostatic mechanism to keep your learning processes on track. It’s when your mood is out of sync with the rate of change in the environment that you might run into trouble.
People with depression are thought to have some dysfunction in regulating the levels of the neurotransmitter serotonin in their brains. Low serotonergic function has been known to lead people to learn less quickly from negative outcomes. Depression may result in (or from) negative outcomes having less of an impact on behavior. Normal feedback loops get out of whack, with expectations falling further and further behind realities. To avoid perpetual disappointment, expectations need to be adjusted to match outcomes. But as mismatches grow, bigger mood swings can result. These oscillations may form the basis for bipolar disorder, causing expectations and mood to pitch wildly up and down even when nothing in the environment is changing.
Interestingly, in the general population, positive mood and risk aversion predominate. Risk aversion can make you happy to have what you have, in a good mood as long as nothing is going wrong. This predominance may arise because people learn more, in general, from negative surprises than positive ones, changing their decisions and expectations more markedly when things get bad than when they are going well. This keeps people happy in the face of unpleasantness. The stronger biasing effect that negative outcomes have is likely due to the greater evolutionarily adaptive significance of learning quickly from negative momentum. In other words, it’s more important for our survival to avoid negative outcomes than to maximize the positive ones. If you don’t find the most fruit, you’ll probably be fine, but if you don’t learn to run from predators, you’re dead.
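One common way to capture this asymmetry computationally is to give bad surprises a bigger learning rate than good ones. A sketch, with made-up rates:

```python
# Asymmetric learning: negative surprises move expectations more than
# positive ones. The two learning rates are illustrative assumptions.

def update(expected, reward, alpha_pos=0.1, alpha_neg=0.4):
    pe = reward - expected
    alpha = alpha_pos if pe >= 0 else alpha_neg
    return expected + alpha * pe

# Expectations crash fast after one loss, then rebuild only slowly
# through wins -- which keeps them conservative, and keeps outcomes
# pleasantly surprising more often than not.
expected = 1.0
expected = update(expected, reward=0.0)      # one bad outcome: big drop
after_loss = expected
for _ in range(3):
    expected = update(expected, reward=1.0)  # three wins: slow recovery
```

The payoff of the asymmetry: your expectations sit a little below reality most of the time, so ordinary outcomes register as small good surprises–the computational version of pessimists being pleasantly surprised.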
I come from a long line of pessimists and worriers. I can’t help but think that sensitizing oneself to negative outcomes is a helpful form of vigilance, maybe not such a bad character trait. So where do we land on the power of positive thinking versus setting your expectations low so that you’ll never be disappointed? Well, it’s telling that if you treat people with major depressive disorder with serotonergic drugs, their perceptions change before their mood does. In other words, putting on the rose-colored glasses comes first, and seems to be the cause of the improvement in mood. So what antidepressants are really giving you is not a direct mood boost, but a shift in perception that results in one. How can you do this without drugs? Who knows. If I knew that, I wouldn’t be writing this stuff for free.
Bottom line: Mood can sensitize (or de-sensitize) you to the outcomes of your decisions, increasing (or decreasing) your responsivity to them. Emotional instability could, in theory, arise from either moods having too strong a sensitizing effect or from weakening people’s ability to habituate to new normals. The evidence suggests that people who are emotionally unstable tend, if anything, to show stronger effects of outcomes on their feelings (they are sensitive) and a stronger influence on their evaluation of subsequent outcomes. Their hair-trigger reflex for inferring momentum may lead to overgeneralization, and inappropriate optimism or pessimism.
But without this generalization, come on. We’d be simple idiots. We’d be rats bar-pressing for our rewards. We’d be stuck in a railbound behaviorist hellscape of rote stimulus-response associations, Skinner boxes made of skin and bone.
The authors clinch an important win for moody bitches everywhere by closing the paper with: “Moods can reflect inference of momentum even when there is none in the environment, leading to excessive optimism or pessimism. However, the ubiquity of moods and the extent of their impact on our lives tells us that, throughout the course of evolution, our moodiness must have conferred a significant competitive advantage. Being moody at times may be a small price to pay for the ability to adapt quickly when facing momentous environmental changes.”
Give me, then, the power of mentally smearing the causal influence of many unrelated outcomes together, or give me death.
If you use social media sites like Facebook or Twitter, you’re part of a massive social network. Think about your personal network. How many different social circles are represented? Do you communicate with people in some circles more than others? Does that change sometimes? For example, among your old friends, there may be a flurry of activity around an upcoming high school reunion, and then silence for months.
Companies like Facebook dig deep to find patterns in your habits. Using algorithms originally developed for airline schedules, they get a sense of who you’re connected to and how. Now, these algorithms are being repurposed yet again: instead of mapping social networks, they are mapping neural ones.
With over 100 billion cells and 100 trillion connections, the brain is staggeringly complex. Danielle Bassett, a professor of bioengineering at the University of Pennsylvania, uses community detection algorithms to make sense of it all. In a recently published study, Bassett and graduate student Shi Gu scanned the brains of a whopping 780 people, aged 8-22. These scans relied on functional magnetic resonance imaging (fMRI), which tracks changes in blood flow that reflect neural activity. Sophisticated algorithms then flagged important similarities and differences between the scans of people in different age groups. By identifying the brain’s tight-knit microneighborhoods and information superhighways, she hopes to create road maps for guiding learning or diagnosing mental illness.
“Far from a spaghetti like mess, the connections between different parts of our brain are fairly organized, but by a rule that none of us have been able to define,” Bassett wrote in a recent Reddit AMA (Ask-Me-Anything) session. “We would have loved the answer to be simple: That brain regions connect to other brain regions that are close by (similar to what might happen in grade school when you become friends with kids in your own school more so than with kids in the school district next door). But interestingly, the brain shows long-distance connections as well.”
Some areas of the brain only communicate with nearby areas, forming tightly modular hotbeds of activity. Other areas act as hubs, connecting faraway areas to each other. Based on these characteristics, Bassett and Gu assigned brain areas to networks, each with a different job. Their algorithms boiled a massive amount of data down into just two dimensions: communication within brain networks, and communication between brain networks.
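Those two dimensions can be illustrated with a toy calculation: given a made-up connectivity matrix and made-up network assignments (nothing here comes from the actual study), average the connection strengths within networks and between them:

```python
# A toy version of the two numbers the analysis boils down to:
# communication within vs. between networks. All values are invented.

# connectivity[i][j]: how strongly regions i and j co-activate
connectivity = [
    [0.0, 0.8, 0.1, 0.2],
    [0.8, 0.0, 0.2, 0.1],
    [0.1, 0.2, 0.0, 0.9],
    [0.2, 0.1, 0.9, 0.0],
]
network = [0, 0, 1, 1]  # regions 0-1 in one network, 2-3 in another

within = between = 0.0
n_within = n_between = 0
for i in range(len(network)):
    for j in range(i + 1, len(network)):   # each pair counted once
        if network[i] == network[j]:
            within += connectivity[i][j]
            n_within += 1
        else:
            between += connectivity[i][j]
            n_between += 1

within_avg = within / n_within     # strong: tight-knit neighborhoods
between_avg = between / n_between  # weak: sparse long-distance links
```

In this toy example the within-network average comes out far higher than the between-network average–the modular structure community detection algorithms are built to find. Hub regions show up as exceptions: areas whose between-network connections are unusually strong.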
These patterns of communication change as children become adults. Bassett and Gu found that networks for sensory and motor function wire up early, becoming self-sufficient and independent from the rest of the brain in childhood. Throughout adolescence, the influence of networks involved in abstract thought becomes wider-reaching, linking up with many different brain areas. Bassett and Gu believe that as these networks expand their influence, adults achieve greater control over the flow of their own thoughts, focusing their attention and pausing for reflection much more easily than children can. For example, in the “default mode network,” a network involved in daydreaming and mind wandering, synchronized waves of activity grow stronger with age. These waves become so strong, they can affect activity clear across the brain. At the same time, networks for abstract thought processes like decision making and rule switching also start to influence increasingly distant areas, but with flexibly changing codes instead of uniform waves.
These patterns show key similarities between individuals that can act as a road map for development. Deviations from the map could be warning signs of mental illness. But it is important to note that some variation is also normal. “Each of us have different task-switching abilities,” Bassett wrote. “For some of us, these transitions are quick and for others, these transitions are slow. Part of my research program is focused on explaining what makes us different!”
Bassett and Gu also found a trade-off in the maturation of sensorimotor and cognitive networks. This could reflect developmental delays, where book-smart children have simply fallen behind on their sensorimotor development. But it may instead reflect more permanent individual differences that make everyone unique.
Members of Bassett’s laboratory are currently working to identify distinct learning styles associated with different configurations of brain networks. With this information, it might be possible to tailor more efficient learning environments. “What I would really love to do next is to understand how we can use our new knowledge to enhance learning,” Bassett told Reddit. “What interventions could enhance learning? What environments are most conducive to learning, and how [do] they change the brain to enable learning to occur?”