If your visual cortex weren’t such a good camera it’d be a storyteller

I’ve written before about how the brains of blind people can show us which aspects of our mental life are strictly tied to certain senses and which are, let’s call them, “sensory-flexible.” Imagining how a person might navigate their world in the absence of visual input, I think, can stir up interesting new ideas about how and why people use each of their senses. But blindfolding yourself isn’t even a close approximation of the lifetime of experiences and neural changes that come with blindness. Imagination can only take us so far.

I am increasingly disenchanted with brain scanning studies that claim to prove the obvious: “Humans like fun, brain scans show!” Much of the time, you don’t need an expensive brain scanner to tell you these things, and it’s frustrating to see team after team of researchers use these big toys to confirm what we already know. In the case of blindness, however, I see a truly important use of brain scanning technologies. Esoteric philosophical thought experiments can come to life when you compare blind and sighted brains.

In these experiments, the brains of sighted individuals can be thought of as being “exposed to” a really overwhelming treatment, visual input, while the brains of blind individuals are not. Which is an interesting flipping of the script: we’re not comparing blind individuals to “normal controls.” Vision is a manipulation, making sighted people our experimental group and blind people our control group. By comparing the two, we can begin to answer questions about what brain functions are “innate” or “hard-wired” and which are shaped by experience (i.e., visual experience).

How do we tell the difference? Remember those “sensory-flexible” functions I mentioned? Well, if a brain function happens the same way regardless of whether it’s fed information through visual, auditory, or even tactile pathways, then there’s probably something about that function that never needed a particular sensory input to help it wire up in the first place. The trouble is, humans overwhelmingly rely on visual input, making it difficult to tell “visual” brain regions apart from others. That’s where the blind brain comes in.

I want to focus on a study out last year in the Journal of Neuroscience, by Johns Hopkins researchers Connor Lane, Shipra Kanjlia, Akira Omaki, & Marina Bedny. This is one of a bunch of interesting studies coming out in this line of work, but I chose this one because it appeals to my inner grammar nerd. Lane & pals explored the role of the “unemployed” visual cortex of people who had been blind since birth. It’s been known for a while now that this brain region lights up instead of languishing unused. It seems to help out with a variety of things, from reading Braille to maintaining information in memory. We know this from studies that use brain scanning techniques like fMRI (functional magnetic resonance imaging) to see the brain at work. We also know that if we disrupt activity in the blind visual cortex using a brain-zapping technique like TMS (transcranial magnetic stimulation), these superpowers seem to go away. Correlation AND causation, neat!

Sensory substitution devices, which translate visual information into “soundscapes,” can be used to present words and pictures to blind people in the scanner, feeding the brain information through a previously unavailable sensory code. Eerily, the brain areas specialized for processing visual information about “what and where” in the brain are still there in the blind brain. Same goes for areas specialized for even more specific things, like words and body parts. These results show that the brain’s weird specializations and selectivities aren’t loyal to any one sensory input. They suggest that there’s something about the way we operate as humans that would’ve resulted in similar neural architecture whether or not we relied on vision. What that something is, however, is up for debate.

Lane & colleagues looked at the way the blind visual cortex processes language. A previous study had found that the response here was greater when sentences had a scrambled word order than when they didn’t. This raised the question of whether these responses were driven by something truly useful, like making sense of words, or simply by the surprise, novelty, complexity, or general weirdness of these scrambled sentences. Another study showed that making a memory task more difficult didn’t drive up the response here, meaning it seems unlikely that the response to scrambled sentences was driven by the difficulty people had in making sense of the words. But Lane & colleagues wanted to get more specific.

They studied something called syntactic movement, a concept proposed by Noam Chomsky. I didn’t know what this was, but it turns out I am certainly guilty of its overuse. Lane gives an example of a sentence without movement: “The creator of the gritty HBO crime series admires that the actress often improvises her lines.” And one with movement: “The actress that the creator of the gritty HBO crime series admires often improvises her lines.” See the difference? The latter sentence taxes the ol’ memory banks in a way that the former doesn’t. You have to hang onto the fact that we’re talking about the actress while I interrupt what I’m about to tell you about her with this little factoid about her having the admiration of the series creator. Now, if you are the blind visual cortex, do you light up for this because it’s difficult, or because you’re helping in some vision-like way to extract meaning from the complicated sentence?

Sidebar to any of my students that may be reading: start taking notes now. I want to see good experimental controls in your final grant proposals. Remember: beat your reviewers to the punch, anticipate their worries and do some cool magic tricks (rhetorical power posing, cleverly matched control groups, and reassuring graphs) to placate them. In other words, if you’re lying awake at night scared of a lurking variable and think your reviewers are definitely going to come for you about it, take the fall. Act hurt. Get indignant. And then get up. Look sickening. And make them eat it. Watch and learn.

In fMRI, we’re often comparing one thing to another thing. This subtractive approach has its limits, but can be very useful for dispatching lurking variables like difficulty. Here, the researchers compared the brain responses during listening and comprehension of sentences to two other responses: while people worked on verbal sequence memory (i.e., remembering a series of words not in a sentence) and while they worked on math problems. They also went a step further, asking whether the brain response increased with the difficulty of sentence comprehension and of math problems. So they’re first establishing that there’s some truly language-specific component of these responses by comparing them to the responses to a similarly language-y, but structurally different task (verbal sequence memory) and to a structurally similar, but non-language-y task (math). And then they further test for language-specific involvement by asking whether the visual cortex is sensitive to a change in difficulty for just language, or for either language or math. If it knows what’s hard and easy, that’s a good sign it’s doing the heavy lifting for that computation. And if it knows this for only one or the other, then this heavy lifting is specific to that task, not a general grunt of difficulty.
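If you like seeing the logic laid bare, here’s a toy sketch of those comparisons in Python. To be clear: the numbers below are mine, made up for illustration, and this is the shape of the argument, not the authors’ actual analysis pipeline.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 19  # hypothetical number of blind participants

# Hypothetical per-subject visual cortex responses (arbitrary units)
sentences  = rng.normal(0.60, 0.2, n)  # listening to sentences
word_lists = rng.normal(0.25, 0.2, n)  # language-y but structurally different
math_easy  = rng.normal(0.20, 0.2, n)  # structurally similar, non-language-y
moved      = rng.normal(0.75, 0.2, n)  # sentences with syntactic movement
math_hard  = rng.normal(0.20, 0.2, n)  # harder math: no extra response here

# Subtractions: is there a language-specific response at all?
print(stats.ttest_rel(sentences, word_lists))
print(stats.ttest_rel(sentences, math_easy))

# Difficulty tests: the response should scale for language (movement > none)
# but not for math (hard == easy), if this is real linguistic work rather
# than a general grunt of difficulty
print(stats.ttest_rel(moved, sentences))
print(stats.ttest_rel(math_hard, math_easy))
```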

They found, first, that only in their blind subjects did the visual cortex respond more to sentences than to non-words and to math problems. They also found that, in blind people, sentences with syntactic movement elicited even greater responses. Meanwhile, when presented with easy and hard math problems, this area didn’t ramp up its activity for the hard ones. And if you’re wondering if maybe they just didn’t make the problems hard enough, check out the prefrontal cortex, where the brain responses definitely scaled up for harder problems.

This helps to rule out the hand-waving explanation that would say that these responses are just a general response to difficulty. So it does seem like the blind visual cortex takes on specific roles in language processing that the sighted visual cortex does not. And it does seem to really be helping the blind individuals get the sentence comprehension job done: the more they activated their “visual” cortex, the better they understood the sentences! The authors dangle a promise of a TMS experiment in front of the reader–I will bet you a set of tickets to Dolly’s upcoming tour that zapping this brain region knocks sentence comprehension way down, in a causal confirmation of this intriguing correlation.

What does this mean? Well, it’s pretty weird that this brain area that we think of as a glorified piece of camera film seems to be capable of participating in information processing as uniquely, complexly hierarchical and rule-based as human language (remember, these words were heard, not read). Prominent/vocal brain-development researchers Steven Pinker and Mark Hauser have proposed that the perisylvian language areas, a family of brain regions clustered around the auditory cortex, are where the real heavy lifting occurs during syntactic movement. The blind participants did also activate these regions, so it’s not like they used a different part of their brain entirely. But at the same time, their brains differed from the brains of sighted individuals in other ways. For one thing, they were less lopsided in how they process language. Normally, the brain’s left hemisphere becomes specialized for language while the specialization for abilities like visually analyzing faces gets pushed over into the right hemisphere. Or, erm, it’s unclear who shoved who first, actually. Regardless, in the blind subjects, Lane and colleagues found evidence that this left-sided language specialization was reduced. Absent this turf war between pushy roommates language and visuospatial analysis, brain organization sort of evens out.

In any case, it seems kind of ludicrous to think that the visual cortex could just switch teams so easily. Since we think of language as uniquely human, while vision is something that even many very lowly animals have, what, then, do we do with this idea that different brain areas are somehow “suited” to their jobs? In other words, what made the visual cortex visual in the first place? And how did language sneak in there when (sorry) no one was looking?

Well, it could be that language is cultural, and that there aren’t brain networks that are innately specialized for it. On the other hand, the authors argue, it could be that evolution wired us for language. We know that the perisylvian language areas retain their function in both deafness and blindness. It could be the case that you only need these to acquire language–maybe these areas contain the seeds of linguistic processing, which later spread to other areas, and in blindness, this includes visual areas.

But wait, they say, what about critical periods for language acquisition? Isn’t it true that certain things need to occur at certain points in development, or they’ll never occur? This is why learning languages is supposed to be harder as you get older, and it’s also why a cat with its eyes sutured shut during certain developmental stages will never be able to process visual information in the same way. We know that, in general, only blind people who lost their vision before age 9 will activate their visual cortex this way. After that, it’s thought, too much visual processing software has already been installed, and you can no longer teach the old dog new tricks, to make a metaphor cocktail. So maybe, the authors write, if you’re blind, the connections from language areas to the visual cortex don’t get pruned and sculpted by experience during development. Maybe it’s these residual connections that get language information into the visual cortex. Alternatively, maybe the visual cortex picks up new tricks from all the other aforementioned specialized “visual” areas (those responding to words, faces, places, body parts, etc) that stay specialized even when words are in Braille or faces are converted using vision-to-audio devices.

Wherever these responses came from, there are likely to be other specializations within the blind visual cortex. The authors note that locating sound sources in space, mentally rotating a tactile object, and discriminating between two sounds or tactile objects have also been known to light up the blind visual cortex. But it’s in comparing these types of tasks, subtracting off any thought process too general to give us new information, and asking whether these responses ramp up with difficulty, that we can start to tease out what the common, non-visual computations underlying all of these processes are. What is it that you’re doing when you follow the syntactic wiggle-worms I call sentences? Does it feel similar to something you’re doing when your eyes dart around, analyzing a complex visual scene? Similar to anticipating the chorus of a song? Similar to the feel of physically wrestling with a wiggle worm? Somehow you turn sentences into stories and stories into lives and identities. This storytelling ability is part of what makes us human, and weirdly/miraculously, it’s something that springs from the same brain parts, whether or not you rely on vision.


Throw away the brain and study the heart instead: A science story


I had a paper accepted yesterday! This is my first first-authored paper, which means I saw this thing through from start to finish. And lived to tell the tale. I can hardly believe it myself.

This project began when I was a wee first-year graduate student some years ago (I will tell you flat-out that I am 30 years of age, loud and proud, but etiquette dictates that you NEVER ask a scientist how old their data are). I had just spent two years at the National Institute of Mental Health (NIMH) learning to scan brains, and I hoped to earn a spot in Rich Ivry’s lab at Berkeley by showing off my new skills. So when Rich, who would eventually become my PhD advisor, told me that he had a project in mind for me, I said great, I can totally do that in a ten-week lab rotation (HAHAHAHAHA I was so young and stupid).

The Cognition & Action Lab, circa when all this was going on

The brain scanning technique I use is called functional magnetic resonance imaging, fMRI for short. fMRI is used to create pretty pictures of the brain in action, “lighting up” to reveal hotbeds of activity. But it’s not tracking the activity of brain cells, or neurons. It’s tracking blood flow, which is and isn’t a good proxy for neuronal activity, and I’ll tell you why.

The scanner is essentially a giant magnet, and the pretty pictures are made possible by the iron in your blood. Recently-active neurons receive fresh shipments of oxygen bound by hemoglobin, and the hemoglobin (heme = iron) changes its conformation (and thus, its magnetism) depending on whether it’s carrying oxygen or not. Follow the blood, the thinking goes, and it will lead you to active neurons.

Except when it doesn’t. Unfortunately, your blood is pumped up by your heart, which has a pesky tendency to beat faster or slower in response to THE SAME KINDS OF STUFF PEOPLE ARE TRYING TO STUDY. Scary pictures, math problems, ethical dilemmas, and even small movements all cause your heart rate to go way up or way down, so if you’re trying to learn how the brain responds to these things by looking for subtle changes in blood flow, well, godspeed to you.


Here’s me sporting a fetching “bite bar” to hold my head in place while I use my cerebellum to move my arm. The red you see isn’t blood, it’s just statistics (OR IS IT)

Before there were fancy, expensive brain scanners, psychophysiologists in labs would hook people up to heart rate monitors, measure pupil dilation, monitor the small changes in sweatiness known as the galvanic skin response, and track the rate and depth of breaths, all to get clues about what’s going on in your head. This is, for instance, the basis of the polygraph, or lie detector test: the name just means “many graphs” (I assume), and by the way there’s a great book on the weirdo who conned everyone into thinking this was a good idea. I mean, in a way, it was: it’s a hell of a lot cheaper than fMRI-based lie detection and just as crappy.

I’ve said it before and I’ll say it again: Don’t bother studying the brain, the heart tells you everything. For example, when you’re anticipating something, it slows down just the right amount to allow more blood to build up–it’s thought that this happens so that when the time comes, you get a bigger pump of blood through your body. If you randomize the timing of events so that they can’t be anticipated, the heart learns the average and slows down that much. This gives me the creeps, but it also means that, like Peter Pan trying to dissociate from his shadow, researchers will have a hell of a time telling the difference between brain activity and “brain activity.”

Brain activity (we hope) in response to movement: the left M1 (primary motor cortex) and the right cerebellum contain maps of the right hand.

I came into this problem where a fellow grad student and extremely wise mentor, John Schlerf, left off. John had previously had a dark night of the soul when, expecting to find a big fat “error” signal in the brain’s error processing center, the cerebellum, he instead found zilch. That is, until he noticed that errors caused the heart to more or less literally skip a beat, effectively canceling out that fresh supply of blood he was counting on detecting. So, using statistics and magic, he “corrected for” this change in heart rate and boom. Beautiful error signal, just where he knew it would be.

When I joined the lab, John and Rich wanted to do something very principled: to take a step back and ask how pervasive a problem this was likely to be. If errors could be masked in this way, what about other kinds of brain processing? My job was to study the brain’s response to simple, stripped-down arm movements. Poignantly, this kind of simple arm movement is what was used, in the early days of fMRI, to create a sort of template response to help predict brain activity. This template, known as the hemodynamic response function, can be thought of as a description of a person suspected of a crime. Say you believe a brain area is involved in some process, like movement, or memory, or reasoning. That area should give its location away every time that process occurs by “fitting the description.” Neurons fire, and the fMRI signal, known as the blood oxygen level dependent (or BOLD) signal, should go up in this sluggish, wavelike way. And wherever you see this happen in the brain, you color-code the area and say it “lit up.”


Neurons fire, and the BOLD signal goes up. This HRF (hemodynamic response function, shown here as implemented by the software package SPM) is used to generate predictions about what brain activity looks like, so we can find areas that behave as predicted.
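For the curious, this canonical bump-plus-undershoot is something you can cook up yourself in a few lines. Here’s a minimal sketch, assuming the roughly-SPM-style double-gamma shape (a peak near 6 seconds minus a smaller, later dip); convolve your event times with it and you’ve got your prediction:

```python
import numpy as np
from scipy.stats import gamma

def hrf(t):
    """Double-gamma HRF, roughly the canonical SPM shape:
    a peak near 6 s minus a smaller undershoot near 16 s."""
    return gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0

tr = 2.0                          # seconds per scan
t = np.arange(0, 32, tr)          # the HRF plays out over ~32 seconds
events = np.zeros(120)            # 120 scans
events[[5, 25, 47, 70, 95]] = 1   # hypothetical movement times, in scans

# The predicted BOLD signal: the event train convolved with the HRF
predicted = np.convolve(events, hrf(t))[: len(events)]
```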

But what if that’s all just a bunch of blood being pumped up by the heart? Rich and John suspected that, if you removed the parts of the BOLD signal that fit a different description, based on recorded heartbeats, you might be left with, well, nothing at all, in a worst case scenario. This would have meant that all of fMRI was in serious trouble. And let’s be honest: I sort of selfishly wanted to be the person who showed this and published it more than I wanted all of the fMRI studies that had ever been done to be “real.” Don’t worry: it’s not all that bad, and I didn’t get to be the supervillain, the Hemodynamic Angel of Death, after all, as we shall soon see.

At the time, I knew this project would be a good way for me to become acquainted with the brain imaging community at Berkeley, learning to use a new scanner and new software packages. Because this was a methods project, I’d even have to dig deep into the guts of my code, hacking away at software written by, let’s be real, a total madman (I won’t name names but those in the fMRI community will feel my pain as I cursed myself for not sticking with AFNI, an NIMH-based package). This was wildly intimidating to me, but I knew I’d learn a lot and feel basically just real butch about my science. The goal wasn’t to figure out how the brain works but rather to figure out how we can best figure out how the brain works. It was not what I’d come for: I just wanted to make pretty pictures. But I also knew, from my time at NIMH, that methods projects were important, and to ignore these kinds of issues as an fMRI researcher is to consign oneself to reading very expensive tea leaves.

So! What did we do, and what did we learn? Well, first, we had people make some simple arm movements in the scanner. The rule was: Every time the crosshair turns green, you move.

"Green means go," and try not to fall asleep in there. You never know when that light is gonna change (every 4-20 seconds, it'll be a SURPRISE).

“Green means go,” and try not to fall asleep in there. You never know when that light is gonna change (every 4-20 seconds, it’ll be a SURPRISE, and it’ll only be on for half a second, so DON’T MISS IT).

We recorded their heart rate and breathing while they were in the scanner. Note that heart rate isn’t something you have a lot of control over, whereas breathing sort of is. You tend not to think about your breathing, but when we averaged together everyone’s breathing data, some people were rock-steady while others were more erratic, and so the effects this had on the BOLD signal were kind of a mixed bag. Heart rate, on the other hand, reliably soared after each movement:


After movements (red line), heart rate went up by about 1%, which doesn’t sound like much, but believe me, it’s plenty to swamp the even-tinier changes in the BOLD signal.

This graph should scare you, because it looks so very much like the thing we’re trying to detect: the hemodynamic response, mentioned earlier.

We looked at two regions of interest, or ROIs, in the brain: the primary motor cortex (also known as M1) and the cerebellum.


See that nice, clean edge on the cerebellum ROI? It stops right before spilling out into the visual cortex above it, and that’s no accident. That’s months of hand-editing, a task I later outsourced to my undergrad minions, hoping it eventually took on roughly the same meditative quality as a mandala. Sidebar: I just recently learned that by going in and zapping blood vessels and other misidentified chunks of tissue, we were becoming intimately acquainted with the very same distinction (vessel or tissue?) that had, a decade earlier, caused Ben Carson to botch a high-profile separation of conjoined twins. Such a rich and storied legacy, that.

ANYWAY. Using statistics and magic, you take the files that mark every time your participant moved, you look ahead in time by creating a series of lagged files, and you pull out the BOLD signal from your regions of interest at each of those times to make a graph that, hopefully, looks like the canonical HRF and is a faithful representation of what happens in motor areas when you move.


The timing of 30 movements, at 12 2-second lags, is used to figure out the shape of the actual brain response to those movements.
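In code, the “statistics and magic” boil down to a lagged design matrix and least squares. A minimal sketch, with random numbers standing in for real data (this is the gist of the lagged-files approach, not our exact pipeline):

```python
import numpy as np

tr = 2.0                    # seconds per scan
n_scans, n_lags = 300, 12
rng = np.random.default_rng(1)

# 30 movement onsets (hypothetical), marked as 1s in a scan-by-scan vector
onsets = np.zeros(n_scans)
onsets[rng.choice(n_scans - n_lags, size=30, replace=False)] = 1

# Lagged design matrix: column k is the onset vector shifted k scans later,
# so its fitted weight is the ROI response k * TR seconds after movement
X = np.zeros((n_scans, n_lags))
for k in range(n_lags):
    X[k:, k] = onsets[: n_scans - k]

bold = rng.normal(size=n_scans)   # stand-in for the extracted ROI signal

# Least squares recovers the empirical response shape, one value per lag
response, *_ = np.linalg.lstsq(X, bold, rcond=None)
```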

Phew. Looks a lot like what we expected. So far so good. Then, you say WAIT, there’s ALL THIS OTHER CRAP WE RECORDED, like heart rate and respiration and whether the movements came just before or just after a heartbeat or breath and it’s all here! Let’s just throw that in and see what happens.

This isn’t an appropriated geometric throw rug from Urban Outfitters, it’s the recorded heart rate, and respiration, and some related stuff, being “corrected for.”

Once your statistics and magic account for all that other crap, guess what. It’s not THAT different. fMRI is saved. You can all go home, and go back to fighting with crappy code and bashing your heads against your keyboards.
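What does “accounting for all that other crap” look like? Conceptually, you stuff the physiological recordings into the model as extra columns, so the heart and lungs can claim their share of the signal before the task regressors do. A bare-bones sketch with stand-in data (not our actual code, which involved far more cursing):

```python
import numpy as np

rng = np.random.default_rng(2)
n_scans = 300
task   = rng.normal(size=(n_scans, 1))  # task regressor(s), e.g. HRF-convolved onsets
physio = rng.normal(size=(n_scans, 4))  # heart rate, respiration, etc., resampled to the TR
bold   = rng.normal(size=n_scans)       # the ROI signal

# Task + nuisance + intercept, all fit together
X = np.column_stack([task, physio, np.ones(n_scans)])
betas, *_ = np.linalg.lstsq(X, bold, rcond=None)

# The movement response, with physiological noise absorbed by the nuisance columns
task_effect = betas[: task.shape[1]]
```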

Now, this isn’t the whole story, or the most recent version of it (these images were taken from old talks, because I’m not totally sure if I’m allowed to plagiarize my own figures before they’re even published. So, final results may vary slightly, but not much). And no, you shouldn’t really give up on the study of the brain just because it’s cheaper to study the heart (although there were times when I felt I should, and in my scannerless future, it’s definitely an appealing notion). All this says is that, on our hunts for brain activation, using the current description of the suspect should work out OK.

But then we do painstakingly show that monitoring and correcting for changes in heart rate and respiration, the way we did, can really clean up your data. We did a bunch of other stuff and made some really hideously complicated flowcharts that show exactly how much good each of our statistical corrections did–definitely worth looking into if you plan on scanning any brains attached to hearts. EVEN THOUGH we didn’t prove that fMRI is a sham and it’s all just heart rate getting in the way.

Shout out to the help and patience of John, Rich, and also Ben and Rick at the Brain Imaging Center (Ben’s blog, PractiCal fMRI, is fantastic–truly, he is doing God’s Work for fMRI researchers everywhere, and Rick heroically fixed our extremely expensive robot arm after I BROKE IT in a highly traumatic incident I was sure would cost me my spot in the lab. Extra shout-out to John for breaking the news to Rich for me, and to Rich, for taking me in anyway). And even though this took me YEARS (yes, if you’ve read this far, you get to know at least part of the secret) from data collection to publication (in my defense, this was not my only project), I’m so glad my baby is out in the world now. I didn’t know what I was doing when I started, but I do now. I mean, as much as anyone does.


The Ivry lab as I left it, in 2015


Resting Bitch: How your moods make you a more optimal you

“Happiness depends not on how well things are going…but whether they are going better than expected.”

Opinion papers are fun. It’s like going to a bar with a scientist and asking what they really think. They might be wrong, but there are very few people far out enough on that ledge to even be spitballing here, so they slap the “opinion” label on and start riffing away. Here, in this churn of conjecture, where bits of evidence are sized up like puzzle pieces, is where new hypotheses are born. Here is the engine that makes science go.

I’ve been dying to write about a recent opinion paper in Trends in Cognitive Sciences. The paper, called Mood as Representation of Momentum, is by Eran Eldar, Robb Rutledge, Raymond Dolan, and Yael Niv, a group from University College London and Princeton. In it, they attempt to bring together two bodies of research that don’t usually interact much. The first area of research focuses on the causes of moods and what happens when they go awry in disorders like anxiety, depression, and bipolar disorder. The second focuses on reinforcement learning (learning from rewards, which can roughly be thought of as trial-and-error learning) and decision making.

Moods, for our purposes, are similar to emotions, but longer-lasting and less specific. Your mood can be an “up” state or “down” state, depending on whether you are happy (in a good mood) or sad (in a bad mood). Your moods then make you more or less likely to experience more specific emotions: for instance, a bad mood can make it easier for you to become angry or frustrated or both.

Moods can be affected by all sorts of things: music, social interaction, self-reflection, sunshine, or even simply viewing the facial expressions of others. In labs, psychologists and economists use pictures of emotional faces, monetary rewards, or even pictures of the outcomes of sporting events to try to manipulate mood. Using smartphone apps, alarms for set reflection times, daily mood journals, brain scans, and more, scientists are attempting to understand what causes moods and what they are for.

***

The authors write: “The upshot of this research is that mood induced by a stimulus can affect judgments about other, potentially unrelated, stimuli. Indeed, this property may have given mood its reputation as a rich fountain for irrational behavior.”

Irrational, indeed. I may have blood coming out of my wherever when I go to bat for my right to mood, but I am not alone. The authors believe that moods are actually evolutionarily advantageous, that far from being irrational or counterproductive, moods serve a purpose. This argument is a tough sell: moods are typically associated with mood swings, explosive tempers, and generally being a bitch. Coincidentally, many rationality fetishists have a pesky misogyny problem.

To me, this gender-related distaste for moodiness reeks of generation after generation of men taught to bury their feelings. Feelings bad. Boys don’t cry. To aspire to be master of the universe is to aspire to an unattainable objectivity, to become some sort of stoic thetan, freed from the sway of irrational emotional forces.

Toxic masculinity remains a top rant for me. So I was pretty dang thrilled to see badass Princeton computational modeler Yael Niv arguing that we have evolved to be moody because moods make us optimal learners. That’s right: mood is a feat of evolutionary engineering, a Goddess-given engine of practical efficiency for all people of all genders. Finally, evo psych in the service of something I can get behind. Niv, whose work I know best among the authors, is an expert in feelings, if ever you could call someone one. Her work creates and tweaks actual formulae for happiness. Equations. Plug and chug and Happiness = X. How’s that for rational?

***

Let me back up for a moment. I’ll get to the formula for happiness momentarily. But first, I want to talk about a shirt I once saw. It was emblazoned with a molecular structure and the slogan “Dopamine: technically the only thing you like.” This, while clever, is not technically true. You also like opioids (like morphine and your body’s natural equivalent, endorphins) and arguably serotonin (the chemical behind the chemical imbalance that is depression, and a common target of antidepressant medications), along with who knows what other mysterious neurotransmitters exist but that we have yet to understand. Dopamine, however, is more like the only thing (that we know of) that you want. It drives craving.

When you receive an unexpected reward, you get a burst of dopamine in an area of the brain known as the ventral striatum. A pet peeve of mine is when people show brain responses to cocaine and then do a side-by-side of whatever it is they’re arguing acts similarly. Sugar. Video games. Gambling. I myself spent most of grad school programming an Atari-like shuffleboard game which, though primitive, robustly “lit up” the ventral striatum just like cocaine. This doesn’t mean I’ve created a cohort of shuffleboard addicts. All it means is that this mechanism, this burst of dopamine that signals to you that your expectations have been exceeded, is a very general mechanism.

Now, mind you, this reward must be unexpected in order for it to change your future decision-making behavior. If all you get is the reward you expected, your expectations go unchallenged and you aren’t learning anything, per se. In fact, studies have shown that when monkeys expect a reward and then do not receive one, their dopamine neurons skip a beat, ceasing their firing as though in indignation. Activation in the dopamine-rich ventral striatum has also been measured in humans using functional magnetic resonance imaging (fMRI) in response to all kinds of pleasurable stimuli. And if you give people a pharmacological boost in dopamine, they report greater happiness from rewards than they normally would. Dopamine, in addition to feeling good, makes you want more dopamine. And, helpfully, it’s critical for teaching you how to get it.

In labs, this is sometimes studied by having people play a game where they can either win or lose money. Scientists are interested in the role rewards can play in sculpting three main things: people’s subjective reports of happiness, the brain response to future rewards, and the effect of these rewards in sculpting subsequent decisions. Based on these three measurements, computational modelers can design algorithms that can accurately predict people’s feelings of happiness, brain responses, and decision-making behavior.

For example: people report greater feelings of happiness after winning. These rewards also lead to future rewards having a bigger impact on their subsequent decisions–they may, for instance, feel themselves on a hot streak and take bigger risks. Similarly, losing money reduces feelings of happiness. It also reduces the impact future rewards have on their choices, and furthermore, it measurably dampens the brain response to these rewards. Negative events throw a bucket of cold water on us, making us pessimistic.

These patterns are exacerbated for people who are less emotionally stable, suggesting that the study of how people learn from rewards may offer clues to the origins of mood disorders. They may also help explain why these disorders cause people to make the decisions they do. For instance, when people are asked to choose between a sure bet and a risky gamble, with varied gains and losses, their decisions help train an algorithm to produce a model of happiness. These algorithms calculate happiness as a function of their choices (sure bets or risky gambles, or in other words, choosing or avoiding risk), the expected payoff of the gamble, and the difference between the actual and expected payoff. Throw in some weighting variables and constants, like a “forgetting factor” that determines the relative influence of more recent events and events further in the past, and you’ve got an actual formula for happiness.
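To give you the flavor of plug-and-chug happiness, here’s a sketch along the lines just described. The weights and the forgetting factor below are placeholders I made up; in the real studies, they’re fit to each person’s trial-by-trial happiness ratings.

```python
import numpy as np

def happiness(certain_rewards, expected_values, prediction_errors,
              w0=0.0, w1=0.5, w2=0.5, w3=0.7, forgetting=0.6):
    """Momentary happiness as a weighted, exponentially forgotten sum of
    recent sure-bet rewards, gamble expectations, and prediction errors.
    All weights here are hypothetical, not fitted values."""
    t = len(prediction_errors)
    decay = forgetting ** np.arange(t - 1, -1, -1)  # recent events weigh most
    return (w0
            + w1 * np.dot(decay, certain_rewards)
            + w2 * np.dot(decay, expected_values)
            + w3 * np.dot(decay, prediction_errors))

# e.g., three trials: a sure bet, then two gambles (one good surprise, one bad)
print(happiness([0.5, 0.0, 0.0], [0.0, 1.0, 1.0], [0.0, 0.8, -1.2]))
```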

Scientists have used these types of formulae to make inferences from data acquired in smartphone-based field studies. These found that, despite what you’ve heard about the power of positive thinking, it’s not so much your expectations that impact happiness and learning. These are determined far more directly by the surprise you experience about the outcomes of your decisions. These surprises are also known as prediction errors: the difference, or error, between your predicted outcome and the actual outcome. Happiness can be calculated as a running average of recent reward prediction errors, where some prediction errors are weighted more heavily than others. And wouldn’t you know it: by modeling happiness quantitatively, you can search for activity fitting this model in fMRI scans. And this approach to looking for the seat of happiness will pay off: you will find it right there in that needy ratcheter-upper of need, the ventral striatum.

***

Mood, as we said, can be biased by all sorts of things. Just seeing frowny faces can bias your perception of subsequent rewards. More seriously, being depressed can mean that future rewards have less of an impact on your choices. You are de-sensitized to the meaning of these rewards. You don’t see it, because the part of you that values these rewards has been blunted. Critically, though, it’s not necessarily because your learning is impaired. This is a motivational issue, not accessible to the realm of rational appeal.

Anxiety, like depression, enhances responses to aversive stimuli: you respond to events as if they are worse than they really are. While depression manifests as a greater sensitivity to negative outcomes, positive mood can enhance risk-taking in lab settings as well as in financial markets. A positive mood biases the perceived likelihood of future positive outcomes–in other words, you see everything as coming up roses. Repeated positive prediction errors, or good surprises, can “invigorate reward-seeking behavior.” Dopamine craves dopamine. Good surprises have a way of making you believe that there are lots more good things to come.

That’s because reinforcement learning is all about tracking which states were rewarding and making choices that will get you back there. Think of choosing a “good” slot machine, or, in the wild, of animals seeking out the trees that have the most fruit. Scientists believe they are on to something in using slot machines to test people’s decision making, because when people play this type of game, their behavior seems to pick out optimal strategies.

Mood, the authors argue, smooths out inefficiencies in reward learning. Going back to the example of the animals searching for food in the trees, they write: “Increased rainfall or sunshine may cause fruit to become more abundant in all trees simultaneously. In this situation, it makes little sense to update expectations for each tree independently.” Mood, in the landscape of learning how to reliably reap rewards, is the rising tide that lifts all ships. You don’t want to be constantly surprised to find fruit–this would not be advantageous. Instead, you want to infer that something bigger is going on. So your mood helps you to ratchet up your expectations more quickly than you normally would by allowing all the happy surprises to have an even bigger impact on how much you learn from subsequent happy surprises.
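Here’s one toy way to formalize that ratchet (my own simplification, not the authors’ actual model): mood tracks recent prediction errors, and in turn colors how good the next reward feels.

```python
def step(value, mood, reward, alpha=0.1, mood_rate=0.2, mood_bias=0.5):
    perceived = reward + mood_bias * mood  # a good mood makes rewards feel better
    pe = perceived - value                 # the (mood-tinted) surprise
    value += alpha * pe                    # expectations ratchet up faster...
    mood += mood_rate * (pe - mood)        # ...because mood tracks the momentum
    return value, mood

v, m = 0.0, 0.0
for r in [1, 1, 2, 2, 3, 3]:  # fruit getting more abundant across the whole orchard
    v, m = step(v, m, r)
```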

Mood gives you a way of calibrating your learning apparati to account for multiple factors in your environment, without having to account precisely for each one and its impact individually. This form of generalization, far from being sloppy, actually improves the efficiency of learning when multiple reward sources are interdependent, which is pretty much the norm. It’s rare that good things happen in your life that have nothing to do with anything else you did, except maybe winning the lottery. Unless you are that powerball winner, it would be irrational to ignore the connections and treat these things as independent. 

The authors write: “Indeed, such interdependencies may be the rule rather than the exception, for both animals and humans, because success in acquiring skills, material resources, social status, and even mating partners can be tightly correlated.” I’ve watched enough of the Real Housewives of Orange County to tell you that this is not necessarily true, nor would having it all necessarily lead to an improved mood.  

But still, I wonder: Is the otherwise-iffy idea that women are better planners and multitaskers rooted in a real perception of this mood-driven increase in efficiency? Is this form of mathematically generalizing over all currently relevant sources of reward what gives us our supposedly innate abilities to plan? Do men do themselves a disservice by faking such an even keel, and do women suffer disproportionately from anxiety and depression due to waves of dopaminergic dysfunction?

Just wild speculation going on here, by the way. Really going off book. I’m just saying, ignoring or suppressing moods and emotions is probably not the best practice. That’s the only real flame war I am down to start here. Moods are useful! What the authors argue is that if you infer a positive momentum from an increase in the availability of fruit in your orchard, you may be on to something: spring is coming. Same goes for the negative momentum as we head into winter. Better hibernate. By adjusting your expectations as quickly as you need to in order to catch up with the rate of rewards, you save yourself from perpetual shock, perpetual disappointment.

As a quadriplegic man I read about said, “you can get used to anything.”

***

Did everyone see Inside Out? If not, look out, spoiler. Do you remember when Sadness touched all the memories? And we learned that Riley’s Disgust, Fear, Anger, and Sadness were just as important in guiding her through life as her Joy? Well, so, too, are all sources of information useful in learning how to navigate our environments. Given a good enough probabilistic model of the environment, plus some Bayesian magic, you can come up with an optimal learning algorithm for a particular environment.

You want this algorithm to be able to account for environmental factors that are general enough to affect multiple states, or situations, instead of treating all the states as independent. Sure, you might over-generalize sometimes: maybe the increase in fruit is local to the trees in this particular valley. But you weight how recent and how local the changes are to try to account for this as much as you can. You assume that neighboring states have been changed in similar ways to the ones you’re currently being surprised by. Even if you’re not integrating over multiple states, or multiple sources of reward, this generalization can occur over time, too, allowing you to infer momentum from your running average of how many good surprises you seem to be getting lately.

If emotional reactions have an appropriate intensity and duration, then mood is helping you out. Good and bad moods should only stick around as long as there’s still a change in momentum registering–that is, as long as you’re being surprised and having to adjust your expectations. But once your expectations are updated and seem to be in line with your new normal, your happiness levels should reach a more neutral place. Similarly, if you keep encountering bad outcomes, you will get in a bad mood but your expectations will level off appropriately eventually (You can get used to anything). The authors point out that happiness levels return to baseline even after winning the lottery, which is maybe why they say money can’t buy you happiness.

***

But what happens if you have a mood disorder? These can be serious. Excessive happiness or sadness would lead to behaviors that are maladaptive. If you learn less from negative surprises than you do from positive ones, you develop an overly optimistic expectation, which means you’re slammed harder by the negative surprises. High expectations lead to low mood.

A mood that keeps pace appropriately with changes in the environment acts as a homeostatic mechanism to keep your learning processes on track. It’s when your mood is out of sync with the rate of change in the environment that you might run into trouble.

People with depression are thought to have some dysfunction in regulating the levels of the neurotransmitter serotonin in their brains. Low serotonergic function has been known to lead people to learn less quickly from negative outcomes. Depression may result in (or from) negative outcomes having less of an impact on behavior. Normal feedback loops get out of whack, with expectations falling further and further behind realities. To avoid perpetual disappointment, expectations need to be adjusted to match outcomes. But as mismatches grow, bigger mood swings can result. These oscillations may form the basis for bipolar disorder, causing expectations and mood to pitch wildly up and down even when nothing in the environment is changing.

Interestingly, in the general population, positive mood & risk aversion predominate. Risk aversion can make you happy to have what you have, in a good mood as long as nothing is going wrong. This predominance may arise because people learn more, in general, from negative surprises than positive ones, changing their decisions and expectations more markedly when things get bad than when they are going well. This keeps people happy in the face of unpleasantness. The stronger biasing effect that negative outcomes have is likely due to the greater evolutionarily adaptive significance of learning quickly from negative momentum. In other words, it’s more important for our survival to avoid negative outcomes than to maximize the positive ones. If you don’t find the most fruit, you’ll probably be fine, but if you don’t learn to run from predators, you’re dead.

***

I come from a long line of pessimists and worriers. I can’t help but think that sensitizing oneself to negative outcomes is a helpful form of vigilance that is maybe not such a bad character trait. But so where do we land on the power of positive thinking versus setting one’s expectations low so that you will never be disappointed? Well, it’s telling that if you treat people with major depressive disorders by giving them serotonergic drugs, their perceptions change before their mood does. In other words, putting on the rose colored glasses comes first, and seems to be the cause of the improvement in mood. So what antidepressants are really giving you is not a direct mood boost, but rather a shift in your perception that results in one. How can you do this without drugs? Who knows. If I knew that, I wouldn’t be writing this stuff for free.

Bottom line: Mood can sensitize (or de-sensitize) you to the outcomes of your decisions, increasing (or decreasing) your responsivity to them. Emotional instability could, in theory, arise from either moods having too strong a sensitizing effect or from weakening people’s ability to habituate to new normals. The evidence suggests that people who are emotionally unstable tend, if anything, to show stronger effects of outcomes on their feelings (they are sensitive) and a stronger influence on their evaluation of subsequent outcomes. Their hair-trigger reflex for inferring momentum may lead to overgeneralization, and inappropriate optimism or pessimism.

But without this generalization, come on. We’d be simple idiots. We’d be rats bar-pressing for our rewards. We’d be stuck in a railbound behaviorist hellscape of rote stimulus-response associations, Skinner boxes made of skin and bone.

***

The authors clinch an important win for moody bitches everywhere by closing the paper with: “Moods can reflect inference of momentum even when there is none in the environment, leading to excessive optimism or pessimism. However, the ubiquity of moods and the extent of their impact on our lives tells us that, throughout the course of evolution, our moodiness must have conferred a significant competitive advantage. Being moody at times may be a small price to pay for the ability to adapt quickly when facing momentous environmental changes.”

Give me, then, the power of mentally smearing the causal influence of many unrelated outcomes together, or give me death.


Ex Machina: Mapping the brain in the age of social networks


If you use social media sites like Facebook or Twitter, you’re part of a massive social network. Think about your personal network. How many different social circles are represented? Do you communicate with people in some circles more than others? Does that change sometimes? For example, among your old friends, there may be a flurry of activity around an upcoming high school reunion, and then silence for months.

Companies like Facebook dig deep to find patterns in your habits. Using algorithms originally developed for airline schedules, they get a sense of who you’re connected to and how. Now, these algorithms are being repurposed yet again: instead of mapping social networks, they are mapping neural ones.

With over 100 billion cells and 100 trillion connections, the brain is staggeringly complex. Danielle Bassett, a professor of bioengineering at the University of Pennsylvania, uses community detection algorithms to make sense of it all. In a recently published study, Bassett and graduate student Shi Gu scanned the brains of a whopping 780 people, aged 8-22. These scans relied on functional magnetic resonance imaging (fMRI), which tracks changes in blood flow that reflect neural activity. Sophisticated algorithms then flagged important similarities and differences between the scans of people in different age groups. By identifying the brain’s tight-knit microneighborhoods and information superhighways, she hopes to create road maps for guiding learning or diagnosing mental illness.

“Far from a spaghetti like mess, the connections between different parts of our brain are fairly organized, but by a rule that none of us have been able to define,” Bassett wrote in a recent Reddit AMA (Ask-Me-Anything) session. “We would have loved the answer to be simple: That brain regions connect to other brain regions that are close by (similar to what might happen in grade school when you become friends with kids in your own school more so than with kids in the school district next door). But interestingly, the brain shows long-distance connections as well.”

Some areas of the brain only communicate with nearby areas, forming tightly modular hotbeds of activity. Other areas act as hubs, connecting faraway areas to each other. Based on these characteristics, Bassett and Gu assigned brain areas to networks, each with a different job. Their algorithms boiled a massive amount of data down into just two dimensions: communication within brain networks, and communication between brain networks.
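If you’re curious what that boiling-down can look like, here’s a minimal sketch using a generic, off-the-shelf community detection algorithm (not Bassett and Gu’s actual method, and with random numbers standing in for real scans):

```python
import numpy as np
import networkx as nx
from networkx.algorithms import community

rng = np.random.default_rng(3)
data = rng.normal(size=(90, 200))  # 90 regions x 200 timepoints (stand-in data)
corr = np.corrcoef(data)           # region-by-region correlation matrix
np.fill_diagonal(corr, 0)

# Threshold into a graph (the 0.1 cutoff is arbitrary here)
G = nx.from_numpy_array((corr > 0.1).astype(int))
modules = community.greedy_modularity_communities(G)

# Boil it down to two dimensions: within- vs. between-network connections
labels = {node: i for i, mod in enumerate(modules) for node in mod}
within = sum(1 for u, v in G.edges if labels[u] == labels[v])
print(f"within: {within}, between: {G.number_of_edges() - within}")
```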

These patterns of communication change as children become adults. Bassett and Gu found that networks for sensory and motor function wire up early, becoming self-sufficient and independent from the rest of the brain in childhood. Throughout adolescence, the influence of networks involved in abstract thought becomes wider-reaching, linking up with many different brain areas. Bassett and Gu believe that as these networks expand their influence, adults achieve greater control over the flow of their own thoughts, focusing their attention and pausing for reflection much more easily than children can. For example, in the “default mode network,” a network involved in daydreaming and mind wandering, synchronized waves of activity grow stronger with age. These waves become so strong, they can affect activity clear across the brain. At the same time, networks for abstract thought processes like decision making and rule switching also start to influence increasingly distant areas, but with flexibly changing codes instead of uniform waves.

These patterns show key similarities between individuals that can act as a road map for development. Deviations from the map could be warning signs of mental illness. But it is important to note that some variation is also normal. “Each of us have different task-switching abilities,” Bassett wrote. “For some of us, these transitions are quick and for others, these transitions are slow. Part of my research program is focused on explaining what makes us different!”

Bassett and Gu also found a trade-off in the maturation of sensorimotor and cognitive networks. This could reflect developmental delays, where book-smart children have simply fallen behind on their sensorimotor development. However, it may represent more permanent individual differences that make everyone unique.

Members of Bassett’s laboratory are currently working to identify distinct learning styles associated with different configurations of brain networks. With this information, it might be possible to tailor more efficient learning environments. “What I would really love to do next is to understand how we can use our new knowledge to enhance learning,” Bassett told Reddit. “What interventions could enhance learning? What environments are most conducive to learning, and how [do] they change the brain to enable learning to occur?”


Book review: Brain on Fire by Susannah Cahalan

Embarrassingly, I made it most of the way through a PhD in neuroscience without knowing the difference between a syndrome and a disease. In general, I knew I didn’t want either. But my own research focused on how the brain works, not the myriad ways in which it sometimes doesn’t. Then, in my last semester, I helped teach a class on human neuropsychology. In this class, I learned right along with my students the horrors of traumatic brain injuries, tumors, strokes, and neurodegenerative diseases. When I could, I went to watch the professor examine patients at the veterans’ hospital. He would scrutinize their brain scans in front of a room full of interns and doctors before bringing the patient in and asking all sorts of invasive questions. He would then have them do an elaborately choreographed series of stupid human tricks, each designed to ferret out the root causes of their deficits. In teaching the course, we watched a lot of videos of patients, too textbook in their problems to seem real. But nothing, and I mean nothing, is realer than a veterans’ hospital.

While teaching this class I became, for the first time ever, mistrustful of this lump of tissue inside my head that had always seemed so marvelous. In fact, it now seemed like it could turn on me at any moment. I became a bit of a neurohypochondriac. Which is why, when a labmate who had previously taught this same course suggested I read Susannah Cahalan’s Brain on Fire, I held off for a while. The cover photo shows Cahalan with an unsettling million-mile stare, a stark contrast from the author photo on the back. Whatever was going on inside her head, I didn’t want to know about it.

A syndrome is a set of symptoms, while a disease is what causes the symptoms. It’s easy, given a textbook full of conditions with a neat correspondence between the two, to forget that there are plenty of syndromes with no explanation. Cahalan’s book chronicles her terrifying battle with a rare disease, whose identity I avoided learning in advance, resisting spoilers with the fervor of your average Game of Thrones fan. No matter. It was nothing I’d heard of. Neither had her doctors, unfortunately.

Cahalan plumbs the depths of the despair that can exist between syndrome and diagnosis, recounting in excruciatingly vivid detail the sense of helplessness that comes with unexplained problems. Each new piece of medical evidence that she reveals brings either frustration or relief. The biggest relief of all comes, oddly enough, from simply naming a problem. In giving a name to a symptom or finding, doctors validate their patients’ struggle and, at times, dangle a nebulous promise of relief, heartbreakingly just out of reach.

Remarkably, Cahalan remembers almost nothing of her “month of madness,” setting her book apart from the typical memoir format. Instead of digging deep into her own nonexistent memories, Cahalan, a reporter, interviews her doctors, co-workers, and family members. She manages to piece together bits of the story from medical records, surveillance video, and notebooks kept by family members while at the hospital. The result is a surprisingly coherent narrative, woven with a sense of creative license that is at once artful and practical.

As I read, I recalled my reaction to the movie Titanic. How many of us, watching the water rise around Rose and Jack, feared for their lives even though we had seen an aged Rose in the movie’s opening scene? Similarly, I kept being gripped by the insane devastation of her illness, having to remind myself that she eventually regained enough function to write the book. I played detective as her lip-smacking became more prominent, applying my small catalogue of neuropsychological trivia to guess at problems in the temporal lobe similar to Kluver-Bucy syndrome. I checked off various criteria for seizure types like squares on a Bingo card. When she was given steroids like prednisone for inflammation, I tried to guess at what effect it might have.

I felt silly, trying to put it all together when I knew the odds were against me. Silly enough that I tried to stop myself from doing it. And it was then that she met her new doctor, the one whose face the book shows alongside a smiling, recovered Cahalan in a photo bearing the caption “the man who saved her life.” At a critical juncture of the book, he administers a test that gives a useful clue (albeit more immediately useful for characterizing her syndrome than for diagnosing her disease). This test produces a result so strange, so firmly associated in my memory with the types of exotic cases described by Oliver Sacks in The Man Who Mistook His Wife For A Hat, that I had all but written off the odds of ever seeing it in the wild. I have discouraged students from letting their fascination with the oddity of this result convince them that all medical practice will be just as thrilling. So I almost threw the book across the room when Cahalan’s brain yielded such a meaty clue in response to a simple paper-and-pencil test.

For all of this chasing down of clues, though, playing neuropsychologist is fairly low on my list of reasons for loving this book. Like Sacks before her in A Leg To Stand On, Cahalan writes beautifully and insightfully about the impact her own medical misfortune has on her life. I felt the floor fall out from beneath me when she pointed out that if she had become ill a mere three years earlier, there would have been zero people on earth able to pinpoint her condition (as opposed to a lucky one or two). I felt like I had been punched in the gut when she described how the illness transformed her relationship with her father from “maybe a visit once every six months” to a wordless closeness, the two of them exchanging meaningful looks across the table while others felt left out. My eyes grew wide and my heart grew three sizes when she let go of resentments, coming to understand that her mother’s optimism about her condition was not an unsympathetic insinuation that she was a burden.

More than anything, I was left with a sense of the enormous obstacles that people face when doctors are pressed for time. These obstacles to medical care can be isolating, shaming, frustrating, or deadly. Doctor after doctor turned Cahalan’s medical history into a dangerous game of telephone, latching onto her sleep, work, and drinking habits and twisting them into probable causes. In our class on human neuropsychology, my students and I learned how to tell if a patient is faking a seizure: you have another doctor run in, saying “I’ve seen this kind of seizure before! It always stops when you press on the great nerve of the neck!” As it turns out, there is no such thing as the great nerve of the neck. Just like how, a few years ago, there was no such thing as the disease Cahalan certainly wasn’t faking; instead, patients like her were believed to be “possessed” or simply crazy. It seems like a lot of doctors are simply forced to press on the great nerve of the neck and send the patient home. Given how little we know about the brain and its problems, it seems like we can hardly afford to keep propping up a system that doesn’t allow most doctors the time to consider subtler alternatives. Cahalan’s amazing recovery shows what previously unfathomable odds patients might be able to beat if only their doctors have the time to take even the baffling cases seriously. Dismissing Cahalan’s symptoms, pressing on the great nerve of the neck, and calling it a day would have resulted in her friends and family losing her forever.


Shot up

Despite my best efforts at sitting with discomfort and change and knowing that I am learning and growing, I still hate being wrong. So I have spent some time considering this notion that gun control laws won’t stop gun violence. Could that be right? The war on drugs is a horrific failure for many reasons, starting with its central role in tearing holes in the over-policed black community and driving up the prison population; and for all that damage, it never made drugs hard to get. So I do wonder if there isn’t something to the argument that banning guns will simply drive the trade underground.

But here’s the thing about that argument. It gets trotted out as the logical-and-reasonable cousin of the rest of the gun nut’s arsenal of excuses. I imagine them running to and unlocking the part of their brain that, like their gun cabinets, holds this set of rhetorical weapons every time they must use their words to defend their right to bear arms. Our guns are our dicks is the real reason, hidden in some sort of false bottom of this imaginary cabinet. A shelf up from that you have we have to protect our families (read: I will feel like less of a man if you take away the thing that says I am the law and my eyes are the arbiters of evil and I will take care of this with my man-stick). Another shelf up and you’ve got well actually if someone else in that classroom had had a gun maybe they could’ve stopped this sooner, ‘dja ever think of that? Didja? Congratulations to these debate team superstars. Your wit is as formidable a trap as any fencing champion’s prowess or steel cage of death. And then on the top shelf of this arsenal, you have but we’re not dumb we’re smart and you’re naive to think gun control will work.

Ah yes, they’re realists. Of course. How silly of me.

Today I am infuriated by this argument. First, because the same people deploying it are often so very pro-War on Drugs. But also, and definitely not unrelatedly, because these people are also often the same people decrying black-on-black crime, in the face of the Black Lives Matter movement. In the wake of this latest shooting in Oregon, I am momentarily not interested in discussing the kind of crime that arises from the frustrations of poverty, exclusion, and systematic disenfranchisement. I am interested in discussing the crime that arises from a toxic, rage-normalizing masculinity, an unchecked sense of entitlement to women’s bodies and attentions, and white supremacy, which is still alive and well.

So don’t come at me with “the criminals will always find ways to get guns.” These corn-fed, baby-faced, messianic hateballs I see all over the news are not in cahoots with cartels running AK-47s across the border. They’re not hardened by prison or street life. They are not these “criminals” of which you speak. I don’t think you give one shit about the stolen lives of these “criminals” or of their families. I don’t think you understand how much criminality white supremacy and mass incarceration have created.

These mass shooters, instead, are going down the hall and raiding mom’s gun cabinet. They are cowards who have felt powerless and seek to claim what they feel is rightfully theirs (whatever it is, by whatever logic leads them to violence, which I don’t want to understand).

These angry white men are not happy about their supremacy, their centrality, being challenged. They are America and they do not want to share.

Nobody tells them the way they are is wrong. They are allowed to conceal and carry, or open carry. They rally like idiots for this right, a right which only extends to white people. It is not hard for them to get these guns, because it’s not supposed to be hard for them. You can’t even have a toy gun if you’re black.

The kinds of gun control measures being proposed are not going to take your guns. There is such a large gap between huntin’ guns and the seriously fast-reloading military assault rifles whose continued legality makes me jumpy in movie theaters and classrooms (the latter of which I am in every day). The gap between the number of guns I’d like to see disappear and the number of guns I think we’ll ever be able to pry from the nation’s cold, dead fingers is even greater. That’s me being a realist, not indulging in wishful thinking.

It’s also not wishful thinking or ignorance on my part to say that gun control laws will help, or are at least a start. They are a stopgap measure until we figure out how to solve the root problems of poverty and inequality that generate violence. They are a bare minimum, and we have good evidence from other countries suggesting that they work. When you say that these eensy bits of reason we’re trying to inject into gun laws “won’t keep guns out of the hands of criminals,” I’m sorry, I don’t care. It’s not the criminals I am sick of hearing about. It’s the angry white sons you and your gun-loving communities raised.


I’m back!

Hello, blog. It’s been a while. So many things have happened. I am now a doctor!

Let me tell you about how this happened. For starters, I gave an exit talk.

[Photo: giving my exit talk]

Then I gathered the signatures of all four committee members and I submitted my dissertation.

[Screenshot, 2015-08-11: the submitted dissertation]

Finally, I got a lollipop. I could hardly believe it. I did it. I fooled them all.

[Photo: the lollipop]

[Not pictured: I got a “dissertation muffin top,” which is what I think we should call all those studies you started with your advisor during more optimistic times that don’t fit neatly into the muffin cup of your thesis but instead spill over into the rest of your life indefinitely. I’m currently responding to reviewers on a paper for which I collected data in my FIRST YEAR of graduate school. Let that sink in.]

My first order of business as a doctor was, of course, to go to Burning Man.

[Photo: Burning Man]

It was a dusty one.

[Photo: more Burning Man dust]

I came back, dusted myself off, and started a new job!

[Photo: the new job]

Which brings me to the reason for my return to the blogosphere. It’s been a nice summer-long hiatus during which all of my writing efforts were poured into “real science.” I figured I’d start writing for the internet again sometime, and that time is now. I have new goals, a new environment, a new perspective.

I’m a Thinking Matters fellow at Stanford (for real! It’s a kind of demi-professor post, so no, Mom, I’m not a real professor), which means I’ve been placed on three teams (one each for the fall, winter, and spring quarters of the school year) to teach a special set of classes designed for freshmen. They show up, having survived whatever insane gauntlet of courses and extracurriculars got them here in the first place, and they are required to take at least one of these courses to help them transition, pick up collegiate study skills, and level up on their critical thinking and rhetorical prowess. Given that these students will have a range of preparation from their high school years, I am excited to be a part of this great equalizer.

More selfishly, the curriculum and team dynamic of my first course, The Science of Mythbusters, are nothing short of perfect for me. I get to indulge all of the vendettas I’ve been fermenting for the last six years, by dropping truth bombs about how we do science onto the next generation of world leaders. That’s right, someone gave this angry woman a platform. Oh sure, I spend 4 hours a day commuting, sometimes crossing the bog of eternal stench that is the south bay on a very slow bus. And sure, I’m still getting used to some things (like how here, if a projector doesn’t work, it’s expected that something can be done about it–I guess that’s how money works). But the important thing is: I don’t hate my job.

In fact I really really really like my job.

I like my job so much that my ideas are starting to come back. I’ve been mining old notebooks for writing topics, reveling in my continued university-affiliated PubMed access, and scribbling down anecdotes that tumble from the mouths of some of the most ridiculously enthusiastic and engaged academics I’ve yet met. I’m still pretty sure that being in a university means I can’t access press releases on embargoed studies (holler at me if you think different, EurekAlert!), but then, how many times have I heard from my science writing idols that it’s lame to only cover things because they’re new? Challenge accepted.

Look for a new post soon–I wouldn’t be writing if I didn’t already have a special paper in mind. I’ve got three years in this here writing incubator. The work is fun, the people are nice, and the air here is rich with inspiration.
