If your visual cortex weren’t such a good camera it’d be a storyteller

I’ve written before about how the brains of blind people can show us which aspects of our mental life are strictly tied to certain senses and which are, let’s call them, “sensory-flexible.” Imagining how a person might navigate their world in the absence of visual input, I think, can stir up interesting new ideas about how and why people use each of their senses. But blindfolding yourself isn’t even a close approximation of the lifetime of experiences and neural changes that come with blindness. Imagination can only take us so far.

I am increasingly disenchanted with brain scanning studies that claim to prove the obvious: “Humans like fun, brain scans show!” Much of the time, you don’t need an expensive brain scanner to tell you these things, and it’s frustrating to see team after team of researchers use these big toys to confirm what we already know. In the case of blindness, however, I see a truly important use of brain scanning technologies. Esoteric philosophical thought experiments can come to life when you compare blind and sighted brains.

In these experiments, the brains of sighted individuals can be thought of as being “exposed to” a really overwhelming treatment, visual input, while the brains of blind individuals are not. Which is an interesting flipping of the script: we’re not comparing blind individuals to “normal controls.” Vision is a manipulation, making sighted people our experimental group and blind people our control group. By comparing the two, we can begin to answer questions about which brain functions are “innate” or “hard-wired” and which are shaped by experience (in this case, visual experience).

How do we tell the difference? Remember those “sensory-flexible” functions I mentioned? Well, if a brain function happens the same way regardless of whether it’s fed information through visual, auditory, or even tactile pathways, then there’s probably something about that function that never needed a particular sensory input to help it wire up in the first place. The trouble is, humans overwhelmingly rely on visual input, making it difficult to tell “visual” brain regions apart from others. That’s where the blind brain comes in.

I want to focus on a study out last year in the Journal of Neuroscience, by Johns Hopkins researchers Connor Lane, Shipra Kanjlia, Akira Omaki, & Marina Bedny. This is one of a bunch of interesting studies coming out in this line of work, but I chose this one because it appeals to my inner grammar nerd. Lane & pals explored the role of the “unemployed” visual cortex of people who had been blind since birth. It’s been known for a while now that this brain region lights up instead of languishing unused. It seems to help out with a variety of things, from reading Braille to maintaining information in memory. We know this from studies that use brain scanning techniques like fMRI (functional magnetic resonance imaging) to see the brain at work. We also know that if we disrupt activity in the blind visual cortex using a brain-zapping technique like TMS (transcranial magnetic stimulation), these superpowers seem to go away. Correlation AND causation, neat!

Sensory substitution devices, which translate visual information into “soundscapes,” can be used to present words and pictures to blind people in the scanner, feeding the brain information through a previously unavailable sensory code. Eerily, the brain areas specialized for processing visual information about “what” and “where” are still there in the blind brain. Same goes for areas specialized for even more specific things, like words and body parts. These results show that the brain’s weird specializations and selectivities aren’t loyal to any one sensory input. They suggest that there’s something about the way we operate as humans that would’ve resulted in similar neural architecture whether or not we relied on vision. What that something is, however, is up for debate.

Lane & colleagues looked at the way the blind visual cortex processes language. A previous study had found that the response here was greater when sentences had a scrambled word order than when they didn’t. This raised the question of whether these responses were driven by something truly useful, like making sense of words, or simply by the surprise, novelty, complexity, or general weirdness of these scrambled sentences. Another study showed that making a memory task more difficult didn’t drive up the response here, which makes it unlikely that the response to scrambled sentences simply reflected the difficulty people had in making sense of the words. But Lane & colleagues wanted to get more specific.

They studied something called syntactic movement, a concept proposed by Noam Chomsky. I didn’t know what this was, but it turns out I am certainly guilty of its overuse. Lane gives an example of a sentence without movement: “The creator of the gritty HBO crime series admires that the actress often improvises her lines.” And one with movement: “The actress that the creator of the gritty HBO crime series admires often improvises her lines.” See the difference? The latter sentence taxes the ol’ memory banks in a way that the former doesn’t. You have to hang onto the fact that we’re talking about the actress while I interrupt what I’m about to tell you about her with this little factoid about her having the admiration of the series creator. Now, if you are the blind visual cortex, do you light up for this because it’s difficult, or because you’re helping in some vision-like way to extract meaning from the complicated sentence?

Sidebar to any of my students that may be reading: start taking notes now. I want to see good experimental controls in your final grant proposals. Remember: beat your reviewers to the punch, anticipate their worries and do some cool magic tricks (rhetorical power posing, cleverly matched control groups, and reassuring graphs) to placate them. In other words, if you’re lying awake at night scared of a lurking variable and think your reviewers are definitely going to come for you about it, take the fall. Act hurt. Get indignant. And then get up. Look sickening. And make them eat it. Watch and learn.

In fMRI, we’re often comparing one thing to another thing. This subtractive approach has its limits, but it can be very useful for dispatching lurking variables like difficulty. Here, the researchers compared the brain responses recorded while people listened to and comprehended sentences with the responses recorded during two other tasks: verbal sequence memory (i.e., remembering a series of words that don’t form a sentence) and solving math problems. They also went a step further, asking whether the brain response increased with the difficulty of sentence comprehension and of math problems. So they’re first establishing that there’s some truly language-specific component of these responses by comparing them to the responses to a similarly language-y, but structurally different task (verbal sequence memory) and to a structurally similar, but non-language-y task (math). And then they further test for language-specific involvement by asking whether the visual cortex is sensitive to a change in difficulty for just language, or for either language or math. If it knows what’s hard and easy, that’s a good sign it’s doing the heavy lifting for that computation. And if it knows this for only one or the other, then this heavy lifting is specific to that task, not a general grunt of difficulty.
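If you like your logic spelled out, here’s a toy sketch of that subtraction game in Python. To be emphatically clear: this is not the authors’ analysis pipeline. It assumes you’ve already extracted one response estimate per condition per subject from a visual-cortex region of interest, and every number and variable name in it is invented.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects = 19  # hypothetical sample size, not the study's

# Fake per-subject ROI responses (arbitrary units), for illustration only.
sentences      = rng.normal(0.8, 0.3, n_subjects)
word_sequences = rng.normal(0.3, 0.3, n_subjects)  # verbal memory control
math_easy      = rng.normal(0.2, 0.3, n_subjects)
math_hard      = rng.normal(0.2, 0.3, n_subjects)  # same mean: no difficulty effect
moved          = rng.normal(1.0, 0.3, n_subjects)  # sentences with syntactic movement
unmoved        = rng.normal(0.8, 0.3, n_subjects)  # sentences without movement

# Subtraction 1: is there a language-specific response at all?
print(stats.ttest_rel(sentences, word_sequences))               # sentences > word lists?
print(stats.ttest_rel(sentences, (math_easy + math_hard) / 2))  # sentences > math?

# Subtraction 2: does the region track difficulty for language but not math?
print(stats.ttest_rel(moved, unmoved))        # movement (harder) > no movement?
print(stats.ttest_rel(math_hard, math_easy))  # should hover around zero here
```

The point of the toy is just that each question is a paired comparison: if the “difficulty” contrast comes out flat for math but not for language, a generic difficulty story gets much harder to tell.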

They found, first, that only in their blind subjects did the visual cortex respond more to sentences than to non-words and to math problems. They also found that, in blind people, sentences with syntactic movement elicited even greater responses. Meanwhile, when presented with easy and hard math problems, this area didn’t ramp up its activity for the hard ones. And if you’re wondering if maybe they just didn’t make the problems hard enough, check out the prefrontal cortex, where the brain responses definitely scaled up for harder problems.

This helps to rule out the hand-waving explanation that would say that these responses are just a general response to difficulty. So it does seem like the blind visual cortex takes on specific roles in language processing that the sighted visual cortex does not. And it does seem to really be helping the blind individuals get the sentence comprehension job done: the more they activated their “visual” cortex, the better they understood the sentences! The authors dangle a promise of a TMS experiment in front of the reader; I will bet you a set of tickets to Dolly’s upcoming tour that zapping this brain region knocks sentence comprehension way down, in a causal confirmation of this intriguing correlation.
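That brain-behavior link, by the way, is a simple correlation across participants. Here’s a hypothetical sketch of the computation, with made-up numbers standing in for the real data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 19  # hypothetical sample size

# Invented numbers: per-subject "visual" cortex responses to sentences,
# and per-subject sentence comprehension accuracy (proportion correct).
roi_response  = rng.normal(0.8, 0.3, n)
comprehension = np.clip(0.6 + 0.2 * roi_response + rng.normal(0, 0.05, n), 0, 1)

r, p = stats.pearsonr(roi_response, comprehension)
print(f"r = {r:.2f}, p = {p:.3f}")  # positive r: more activation, better comprehension
```

A positive r is still just a correlation across people, which is exactly why the promised TMS follow-up matters.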

What does this mean? Well, it’s pretty weird that this brain area that we think of as a glorified piece of camera film seems to be capable of participating in information processing as complexly hierarchical and rule-based as human language (remember, these words were heard, not read). Prominent/vocal brain-development researchers Steven Pinker and Mark Hauser have proposed that the perisylvian language areas, a family of brain regions clustered around the auditory cortex, are where the real heavy lifting occurs during syntactic movement. The blind participants did also activate these regions, so it’s not like they used a different part of their brain entirely. But at the same time, their brains differed from the brains of sighted individuals in other ways. For one thing, they were less lopsided in how they processed language. Normally, the brain’s left hemisphere becomes specialized for language while the specialization for abilities like visually analyzing faces gets pushed over into the right hemisphere. Or, erm, it’s unclear who shoved whom first, actually. Regardless, in the blind subjects, Lane and colleagues found evidence that this left-sided language specialization was reduced. Absent this turf war between pushy roommates language and visuospatial analysis, brain organization sort of evens out.

In any case, it seems kind of ludicrous to think that the visual cortex could just switch teams so easily. Since we think of language as uniquely human, while vision is something that even many very lowly animals have, what, then, do we do with this idea that different brain areas are somehow “suited” to their jobs? In other words, what made the visual cortex visual in the first place? And how did language sneak in there when (sorry) no one was looking?

Well, it could be that language is cultural, and that there aren’t brain networks that are innately specialized for it. On the other hand, the authors argue, it could be that evolution wired us for language. We know that the perisylvian language areas retain their function in both deafness and blindness. It could be the case that you only need these to acquire language; maybe these areas contain the seeds of linguistic processing, which later spread to other areas, and in blindness, this includes visual areas.

But wait, they say, what about critical periods for language acquisition? Isn’t it true that certain things need to occur at certain points in development, or they’ll never occur? This is why learning languages is supposed to be harder as you get older, and it’s also why a cat with its eyes sutured shut during certain developmental stages will never be able to process visual information in the same way. We know that, in general, only blind people who lost their vision before age 9 show this kind of visual cortex activation. After that, it’s thought that too much visual processing software has been installed, and you can no longer teach the old dog new tricks, to make a metaphor cocktail. So maybe, the authors write, if you’re blind, the connections from language areas to the visual cortex don’t get pruned and sculpted by experience during development. Maybe it’s these residual connections that get language information into the visual cortex. Alternatively, maybe the visual cortex picks up new tricks from all the other aforementioned specialized “visual” areas (those responding to words, faces, places, body parts, etc.) that stay specialized even when words are in Braille or faces are converted using vision-to-audio devices.

Wherever these responses came from, there are likely to be other specializations within the blind visual cortex. The authors note that locating sound sources in space, mentally rotating a tactile object, and discriminating between two sounds or tactile objects have also been known to light up the blind visual cortex. But it’s in comparing these types of tasks, subtracting off any thought process too general to give us new information, and asking whether these responses ramp up with difficulty, that we can start to tease out what the common, non-visual computations underlying all of these processes are. What is it that you’re doing when you follow the syntactic wiggle-worms I call sentences? Does it feel similar to something you’re doing when your eyes dart around, analyzing a complex visual scene? Similar to anticipating the chorus of a song? Similar to the feel of physically wrestling with a wiggle worm? Somehow you turn sentences into stories and stories into lives and identities. This storytelling ability is part of what makes us human, and weirdly/miraculously, it’s something that springs from the same brain parts, whether or not you rely on vision.
