SfN Day 3 (?) Highlight: A BMI in V1

That I stopped blogging my SfN highlights the same day I took care of Ned the Neuron is no coincidence. That guy loves making new connections.

While plugging into an outlet to power up, Ned and I looked up to see our friend Ryan Neely at his poster! Hi, Ryan! And so, while keeping careful watch of our possessions, Ned and I learned about what Ryan is up to these days. Ryan works with Jose Carmena on brain-machine interfaces (BMIs). A BMI usually consists of a sensor implanted in the brain, some kind of actuator like a cursor or a robot arm, an encoding model that acts as a lookup table between intended movements and brain activation patterns, and a decoding algorithm that runs that lookup in reverse, translating brain activity into action. An early goal of BMI research was to help patients who had lost a limb, or the use of one, regain control over their environment.
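To make that concrete, the decoding half is essentially a function from firing rates to action. Here’s a toy sketch with a hypothetical linear decoder and made-up weights — nobody’s actual code, just the shape of the idea:

```python
import numpy as np

# Hypothetical linear decoder: firing rates in one time bin -> 2D cursor velocity.
# In a real BMI the weights W are fit from training data; here they're random stand-ins.
rng = np.random.default_rng(0)
n_neurons = 20
W = rng.normal(size=(2, n_neurons))          # (velocity dims x neurons)

def decode_velocity(firing_rates):
    """Translate a vector of firing rates (spikes/s) into a cursor velocity."""
    return W @ firing_rates

rates = rng.poisson(lam=10, size=n_neurons)  # fake spike counts for one time bin
vx, vy = decode_velocity(rates)
```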

Work by Aaron Koralek, a recent Carmena lab alum, showed that rodents could learn to control the auditory tone they heard, from high to low, by ramping the activity of one group of neurons up and ramping the activity of a different group of neurons down. If they hit just the right tone and sustained it, they got a reward. And to show that these animals had truly learned a new skill rather than picked up a new habit, the researchers offered a reward the animal was, at that moment, sick of — either sugar or chow — and the animal stopped working for it. These new skills seemed to rely on the strengthening of the circuits between the primary motor cortex and the striatum, an area deep within the brain implicated in both learning and storing habits and skills.
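Roughly, you can picture the feedback rule like this. The mapping, gains, and thresholds below are invented stand-ins, not the numbers from Koralek’s experiments:

```python
def tone_frequency(ens1_rate, ens2_rate, base_hz=1000.0, gain_hz=50.0):
    """Map the two ensembles' firing rates onto an auditory tone:
    ramping ensemble 1 up raises the pitch, ramping ensemble 2 up lowers it."""
    return base_hz + gain_hz * (ens1_rate - ens2_rate)

def rewarded(recent_rates, target_hz=2000.0, tolerance_hz=100.0):
    """Reward the animal when the tone has stayed within a window around the
    target pitch for every recent time bin (i.e. hit it and sustained it)."""
    return all(abs(tone_frequency(r1, r2) - target_hz) < tolerance_hz
               for r1, r2 in recent_rates)
```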

These animals had sensors implanted in their motor cortex, and without moving, they were able to drive activity in targeted groups of neurons up or down. How these groups form their allegiances, we aren’t sure. We do know that BMIs work better when the encoding and decoding algorithms are allowed to stay flexible, so that the brain and the algorithms can co-adapt, each learning the other as both update.
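If you want a cartoon of what that flexibility looks like, picture the decoder being re-fit every so often on the most recent stretch of brain activity, while the brain, in turn, keeps learning whatever mapping the decoder currently implements. A purely illustrative sketch, not the Carmena lab’s actual adaptation rule:

```python
import numpy as np

def refit_decoder(recent_rates, recent_intended_velocity):
    """Least-squares re-fit of a linear decoder on a recent window of data.
    recent_rates: (time bins x neurons); recent_intended_velocity: (time bins x 2)."""
    W, *_ = np.linalg.lstsq(recent_rates, recent_intended_velocity, rcond=None)
    return W.T  # (2 x neurons), ready to multiply new firing-rate vectors

# In closed loop this might run every few minutes: the brain adapts to the
# current decoder, and the decoder adapts to the brain's latest activity.
```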

But so far, BMIs have mostly decoded from motor cortex, so we figure the brain activity that gives our encoding models their power comes from motor imagery. When we imagine a movement, the brain activates in a strikingly similar way to when we actually execute the movement. Motor imagery is so reliable, in fact, that it has been used to detect signatures of consciousness in people in vegetative states. Simply asking them to imagine playing tennis or walking around their home, and then analyzing the patterns of brain activation, yielded reliable “yes” and “no” answers. Crazy.

So, ok, now imagine you’re a BMI researcher and you want to know: does this work for motor imagery only? Are we limited to making BMIs that do what arms and legs do? Which would be awesome, but we’re not even really sure why decoding motor imagery into motor behavior works. Those “two groups of neurons” that are ramped up or down to raise or lower the pitch of the auditory cursor? Dunno. Statistics and magic. We call each group an “ensemble,” because they act together, but it’s unclear what, subjectively, the animal (or human) would be doing to change these ensembles’ activity. For all we know, an animal could be envisioning playing tennis to make the tone go higher or walking around its home to make it go lower.

Enter Ryan. Ryan is doing something kind of crazy, which is measuring activity in V1, or the primary visual cortex. At first blush, this seems like it will never work. We can consciously control our motor cortex’s activity by using motor imagery, but the visual cortex is a sensory area. Sen-so-ry. Got that? It’s in the business of input, not output.

Wait, what’s that you say? It worked. Hell yeah it worked. You see, the visual cortex is not as passive as its moniker would have you believe. It receives inputs from areas sandwiched between motor and visual cortex, and these areas are involved in attention. Injury to these regions on one side of the brain results in an inability to pay attention to things on the opposite side of space (visual input switches sides, left-to-right, on its way from the eye to the brain). Attention is thought to rain down on primary visual cortex to sharpen, enhance, and otherwise hasten your response to whatever it is that you’re paying attention to.

So were the animals just changing how much they paid attention to things? Maybe. Don’t care. Turns out you can use a sensory cortex to drive a brain-machine interface. That is nuts. Does this mean you could someday use visual imagery to control robots? Does it mean robots could control you by showing you pictures? Does it mean having a “vision” for something really does, in a weird scientific/metaphorical fusion, sell us on the “If you can dream it you can do it” thing? Who knows! Science is fun and weird.

That was all of the science Ned and I took in that day, but we did have an awesome time at #sfnbanter, meeting Twitter folks. Whether we were up the next day for our 8:30 am talk, well, we’ll find that out in the next installment. Ciao for now, you beautiful, powerful mental imagery machines. Ciao.
