
Controlling robots with your mind
Nov 8, 2008, 12:15p

It's the stuff of which sci-fi dreams are made. Mind control, well, not of another person, but of a robot. Want your laundry done? Tell your robot to do it. But not just tell it, think it.

That's the idea, anyway.

Before I applied to grad school, this stuff really intrigued me. While working at Google, I so often found myself thinking, "If only I could just think my emails, instead of having to type them, I would be so much more efficient." I could send email while driving, or while walking from my car to my apartment, or while sitting on the shitter. Anything to be more productive...

While productivity as a goal has its own issues (described a bit in a previous post, Why I believe in God), let's put those aside for now. The idea that thinking alone, without any muscle activity, could cause something to happen is pretty cool. A couple years ago, I described a device called BrainGate that let a paraplegic control a cursor on a computer screen with his brain. The company that produces these devices is called Cyberkinetics. How does it work? Basically, a grid of electrodes is implanted on the surface of the brain, specifically above a cortical area known to be involved in muscle control. Through trial and error on the person's part and learning algorithms on the software's part, the person learns how to change the electrical activity detected by the electrodes to move the mouse cursor where he wants it to be.

To be clear, this type of technology doesn't tell us much about how the person thinks, or how the brain works. Rather, it's an example of how you can take an arbitrary signal from the brain and interpret it to achieve an outcome (moving the mouse). As an analogy, think about typing. We learn to move our fingers so that words come out, but watching our fingers move doesn't tell us much about how our brain produces language. Instead of moving his fingers, the paraplegic is changing the electrical activity in his head. That in its own right is interesting, but this is more of a medical-device engineering problem than a neuroscientific one. Now that I'm in grad school, I've lost interest in working on this problem, though I still find the results to be really cool.

One thing to note is that the electrode grid tends to move over time. So the long-term medical value of this device is still dependent on further engineering innovation.

Anyhow, I ran across a recent paper that takes BrainGate off the computer screen and brings it into the real world. Here, instead of controlling a cursor, monkeys were trained to control a robotic arm to grab food. How does this work? The robotic arm has 5 degrees of freedom (3 at the shoulder, 1 at the elbow, and 1 at the hand, i.e. how far the gripper opens and closes). However, the monkey's brain is not being used to control these 5 variables directly. Rather, the monkey's brain is being used to identify a point in 3-D space that the monkey wants to reach, and software then figures out how to move the robotic arm to that point in space (this is a slight over-simplification, but it's the overall idea).
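To make that division of labor concrete, here's a rough sketch in Python of the idea. This is just my illustration, not the paper's actual controller: the function name, the gain value, and the use of numpy are all my own assumptions. The point is that the brain only has to supply a 3-D target, and software nudges the arm's endpoint toward it, leaving the joint-level details to a lower-level controller.

    import numpy as np

    def step_toward(endpoint, target, gain=0.2):
        """Nudge the arm's 3-D endpoint a fraction of the way toward the
        decoded target point. A lower-level controller (not shown) would
        turn the new endpoint into the arm's 4 joint angles, and the 5th
        degree of freedom (the gripper) would be driven by a separately
        decoded open/close signal."""
        endpoint = np.asarray(endpoint, dtype=float)
        target = np.asarray(target, dtype=float)
        return endpoint + gain * (target - endpoint)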

The algorithm the researchers used to determine the point in 3-D space is surprisingly simple. The electrode grid records from roughly 100 neurons, and the firing rate of each neuron is measured as food is presented to the monkey at specific locations in space. Each neuron's "preferred direction" is the location associated with its highest firing rate; a position in space can then be read out by averaging the preferred directions of the whole collection of neurons, each weighted by that neuron's actual firing rate.
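Here's a rough sketch of that averaging in Python: a minimal population-vector-style decoder. The argument names, the baseline/scale normalization, and the use of numpy are my own guesses at the details, not the paper's exact algorithm.

    import numpy as np

    def decode_target(firing_rates, preferred_directions, baselines, scales):
        """Each neuron 'votes' for its preferred direction, weighted by how
        far its current firing rate is above (or below) its baseline rate;
        the decoded position is the average of those votes."""
        firing_rates = np.asarray(firing_rates, dtype=float)
        preferred_directions = np.asarray(preferred_directions, dtype=float)
        # How strongly each neuron is driven, relative to its usual rate.
        weights = (firing_rates - np.asarray(baselines)) / np.asarray(scales)
        # Weighted votes: one 3-D vector per neuron.
        votes = weights[:, None] * preferred_directions
        return votes.mean(axis=0)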

Let's look at a simple example. If food is presented in the left corner of the field of view, neuron #1 may fire 5 times per second (5 Hz), while neuron #2 may fire at 10 Hz. If food is then presented at the right corner, neuron #1 may fire at 10 Hz and neuron #2 may fire at 5 Hz. Based on this "training" information, the algorithm then determines that neuron #1 encodes a vector pointing to the right corner (neuron #1 fires more when food is there), and neuron #2 encodes a vector pointing to the left corner (again, because neuron #2 fires more when food is there). After the software has learned this relationship, it can make general predictions about the location of the food based solely on the neural firing rates it has access to. Now, let's say that the food is presented in an unknown location, and neuron #1 fires at 7.5 Hz and neuron #2 fires at 7.5 Hz. The algorithm will interpret this to mean that the monkey wants to move the arm to the center, because this is in the middle of the range of firing for both neurons. Hopefully that makes sense - sorry if it's confusing!
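Plugging the toy numbers from this example into the decode_target sketch above (the 7.5 Hz baseline and the +1/-1 "corner" coordinates are just convenient choices for illustration), the decoded point comes out at the left corner, the right corner, or the center, as expected:

    import numpy as np

    # Neuron #1 prefers the right corner, neuron #2 the left corner,
    # represented here as +1 / -1 along a single left-right axis.
    preferred = np.array([[+1.0, 0.0, 0.0],   # neuron #1 -> right corner
                          [-1.0, 0.0, 0.0]])  # neuron #2 -> left corner
    baselines = np.array([7.5, 7.5])  # midpoint of each neuron's 5-10 Hz range
    scales = np.array([2.5, 2.5])     # half the range, so weights run from -1 to +1

    print(decode_target([5.0, 10.0], preferred, baselines, scales))  # left corner:  [-1, 0, 0]
    print(decode_target([10.0, 5.0], preferred, baselines, scales))  # right corner: [+1, 0, 0]
    print(decode_target([7.5, 7.5], preferred, baselines, scales))   # center:       [0, 0, 0]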

Anyhow, I think that's enough technical details. On to the movies!



Monkey 1 self-feeding with a robotic arm

It's amazing how smooth the movement is (you'll see that monkey 2 is shakier). It's also interesting to watch the restrained arm, which sometimes makes grabbing motions as well. Note though that the monkey's real arm (the one we can see) is controlled by the opposite hemisphere in the brain, so its movements won't necessarily be identical to the movements of the robotic arm.




Monkey 2 self-feeding with a robotic arm

This monkey isn't performing as well as monkey 1 - his robotic movements are much shakier, and the gripper sometimes needs to be squeezed by the experimenter so that the food doesn't fall after it's been grabbed. Note also how much of the monkey's head is obstructed from view - the researchers probably chose this angle so that random viewers like you and me wouldn't be freaked out by all the nasty-looking wires coming out of the monkey's head. If PETA has had some effect, it is this: researchers are loath to publicly acknowledge what they put their animal subjects through. If you can't see the problem, it doesn't exist (at least as far as the public is concerned). Also note that in order to motivate the monkeys to use the robotic arm to get food, the monkeys are starved (or "food-restricted" in euphemistic terms) so that they're really, really hungry. Sometimes they may be "water-restricted" as well. I'm not quite sure how I feel about this, but it's worth knowing because it is a critical part of the experiment: if you don't starve your monkey, he won't be interested in whatever you try to teach him, and therefore won't learn.




Monkey 1 using a robotic hand to push food into its mouth




Monkey 1 licking the fingers of a robotic hand

Here, the monkey is able to prioritize licking food off of the robotic hand even though there is more food in his field of view.

Seeing is believing, eh?

--
Source:
Velliste..Schwartz 2008. Cortical control of a prosthetic arm for self-feeding. (PDF)

Comments (2)

Sachin - Nov 9, 2008, 9:50p
Movies hosted on Posterous.com
http://nikhil.posterous.com/



Garry - Nov 14, 2008, 11:36a
Wow, this is remarkable work.


