How Can Theoretical Computer Science Inform Neuroscience?
I wish I knew!
Today, there’s a thriving interaction between TCS and physics (mostly centered around quantum computing, but also around, for example, phase transitions in random constraint satisfaction problems). There’s also a thriving interaction between TCS and economics (e.g., combinatorial auction design, computational game theory), and a third thriving interaction between TCS and biology (DNA sequencing algorithms, phylogenetic tree reconstruction, inferring gene regulatory networks…).
Meanwhile, the thriving interaction between TCS and neuroscience is something many outsiders would expect to be there, but isn’t. Off the top of my head, I can think of exactly three theoretical computer scientists who have seriously pursued such a connection:
Leslie Valiant, with his 1994 book Circuits of the Mind.
I wish I understood their work better—and if I ever decided to get into this field, I suppose their stuff is where I’d start. I don’t know which, if any, neuroscientists have seriously pursued a connection with TCS.
I’m sure part of the reason why there’s not more work bridging TCS with neuroscience is just historical accident. Then there’s the cultural gap between people who dissect rat brains and people who sit and prove theorems about what not even an all-powerful Merlin could do—but OK, somehow that didn’t prevent the theoretical computer scientists and the molecular biologists from getting together!
Maybe the real divide is that, when push comes to shove, TCS just doesn’t care about reverse-engineering a specific computational artifact like the human brain, with all the weird evolutionary accidents that caused it to do things one way rather than another (e.g., with the visual cortex at the back of the head). Rather, TCS cares about how something could be computed with given resources, and also about rigorously proving that those resources are necessary.
So for example, theoretical computer scientists have been struggling to understand the power of constant-depth, polynomial-size threshold circuits: a complexity class called TC0. This class happens to be an excellent approximation to what neural networks do—and it also happens to be right near the precipice where progress toward proving P!=NP has gotten stuck. On the other hand, real neurons do lots of things that have no analogues in TC0, but theoretical computer scientists are unlikely to want to study those complications if they can’t even understand TC0 itself. Conversely, part of what makes TC0 so interesting to computer scientists is that, despite being constant-depth, it can handle basic arithmetic like multiplication, division, and Chinese remaindering, and hence (for example) the RSA encryption function—a property of dubious relevance to the human brain!
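To make the objects in question concrete, here is a minimal illustrative sketch (my example, not from the essay) of what a threshold gate is, and how a small constant-depth circuit of such gates can compute Boolean functions like MAJORITY and 2-bit parity:

```python
# A threshold gate outputs 1 iff a weighted sum of its Boolean inputs
# reaches a threshold. TC0 is the class of functions computed by
# constant-depth, polynomial-size circuits built from such gates.

def threshold_gate(inputs, weights, theta):
    """Return 1 if sum(w_i * x_i) >= theta, else 0."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= theta else 0

def majority(bits):
    """MAJORITY of n bits: a single threshold gate (depth 1)."""
    n = len(bits)
    # threshold n//2 + 1 means "strictly more than half the bits are 1"
    return threshold_gate(bits, [1] * n, n // 2 + 1)

def xor2(x, y):
    """2-bit parity as a depth-2 threshold circuit:
    XOR(x, y) = 1 iff (x + y >= 1) and not (x + y >= 2)."""
    at_least_one = threshold_gate([x, y], [1, 1], 1)   # first layer
    both = threshold_gate([x, y], [1, 1], 2)           # first layer
    # second layer: fires iff at_least_one = 1 and both = 0
    return threshold_gate([at_least_one, both], [1, -1], 1)
```

The point of the sketch is only the shape of the model: each gate is a crude abstraction of a neuron (weighted sum, then a firing threshold), and "constant depth" means the number of layers stays fixed as the input size grows.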
Even so, the fact that constant-depth threshold circuits are so important to the brain, and also so important to computational complexity for independent reasons, is a coincidence that I don’t think gets nearly the attention that it deserves. And we could easily imagine an interaction where, e.g., theoretical computer scientists proved theorems about the optimal ways to build some functionality (like recognizing faces) out of some given building blocks (like particular kinds of neurons), and then those theorems were treated as “predictions” to be tested against observation of how the brain really does do it.
One parting thought: today there is a pretty healthy interaction between TCS and machine learning, though there probably should be even more. There's also a healthy interaction between machine learning and neuroscience, as exemplified by the NIPS conference, which brings those two fields together and which I had the pleasure to attend in 2012. If TCS and neuroscience ever do get together in as solid a way as (say) TCS and quantum physics have, then my guess is that machine learning will serve as the matchmaker and intermediary.
by Scott Aaronson, for Forbes