Riskgaming

The Orthogonal Bet: Using Computational Biology to Understand How the Brain Works

Welcome to the ongoing mini-series The Orthogonal Bet, hosted by Samuel Arbesman, complexity scientist, author, and Scientist in Residence at Lux Capital.

In this episode, Sam speaks with Amy Kuceyeski, a mathematician and biologist who is a professor at Cornell University in computational biology, statistics, and data science, as well as in radiology at Weill Cornell Medical College. Amy studies the workings of the human brain, the nature of neurological diseases, and the use of machine learning and neuroimaging to better understand these topics.

Sam wanted to talk to Amy because she has been using sophisticated AI techniques for years to understand the brain. She is full of innovative ideas and experiments about how to explore how we process the world, including building AI models that mimic brain processes. These models have deep connections and implications for non-invasively stimulating the brain to treat neurodegenerative diseases or neurological injuries.

Produced by Christopher Gates

Music by George Ko & Suno


Transcript

This is a human-generated transcript; however, it has not been verified for accuracy.

Danny Crichton:
Hey, it's Danny Crichton here. We take a break from our usual Riskgaming programming to bring you another episode from our ongoing miniseries, The Orthogonal Bet, hosted by Lux's scientist in residence, Samuel Arbesman. The Orthogonal Bet is an exploration of unconventional ideas and delightful patterns that shape our world. Take it away, Sam.

Samuel Arbesman:
Hello and welcome to The Orthogonal Bet. I'm your host, Samuel Arbesman. In this episode, I speak with Amy Kuceyeski. Amy is a mathematician and biologist and professor at Cornell University in computational biology and statistics and data science, as well as in radiology at Weill Cornell Medical College. Amy studies how the human brain works, the nature of neurological diseases and how machine learning and neuroimaging can be used to better understand all of these topics. I wanted to talk to Amy because she has been using sophisticated AI techniques for years to understand the brain and is full of wild ideas and experiments about how to explore how we process the world, including building AI models that mimic how our brains process our surroundings. There are even deep connections and implications here to how we might noninvasively stimulate our brains in order to treat neurodegenerative disease or neurological injuries. I am so pleased that I had a chance to speak with Amy about her background and interests, the world of AI and brain science, and what exciting potential there is for this field going forward. Let's dive in. Hello, Amy Kuceyeski and welcome to The Orthogonal Bet.

Amy Kuceyeski:
Thank you for having me. Yeah.

Samuel Arbesman:
Good to be talking. You have a lot of interests in a lot of different research areas and they're all kind of related. Maybe it would be best if you could share your background and how you've gotten to where you are and the different things you're actually focusing on, and then we can kind of pursue lots of different areas from there.

Amy Kuceyeski:
Yeah. So I grew up in Ohio and I went to this really small high school. My senior year we had to consolidate because we didn't have a lot of money left to run the school. So we had to consolidate. The previous school that closed didn't have any higher math than algebra, so it didn't even have calculus or trigonometry or anything. And then my senior year when we consolidated, we actually had a calculus class and only two people were in it. It was me and another student. So basically I had a full year, my senior year of one-on-one basically tutoring with a calculus teacher. And it got me sort of interested in math and how math can represent the natural world. You can kind of model the natural world using these systems that are derived with equations.
And then when I went to university, I went to Mount Union. It's this really small school in Ohio, best known for its D III football team. Not anything else really, but yeah. So I went to that school and I sort of got really interested in math and I kept doing it just because it felt so comfortable and it made sense to me. And then when I was about to graduate, my professor suggested that I go to grad school. I didn't really know what else to do, so I applied for grad schools and I got into a couple. I almost didn't get into Case Western, but I called and the chair told me that I was first on the waiting list and then he saw how excited I was. So he kind of contacted the dean and got an extra line for me. And that's kind of how I got into grad school for math. So I got my PhD in applied math, and then I went to Weill Cornell in 2009 for a postdoc.
And there I was working in MRI sort of reconstruction. And this was at a point in time where sort of people were getting really interested in mapping brain connections, using things like diffusion MRI to map the wiring of the brain or the white matter connections between different gray matter regions, and also using fMRI or functional MRI to look at brain activity and how that sort of evolves over time. And so then I got really interested in using these methods to try to figure out what's going on in the brain because to me, the brain is one of the most interesting things. One could argue it's the most complex object in the known universe. It has 86 billion neurons and 100 trillion connections, and we are just scratching the surface on understanding how it works.
I vividly recall having a conversation with a neurologist at Weill Cornell in the first six months of my postdoc at Weill Cornell Medicine. And we were mentioning something about the connections in the brain and how we were imaging different parts of the brain and trying to figure out what parts of the brain map to what functions and how different diseases like multiple sclerosis or traumatic brain injury or stroke can disrupt these connections and then cause an impairment in the person. And I sort of said, "Oh yeah, but we probably already know this." And the neurologist looked at me and said, "No, we don't know this actually. This is something that we are just scratching the surface of. We don't know." And it was just amazing to me. Two things. One was it made me very inspired to keep working in this area. The second was I don't ever want to have a neurological problem.

Samuel Arbesman:
Because there's like so little known. Yeah.

Amy Kuceyeski:
Yeah, so little is known about how to sort of prognose or actually treat some of these diseases; traumatic brain injury doesn't really have any kind of treatments. It's hard to predict if someone's going to come out of a coma or not after severe brain injury. The course of different diseases is really difficult to predict, especially things like MS. The idea that I was going to be making an impact on clinical outcomes was sort of mitigated by that conversation because I saw how far we were from understanding how the brain actually works and using neuroimaging to do predictions of outcomes or to tailor treatments. But I think we've definitely gotten closer over the last few years.

Samuel Arbesman:
When you were talking about this first taste of realizing there was this frontier. I mean, and presumably in grad school, like you were working at the frontier of, but I feel like there's kind of a difference between, okay, here's a specific problem. Maybe it's a small problem or kind of something very circumscribed versus realizing, oh my God, I'm at the edge of this wild west. Was that sort of the realization? Like realizing, oh my god, there's just so much we don't know.

Amy Kuceyeski:
Yeah, I don't know if you've ever seen this XKCD comic falling off the math cliff they call it. So you have this math cliff, you're learning algebra and you're learning calculus and everything makes sense and you're learning differential equations and you're like, wow, it can capture all these systems. And then all of a sudden you get to the point and you're like, oh, we don't know any, like there's a lot of things we don't know yet. And I think that that's probably true for a lot of different fields, where you start in the field and you know so little that you think everyone else knows everything.
And then you get to the point where at that sort of cusp of the understanding and then you realize what's left and it's this chasm ahead of you, which is so exciting, right? It's like in some ways it can feel a little disconcerting as a student because that's where you start to have to be creative about things and to think of new ideas or new approaches to analyze a problem. And that can be a little bit disconcerting, but it definitely is what excites me as a researcher.

Samuel Arbesman:
I guess related to education, is there a sense that because when people are learning various aspects of the sciences that they are not exposed to that frontier and kind of that lack of understanding, that it just makes it feel less exciting because there is a wild difference between doing a problem set and actually trying to do something that no one knows the answer and you're just trying to, you're stumbling in the dark and there is this sense of epistemic ignorance and it can be very invigorating.

Amy Kuceyeski:
Right.

Samuel Arbesman:
And I feel like we don't teach that. And not that, I think we teach the idea, like okay, science is constantly moving forward, but we don't provide that sense of, okay, here's all the stuff no one knows and is that,-

Amy Kuceyeski:
Yeah. I think we should do a better job of that in my opinion. I take on a lot of undergraduate students that do research projects in my lab, and that's definitely something that people need to learn, is that the project I'm assigning to them, nobody has done it before. So the project that we're going to work on together, they come back with me with some result and they're like, "Oh, does this look bad or good?" And I'm like, "I don't know. I actually don't know. This is the first time that anybody's ever done this, so we could maybe try to improve it or find a benchmark or something like that." But I don't actually know if it's good or bad. And I don't actually always have a hypothesis about everything, especially when you're working with such large dimensional data sets.

Samuel Arbesman:
Do these undergrads recoil the first time they're kind of confronted with like, oh my God, you don't even know this. This isn't busy work. This is something real. Are they excited or is it kind of a mixture?

Amy Kuceyeski:
No, I think it's a mixture. I think some people are excited by it, some people are surprised. But I would say I think it definitely will motivate somebody to really work on a project, if they can see that nobody's actually done this before and this is the first time we're ever seeing a result like this. And I think it can be exciting for the student and it definitely gives them a different flavor of education than it would if you're doing a problem set in a class. I'm actually partnering with Women in Data Science, which is this worldwide collaborative effort to sort of bring together women who are interested in learning about data science and analyzing big data sets. And so we've been putting together a few different datathon challenges that will happen in the fall and in January 2025, where we're working on women's brain health problems and we're actually putting together challenges for different individuals that want to participate in these datathons that are mostly centered around neuroimaging data and something to do with women's brain health.

Samuel Arbesman:
And related to the unknown. Okay. So can you tell me a little bit more about, some of the projects you're working on have just a lot of fun titles, like there's a crack encoder and there's NeuroGen. Tell me about the different, like the venturing into the unknown that you're currently working on.

Amy Kuceyeski:
Well, I will say some of the most, I think interesting work perhaps for your audience is looking at the relationship between biological neural networks, so the human brain and artificial neural networks, so things like large language models or image segmentation models, image identification models, things like that, computer visiony sort of things. People probably know this, but artificial neural networks are actually modeled after biological neurons. So the way that our brains sort of learn by making connections between different concepts, that was actually the inspiration for neural networks and what is used in a lot of our large language models and ChatGPT. And these are all based around deep learning sort of structures, and that's based around the biological brain because from what we know of the entire universe, the human brain is the best learning object. So it's the object that learns the best and actually specifically children's brains which have different sort of biology than an adult brain, but they have this ability to pick up concepts and do this very quick generalization to the rest of the world, and they learn a single concept and kind of apply it in other places.
Looking at how biological networks learn can also help us improve how artificial neural networks learn. And so a lot of what I'm doing is looking at using different artificial neural networks to model biological networks. And so I think what you're referring to is this project we call NeuroGen, where we have an encoding model of human vision. So we have this model that is in a computer and you give it an image and it predicts how the brain is going to respond to it. So it was trained on this beautiful dataset called the Natural Scenes Dataset from collaborators of ours, Kendrick Kay and Tom Naselaris, and there they had eight people come in over a year's time and they saw 20 to 30,000 images apiece. And so the brains were recorded. fMRI was taken, a picture was taken of the brain activation while they saw this image. And so using that data, we can actually train a brain in a box, if you will, that you can show an image to and it will predict how the brain is going to respond, how it's going to activate.
So we have this encoding model, and what we wanted to do was couple it with a generative model. So I'm sure people have seen these generative models that create images of the pope in a puffer jacket, let's say. And so they have these very nice generative models that can create these really nice images that look very naturalistic. We put those two things together and then that allowed us to actually provide the tool a pattern of brain activity that we want to see, and it would create an image that was predicted by that brain in a box to achieve that pattern of brain activity.
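The encoder-plus-generator loop described here can be sketched in toy Python. Everything below is a hypothetical stand-in: a fixed linear map plays the "brain in a box" encoder and a second linear map plays the generator, whereas NeuroGen pairs a deep encoding model with BigGAN-deep; only the search-for-the-most-activating-image logic carries over.

```python
import random

random.seed(0)

IMG_DIM = 16    # toy "image" is a flat 16-dim vector
LATENT_DIM = 4  # toy generator latent code

# Stand-in encoder: predicts activation of one target brain region
# from an image. (NeuroGen's real encoder is a deep network trained
# on the Natural Scenes Dataset; this is just a fixed linear map.)
enc_weights = [random.gauss(0, 1) for _ in range(IMG_DIM)]

def encode(image):
    """Predicted activation of the target region for this image."""
    return sum(w * x for w, x in zip(enc_weights, image))

# Stand-in generator: maps a latent code to an "image".
gen_weights = [[random.gauss(0, 1) for _ in range(LATENT_DIM)]
               for _ in range(IMG_DIM)]

def generate(latent):
    return [sum(w * z for w, z in zip(row, latent)) for row in gen_weights]

def neurogen_search(n_candidates=500):
    """Sample latent codes and keep the image whose *predicted*
    activation of the target region is highest, which is the core
    NeuroGen idea: optimize images against the encoder."""
    best_latent, best_score = None, float("-inf")
    for _ in range(n_candidates):
        z = [random.gauss(0, 1) for _ in range(LATENT_DIM)]
        score = encode(generate(z))
        if score > best_score:
            best_latent, best_score = z, score
    return best_latent, best_score

best_z, best_activation = neurogen_search()
print(best_activation)
```

In the real system the random search over latents would typically be replaced by gradient ascent through the differentiable generator, and the target would be a full pattern of regional activations rather than one scalar.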
So let's say I wanted to maximally activate the fusiform face area, so we know that responds to human faces. So I would give that as the target. And what the generator would do is create all of these images and try to figure out which one gave the highest predicted value for the fusiform face area. And so that was actually one way that we validated it, was using these areas that we know respond in the brain to specific things like faces or places or words. And we had it generate these images and they aligned with what we expected. So if we put in the fusiform face area, we would get back human faces or, surprisingly, sometimes dog faces. So I don't know if you want to talk about that, but.

Samuel Arbesman:
Yes. Yeah. Yeah, talk more about that.

Amy Kuceyeski:
Yeah. So in the human brain there are a series of different areas that light up for faces. So there are perhaps five areas in your brain that fire for faces, and different aspects of faces will fire these at different rates. What we saw when we were using NeuroGen to create these images that had sort of top stimulating predictions for these different five face areas was that across the eight people in the dataset and across the five regions in each person's brain, there were a varying number of human faces, which was what we expected. And also dog faces. So some regions in certain people's brains had more dogs in their top stimulating images and some people had more people in their top stimulating images. And at first we thought it was a byproduct of the generator. So we were using something called BigGAN-deep, and BigGAN-deep can generate images from 1,000 different categories, but out of those 1,000 categories, 100 of them are dog breeds. So it is very dog-heavy because that's what the internet contains, right?

Samuel Arbesman:
Right. Yeah. The internet has a lot of animals. Yeah. A lot of pets.

Amy Kuceyeski:
Perfect. So we thought, okay, it's probably just because our generators overrepresented for dogs, so somehow those dogs are just leaking in. But why don't we go back to the data and actually analyze the data, the brain data, and look at how a person's brain was responding to humans and dogs in the Natural Scenes Dataset, which is what they used for the fMRI experiments. And then for each region in each person's brain, we got sort of a preference level of how that brain region responded to dog faces and human faces just directly from the fMRI data.
And we correlated that with the number of human and dog faces we found in the top stimulating images of NeuroGen, and they were actually correlated. So it seems like there was some real underlying preference of the person's brain regions for dog versus human faces that was being picked up by NeuroGen that wouldn't have otherwise been picked up, because if you looked at the top stimulating images just from the data, it's very noisy. And so you generally end up with just human faces because the noise, the SNR of fMRI, is not that great for doing test-retest reliability of image responses.
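The check described here, comparing a preference measured from fMRI against counts in NeuroGen's top images, comes down to a correlation. A minimal sketch, with made-up numbers: the dog-preference values and dog counts below are invented for illustration and are not from the study.

```python
import math

def pearson(xs, ys):
    """Plain Pearson correlation coefficient, no libraries."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical numbers: for each (subject, face region) pair, the
# region's dog-vs-human preference estimated from fMRI responses,
# and the count of dog faces among that region's top-10 NeuroGen
# images. Both lists are illustrative, not real data.
fmri_dog_preference = [0.1, 0.3, 0.8, 0.2, 0.6, 0.4]
neurogen_dog_count = [1, 2, 9, 1, 6, 3]

r = pearson(fmri_dog_preference, neurogen_dog_count)
print(round(r, 3))
```

A high positive r here is what "they were actually correlated" means: regions whose measured responses favored dogs also got more dog faces in their generated top images.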

Samuel Arbesman:
Well one, do you know anything about the eight subjects, like are some of them dog owners or dog people? Is there something,-

Amy Kuceyeski:
I really wish I did because there was one person actually who was very biased towards dogs, and as sort of neuroscientists know, attention has a lot to do with the magnitude of your brain response. So if you're really attending to something and it's really exciting you or really scaring you or it's got a high salience for you personally, then you do have a bigger brain response. So it could have been somebody who owns dogs or who's afraid of dogs or likes to look at pictures of dogs. I wish I could go back, but I can't.

Samuel Arbesman:
And humans domesticated dogs. But I wonder if there's almost this co-evolution kind of thing where as a result humans now view them as human-like in some way. And so it actually is being caught up in this perception. Is there something there, like to that theory?

Amy Kuceyeski:
I mean, I would say that it is. We are a product of our environment and our evolution. Right. The reason we have lots of brain area that is dedicated to human faces is because we needed to be able to understand human faces to exist in society. The reason we have brain regions that fire in response to words is because that is a very important part of our culture. There are these specialized parts of our brain that fire for different things that are more salient than others. You don't have a part of your brain that, say, fires for an iPod. These things that are sort of important in evolutionary terms and important in functioning society have these dedicated regions in our brain. And you can imagine that because we co-evolved with dogs for so long, it's also sort of telling that 100 out of the 1,000 topics that we can also generate with BigGAN-deep were dogs. Right. So somehow, it's taking up a lot of real estate in our brain.

Samuel Arbesman:
Were there any regions of the brain that you were trying to stimulate with images that generated, stimulated the system in some way but actually looked very, very different than what you expected? And there was the dog example, but were there other things, I don't know, some weird swirling thing that somehow is correlated with I don't know, certain types of words or whatever it is and you're still not entirely sure, like there's just some other thing going on or things you don't fully understand?

Amy Kuceyeski:
Yeah, so the first step we wanted to do was sort of validate the framework. So in the validation we sort of had to pick these a priori regions we knew would fire for certain topics. But I think one of the things we want to do next is look at how an undefined blob that we don't really know has a specific function, could potentially fire for that. Doing this, there's actually work from someone at Cold Spring Harbor and he actually does these experiments with monkeys where he does single neural recordings and he optimizes naturalistic images that are maximally firing these specific neurons. And then he also has these sort of synthetically generated images that spot fire specific neurons.
And on Twitter he actually once a week or something, he'll have a different neuron and they always have some really interesting visual makeup, or geometric shape, or a checkerboard, or an airplane flying in the air, or a bird, or something like that. So it seems like it's definitely doable with this type of approach where you have a generative algorithm coupled with some either recording directly in the animal or the human or some encoding model of human brain.

Samuel Arbesman:
I think this is kind of how we first connected, was around the ability to generate images or videos that would almost generate physiological responses. If you look at some image, and I don't know, it's the equivalent of drinking a cup of coffee or helps alleviate pain or whatever. Are these the kind of things that you think increasingly, as these generative models become more sophisticated, are actually possible?

Amy Kuceyeski:
So I hope it's possible because it seems like it could have potential for some kind of therapeutics, either depression, anxiety, or even Parkinson's disease, where specific neurotransmitters are either underproduced or the receptors are being sort of damaged, and you could actually perhaps target specific neurotransmitter releases. We know that if you look at a picture of your own child, you have a specific pattern of neurotransmitter release, more than for another child. Or if you look at a beautiful picture of a sunset and you appreciate that beauty, you might have a release of dopamine, but it's specific to the individual as well. And so one of the focuses of our work was identifying how a particular person's brain responds to a specific set of images.
So we had this way of taking a bigger model that we train on this massive data set and a few examples from a novel individual to create a personalized model for that person. And so that was kind of the subsequent paper after NeuroGen, sort of looking at how we could actually capture an individual person's responses to an image using this sort of bigger framework and fine-tuning it with a few images from that person. Like we found, some people are going to be more stimulated by perhaps dog faces and other people may be stimulated more by human faces. So we want to be able to isolate that and to be able to capture that with a small amount of data.

Samuel Arbesman:
And actually related to the small amount of data for these personalized models, how much information is needed to kind of capture a specific individual? Now of course I'm caveating, like this is just for certain types of visual responses.

Amy Kuceyeski:
Right.

Samuel Arbesman:
But I mean, are people, at least when it comes to this kind of thing, like far less complex than we might realize in terms of how you can personalize the model? Or are they still pretty complex?

Amy Kuceyeski:
Right. So if you look at how well you can predict another person, like a person A from person B's model, it does okay, but it's not perfect. There is a lot of sort of individualness in the responses of the brain to the same stimuli across individuals. We did an analysis actually to see how many samples we needed from an individual to be able to get a pretty good accuracy in predicting their brain responses from a larger set of data. And we found that it's really only a couple hundred responses that you need to look at before the accuracy of the model maxes out, accuracy that you would otherwise not be able to get. You can't collect 30 or 40 hours of fMRI in each subject going forward; it's sort of unfeasible. So using that information and sort of fine-tuning a model for a specific individual looks to be pretty robust in terms of capturing both inter-individual variability and overall accuracy.

Samuel Arbesman:
Is the resulting model pretty compact for an individual person, like once you kind of lay it over onto the generic model?

Amy Kuceyeski:
Yeah. So we did the lowest order thing that we could have done, which is we had these larger dimension encoding models for each of the eight individuals in this densely sampled dataset. And really all we did was just fit an eight-parameter linear regression to do the prediction.
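That eight-parameter regression can be sketched end to end. Everything below is synthetic: each "subject encoder" prediction is just a random number, and the new person's responses are generated as a known mixture of the eight encoders plus noise, so the fit can be checked against the mixture it should recover. The real work fits measured fMRI responses, not simulated ones.

```python
import random

random.seed(1)

N_SUBJECTS = 8   # one encoding model per subject in the dense dataset
N_SAMPLES = 200  # "a couple hundred" responses from the new person

# Hypothetical stand-in: each existing subject's encoder gives a
# predicted response for each image; here those predictions are
# random numbers rather than outputs of real encoding models.
base_predictions = [[random.gauss(0, 1) for _ in range(N_SUBJECTS)]
                    for _ in range(N_SAMPLES)]

# Pretend the new person's true response is a fixed mixture of the
# eight encoders plus a little noise; the regression should recover
# this mixture.
true_w = [0.5, 0.1, 0.0, 0.2, 0.0, 0.1, 0.05, 0.05]
targets = [sum(w * p for w, p in zip(true_w, preds)) + random.gauss(0, 0.01)
           for preds in base_predictions]

def fit_mixture(preds, ys, lr=0.05, steps=1000):
    """Fit the eight mixture weights by gradient descent on mean
    squared error (a stand-in for an ordinary least-squares fit)."""
    w = [0.0] * N_SUBJECTS
    n = len(ys)
    for _ in range(steps):
        grad = [0.0] * N_SUBJECTS
        for row, y in zip(preds, ys):
            err = sum(wi * xi for wi, xi in zip(w, row)) - y
            for j in range(N_SUBJECTS):
                grad[j] += 2.0 * err * row[j] / n
        w = [wi - lr * g for wi, g in zip(w, grad)]
    return w

w_hat = fit_mixture(base_predictions, targets)
print([round(wi, 2) for wi in w_hat])
```

The point of the sketch is the scale: personalizing the big model here means estimating only eight numbers, which is why a couple hundred samples is enough.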

Samuel Arbesman:
Okay.

Amy Kuceyeski:
So it was very straightforward. We're working on something now that might be a little bit more complicated and perhaps capture more of the individual variability. We showed that you could use it to sort of reproduce the individual differences in the prospective cohort of individuals.

Samuel Arbesman:
And you mentioned that artificial neural networks were based, by analogy, on biological networks, and the idea is trying to kind of learn from them. To what extent do you think that these encoder models that you're developing actually mimic either the structure of the connections within the brain, or maybe at a processing level they're kind of doing similar kinds of things? Do you feel like structurally versus functionally it's similar? Is that even the right question to ask about these kinds of things?

Amy Kuceyeski:
No, it is. So this is a question that a lot of people are asking and it's an active area of research, especially in computer and biological vision. And so there was actually this really cool work by Jim DiCarlo's lab where they created a website called Brain-Score (brain-score.org), and you could actually upload your artificial neural network that was doing image prediction. So what is in an image? Is this a cat? Is this a dog? It would use the internal layers of your artificial neural network and correlate it with what happens in the flow of information in the biological brain of monkeys. So they had these images, so your biological brain is hierarchical, so there's this processing that happens. Low level features are processed first, so things like texture and color. And then as it goes up in the hierarchy, you start to pull out objects from it.
And so we have that recording from each of these layers from a biological brain, and you can actually compare it to how the artificial neural network fires through its layers as well. And you can do that and get a score of how similar you are in the way that a monkey brain responds to an image versus your artificial neural network. And you can actually do this comparison. And people have done this a lot, and there's actually been some really cool work looking at how robust a network is. So there are things called adversarial examples, where you perhaps have a picture of a duck and you add some very highly structured noise and now it's classified by your artificial neural network as a horse, let's say, even though to the eye, the human eye, it still looks like a duck. It's sort of tricking your artificial neural network.
So there's some evidence, this is from Dan Yamins's lab, looking at the structure of the internal workings of the artificial neural network. And it showed that the more those internal workings of the artificial neural network matched biological networks, the more robust the artificial neural network was to these adversarial examples. So it seems like taking the structure from the biological vision network and imposing it on the artificial neural network was actually improving its robustness to these adversarial examples.
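The duck-to-horse trick is easy to demonstrate on a toy model. The sketch below uses a single linear classifier rather than a deep network, so the "gradient" is just the weight vector; the point is only that a perturbation bounded by 0.1 per pixel can flip the label of a confidently classified input while barely changing the image.

```python
import random

random.seed(2)

DIM = 64  # toy flattened "image"

# Toy linear classifier: score > 0 reads as "duck", score < 0 as
# "horse". (A stand-in for a deep network; only the attack idea
# carries over.)
weights = [random.gauss(0, 1) for _ in range(DIM)]

def score(image):
    return sum(w * x for w, x in zip(weights, image))

def adversarial_perturb(image, eps):
    """FGSM-style attack: move every pixel a small step eps in the
    direction that lowers the 'duck' score. For a linear model that
    direction is given by the signs of the weights."""
    return [x - eps * (1.0 if w > 0 else -1.0)
            for x, w in zip(image, weights)]

# An image the classifier confidently calls a duck.
duck = [0.05 * (1.0 if w > 0 else -1.0) for w in weights]
horse_looking_duck = adversarial_perturb(duck, eps=0.1)

print(score(duck) > 0, score(horse_looking_duck) < 0)  # True True
```

Against a real deep network the attack computes the gradient of the loss with respect to the input by backpropagation, but the structure is the same: tiny, highly structured noise aligned with the model's sensitivities.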

Samuel Arbesman:
I feel like I saw some research where they were trying to find adversarial examples for humans, like for human vision, and it was one of these things where you kind of look at it and you're like, okay, I can kind of see it both ways. I'm not quite sure what this image is. Would these kinds of systems, like the more robust systems still be, I guess susceptible to kind of adversarial examples, the kinds of things that we would be susceptible to as well then?

Amy Kuceyeski:
Yes. And this is a really interesting question, and there's a bunch of studies that I'm thinking of, a couple of studies from I think Jim DiCarlo's lab where they're actually, they have single neural recording in monkeys, and then they created these examples where they added some kind of highly structured noise and created an adversarial example to suppress the neuron firing. They started with the original image that maximally fired this neuron, and then they tweak it slightly and then they get suppression of the neural response. When you look at them side by side, you can definitely see that there are differences, but the semantic content is the same. So let's say it was a picture of a flower, you could still see the flower in it, but it might be a little bit of a swirl in the middle or something like that. And so they're actually working on these biological adversarial examples, at least for single neuron recording in monkeys.
And I did see this one paper, I think it needs to be replicated because it was kind of a small sample, but basically they had an image of a panda and they added adversarial noise to make it such that it was classified as a cat or a dog. And then they showed the image to people in a very short presentation, so they couldn't really get time to look at it. I think it might've even been perhaps subliminal. And then they were asked whether it was a dog or a cat, and they had better than random chance at picking the one where the adversarial example was trained for that. So I'm not sure. I think we need a bit more replication and rigor on that one, but that suggests that perhaps humans are susceptible to adversarial noise as well.

Samuel Arbesman:
Certain artificial networks have been trained to recognize and actually generate optical illusions, which, I mean, optical illusions are certainly different than adversarial examples, but they are kind of an instance maybe, I'm not sure if the right term is, oh, like the brain kind of misfiring, but something going wonky in some weird, very non-technical way. How much research is around optical illusions?

Amy Kuceyeski:
There actually surprisingly is a lot.

Samuel Arbesman:
That's amazing.

Amy Kuceyeski:
Yeah. So there's this famous sort of paradigm where you have a cube and it's either facing up or facing down, and they would put the person in the scanner, have them look at this cube that's either facing up or facing down or both, and the person would respond in a continuous way, whether they saw it going up or down, and you can actually decode their responses based on their brain responses while they're looking at it. So it does seem like it's actually changing sort of the neural patterns of activity and response to that same stimuli, the way that we perceive it.

Samuel Arbesman:
A lot of this kind of stuff, it feels very science fictiony, and you and I have spoken about science fiction and kind of its influences. And so actually, I mean, some of the stuff, when I think about looking at some image and kind of changing some physiological impact, like Neal Stephenson's Snow Crash is one of these examples, and there's many other stories around these kinds of things or decoding the brain. Within this field, is there a huge amount of science fiction as inspiration? Is it there but it's kind of not really talked about so much? How do you kind of navigate that,-

Amy Kuceyeski:
I'd say it's more the latter. Yeah.

Samuel Arbesman:
Okay.

Amy Kuceyeski:
Yeah. I think scientists usually try to be sort of buttoned up because we have this aversion to our work not being portrayed with the right kind of fidelity in the media. Right. So there's this common idea among scientists that when you write a paper and you want it to get out into the public, there will be certain things pulled out to get people to read the popular article, which is fine, because I think we want people excited about science, but we also have a general aversion to these less scientific and more, I guess I could say, exciting ways of presenting our research. And I think as a scientist, using examples from science fiction, which are mostly dystopian, they're mostly dystopian, it actually gets us to think about how the things we're doing could potentially be misused.
There's been a lot of excitement around decoding brain activity. So we also have a paper doing decoding of brain activity. And when I say decoding, I mean, so when I was saying encoding, that's when you present the artificial network an image and it predicts how the brain is going to respond. Decoding is the opposite, where you give the model a pattern of brain activity and it reconstructs the stimulus the person was looking at. And this is actually a really active area of research now, because we have these generative models that allow us to create candidate images for the decoding, and they're looking better and better. I would say every month or so I see at least a few papers coming out looking at decoding of images as well as decoding of video. So there's something called MinD-Video. You can decode video a person's watching, and also text a person is listening to, and not just text a person is listening to, but also things a person is thinking about.
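[Editor's note: the encoding/decoding distinction described here can be sketched in a few lines. This toy uses random linear stand-ins for the stimulus features and the brain's response, where real work uses deep-network features, fMRI data, and generative models; decoding here is simple pattern matching over a candidate set rather than image reconstruction.]

```python
import numpy as np

rng = np.random.default_rng(1)
n_stim, n_feat, n_vox = 50, 20, 100

# Hypothetical stimulus features (e.g., network activations for 50 images)
# and a "true" feature-to-voxel mapping we pretend the brain implements.
X = rng.normal(size=(n_stim, n_feat))
W_true = rng.normal(size=(n_feat, n_vox))
Y = X @ W_true + 0.1 * rng.normal(size=(n_stim, n_vox))  # noisy "fMRI" responses

# Encoding: fit a ridge regression from stimulus features to voxel responses.
lam = 1.0
W_hat = np.linalg.solve(X.T @ X + lam * np.eye(n_feat), X.T @ Y)

# Decoding: given a new brain response, pick the candidate stimulus whose
# *predicted* response is closest to what was actually observed.
x_new = rng.normal(size=n_feat)
y_new = x_new @ W_true + 0.1 * rng.normal(size=n_vox)
candidates = np.vstack([X[:10], x_new])   # 11 candidate stimuli, true one last
preds = candidates @ W_hat
best = np.argmin(((preds - y_new) ** 2).sum(axis=1))
```

The generative-model versions mentioned in the conversation replace the fixed candidate set with a model that can synthesize new candidate images, which is what makes the reconstructions keep improving.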

Samuel Arbesman:
Wow.

Amy Kuceyeski:
So you can also decode imagined stimuli. So a person can be shown an image, then the stimulus is turned off and they're told to imagine the image, and with pretty good accuracy you can decode what image the person was thinking about. This is from Alexander Huth's lab, where they played people The Moth podcast for hours in the scanner, looked at their brain responses, and then they were able to decode which story they were listening to in a held-out set. But also the person imagined the story, and they were able to reconstruct the text with a reasonable amount of accuracy. So there are these steps toward being able to reconstruct what a person is thinking from looking at their brain images. We're far from being able to do it in an accurate way without a ton of data from the single subject themselves. Right. So I wouldn't say it's anywhere near being dystopian, but the potential is there. Right. So science fiction can either be inspirational or it can allow us to put guardrails on the research that we do.

Samuel Arbesman:
I appreciate that. Yeah, it's being used in a way to make people aware of the implications, the societal, ethical, legal implications. Yeah.

Amy Kuceyeski:
Yeah. So when we were doing NeuroGen, for example, it brought to mind, it's not exactly the same, but A Clockwork Orange, where they're exposing this person to images of violence and then making them sick. So they had this sort of aversion therapy, but they're showing these images to elicit a specific response. And I was thinking about that at some point when we were making NeuroGen as well.

Samuel Arbesman:
And so this broad field, I mean, it's basic research, but it also feels like, to a certain degree, the actual practical implications are getting closer and closer. How do you think about the downstream implications, whether it's for healthcare, for digital understanding, for various commercial applications? What is the sense of where, long term, these kinds of things can be used, beyond just being amazing and fascinating and awesome?

Amy Kuceyeski:
Yeah. Right. Understanding the brain itself, but I think there are a lot of different applications, and one of the first and easiest ones is brain-computer interfaces. So there's a lot of excitement around using intracranial recordings to decode the thoughts of a person who is paralyzed, who perhaps has ALS and is losing the function of speech, or who has locked-in syndrome. And these sorts of decoding strategies can be used to allow the person to communicate with their caregivers, with their families, to be able to control robotic arms to perhaps bring a cup of water to their lips. And so it's giving these people the opportunity to feel like they can interact with their environment even when they're paralyzed or locked in or can't otherwise do that. So I think that's the first application I can think of. There are other applications as well.
One of the ideas we had about NeuroGen specifically was that if you can get the brain to respond with a specific pattern of activity, then maybe you can cause longer-term changes to the functional connections in the brain. So let's say a person has anxiety, and they have too much connection between their amygdala and their hippocampus. We want to break that, so maybe we can make an image that will stimulate the amygdala and suppress the hippocampus to reduce the amount of functional connectivity between those regions, which could potentially have a therapeutic effect. So that's the idea: brain-computer interfaces, noninvasive brain stimulation.
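[Editor's note: the NeuroGen-style objective described here can be caricatured as activation maximization, optimizing the image itself so a model of one region's response goes up while another's goes down. The linear "readouts" below are made-up stand-ins for real encoding models, which are deep networks; with linear readouts the gradient is constant, so the optimization degenerates to repeated clipped steps.]

```python
import numpy as np

rng = np.random.default_rng(3)
n_pix = 64

# Hypothetical linear "encoding models" for two regions: each maps an
# image to a single predicted activation value.
w_amygdala = rng.normal(size=n_pix)
w_hippocampus = rng.normal(size=n_pix)

def objective(img):
    # Drive the target region up while suppressing the other one.
    return w_amygdala @ img - w_hippocampus @ img

# Gradient ascent on the image itself, keeping pixels in a valid range.
img = np.full(n_pix, 0.5)
start = objective(img)
grad = w_amygdala - w_hippocampus   # constant gradient for linear readouts
for _ in range(100):
    img = np.clip(img + 0.01 * grad, 0.0, 1.0)
```

With real encoding models, a generative image prior is typically added so the optimized stimulus stays a natural-looking image rather than noise.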
Invasive brain stimulation is also something that's happening more and more these days, with electrodes implanted for treatment of OCD, depression, traumatic brain injury. That is interesting because you can actually use decoding strategies to look at the brain state, so you can record how the brain is activating and how that relates to a person's symptoms. And then if you see the brain going into the direction or the state that is bad for that person, you can deliver a shock to get it out of that state. And so there's this feedback loop where you can decode the brain state and stimulate it, or push it into a different dynamic range and away from the negative state the person is feeling, perhaps in depression.
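[Editor's note: the closed-loop decode-and-stimulate idea described here can be sketched as a simple feedback simulation. Everything below is a cartoon, a scalar "symptom state" with made-up drift, readout noise, and pulse size, not any actual clinical pipeline.]

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy closed loop: a scalar "symptom state" drifts toward a bad attractor;
# when the decoded (noisily observed) state crosses a threshold, deliver
# a corrective pulse that pushes it back out.
state = 0.0
bad_threshold = 1.0
drift, pulse = 0.08, -0.9
history, stim_log = [], []

for t in range(200):
    state += drift + 0.05 * rng.normal()    # drift toward the bad state
    decoded = state + 0.02 * rng.normal()   # noisy readout of brain state
    if decoded > bad_threshold:             # decoder detects the bad state
        state += pulse                      # stimulate to escape it
        stim_log.append(t)
    history.append(state)
```

The interesting part in practice is the decoder: the better it identifies the negative state from recorded activity, the less often stimulation fires when it isn't needed.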

Samuel Arbesman:
Going back to the beginning, where we were talking about how you first came to realize the extent to which we know so little about these neurodegenerative diseases, do you view this body of research and these directions as providing some window into helping treat them, or providing therapies for these kinds of things?

Amy Kuceyeski:
Yeah, so that's ultimately what I would like to do, and that was the idea behind NeuroGen, besides it being kind of cool to use the idea of stimulating the brain via images. So one of the ideas was to perhaps use this in a longer-term setting where you could potentially change connections in the brain. The other idea we've been having recently with a few collaborators of mine: right now, when a person goes into an fMRI scanner, oftentimes they're told to just stare at a crosshair and not think about anything in particular. If you're told that for 15 minutes, are you going to not think about anything in particular? Probably not. You're probably going to think about maybe the fight you had with your significant other that morning, or what you're going to have for lunch, or whatever it is. And so there's this wide space of what a person is actually doing when they're in a resting-state scan.
And so the idea that people have had recently, Emily Finn and others, and Anish Sagar and I, is to use naturalistic stimuli like images or video so that you can get a better signal-to-noise ratio, SNR, of the brain activity while a person's in the scanner. And it turns out the way our brains respond to these naturalistic stimuli, perhaps movies or video or audio, has a lot to do with our personality characteristics, our cognitive scores, things like that. And so, yeah, we've been thinking about ways we could integrate our NeuroGen framework into creating movies and video that would give us this sort of maximal explained variance on some behavioral outcome that perhaps we want to do brain-behavior mapping for.

Samuel Arbesman:
And lastly, what is the research funding landscape? Are funders more interested in some of these science fictiony things, in basic research, or do they want very clear healthcare and biomedical implications? Because obviously research funding dictates the kind of research that actually gets done, for better or for worse. What does that look like?

Amy Kuceyeski:
Yeah, so I would say most of my funding comes from the NIH, which is more clinically oriented. There's this joke that we always make, which is that you have to have half of the work done that you're proposing to do, so it's a low-risk sort of thing. The NIH does have mechanisms for high-risk, high-reward applications, which is where I sent some ideas I had about NeuroGen perhaps being a noninvasive brain stimulator. And it seems like when it got discussed in the study section, half were loving it and half hated it and thought it was stupid, so it ended up not getting funded. But there are these supposed mechanisms at the NIH that are supposed to be for high-risk, high-reward sorts of things, though they do have to have some kind of downstream clinical application.
If you just want to go in and poke around and see what's going on in the brain, you might be better off asking for National Science Foundation, NSF, grants, because the NIH, like I said, leans more toward clinical applications. And then of course, foundation funding can be something people look into as well, and that could be better for high-risk sorts of projects, especially if you can talk to somebody who's very excited about your research. But yeah, I would say it's a little bit tough. The NeuroGen project that I told you about was unfunded, so.

Samuel Arbesman:
Oh, wow. I didn't realize that.

Amy Kuceyeski:
It was, yeah. It was just an idea that we had that we wanted to carry out. And my department of radiology is really supportive of these different ideas that I have, and that's why I've been here for so long. They gave us funding to do some of the fMRI and MRI scanning work that we did.

Samuel Arbesman:
Hopefully there'll be greater appreciation for, yeah, things that are high variance. Especially since the kinds of research you want to be supported are the ones where people either think it's amazing or think it's terrible. As long as it clears a certain threshold, that's going to be the most interesting stuff in terms of the actual outcome. But yeah, here's hoping for more spaces to support high-risk research, and that might be a great place to end,-

Amy Kuceyeski:
I love that. Yes, please write your congressman.

Samuel Arbesman:
Yes, exactly. Well, thank you very much, Amy. This has been fantastic.

Amy Kuceyeski:
Yeah, it was a pleasure. Thank you.