Riskgaming

The Orthogonal Bet: Artificial Life and Robotic Evolution

Design by Chris Gates

Welcome to The Orthogonal Bet, an ongoing mini-series that explores the unconventional ideas and delightful patterns that shape our world. Hosted by Samuel Arbesman.

In this episode, Sam speaks with Tarin Ziyaee, a technologist and founder, about the world of artificial life. The field of artificial life explores ways to describe and encapsulate aspects of life within software and computer code. Tarin has extensive experience in machine learning and AI, having worked at Meta and Apple, and is currently building a company in the field of artificial life. This new company—which, full disclosure, Sam is also advising—aims to embody aspects of life within software to accelerate evolution and develop robust methods for controlling robotic behavior in the real world.

Sam wanted to speak with Tarin to discuss the nature of artificial life, its similarities and differences to more traditional artificial intelligence approaches, the idea of open-endedness, and more. They also had a chance to chat about tool usage and intelligence, large language models versus large action models, and even robots.

Produced by CRG Consulting

Music by George Ko & Suno


Transcript

This is a human-generated transcript, however, it has not been verified for accuracy.

Danny Crichton:

Hey, it's Danny Crichton. Here we take a break from our usual Riskgaming programming to bring you another episode from our ongoing miniseries, The Orthogonal Bet, hosted by Lux's scientist in residence, Samuel Arbesman. The Orthogonal Bet is an exploration of unconventional ideas and delightful patterns that shape our world. Take it away, Sam.

Samuel Arbesman:

Hello and welcome to The Orthogonal Bet. I'm your host, Samuel Arbesman. In this episode, I speak with Tarin Ziyaee, a technologist and founder, about the world of artificial life. The field of artificial life is involved in exploring, among other things, ways to describe and encapsulate aspects of life within software and computer code. Tarin has a ton of experience in machine learning and AI, has worked at Meta and Apple, and is currently building a company in the field of artificial life.

This new company, which full disclosure I'm also advising, is trying to embody aspects of life within software in order to speed-run evolution and develop ways to robustly control robotic behavior in the real world. I wanted to speak with Tarin to discuss the nature of artificial life, its similarities and differences to more traditional artificial intelligence approaches, the idea of open-endedness, and more. Tarin and I also had a chance to chat about tool usage and intelligence, large language models versus large action models, and even robots. Let's jump in. Tarin, great to chat and welcome to The Orthogonal Bet.

Tarin Ziyaee:

Thanks for having me.

Samuel Arbesman:

This is awesome. Let's start at a very high level, because we're going to be discussing the field of artificial life and various things like that. But first of all, what is artificial life? What's the deal?

Tarin Ziyaee:

Yeah, yeah, for sure. The people who work on it will tell you it's a very broad field. It's perhaps broader than even AI in a sense, and I suppose that can be good or bad. It covers so many things: do we replicate biology? Do we replicate ecosystems and things of that nature? There's a community there that also deals with a lot of the artistic aspects of cellular automata and things like that, so it's a very big field in and of itself. It started back in the late 20th century and it's sort of been around. I like to think of A-Life as sort of the grander cousin of artificial intelligence, which I guess is what we're going to talk about today as well. But yeah, that's what it is to me at least.
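[Reader's aside: the cellular automata Tarin mentions here are a classic A-Life touchstone. As a purely illustrative sketch, not anything from the conversation, here is a minimal Conway's Game of Life in a few lines, showing how a trivially simple rule yields persistent, moving structures like the "glider."]

```python
from collections import Counter

def life_step(live):
    """One step of Conway's Game of Life; `live` is a set of (x, y) cells."""
    # Count how many live neighbors every candidate cell has.
    counts = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A dead cell is born with exactly 3 live neighbors;
    # a live cell survives with 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "glider": five cells whose pattern travels diagonally forever,
# even though no rule mentions movement at all.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = life_step(state)
print(len(state))  # the glider still has exactly 5 live cells
```

Nothing in the update rule says "move," yet the glider translates across the grid indefinitely, which is the kind of emergence from simple local rules that draws both artists and researchers to the field.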

Samuel Arbesman:

Some people have talked about it as life as it could be. So we have life as we know it, okay, here's all the biology and the diversity of the world around us, but presumably that is also a subset of what could potentially be out there.

Tarin Ziyaee:

Yeah.

Samuel Arbesman:

A-Life is kind of helping to figure out, "What are the rules and regularities around biology?" Obviously evolutionary computation is certainly one of them. Do you think of artificial intelligence then almost as a subset of A-Life?

Tarin Ziyaee:

This sort of goes to my origin story of how I got into A-Life also. I like to typically think in terms of first principles, and for the longest time I've been working on artificial intelligence and robotics and things of that nature, and I started to realize that a lot of this stuff doesn't work in the real world. And specifically with robotics, there was this big gap between what life forms could do and what robotics can do today. And so I started scratching my head and I said, "Why is there such a big gap between what we see in natural systems versus what we see in artificial systems?" And that kind of took me down the rabbit hole toward artificial life, and I started asking questions: "Are we replicating the same things? Are we looking at intelligence in the way that we should be?"

One thing I think that came out of that line of thinking was that intelligence is sort of a natural byproduct of life, as far as we know at least. And that said to me, "Okay, let's go look at what life is doing." And there's a whole field called artificial life that tries to study that. And so for me at least, it's an interesting field unto itself, because it can tell us about the constraints that not only led to life, but also led to this phenomenon called intelligence that comes from life, as far as we know. It's probably a very convoluted way of talking about it, but that to me at least is why it was sort of interesting to get into.

Samuel Arbesman:

Are there other people now in the A-Life world who had a similar kind of trajectory, of trying to do some of these things in more traditional artificial intelligence, getting a little disillusioned by certain failures or shortcomings, and going, "Okay, now I need to think even more broadly around A-Life"?

Tarin Ziyaee:

Yeah, I hope that the field is sort of warming up to this. Not as many as I would like, Sam, to be very honest with you. Not as many as I would like. And it's really fascinating, because you look at some of these failure modes with current AI systems and robotics in the physical world, and when you squint at them you say, "Oh my God, this is something that living organisms are very, very good at doing. What is the nature of this gap?" So a very nice example is, if you look at something ambiguous in the real world, you might just actually squint at it, or you might move yourself to go sort of poke at it, or you might interact with it in a sense to get more information.

That's very different than this notion of, "Let me collect more passive data about this one object in the world and sort of train something to tell me that actually this is an orange, not a banana." And that comes from this notion that living organisms are very embedded and embodied in their environments, and the way to gather information in the environment is to move or maybe interact with it. Perhaps that's something that we know from life. In fact, there's a whole argument that the whole reason brains even emerged is because organisms have to move. There's a whole thing by Daniel Wolpert about being "born to move."

The entire reason you have a brain is to move and take control in the world in a sense. So that already puts you in a different frame of thinking, from the very passive pattern-recognition mode to one where we should start to look at these systems as being very embodied, being very embedded in their worlds, and being able to experiment with the world by moving in it and interacting with it. That's what drew me toward taking a second look at, "How is it that living systems are so robust and resilient in the real world in ways that we might recognize but maybe not appreciate fully?" And I don't know of many serious efforts out there that try to imbue our systems with that. That was the gateway drug into A-Life.

Samuel Arbesman:

Were there any specific touchstones of certain models or systems within artificial life that got you thinking more deeply about the A-Life world and that entire field? Were there specific things like, "Oh, these researchers really get it. They're the ones thinking about these ideas around embeddedness and embodiment in a way that the AI community is not"?

Tarin Ziyaee:

There's definitely a whole slew of people. So I'm a big fan of Michael Levin. He's done a lot of great work in this field. Ken Stanley, of course, who's done a lot of work in open-ended systems. Josh Bongard out of Vermont. Lana Sinapayen, Lisa Soros. There's a whole slew of different people in this field. I don't think it's being taken seriously enough, and that makes me want people to take it more seriously in a sense.

Samuel Arbesman:

Related to that, and in terms of the state of the field: you mentioned that AI at some level is not able to handle these kinds of things, or is bumping up against certain limitations. On the other hand, though, in AI it feels like so much is happening; there are huge amounts of resources and people pouring into this. And if you compare AI to a decade or two ago, certainly when it comes to neural network approaches, we needed advances in compute and data and algorithms to really make certain advances.

Tarin Ziyaee:

Yeah.

Samuel Arbesman:

Do you feel that the A-Life world is almost in the same kind of place that AI was a little while ago? Where, on the one hand, yeah, it's really hot and there are lots of things happening in AI, but A-Life is also really ripe for this kind of advance, and so it's ripe for this-

Tarin Ziyaee:

Hundred percent, hundred percent. Of course. Of course, a hundred percent. Yeah. Yeah. And that's actually what excites me a lot about A-Life today. Look, in so many ways, the deep learning revolution happened because we have this amazing compute, all these crazy amounts of data, and some sophisticated algorithms. And we put them together and scaled them, and you get these amazing effects: we made huge inroads in computer vision and NLP because of that, for instance. Sometimes I think about this like, "Hey, you know what? We took a very cartoonized version of a neuron, of a biological neuron, and we just went with it. We ran with it, and here we are with all the successes of AI, which is nothing to sneeze at, by the way."

But then the question to me becomes, "Well okay, wait a minute. What other aspects of life or biology can we also cartoonize and perhaps scale, and then get the end effect of those as well?" That's what fascinates me a lot about artificial life: indeed, yes, there are so many aspects from nature that we can take, cartoonize, simulate, and then scale, and that's going to lead to a lot of great breakthroughs. And it's also one of these things where, in a very funny way, I think the one lesson that deep learning taught us was, "Hey, you know what? Stop handcrafting your features. Instead, set the conditions up for something such that when you scale it, you get these emergent effects."

Stop handcrafting, for example, computer vision features, which is what we used to do back in the 2000s. Stop handcrafting NLP features. But in a very counterintuitive way, I think that lesson hasn't been absorbed for intelligence itself. Some people like to say, "Stop thinking about thinking." It's not our job to handcraft what intelligence is, but it is our job to figure out, "What are the conditions that led intelligence to emerge?" and try to replicate that. That is the story of life. The story of life, I think, is a story of how intelligence emerged from these living systems that went through massive, massive constraints in the environment.

Samuel Arbesman:

And related to this, what are the features of evolution or of life that you think should be embodied in some sort of cartoon form within computers, that you think will actually yield the conditions for intelligence? You mentioned almost this adversarial environment. Is it certain features of evolution, nature red in tooth and claw? What are the kinds of things, or maybe it's an open question and we need to actually try these things? How do you think about what we should be embedding within computing, in terms of figuring out the cartoon version that will actually give rise to certain types of robust intelligence?

Tarin Ziyaee:

Yeah. Sure, sure, sure, sure, sure. One realization, Sam, I remember having was that if you zoom out and look at how we've approached artificial intelligence to date, the pattern that seems to emerge is that we've become very good at copying the outcomes of intelligence versus the processes that led to intelligence. And that's very clear even when we look at things like chess or Go, things of that nature, where it's like: we play chess, we're able to do chess, because we're intelligent, but playing chess isn't the be-all, end-all of intelligence. It's just an outcome of it. It's almost like a symptom of being intelligent, of this form of intelligence. And so I think the emphasis, in my opinion, should be not so much trying to copy the outcomes of intelligence as we know it, but asking the question, "Well, why did intelligence even arise to begin with?"

That's, A, a more interesting question, but also, B, maybe easier to answer. And if you go down that rabbit hole, what we see in nature is that the entire reason you even have intelligence is because life had to make do with these massive constraints in order to survive. So for example, you can have a jellyfish world where everything is great and there's sun everywhere, and you're just floating around and absorbing sun and making energy and replicating, and life is good; you don't have to do anything. I think that was actually the case on Earth for billions of years. It's only when we had these constraints thrust upon life, whether energy economies or big geologic events that suddenly changed the ability of organisms to survive, that intelligence emerged. And that's interesting because it gives us a clue as to what to do.

And the clue there is that if you don't have constraints around survival, you don't need to be intelligent. But if you have constraints around survival, in particular ones that have to do with some very basic things like energy (you don't have all day; there are time constraints, space constraints, energy constraints), you suddenly start to force the system to make the most out of what's around it at any one given point in time to maximize its likelihood of survival, a.k.a. become intelligent in a sense. So I think that when we talk about these artificial systems, the challenge becomes less about, "What does a brain look like? Does it even have a brain? What's the architecture of the brain?" and so on and so forth. It's more about: how do we imbue these systems with a lot of these constraints and others, and almost have these pressure cookers of simulated worlds, if you like, at scale, where the name of the game is survival at any price?

And I think that when you look at it this way, you'll end up with a lot of capabilities that you can then extract for different tasks downstream. So it's almost like an inversion of how we look at it today. Today the field looks at it in the sense of, "Here's a point-wise solution to a task: grab the cup, pick up the ball." What we're saying is, "Don't do that. Concentrate instead on the intrinsic goals of survival, and then you're able to do a lot of these extrinsic things." I have stories to tell you about this one point if you want to go down that rabbit hole as well, but that's how I might approach it in a sense. It's about setting the conditions where these organisms have to survive, basically, and in doing so, they have to come up with a diverse set of skills that help them survive. And then you extract them for different things.

Samuel Arbesman:

I'll borrow a phrase, it's like whatever doesn't kill you makes you smarter.

Tarin Ziyaee:

Yes. That's right.

Samuel Arbesman:

And kind of creating that condition. Which is also interesting, because especially when it comes to AI, or certain AI techniques, and you mentioned chess or Go or whatever it is, oftentimes whenever these things are accomplished by AI systems, people are like, "Oh, maybe those things didn't require intelligence in the first place," or, "It's not really doing that." And especially if you look under the hood of some of these early game-playing systems, they're not really doing anything that we would consider to be intelligent. They're just reinforcing certain situations or whatever it is. The end result is very sophisticated, which is kind of the joke: when AI can solve it, it's not considered AI anymore.

Tarin Ziyaee:

Right, right, right. Exactly. Moving goalposts and everything, which of course is a thing, by the way. And that's a very valid critique, in the sense that we, I think, also do a very bad job at setting these benchmarks. We define a benchmark and then we end up blowing it away. And so that to me points to: well, we're kind of bad at devising proper benchmarks to begin with.

Samuel Arbesman:

Is that partly the result of the fact that, I mean, I don't know if it's a failure of introspection or just not fully understanding what intelligence is all about, but maybe humans are just bad at understanding what intelligence consists of, or even figuring out what intelligence is across the entire tree of life, because we're just surrounded by it so much.

Tarin Ziyaee:

Yeah, that's right.

Samuel Arbesman:

That it's hard to figure out.

Tarin Ziyaee:

It's hard to figure out. Yeah, yeah. And on the one hand, we do care about how things process information and come up with their answers. I mean, if you have kids, for example, you want them to go to school and you want them to learn specific things, and the whole point of exams is to say, "Don't have your cheat-sheet lookup table. You're not allowed to do that. Come at this from first principles, a.k.a. thinking, in a sense." So in a very deep sense, we actually do care deeply about how we arrived at certain answers. The how matters.

And I think that sometimes goes against what Turing was saying, which was, "Actually, it doesn't matter as long as I get all the right answers. What do you care how I do it?" There are certainly applications where that could be fine, but I think that if we are after thinking machines that can be resilient in this open-ended universe we live in, we have to start paying attention to how it's being done. And I think that's the more interesting question: "Is the system thinking? Is it able to come up with solutions given a lot of constraints that it's faced with?" So the mechanistic how around thought does not get as much TLC as I think it ought to.

Samuel Arbesman:

And maybe one of the answers there is, like you mentioned, a lookup table doesn't really feel like thinking, and so-

Tarin Ziyaee:

Which might still be useful. Right, right.

Samuel Arbesman:

Which could be very useful. But perhaps one of the hallmarks of true intelligence is some sort of compressed explanation or description or model of the world, which can then be used for an open-ended series of situations.

Tarin Ziyaee:

Perhaps, yeah.

Samuel Arbesman:

Is that one way? I don't know.

Tarin Ziyaee:

Yeah, yeah, yeah.

Samuel Arbesman:

That's not to say, "Oh, I just came up with the exact answer for intelligence." But I think there is this idea of this kind of modeling of one's environment as a way of being able to adapt more easily.

Tarin Ziyaee:

Yeah, sure, sure, sure. Let's first look at living systems. What they end up doing, like us, I think, is because the world is very unpredictable and there are unknown unknowns. Nature could not possibly have afforded us seeing every last possible thing out there. There's just no way. And so it had to come up with a different solution. And I think that solution was something along the lines of learning how to learn in a situation, meaning coming up with, say, these programs, if you will, that I can launch on the fly and say, "There's something ambiguous over here. Let me come up with this program. It's a little experiment, actually. Launch it, see what happens, and garner some data." And then say, "Oh, okay, interesting. And maybe now I need to launch a new type of experiment."

And I think people say, "Well, this is RL." It's like, "Well, it's not really RL, because RL is about me as a human defining some objective that you go maximize or minimize or do something with." This is almost like a meta level of that, which is saying that organisms, when they're smart, when they're intelligent, end up coming up with their own experiments in the world, taking that information, and then coming up with new experiments that they launch, and so on and so forth. That, broadly speaking, is what nature imbued us with. And we see this all the time, by the way. I mean, next time you go outside and walk around, kind of slow down and you'll see that you're running these little mini experiments all the time.

You're seeing what would happen if you go toward this person, whether they move away or they don't. Something ambiguous is out there, and so you might squint at it, you might actually move around, you might go poke the thing to see what it is. Even speech, even language, is a form of interacting with the environment: "What happens if I say this thing? What's the inference that I get back?" So I think that this mode of being able to come up with ways to interrogate the environment through our own actions is something that's very, very deep. But I also think it's the hack that nature came up with so that we can survive and perhaps even thrive in open-ended environments. It's something that we see within life forms a lot. It's not something that we typically see within AI systems, yet at least. And we're hoping to change that, of course, but life has a very, very rich template of these breadcrumbs for us to follow.

Samuel Arbesman:

Yeah. To make this a little bit more concrete, you're currently building a company around this kind of idea, which in full disclosure I'm also advising.

Tarin Ziyaee:

Yes.

Samuel Arbesman:

But maybe you can talk about how you envision the need for a company around these ideas around artificial life and constraints and eventually creating certain conditions for intelligence. Because right now artificial life, and we mentioned that it has this long pedigree, is still perhaps in the place where deep nets were a while back. There's a lot of academic work going on, but what is the space here for a company, and how are you designing it?

Tarin Ziyaee:

Yeah, sure, sure, sure. Of course, of course. Yeah, yeah, yeah. At a very high level, I like to say that we are very much aware of the LLM phenomenon, the large language models. And I think that to a first degree, what we're after is building large action models, LAMs, if you like. That's a nice way of putting it, because the degree to which organisms can take action in the world, can control things in the world, tends to be the starting point for intelligence. And that might be a controversial statement for some, but for others it's sort of where we started. Intelligence is what happens when the tenacious, irresistible force of life meets the immovable object of massive, massive constraints. That's where intelligence falls out, and that's effectively what we're doing.

We're sort of putting these creatures through these massive pressure cookers, and in doing so, they have to become very good and adept at moving themselves and moving around, chasing and evading and so on and so forth. So in many ways what we're saying is let's mimic the conditions that led to intelligence at scale. And then once it's there, then it becomes very easy because then I can draw on that intelligence, I could put it in charge of different things, put it to work in a sense. And so I like to say that we're building these large action models that analogously to LLMs are able to take action in the world. They're able to perform action with such resiliency and robustness that we might expect from say, a smart monkey.

It's actually funny. There's a video of a monkey taking an earring off a woman without hurting her, and it's doing it in a very intricate way, where it's able to move the hair around; it kind of stops, looks at it, kind of turns its head. It's being very deliberate about it. And it understands all these different facets: this is soft tissue, and there's this thing I have to push to pull. That's not something that we have any clue how to do today within the field of robotics. So I think that for me at least, the first goalpost for us is focusing on these large action models, and then past that, I think there's a lot of impact as well that we can talk about. But going back, I guess, to why is this the right time?

The compute we have today is only increasing and becoming cheaper and cheaper. That's number one. We also have the capability today to spawn a large number of worlds, and you can talk about synthetic worlds and simulated worlds, but worlds where you have this quasi open-endedness of different situations and different scenarios and things of that nature. That's something that I don't think we could have done even five years ago. And last but not least, we also have the capability to use current AI systems in a very circular way to oversee a lot of these worlds and sort of pick winners and modulate them and change the scenarios, maybe-

Samuel Arbesman:

As some sort of smart microscope to help study them.

Tarin Ziyaee:

Yeah. Almost like a smart farmer that might go there and say, "You know what? This crop is doing great. Let me take more of it, let me make more of it," and so on and so forth. There are all these pieces today that I see out there in the field where we're getting to a point, like you're saying, where I think the field of artificial life is going to have its 2012 Alex Krizhevsky moment pretty soon. Because the ability to spawn these very diverse worlds that are open-ended, the ability for compute to be there to run them, and the ability to shepherd and farm the outcomes of these worlds is something that five years ago we could not have fathomed. And that's what excites me about this.

And that's to say nothing of the fact that we also know what breadcrumbs to look for. We know that we want to see organisms have these types of behaviors. We know that they're supposed to do them in certain ways, like this and like that. So put all this together, we squint at this, and we say, "You know what? It's time for us to grow these AI systems versus handcrafting them." And like I said, I think this is the lesson from deep learning: stop handcrafting, or, if you like, stop thinking about thinking. Start thinking about how to make the thing you're after emerge.

Samuel Arbesman:

So related to that, we've talked about the term open-endedness a few times, and I feel like that's one of the ideas you're trying to think about with this idea of growing these kinds of systems. I guess two related questions. One is: how early in life, and I'm not sure early is the right term, but how simple of an artificial life form are you thinking about starting with? And then related to that, what are the conditions for that kind of open-endedness that will give rise to this diversity of different solutions for navigating a constrained system?

Tarin Ziyaee:

So I'll start by saying this, Sam: I think there's going to be a very, very large impact area for robotics. And there's perhaps one word that I use to describe this impact area that kind of takes from artificial life: transduction. What does that mean? That's actually something that we learned back in the CTRL-labs days, which was the ability to motor map. What that means is something very simple. It means that when I went to driving school, I didn't learn how to move, and neither did you. We already knew how to move ourselves, and we knew how to move our bodies in the world. We just went to learn how to control this very particular tool called a car. Same when I learned how to ride a bike, or when I learned how to write with a pencil. And so what's happening here is that, again, life comes to the rescue.

Evolution or life, they don't teach us organisms how to go drive a car per se. It just says, "I'm going to make you really, really good at controlling your body." And if you're very good at controlling your body, then by extension you'll be very good at controlling other things through your body, a.k.a. tool use. And you can extend that further and further out in a sense. Our first goal, honestly, is to get to this first level of transduction: can you make these creatures able to control their bodies so well, and of course this is still in these simulated environments, that they're then able to transduce, or extend, that control to other things like tools?

And viewed through this lens, the entire notion of a robot almost goes away, because all you really have is organisms, and you have tools, and you have organisms that can control tools. In a sense, that is the hack that nature came up with. And that's something that excites us here a lot as far as what we're building toward: don't train your tool to do something very specific. Actually build the agent, build the organism if you like, that is able to control tools. And if you do that, you end up with something quite general, just like us.
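[Reader's aside: the transduction idea, that a good body controller generalizes to tools for free, can be sketched with a toy kinematic example of my own invention. The same "move toward the goal" controller that drives a hand also drives a rigidly held tool tip; only the goal is re-expressed in the tool's frame.]

```python
import math

def move_toward(pos, target, step=0.1):
    """One cartoon 'motor command': move a point a fixed step toward a target."""
    dx, dy = target[0] - pos[0], target[1] - pos[1]
    dist = math.hypot(dx, dy)
    if dist < step:
        return target  # close enough: snap onto the target
    return (pos[0] + step * dx / dist, pos[1] + step * dy / dist)

def reach(effector, target, max_steps=200):
    """Drive any effector point to a target using the same body controller."""
    for steps in range(max_steps):
        if effector == target:
            return steps
        effector = move_toward(effector, target)
    return max_steps

# Controlling the "hand" directly...
hand_steps = reach((0.0, 0.0), (1.0, 1.0))

# ...and controlling a tool tip rigidly offset from the hand. Same controller;
# we only re-express the goal in body coordinates (hand target = goal - offset).
tool_offset = (0.5, 0.0)
goal = (1.5, 1.0)
hand_target = (goal[0] - tool_offset[0], goal[1] - tool_offset[1])
tool_steps = reach((0.0, 0.0), hand_target)

print(hand_steps == tool_steps)  # the tool "disappears" into the body schema
```

Nothing about the controller was retrained for the tool; the tool simply became an extension of the body, which is the sense in which "the entire notion of a robot almost goes away."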

Samuel Arbesman:

Related to tool use, though: while tool use is found throughout life, it's not just a hallmark of humans; other animals and creatures use tools. But, and maybe I'm wrong, I don't think that bacteria are using tools, or maybe I'm thinking about it the wrong way. I mean, how early, according to your version, did tool use in some sense arise? How fundamental is it to life and biology?

Tarin Ziyaee:

So I actually think you hit on something very, very fundamental. So the story of life is organisms, yes, using tools. But here's the thing: even a bacterium uses itself as a tool. You can think about it that way. So at the very first level, I have to be able to change myself, manipulate myself, move myself vis-a-vis the environment, and change the environment in some way. But think about it: the only way I can ever bring change to the environment is through my body. That's literally the only way. And so at the first order, you have to be very good at moving your body first, and it could be very coarse, it could be this blob like a cell, and you're pushing and pulling, right? That's fine. But they are doing tool use in a sense, I would argue, and I would say that they are their own tool.

I'm a big blob and I get to push and pull everywhere. But I think what happens after that is that you start to end up with more and more sophisticated extensions of your body. And so you might have things like appendages, and appendages are also part of your body, and they give me maybe a finer granularity, because now I have a stump that I can use to push stuff around. Oh, what if I have fingers? Well, I can now, say, play them off each other and grab things, and so on and so forth. And so I think this notion of tool use is actually very, very fundamental, even for, let's put it in quotes, "simple" creatures like bacteria. It's just that they're their own tool.

And that I think is what you're seeing across life. Arguably the entire notion of language is me trying to control the person on the other side via these vibrations of my vocal muscles, in a sense. Putting some information out there and hoping that they do something or don't do something, that's also a form of extending our control. So I think that the fundamental thing here, going back to the primacy of, say, physical movement within environments, is the ability to control the environment vis-a-vis yourself. And that just becomes more and more sophisticated as you get more and more sophisticated organisms.

Samuel Arbesman:

Yeah, no, I love this kind of broad view of tool use, or of AI: it's just an extended set of features that you have access to that can be used to control your environment.

Tarin Ziyaee:

That's right, that's right, that's right. And for example, Michael Levin has this great point he talks about, this concept of the cognitive light cone. These organisms are always trying to extend their cognitive light cone to bigger and bigger spatial and temporal scales. So bacteria might only care about something within the next couple of seconds, within a couple of, say, millimeters or whatnot. Humans, on the other hand, tend to care about scales of maybe years and decades, and over thousands and thousands of miles. That's really an artifact of our ability to extend our control to these levels. But it really all starts with the body. It starts with the ability to move myself, and then by extension to move other things.

Again, I'm sort of channeling the stuff we learned in controls labs back in the day, which was: there are cases where, for example, if you're raking leaves on your lawn, at some level the end of the rake starts to feel like an extension of your hand almost. You can start to push and pull with it, maybe poke at something; that's not an accident. That's actually what we've evolved to do. We're very good at extending our control through our bodies. That to me is a very concrete case of where the field of artificial life can actually have an impact on something that, let's say, plagues our society today, which is that we don't know how to make these robots work in these environments very, very well.

And that to me at least is a very, very concrete use case of, "Hey, wait a minute, what if we actually train these creatures not to do very specific tasks like we do today in robotics, but actually to survive in the way that life intended? And then by extension, because they're able to do that, and they're able to move their bodies and so on and so forth, they can then start to take control of things." That to me at least is a very, very specific, concrete facet of the promise of artificial life when we start to apply it to problems that we have today.

Samuel Arbesman:

This is really interesting. We've talked about tool use and open-endedness, some of these other features, and evolution as the process of evolving certain things. There is this whole field of evolutionary computation: embedding evolution within computational algorithms and using that to, say, optimize systems, things like that. And it can be used in many different ways, and there's genetic programming, genetic algorithms. How do you think about the process of evolution within the kind of systems that you're building? I mean, is it just hard-coded, or is it embedded implicitly within a system that requires things competing, and constraints, and replication and reproduction? How are you thinking about putting evolution into the system?

Tarin Ziyaee:

So first, I'll caveat this by saying we are also big fans of back-prop. I love back-prop. I love a good optimizer any day. I say that because evolution tends to be treated as dichotomous with back-prop, and there's tension-

Samuel Arbesman:

Also, there are many different approaches, and you use whichever one is more efficient for the thing. But at least from your perspective, evolution has certain features that are powerful.

Tarin Ziyaee:

For sure, for sure. Yeah, yeah, for sure. The way I like to look at this is: we are very good today at, let's say, optimizing things that converge to a goal, to a task. We have back-prop, we have a slew of different very powerful algorithms, and that's fantastic. We should definitely keep those and actually build on top of them. The thing about evolution that I like, and that I don't think is appreciated, is this: people can think of evolution as an optimizer, and that's possible. But I think the features of evolution that are more interesting and more powerful are that evolution is not so much about optimizing something. It's about increasing the surface area of possibilities. What does that mean? Well, that means something like: as an organism or A-Life form, you might end up with this feature on yourself that on the face of it looks completely useless.

Why do I have this, I don't know, sixth finger, whatever it is? It's completely useless, it doesn't do anything. But what nature is very good at is hedging its bets. And so this thing might look like a quirk or a bug now, but it can actually end up being extremely, extremely useful later in a different scenario. And so what you actually need to have is divergence versus convergence. And so I think what evolution is very, very good at is divergent search, this notion of being very creative with what it puts out there. What it puts out there doesn't have to be very good in the maximal sense.

It just has to be good enough, and it sits around, and maybe one day something can leverage it or appropriate it to make a downstream task much, much easier. So that to me is a part of evolution: it's a very powerful, divergent, creative force that increases the surface area of possibilities. To my knowledge at least, we don't know how to do this with convergent methods. It's just a different animal, a different creature altogether. It's not so much about optimizing; it's about satisficing.
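[Editor's note: the divergent search Tarin describes here is close in spirit to what the A-Life literature calls novelty search. The sketch below is purely illustrative and not from the episode; the 2-D domain, the novelty metric (mean distance to the nearest previously seen behaviors), and all parameters are assumptions made for the example.]

```python
# A minimal sketch of divergent search: instead of selecting for a fitness
# score (convergent optimization), select for being *different* from what
# the population has already produced.
import random

random.seed(0)

def mutate(p, sigma=0.1):
    """Small random perturbation of a 2-D point."""
    return (p[0] + random.gauss(0, sigma), p[1] + random.gauss(0, sigma))

def novelty(p, archive, k=5):
    """Mean distance to the k nearest previously seen behaviors."""
    dists = sorted(((p[0] - a[0])**2 + (p[1] - a[1])**2) ** 0.5 for a in archive)
    return sum(dists[:k]) / min(k, len(dists))

def novelty_search(generations=50, pop_size=20):
    pop = [(0.0, 0.0)] * pop_size      # everyone starts at the origin
    archive = [(0.0, 0.0)]             # memory of behaviors already tried
    for _ in range(generations):
        children = [mutate(p) for p in pop for _ in range(2)]
        # Select the children that are most novel, not the "best" ones:
        children.sort(key=lambda c: novelty(c, archive), reverse=True)
        pop = children[:pop_size]
        archive.extend(pop)
    return archive

archive = novelty_search()
# With no objective at all, the search still spreads outward, increasing
# the "surface area" of behaviors it has produced.
max_dist = max((x * x + y * y) ** 0.5 for x, y in archive)
print(f"explored out to distance {max_dist:.2f} from the origin")
```

A convergent optimizer pointed at one goal would collapse onto that goal; here there is no goal, yet the archive keeps accumulating new, distinct behaviors, which is the "increasing the surface area of possibilities" idea in miniature.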

Samuel Arbesman:

Right. There needs to be a touch of messy wastefulness, just a touch in a way that evolution can do.

Tarin Ziyaee:

Yes, just a small amount. Yes.

Samuel Arbesman:

Right. It's kind of more on the explore versus exploit side, and that allows certain things to arise that would not have otherwise.

Tarin Ziyaee:

Yeah, that's right. That's right. It also works on these extremely long time horizons. Your genome, for example, exists, but the fact that your genome is extremely good at, say, adapting, at being adaptable, but not too adaptable, in a sense also had to be evolved. It's sort of trying to play this game of, "How do I make this thing survive across perhaps eons of time?" versus one very, very specific task where you're dealing with the here and now. Maybe there you can go optimize something. That I think is the power of evolution: it's not just an optimizer. It can be, but it's a lot more than that.

Samuel Arbesman:

In the world of artificial intelligence, there's both an academic side and the tech world, the world of startups, and then you have, I would say, an interesting relationship between all of them. How do you envision, either by analogy or in distinction, the relationship between the world of academia and start-ups in the A-Life community?

Tarin Ziyaee:

Yeah, that's actually a very good question, and I think one of the first things we actually talked about way back in the day. There is a big white space in how organizations can organize towards making leaps in deep tech. So what does that mean? We have, for example, academia. Academia tends to not have that much money, typically speaking. And you have the whole notion of publish or perish, and that sort of animates a lot of stuff. Big company labs I think are great. They have a lot of money, a lot of compute, but they're big companies, and they might not move as fast as one would like them to. And then you of course have start-ups, but traditional start-ups almost have to chase the local minima of, "Let me go chase this one very, very particular product right now and take what's out there and sort of make it work, and bada bing bada boom."

In the age of AI, what's becoming more and more apparent is that there is this white space where deep tech companies can start to exist. Deep tech companies are still for-profit companies, but I think they look at the world and say, "You know what? Here's a slew of different bets that we need to incubate, so that if and when they succeed, they can have an enormous impact on all these verticals and all these markets, and even open up markets we didn't even know existed." That's a very different animal than, say, making a dating app or something like that. And I think the world is sort of waking up to this new type of species that says, "You know what? It might take a year, maybe two, to incubate these various different ideas that, if and when they get cracked, the potential upside is astronomical."

That idea of a deep tech company wasn't a household name 10 years ago maybe, but I think it is now. And I think it's going to be very, very important, because a lot of the fundamental questions that we have around, "What's the future of labor? What's the future of artificial intelligence? What are the different paradigms of artificial intelligence that we can develop?", those are deep tech questions. And again, I think the upside is incredible. So I think investors today are waking up to this new reality. I think there's a slew of deep tech companies out there, and I think that's a great thing. So you can have this happy marriage between being for-profit, but also saying, "Well, this thing is going to be developed over the course of, say, months and years versus days and weeks," which is maybe what Silicon Valley 1.0 was. And I think Silicon Valley 2.0 can and should take notice of a lot of deep tech companies, and I think it is, when it comes to that. That's my hot take as far as that goes.

Samuel Arbesman:

And so in terms of research or approaches that might require even more of the many-years-to-decades kind of approach, do you view academia, or even other types of organizational structures, as the scouts for those kinds of things, which can then be absorbed or learned from by some of these deep tech startups? How do you think about the relationship there?

Tarin Ziyaee:

Yeah, yeah, sure, sure. I think there are also different ways of doing deep tech. For example, there's a completely bottom-up approach where it's like, "We're just going to go and research stuff, see what happens, and something comes out of that." I think that's probably a good domain for nonprofits or something like that, where they say, "You know what? We just need to shotgun-blast in different directions and see what comes out." But I also think there's room for an Apollo-like program for deep tech, where you say, "You know what? Here's a very particular big problem that we've sort of thrown the kitchen sink at. We can't seem to make any progress with these old paradigms, yet here are these other paradigms that we think we want to throw at this one thing."

So I think there's room for both of these types of organizations; they're both deep tech, but very, very different types of deep tech. The former might be better for nonprofits, but I think the latter is probably good for a very applied research lab, one that has identified one thing, maybe two, and says, "We are going to go after this one thing. But the way we're going about it is by leveraging all these new things out there; we're kind of converging." And in many ways we see ourselves as that. We see ourselves as sort of an Apollo program for that, versus an open-ended bunch of different things. So I think the ecosystem needs all of them, but my bias is that we need more of the Apollo-like programs for this. That's just my personal bias.

Samuel Arbesman:

And maybe one last question related to that, this Apollo program. Artificial life itself draws from a number of different domains, and the success of the kind of things you're imagining would also require drawing on different fields. What is the makeup, or the DNA, of the team that you're assembling, or the kinds of ideas and fields of study that you think are going to bear on this question of building artificial life that can then potentially create the conditions for intelligence?

Tarin Ziyaee:

So I guess a long answer might be something like this: since we're very focused on, say, creating these large action models as a first step, which is going to have a massive impact on robotics and of course other things, I think the makeup there really is drawing, just like you said, on a bunch of different fields. For example, physics simulations, generative modeling for all these different types of worlds, accelerated physics; there's a whole field called differentiable physics right now. AI, of course, that doesn't go away. Computational biology to some extent, and software engineering of course goes without saying. I think it's the combination of all these things that's going to give it its unique makeup, because, yeah, like you said, the business here really is about spawning these massive, massive worlds, these pressure cookers, where you're able to play this game of, "Okay, have these creatures evolve in a very accelerated fashion, which we can do now, by the way, so that they can be very general in how they survive."

And because of that, you can start to leverage them to do certain tasks. Of course, there are pluses and minuses for different aspects of it. But yeah, it's definitely very multidisciplinary as far as that goes, because remember, we're building worlds and we're building the engine that grows, evolves, trains, whatever verb you want to use, these agents in these worlds. You're drawing on a lot of insights from computational biology, life itself, AI. It's going to take a nice diverse team to put this stuff together, but the pieces are there. I'd say part of this is that five years ago, a lot of these pieces were missing. We just did not know how to do this. A-Life, I think, is going places it hasn't gone before. It's going to hit its Alex Krizhevsky moment.

Samuel Arbesman:

No, I love this. I love this kind of grand, holistic vision. That's awesome. This is probably a great place to end. It's super exciting. Thank you so much for taking the time to chat. This is fantastic.

Tarin Ziyaee:

Thank you so much.