Riskgaming

The Orthogonal Bet: How to Navigate Complexity Within a Large Organization

Welcome to The Orthogonal Bet, an ongoing mini-series that explores the unconventional ideas and delightful patterns that shape our world. Hosted by Samuel Arbesman.

In this episode, Sam speaks with ⁠⁠Alex Komoroske⁠⁠, a master of systems thinking. Alex is the CEO and co-founder of a startup building at the intersection of AI, privacy, and open-endedness. Previously, he served as the Head of Corporate Strategy at Stripe, and before that, spent many years at Google, where he worked on the Chrome web platform, ambient computing strategy, Google Maps, Google Earth, and more.

The throughline for Alex is his focus on complex systems, which are everywhere: from the Internet to biology, from the organizations we build to society as a whole. These systems consist of networks of countless interacting parts, whether computers or people. Navigating them requires a new mode of thinking, quite different from the top-down rigid planning many impose on the world.

Alex is deeply passionate about systems thinking and its broad implications—from making an impact in the world and navigating within and between organizations to understanding undirectedness and curiosity in one’s work.

His more bottom-up, improvisational approach to systems thinking reveals insights on a range of topics, from how to approach large tech companies and the value of startups, to a perspective on artificial intelligence that untangles hype from reality.

Produced by Christopher Gates

Music by George Ko & Suno

Show notes:

Chapters

00:00 Thinking in Terms of Systems

04:11 The Adjacent Possible and Agency

08:21 Saruman vs. Radagast: Different Leadership Models

13:17 Financializing Value and the Role of Radagasts

21:59 Making Time for Reflection and Leverage

25:18 Different Styles and Time Scales of Impact

28:14 The Challenges of Large Organizations and the Tyranny of the Rocket Equation

34:10 The Potential and Responsibility of Generative AI

45:12 Disrupting Power Structures and Empowering Individuals through Startups

Takeaways

Embrace the complexity and uncertainty of systems when approaching problem-solving.

Shift the focus from individual heroics to collective efforts and systemic thinking.

Recognize the value of the Radagast approach in nurturing and growing the potential of individuals and teams.

Consider the different dynamics and boundaries within large organizations and startups.

Take the time to step back, reflect, and find leverage points for greater impact. Focus on your highest and best use, not just what you're good at, but what leads to something you're proud of.

Consider the long-term implications of your actions and whether you would be proud of them in the future.

Large organizations can become inefficient and lose focus due to coordination challenges and the tyranny of the rocket equation.

Open source can be a powerful force for good, but it can also be used as a control mechanism by larger organizations.

Generative AI has the potential to make the boundary between creators and consumers more porous, but responsible implementation is crucial.

Startups offer the opportunity to disrupt existing power structures and business models, giving individuals more sovereignty and control over their data.

Keywords

systems thinking, uncertainty, complexity, individual heroics, collective, leadership, Saruman, Radagast, startups, large organizations, values, decision-making, generative AI, data sovereignty


Transcript

This is a human-generated transcript; however, it has not been verified for accuracy.

Samuel Arbesman:

Hello and welcome to The Orthogonal Bet. I'm your host, Samuel Arbesman. In this episode I speak with Alex Komoroske, a master of the world of systems thinking. Alex is the CEO and co-founder of a new startup building at the intersection of AI, privacy and open-endedness. He previously was the head of corporate strategy at Stripe, and prior to that, spent many years at Google. But I think it's fair to say that the through-line for Alex is a focus on the world of complex systems. And complex systems are everywhere around us. From the internet, to biology, to the organizations that we build, to our society as a whole.

They're all networks of a huge number of interacting parts, whether these parts are computers or people. And navigating these systems requires a new mode of thinking, quite different from the top-down, rigid planning many of us impose upon the world. Alex is obsessed with systems thinking and its many implications, from how it can be used to think about making an impact in the world, navigating within and between organizations, and even how to think about undirectedness and curiosity in one's work. And from Alex's more bottom-up and improvisational approach to systems flows so much, I'm so pleased that I had a chance to speak with Alex. Let's dive in.

Alex, great to chat with you and welcome to The Orthogonal Bet.

Alex Komoroske:

Thank you for having me.

Samuel Arbesman:

One of the main themes that underlies your work and how you think about the world is that you think about everything in terms of systems and how to manage them or maybe garden them is the more proper terminology. So how do you think about the world of systems and handling them? If handling them is even a possible way to even describe this kind of thing and think about it?

Alex Komoroske:

Yeah, it's funny because I feel like the way I tackle most problems is almost exactly backwards from how you typically do it in serious business, which to me is wild because a bunch of the techniques I apply are a fundamentally useful and powerful way of accomplishing even traditional business kind of goals. A lot of it I think comes down to embracing the fact that you don't have full certainty. The serious way of doing things kind of assumes that yes, we make a plan and then we execute that plan and then the world changes according to that plan. And I think a systems approach acknowledges that there's tons of uncertainty. If the plan doesn't work exactly as you expect it to, that's not necessarily, well, the people who executed it did it poorly, but rather the world is more complex. There's some unforeseen, unknown constraint that turned out to invalidate that idea.

So much of the traditional business approach implies individual heroic people who can go and tackle any problem that they're given and just with ingenuity and grit, they can get through and solve any problem. And it emphasizes this individual, a special person that can do this. And everybody else is losers or lazy or something, they're just not very good. The approach I take is trying to emphasize, I think that in general almost everybody is like A, good at what they do and B, trying pretty hard. I think that's not necessarily the case in every single human environment, but I think in the vast majority of professional environments that is true. Maybe one of the reasons it's so hard to get things done is not that people are lazy or losers, it's that there's just inherent complexity. Systemic complexity is introduced in trying to coordinate a large organization.

Samuel Arbesman:

So related to that, is the better approach then having a better sense of humility towards these systems, simply recognizing that we exist in these systems? It's obviously hard to get away from the praising heroics and individuals because everyone wants to think that their actions matter, but everyone's actions matter within something larger. What are the right approaches to think about this kind of thing?

Alex Komoroske:

I've been accused of being a nihilist multiple times, people saying... Because I say that heroic-

Samuel Arbesman:

You definitely do not feel like a nihilist.

Alex Komoroske:

Yeah, I don't think so at all. I believe in like, our actions, of course they matter. Of course they matter. And observing that you're part of a larger system that you aren't fully in control of does not rob your agency. In fact, you have within... I have an essay called The Iterative Adjacent Possible that tries to square the circle. Where the adjacent possible is a design thinking lens and it's the space of things that you have, actions you could take that will probably work. They're within your capability, they're within your reach, within your grasp. And I think in the tech industry in particular, we had this default assumption of the adjacent possible as being massive. You could just take a flying leap and do something crazy and bold and just do it if you're just a good enough hero.

And I think the reality is the world is really complex, it's really hard to understand exactly what's going on and how things will react, and so our adjacent possible is way smaller than people think it is. That's why people call me a nihilist and say, "Well, you're saying that our actions don't matter and we can't do anything." That's not what I'm saying at all.

The cool thing is within that smaller space, you do have full agency. Any of these options, you may pick and they will work. And when you pick them, it's not like you pick it and then you reset the world and try another one. Now you get to take the world response, the action that you took, and then you get to do it again. And so that allows you to have a consistent thematic arc that allows you to achieve great things, not necessarily planning 100% exactly how to do it ahead of time, but surfing and responding to how the world's reacting to your actions, and do great important things by acknowledging that your adjacent possible is much smaller than you intuitively would like it to be.

Samuel Arbesman:

No, I like that. Because also, the way I think about this is the world is inherently a complex system and you can either deny that and then think, okay, we have unlimited agency and then just be smacked in the face every time you kind of bump up against this reality, or actually truly look at the world as it is and then therefore say, okay, how do we actually make change in a world that is much more complex and figure out, okay, whether it's iterative or tinkering at the edges or taking this more humble approach or an approach based on catalytic kind of thinking. These are the kind of ways that actually do work in the world that we find ourselves as opposed to constantly just saying no.

Alex Komoroske:

The world that we wish that we were.

Samuel Arbesman:

Right. Exactly.

Alex Komoroske:

I love this way of framing it. The world is complex, deal with it.

Samuel Arbesman:

Exactly.

Alex Komoroske:

And it's funny because to me, throughout my career I've had a bunch of people who do the traditional style and they try to mentor me to do the traditional style, and I don't know, it's like the strength is to deny the complexity. Like no, no, no, no. The strength is to acknowledge the complexity and still do great things. That's the strength.

Samuel Arbesman:

And so related to that, it's hard probably to push back in the tech world where there is this dominant paradigm of, okay, you need to assume you have infinite agency and heroics and everything like that versus acknowledging reality. Obviously you can still succeed in the world of technology and management and in the face of these large systems, but it often requires different models. But it also, I imagine, will require rewarding different kinds of behaviors. And maybe this might be the right place to talk about your Radagast versus Saruman kind of dichotomy. And how do you think about those different models as well as the ways in which maybe managers should think about people actually making impacts?

Alex Komoroske:

The way I think about it is within the organization as a leader of a sub-portion of the organization, outwards to the rest of the organization, I'm going to look very traditional. I'm going to flex for the cameras, I'm going to have sweat on my brow, be constantly running places. And one of the things that's death in an organization is for somebody else in any conversation, even behind closed doors, to go, "What does that team do anyway?" This is death if anybody utters this. And so your goal outward facing is for nobody to ever utter these words.

And you can do this by having a lower burn rate of your political capital because you have fewer heads so that the expectations are lower for what you do, demonstrating very proactively and shouting from the rooftops about, "Here's a newsletter." One of the tricks is you just have a newsletter that goes out every two weeks like a metronome that's competent, and then people won't read it, they'll just see clearly the team has their shit together if they can put out a consistent, realistic, but optimistic newsletter twice a week or every two weeks about what they're doing. And this allows people to get off your back.

So people go, "Oh, they're doing good stuff. Good stuff is happening." And then they never ask this question, they leave you alone. But inside the team, I think the way you do it is very different. And I would say it's much more about helping the team have this environment where everybody can lean into their own individual superpowers in a way that adds up to something much larger than the sum of its parts. So one way of putting this is outward to the rest of the organization, you track as a Saruman and inside the organization you track as a Radagast and are a Radagast.

Samuel Arbesman:

Okay. And can you describe what those two different I guess modes of being are?

Alex Komoroske:

Yeah, so this is a very fanciful essay. A lot of my essays are long and kind of discursive and just-

Samuel Arbesman:

They feel like business fables almost, business and technology fables. Yeah, they're great.

Alex Komoroske:

Yeah. At one point I wrote a short one I liked quite a bit called The Magic of Acorns a few months ago. It's short and it's a business parable is the way I describe it.

Samuel Arbesman:

Oh, that's even better.

Alex Komoroske:

I also wrote one a couple of years ago, I was trying to create a metaphor that fit in a lot of different good strategic practices that are unintuitive. And then the story and the metaphor ended up becoming very large. This is The Wanderer and the Seeds. And at the end I was like, okay, do I spend another thousand words unpacking this metaphor or do we just let it be? And so I decided in my head it was like the oops, the Cap'n Crunch, oops, all wild berries thing. It's like oops, all parable. It's just entirely parable. Like there's no unpacking. And I think that sometimes these things hit you differently when they track as a timeless truth. You take away something slightly different from it, you hold it lightly. In a parable, you are forced to hold lightly because it's not too literal and I think that's healthy.

So anyway, Sarumans and Radagasts, the frame was that magic in the real world, obviously, is not a real thing. Let's be serious. And yet magic in the social world, absolutely 100% is a real thing. And the decisions we make in the social world definitely have impacts in the physical and the real world. And so in a very real sense, magic affects the real world quite a bit. And I sketched out two different archetypes of real magic that work in social situations that work for totally different reasons. The game theory at their core is totally different. And the first is the Saruman. And the Saruman is, by the way, I mean Saruman before the Lord of the Rings, like from the Hobbit Lord of the Rings.

Samuel Arbesman:

Before he went bad.

Alex Komoroske:

Yeah, before he went... I'm not trying to emphasize that Saruman is evil, I'm just trying to emphasize industrious, bold, heroic. That's one of the downsides of this metaphor: he turned pretty definitively evil later, which everyone's kind of familiar with, but let's ignore that. So the Saruman is somebody who, one, believes in the great man theory of history, that history is principally driven forward by great men. Two, they believe that they themselves are a great man, and three, they have enough initial success that other people also believe that they might be a great man. And with this dynamic, it is possible, not easy, but it's possible to get a self-accelerating reality distortion field that actually affects the real world, because as you get more successful, you get more resources. Your network, the number of people who go, "Well, I don't think that's going to work, but this guy appears to be magic because he's done all these things in the past, so I guess I'm going to just hedge my bets and assume it's going to work."

And if everybody does that, gets out of the way and invests or whatever, that does work. And so this becomes a powerful thing. I don't even have to name some of the people. Steve Jobs of course is a Saruman. There's other, a number of tech personalities that loom large that fit very clearly in this space. Everyone's familiar with this as the archetype of how you do good in the tech industry and industry in general.

The other approach, the other type of magic is very different and it's what I would call the Radagast magic. And I like the fact that Radagast, at least in the movies, I never read the books, he's got bird poop on him and he looks like he hasn't showered in months and like, is he high? I like the fact that the Radagast is kind of undermined, not taken that seriously. And the Radagast magic works for a totally different reason. And the reason is the Radagast believes and loves everything and everyone around them. They find the seeds of greatness around them and they help nurture and grow those into something that is much larger than the sum of its parts. And this can create enormously high performing teams and create all kinds of indirect value, which is typically hard to directly financialize or to make legible to an official large organization or economy or something. I believe that we talk all the time about Saruman magic and like that's the only way of doing it, and I think the Radagast magic is extremely important. We don't talk about it often enough.

Samuel Arbesman:

Is there a need almost of creating a list of Radagast or somehow valorizing them to make it more either legible to other people or acceptable to be this kind of thing? How do we promote more Radagasts?

Alex Komoroske:

I think part of it is, in the essay I've accumulated a list and I actually keep on adding more people to that list over time. As in, when people tell me an example, I added Dolly Parton in there for example.

Samuel Arbesman:

Oh, okay.

Alex Komoroske:

Good old Dolly Parton, right? There's a number of other people have added over the last few months. I think one of the lines in the essay is Radagasts are unable typically to financialize the value they create. They can create immense value, but it's indirect and the trade-off is they can't financialize it. And so no Radagasts own their own helicopters, but I think you could say that Oprah is a Radagast and Oprah definitely has her own helicopter. So I think that that line in the essay must be incorrect because I can think of at least one example of someone who is I think pretty clearly a Radagasty kind of person.

Samuel Arbesman:

I wonder, maybe one way to think about it is also, when I think about science or public goods. When you create a public good, you're going to... the goal is to create much more value than you're going to be capturing. And you can still capture some of that value or it can redound to you in greater esteem and things like that, but you're creating something for lots of people as opposed to just a specific organization. Is that another way to think about that?

Alex Komoroske:

Yeah, 100%. I think a moral precept is think as long-term and as broad as your horizons allow, create more value than you capture, significantly more value than you capture. I think those are fundamental moral precepts that I think are important. I wish more people lived by, and I think a lot of people do, it's just not the ones that society lifts up or at least American society lifts up as the obvious role models.

Samuel Arbesman:

And so when I think about creating more value than you capture or helping catalyze change, you're not necessarily building all these big things or you're not being the Saruman yourself, but you're helping make Sarumans more successful or make organizational level things more successful. And I think a lot of people want to do that. Is there a way to think about that in terms of how to build a career that allows you to do those kinds of things or allows you to have that as a role? Is it part of a role?

Alex Komoroske:

I think it's fundamentally hard, and I think, I used to joke that the performance review process in any large company is fundamentally inherently broken because it's doing two things. It is deciding where to allocate scarce resources like promotions, salary bumps, et cetera, and is trying to help you develop into the best version of yourself in that context, the most productive, most effective version of yourself you can be.

And those two things are directly in tension, because the former gives you this gamification that leads to doing things to play the game, even if you know it's not the biggest actual impact in the company. You can get Radagast style things at a small scale where everybody knows everybody and everybody trusts everybody. It's very hard to get Radagast style things at a very large scale. At a certain scale of organization, everyone has to reduce to summary statistics. They must, how else could you possibly run it? And then that's where Radagast doesn't work.

Samuel Arbesman:

Interesting.

Alex Komoroske:

I'm sorry, Radagast isn't rewarded.

Samuel Arbesman:

Yeah, it might not be rewarded. So related to this, so within a large organization you kind of have to think in terms of this systems thinking, there is a value for this Radagast kind of approach. How do you think about smaller organizations or individuals operating outside of organizations? For example, the world of startups existing in the larger world of technology. A startup has to exist in a large complex system and it often has to play a somewhat different game. Is it the same kind of thing where you mentioned within a large organization, externally you have to be one thing, internally you can be something else? Is it the same kind of thing at the startup level? How do startups fit in this thinking?

Alex Komoroske:

I think it is the same thing. One of the key insights I had many years ago when I was helping run the Blink rendering engine for Chrome was treating third party developers and second party developers of Google, internal Google developers, the same was a big unlock for me. Because everybody else treated them like, "Oh, they're part of Google, so I just need to convince their boss's boss that this is an important thing and then they'll do it." If I approached it more like a third party developer of like, I have no leverage over these people. I have to convince them it is in their self-interest to do this thing. In some ways, again, it's like, oh well you're throwing away, you're giving yourself way less leverage. Like yes, but I don't actually have that much leverage in a large organization for the second party developer.

So I think a lot of the same techniques work both inside of one organization or out because everything is nested inside of... your organization is part of an industry. And yes, some boundaries are much stronger than others. If you're working in a large organization, you have signed a contract that says you will exclusively work for that organization, that is a very strong bind. So that boundary is extremely relevant to you.

I think the same dynamics show up in all kinds of spaces, just how strong is the individual boundary between this and the other thing and how plausible is it to align things. I couldn't tell you how many people I saw waste just so much time and effort at a large company, one of the large companies I've worked at in the past, where they'd say, "Well only if we could get the CEO to tell this other team, this far-flung team over there, this thing is important, then it would solve all our problems." And they'd spend all their time trying to get the CEO to know about it. It's like, there's a thousand other teams trying to do that. Where'd you rank on that list? You're probably the bottom 10%, you're like 10th percentile. He's never going to care. And so instead say, "Listen, fine, I will not be able to use that avenue. What is the way to make this mutually beneficial thing happen?" And I don't know.

So I think the same tactics can be used to a surprising degree, even if you don't see how close they are, these internal and external kinds of organizations. And again, which boundaries matter is a matter of perspective. One of my favorite little factoids is that the United States used to be a plural noun largely before the Civil War and after became a singular noun. And this is one of those small but profound shifts that before emphasized the states as individuals, and then once you make it a singular noun, it emphasizes the united, the collective as the primary thing.

And so which one matters more, the collective or the individual? That differs in different contexts. If you're in a startup ecosystem, there's lots of other startups and cool, the collective matters way less to you than your individual startup. Whereas if you're in a larger company and you expect that you're going to be there for many years and if this project goes bust, you will go and move to a different project within the company, then you're going to think much more deeply about the collective in that context than your individual team.

Samuel Arbesman:

Okay. No, that's really interesting. Yeah. I wonder to what degree people who are working within a small team within a large organization are thinking about their team as sort of on this continuum with startups where like, okay, they are similar in some ways, different other ways. I imagine there are many, many differences, but kind of recognizing that there is this spectrum and you can kind of tune it. There could be a certain amount of power.

Alex Komoroske:

I don't think people often do think this way, and this is what's so funny to me is the number of times in my career I've seen people, I will do something where I'll take one step back and be a little bit fanciful or oh, here's a different way of looking at this. It's kind of odd and somewhat surprising but leads to different insights, and people will say, "Alex, I don't have time for that. I'm too busy." And it's like, the mundane will fill every single square inch that you give it and you will never not be busy in modern life, especially in any kind of high functioning fast moving company. You got to make the time. So you're spending all this time trying to climb up a thing that has very low returns at great expense, and sometimes people are trying to climb up a mountain and they're like, "Oh, it's so steep and we need all hands on deck to climb this mountain."

And you say, "Oh wait a second, I just took a step back. I think there's a path around the mountain." People go, "I don't have time for that. I need your help to climb the mountain." I'm trying to tell you, I think we don't have to climb the mountain. I think we just walk around the mountain, completely cutting out the entire thing. And the number of times, like I told this story before, but very early in my career as a product manager, individual contributor, and I would work from home on Fridays back when that was not done. This is very, very much pre-COVID, very far pre-COVID.

People gave me so much grief for it and they'd say, "Oh, you're going to work from home on Friday." It's like, first of all, I will compare my output, my impact to yours any day of the week. Second of all, Monday through Thursday I'm spending 8:00 AM to 6:00 PM or whatever just running between meetings and quickly slinging Slack pings and doing very straightforward little things and having no time to think. And a single time during the week I take a step back and I carefully read the documents and I carefully reflect and I think, what's the thing that I could have done, if I would've done it before this week started, that would've made this week much easier?

Like, oh, I convinced the same 10 different people of this particular argument of why a certain thing was the right thing to do and they all found it convincing. And if that had been written down so they could have just read it, I would've saved a ton of time, so maybe I should take the time to distill that argument that works so well in one-on-one contexts into a short document. These are the things that give you the unlocks. They are the ones that give you leverage. Because now there's those 10 people that I had to convince one-on-one in a linear fashion, and now maybe there's a hundred people in the broader organization who had the same question and now they can read the document.

This is not a linear increase in my impact, it is direct, it saves a ton of time in a super linear fashion. And these kinds of things are all over the place and they just take time to find. If you are constantly firefighting, you will not find these kinds of unlocks and so you got to take the time for it. To me it's like obvious, I just don't understand. I've mentored hundreds and hundreds of PMs over the years and a lot of them, especially the junior ones, I tell them that and they go, "Yeah, but Alex, my project is just so busy." I'm like, "I got news for you. Every project you'll ever be on will be busy. It's always going to be like this. It will never not be like this and so you got to change."

Samuel Arbesman:

How many of them actually take that advice?

Alex Komoroske:

It depends. One of the things that makes me very happy, by the way, is someone who resists this kind of feedback in the moment and then a few years later reaches out to me and says, "Hey, remember three years ago you told me I was doing this problem and you told me this advice and I thought it was bad advice? Well, I've reflected on it, I realized that was really good advice and that has had a huge impact on me, and when I finally embraced it, that was the thing that really made me..." That makes me feel very, very impactful.

Samuel Arbesman:

No, that's good. Yeah, they were receptive. It took a little while and they had to be in the right head space, but then were receptive. I guess I have kind of a larger meta question, and there's a number of different ways of thinking about this, but in terms of the best ways of making an impact on the world, and we talked about the Radagast/Saruman kind of thing and it's sort of like the great man of history versus these small nudges and recognizing within a complex system.

So there's both the different styles one can act as an individual. There's also the style of organization one might be a part of, whether it's a larger organization within a small startup that we talked about. But then there's also the timescale on which we're thinking. You mentioned this a little bit with The Iterative Adjacent Possible of like, okay, rather than having some really large scale objective that you're aiming towards, that can not always work well and you and I are both fans of the book Why Greatness Cannot Be Planned by Joel Lehman and Kenneth Stanley, which is this idea of when you're dealing with this high dimensional search space, focus on interestingness and novelty rather than these large scale goals.

So I guess the big question then becomes, how do you think about what are the best ways to make an impact at the organizational structural level, individual style, the timescale one should be thinking about? I've just thrown a ton at you and I imagine it is extremely dependent on the problem and thing that you're thinking about, but how would you think about the dimensions that one should be analyzing when you think about this big hairy problem?

Alex Komoroske:

To me it's the highest and best use. Just asking yourself, am I spending time on my highest and best use? Not just a thing I'm good at, but the thing I believe is leading to something that I'm proud of. The test for me is: imagine showing this conversation, this decision, this thing I'm doing right now, in 10 years, to a thousand of the people in the world whose opinion I care the most about — showing them this video clip. Would I be proud of it, or would I be embarrassed by it? And this helps you live according to your values. If you toss an empty cup at the trash can on the way to a meeting and it bounces out, do you stop to put it back in the trash can or do you not?

Well, I'm the kind of person who doesn't want to mess up the office. So yes, I will take the 10 seconds to go put it back. Not doing that is something I would be embarrassed about if someone showed it to me in 10 years. But then also, if all the videos are of me just running around in circles saying, "Yes sir," and doing a bunch of bullshit kayfabe work that's actually eroding value, I'm not going to be proud of that either. You want to do something that's not just making a buck or getting a pat on the back from your manager. You want to do something that you feel matters, something you care about, where you'll look back and think: that was time well spent, and that was a useful, good thing I did for society.

I think large organizations kind of suck. Like, fundamentally, they just suck. And I think they must, in that — to me it's like the tyranny of the rocket equation, which says that to get an additional pound of payload into space, you have to put in many more pounds of fuel, which then of course requires more fuel to lift, and that regress blows up the totals. Yes, the best way to get an additional pound of output in any organization is to hire one more head. And yet the net additional value you get from each head gets smaller and smaller until it's infinitesimal, where the vast majority of the energy is just going into coordinating and spreadsheets and TPMs and whatever the hell, to try to do something coherent. A good friend who I think you might know, [inaudible 00:27:00], who's a brilliant product thinker, said something to me a year ago that really stuck with me.
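[Editor's note: the rocket-equation analogy Alex invokes can be made concrete with a quick sketch. The delta-v and exhaust-velocity figures below are rough, illustrative assumptions, not numbers from the conversation.]

```python
import math

def propellant_per_payload_lb(delta_v_m_s: float, exhaust_velocity_m_s: float) -> float:
    """Tsiolkovsky rocket equation: delta_v = v_e * ln(m0 / m1).

    Returns pounds of propellant needed per extra pound of payload,
    ignoring tank and structure mass (which makes reality even worse).
    """
    mass_ratio = math.exp(delta_v_m_s / exhaust_velocity_m_s)  # m0 / m1
    return mass_ratio - 1.0

# Rough figures: ~9,400 m/s of delta-v to reach low Earth orbit,
# ~3,500 m/s effective exhaust velocity for a good kerosene engine.
extra_fuel = propellant_per_payload_lb(9_400, 3_500)
print(f"~{extra_fuel:.1f} lb of propellant per extra lb of payload")
```

With these assumed figures, each extra pound of payload costs on the order of a dozen pounds of propellant — the "crazy regress" Alex describes, since fuel must itself be lifted by more fuel.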

She said, "Alex, why are all the smartest people I know spending every waking minute of their professional lives thinking about how to navigate the particular petty politics of their particular organization? Why does that happen?" And it's that same thing: the mundane bullshit will absorb every single square inch you give it. I think organizations are the exact same way. This overall kayfabe and coordination for coordination's sake will fill every square inch that you give it.

I also find — and this is the vibe I dislike the most in the tech industry; I love a lot of things about the tech industry, I'm a consumer, I'm part of the tech industry, though I'm also kind of an outsider with very different approaches to things — the thing I like the least is this vibe of not thinking through the implications of your actions. That takes too much time. Or, who's to say what they might be, and so I'm not going to think about it.

When you're working in a highly levered space, your actions matter, and they matter in a big way. And if you say, "Well, I'm not even going to think about whether I'm plausibly going to be proud of this change down the road," that's shitty. I think that's one of the reasons the tech industry has lost some of its luster, even as it's continued to add a lot of value for society — people rely on a lot of the free services that it makes available. Part of it is just immense power paired with this kind of "Whatever!" attitude. And that I think is just not... You know, with great power comes great responsibility, as a wise man once said.

Samuel Arbesman:

Right. And as we were talking about earlier, we live in these complex systems with nonlinear impacts, and if you are a large organization with a huge amount of impact and power, it behooves you to think about those nonlinear impacts at least a little bit.

Alex Komoroske:

Right. And it's one of these cases where what's good for the company, what's good for the individual employee, and what's good for society are often more aligned than people think they are. Not always — there are definitely some misalignments — but they're more aligned than people think. It just takes time to find those parts of mutual alignment, but they do often exist if you take the time to look for them and to think it through. Because a lot of things appear to be in the interest of the company as a quick buck — there's any number of ways of making a quick buck. You can take all kinds of trust and just set it on fire, and you'll get a bunch of money in the short term, but now you don't have that trust anymore. And it took you forever to accumulate it.

So this appears to be, "Well, I'm going to do the right thing for society, and that's in tension with the business goals." No, it's not. The business should also be thinking about its future cash flows over a long horizon. And if you burn all the trust to make a quick buck, then you will not be able to make as much money in the long term. So don't do that. I don't know — it's not that hard if you take a long enough time horizon. Organizations have a very short time horizon, and they must, I think, because you reduce down to whatever the normal reporting cadence is; that becomes the horizon you are able to think on.

Samuel Arbesman:

Do you think it's possible for some of these large organizations to take longer amounts of time into account? Or do you think that truly because of these reporting and quarterly returns and things like that, they're just fundamentally incapable of that?

Alex Komoroske:

Large organizations have definitely spun off a lot of really useful things, although I find it's always very tenuous and it only lasts for a certain period of time and then it gets scrunched and then it's dead forever and it's just a matter of being on that path. And one of my former employers, guess which one, I think is probably on that path.

So they can do good things, mainly, I would argue, by being profligate. There's a whole bunch of stuff that, if you're making a shit-ton of money, you can just do — all kinds of random stuff that can be picked up by others in the future. Sometimes it's published open-source stuff that people can literally pick up. Sometimes it's just the know-how in employees' heads: they go on to build something else and bring the know-how with them, and now they know how to navigate a particular problem domain. It's like, I spent the last 10 years navigating this problem domain, all the false starts and all the dead ends, and now I know the one path through the maze, so I can just take it.

So companies that have a lot of resources, that can think bigger, can create some of these spaces. But I've found that a lot of the big bold bets — I hate "big bold bets" as a frame, by the way, I just fundamentally hate it — are often just stupid bets. They're bets that can't possibly work: just take this number and multiply it by 10. It's like, okay, but why? How? If you told me, "I'm going to turn this thing with a linear return into a compounding return by aligning the incentives so that the producers of the input also invest more the more progress we make" — that I'd believe as a big bold bet. Sure, fine. But in practice, the way people do big bold bets is they just 10x the number, and it's impossible. They're like, "I have boldly set this goal; now it's a simple matter of you, peon, executing it. And if you fail, well, I'm going to hire somebody else who will succeed." That's not leadership, that's bullshit.

Samuel Arbesman:

Right. It's like a thought experiment that somehow someone has to implement in reality that is not the whole-

Alex Komoroske:

It's like, if it doesn't work in reality, it doesn't work. So when big companies tackle some of these big bold bets — I think there's a fetishization of things like building the Eiffel Tower or building the Pentagon; there are a number of these examples of big projects.

Samuel Arbesman:

Like the Manhattan project or the moonshot, yeah, like Apollo, the Apollo project.

Alex Komoroske:

There are definitely examples of this happening. And I would argue that in almost all those cases there was some existential threat breathing down your neck, driving that particular existential push. And most of those problems — people say, "It's not rocket science." Rocket science is easy, y'all, in the grand scheme of things, because rockets are physics. There's that quote: imagine how much harder physics would be if atoms could think. Putting a rocket in the air is very challenging, yes, but challenging in a complicated way, not a complex way.

The vast majority of problems that affect us today are complex ones, and the tools of reductionist thinking do not work on them. I would argue that's one of the reasons, by the way, that society has gotten so frustrated in general — all these politicians and powerful people must be lying or bad — partially because everyone assumes that the reductionist techniques that work so well in certain contexts, like putting a rocket into space, will work everywhere, and they don't. The vast majority of problems we have are coordination problems, and people's expectations are wrong about what is possible to do. And so: well, it's not working, therefore somebody must be evil or incompetent or lazy.

Samuel Arbesman:

Well, I think it's also similar to when a new leader comes into a company or an organization. There is this pressure, at least from my perspective, that they need to make their big impact and make sweeping changes, versus saying, "No, this is a complex system, any new changes are going to have unexpected effects. Let's tinker at the edges for a little while and try to figure things out." And of course that's not the kind of thing anyone wants to hear. They want the big bold bets, they want the big changes, even though those could be entirely counterproductive.

Alex Komoroske:

Yeah, totally. Because at least you can tell that, "Well, I did a thing." And that's what you're optimizing for in any large organization's [inaudible 00:34:10] process. I think this fundamentally must be the case, because if you're looking at summary statistics, you're going to look at: did a thing happen? It's really hard to figure out: was it a good thing? Was it better than the alternatives? And so people would rather do a thing that is obviously a thing than do the right thing, which might not look like a thing. And that, fundamentally, is one of the laws of physics that makes large organizations kind of suck — you get all this running around about stuff you can't possibly change, and the opportunity cost of it all is just exhausting.

And this is why I also say that once you understand systems thinking — really get it — too early in your career, it's a curse. Because at the beginning you're just running around, like, "I can do this." You don't know any better, so you run around and you get promoted into larger and larger positions. But once you see it, you're like, "I just can't bring myself to care." You're all running around in this thing you have no direct leverage over, with no plausible plan, and the social dynamic that's actually happening is boring in a very particular way. It doesn't matter; it isn't causing any impact in the world that makes a better thing happen. It's just running around in circles and getting rewarded for it.

Samuel Arbesman:

So, related to that: you left the world of big organizations relatively recently and are now building a startup. Is that because you're tired of all the games and the busyness and the coordination issues and the big bold bets? Or is it that you want to build a startup that doesn't necessarily make a big bold bet but can make a large impact, and this structure is actually best positioned for that? How are you thinking about moving from the big-tech world to the startup world?

Alex Komoroske:

I'm doing it, and the reason I left was entirely because I believe the idea we're executing on is my destiny. I think I have the skill set, and my co-founders have the skill sets, necessary for it. It's a thing that I think has the potential to remake society, if it works, in a way that I'd be extremely proud of — in the same way the web had a massive positive impact on society by democratizing the power dynamics of information flows. And so to me, it's all about doing that. In fact, the idea of being a startup founder used to be insane to me. If you had told me two years ago — of course not. Of course not.

Samuel Arbesman:

But even in the face of all the things you've told me about these large organizations, that still did not appeal to you?

Alex Komoroske:

Just because large organizations are safe. The chance-

Samuel Arbesman:

That's fair.

Alex Komoroske:

Yes, I'm running around, and we're all pretending that these stupid actions people are taking have direct impact, which is wildly off from what they actually have. But you're also getting a consistent paycheck, and the chance the company completely combusts and doesn't exist anymore in the next three years is very low. There's something very nice about that secure foundation. Whereas going into a startup — the vast majority of startups will fail. And so you have a risk posture that's wildly different.

And so I thought that wouldn't appeal to me, but a lot of it was just this feeling of destiny, and also that this is the kind of thing you couldn't possibly do at a large organization. It's when you have ideas with an asymmetry — like, look, you take this thing and you flip it upside down, and wildly different things result. That's a two-ply argument. And it's very hard to make a two-ply argument in a large organization, because everyone's constantly busy and doesn't have time to think about it. You can only make a one-ply argument in an organization, and one-ply arguments are crappy; they don't do the job they're supposed to do. So sometimes not having to convince a whole bunch of people makes it easier to execute on ideas that, if they work, would have a very different, superlinear outcome.

Samuel Arbesman:

And I guess maybe the last topic to explore is generative AI and the world of modern artificial intelligence. I know you've thought a lot about the hype versus the reality. Do you feel that AI, when used well or understood properly, has the potential to make that boundary between creators and consumers more porous, or to make people feel less passive? Or could it go in the opposite direction? And how do you think about AI more broadly?

Alex Komoroske:

To me, the mental model that makes the most sense most quickly is: LLMs are magical duct tape that is principally composed of the distilled intuition of all of society. When you see it that way, I think it explains what they can do and what they can't. But LLMs and their results are messy, and it's weird. As Gordon says, they're squishy, and we don't know how to reason about squishy computers. What even is that? Computers are extremely precise — you tell them exactly what to do and they do exactly that. LLMs are not like that at all; they're way more organic. And that's almost a category error. People are trying to use them like, "I'm trying to use this duct tape to create a large piece of factory farming equipment," and then, "It's failing."

I'm like, yeah, what did you expect? You can't make that out of a ball of duct tape. But what you can do is jury-rig the crap out of anything — anybody can. That's a very different approach, and I don't think we in the tech industry know how to handle it. We're so used to these hyper-optimized businesses. The only two business models that have worked in the last decade, everyone knows: one, hyper-aggregation of consumer experiences by a small number of hyper-aggregators, and two, vertical SaaS. Those are the two business models we know how to do. They're super optimized, and they work in an environment where computers do exactly what you tell them and software is pretty expensive to create.

Well, now you can get LLMs to generate little bits of software that do the thing you want, really cheaply. It's definitely imperfect, but good enough. This is where Clay Shirky's old frame, from an essay I think from 2004, about situated software comes in. Situated software is software designed and built in a very specific, situated context. Think of spreadsheets, or little things that are hacked together. Situated software is something that anybody who didn't create it will look at and go, "That's a piece of shit. It barely works, it's ugly, it's insecure." But to the person who built it, it is perfect, because it does exactly what they need in that context.

Samuel Arbesman:

Right. They're not trying to scale it, they're trying to make it do the thing.

Alex Komoroske:

And a lot of the challenge of writing software is scaling it to unforeseen circumstances. If you don't have to do that, software's actually not that hard, especially when you've got LLMs. So I think we're going to see the cost of software fall. There's some law — you'll know what this is — that when something that used to be scarce becomes plentiful, something else becomes scarce, because scarcity is relative. Software used to be scarce: it's expensive, it's hard to write software that works reliably in all these different contexts.

Now if you say, "Ah, it's actually pretty easy," then what changes? I think the value of users' data, their sovereignty, and the preciousness of their own data will become more important. Today, software's expensive. So what am I going to do? Someone has built an aggregator, whatever — I'll just join in on that one and give it my data. If software becomes cheap, then I think you'll see the preciousness of people's data — something specific to them and important to them, that they look after and don't just spray around all over the place — become more important.

Samuel Arbesman:

This is really thought-provoking about what that future world will look like, and it also opens the potential for lots of different-

Alex Komoroske:

I think it's an enormously exciting time. That's why it felt like my destiny to go and co-found this startup. It wasn't just an idea I felt deeply morally invested in; it wasn't just a thing whose versions had been bouncing around my head and my co-founders' heads for a decade while we developed all kinds of know-how and expertise about how to execute it. It was also the world: we're in this environment where LLMs have clearly changed things, but we aren't sure yet what. And this is exactly the time you go in — when the world shifts on its axis and things that seemed impossible are briefly possible.

And so, if you're going to re-imagine: a lot of people today are assuming that AI is a sustaining innovation — that it's going to be the same basic power structures and market structures and business models, just a little bit more or a little bit different. I think that AI is a disruptive innovation, and we should act like it. If that's the case, how would we want to disrupt? What are the kinds of things we'd want to disrupt where the vast majority of society would say, "Yeah, that's what I want"?

And I think if you ask, giving people more sovereignty and agency over their own data — to collaborate with it, to use it for tasks specific to them in their context, without having to worry about which random people will have access to it or might use it in some way that harms them — is something that a lot of people can get behind. Especially if it's an open system designed specifically to not have a single powerful aggregator that's [inaudible 00:42:41] the laws of physics. So that's my hope: to do something where the time was right, we were the right team, and it was the thing I would kick myself in 10 years for not attempting. This is the time to do it, so let's do it.

Samuel Arbesman:

That's awesome. That might be the perfect place to end. I love this vision and this destiny, and that you're actually trying to do it. So thank you so much, Alex. This was fantastic.

Alex Komoroske:

Thanks for having me. It was great.