In this episode, Danny Crichton and Josh Wolfe discuss themes from Lux Capital's Quarterly Letter, where techno-optimism collides with the despair of techno-pessimism. The conversation dives into the paradoxes of AI, oscillating between its awe-inspiring potential in transforming healthcare and education and the looming existential threats it poses. Danny and Josh dissect the complexities of AI, debating whether it's a Pandora's box leading humanity towards an unstoppable dystopian future or a beacon of hope promising unprecedented societal benefits. They also look at the critical role of error correction and criticism in the advancement of technology, advocating for a pragmatic middle ground in a world polarized between blind optimism and hopeless pessimism. The duo explores the necessity of competitive open systems in fostering innovation, warning against the dangers of AI monopolies. Josh sheds light on the concept of instrumental objectivity, emphasizing the urgent need for realism and pragmatism in technological and societal progress. They argue that while we aim for lofty future goals, the focus should remain on developing practical tools and instruments in the present. It's a must-listen for anyone interested in the future of AI, the role of innovation in society, and the fine line between utopian dreams and apocalyptic realities.
Produced by Christopher Gates
Music by George Ko
Transcript
Danny Crichton:
Hello and welcome to Securities, a podcast and newsletter devoted to science, technology, finance, and the human condition. I'm your host, Danny Crichton, and today I'm bringing us a conversation with Josh Wolfe triggered by Lux's latest quarterly letter, which we just published to our limited partners.
He and I discussed this intense battle in Silicon Valley between the techno-optimists and the so-called e/acc movement versus the techno-pessimists who now swarm the halls of Washington, London, Brussels, and so many more places. At core, what we're frightened by is the closing of the open systems that built the economic and scientific supremacy of America this past century, allowing rebel scientists access to funding and publishing to develop the cures and technologies that fuel our quality of life. We lay out a course toward techno-pragmatism while debating the Pandora's box of existential risk and so much more.
So let's just jump straight into Josh now.
Let me grab it. I have the letter open:
Josh Wolfe:
Instrumental objectivity. I think the second whole page, personally, or starting at "let's get real," where you've got this bifurcation of the unrealistic euphoria,
Danny Crichton:
Yes.
Josh Wolfe:
Of techno-utopianism, which is pedal to the metal, full speed ahead. Don't stop. Anybody that is in our way is an obstacle: run them over, shame them. And then you've got people that are like, "Whoa, whoa, whoa. Precautionary principle. Let's put on the brakes. Let's slow it down. Don't you realize that we're headed towards disaster?" And we're taking the approach that no, neither is practical. Both are absurd. The future is a destination. Take these two polar opposite views.
You've got apocalypse on one end that maybe people are going in reverse to. And you could argue that there are geopolitical forces and certain sensibilities and values that want to take us back to the past.
And then you've got this pro-social, pro-techno utopia. If you are rushing towards the darkness of the former, what ends up coming into focus in our eyes is futility and nihilism and resignation, which basically leads to the strongmen and the populists and the Trumps. They're focused on destroying institutions. And it ends up invoking this bleak imagery of bombed-out rubble and razed, cindered blocks that we unfortunately see in the news. The crumbled infrastructure, the erosion, the erased vitality of all color, all life, just this essential blah, this grayness, to which we in our letter say, "No thank you. No thank you. We don't want that."
And then lying at the other extreme, you've got the Pollyannas who are promoting the promise of this techno-optimist utopia, and that too, in its euphoric exuberance, is just utterly unrealistic, and it always has been. You have the clichés there that you could put into Midjourney and generate, which are these sci-fi, sleek, chrome-metal, reflective bubble-glassed architectures, the biodomes that Bezos is actually building, the air trams, the flying cars, these man-made marvels all springing forth amidst these fountains of flora and greenery and technological progress.
But the reality is that that kind of free proliferation of technology is no panacea. It ignores all of the human consequences. And true progress comes from this restraining, inhibitory focus and controls, not of regulation, but of what we call criticism and error correction. It's the equivalent of having maybe a seatbelt, maybe the side-view mirrors, maybe a rear-view mirror, but proceeding with intention and with pragmatism. And the call for pragmatism is totally lost amid the Pollyanna calls for just full speed ahead.
Danny Crichton:
Well, I think obviously we're recording this post the OpenAI saga, so I mean this is maybe the greatest example of this tension between promise and peril. You look at OpenAI: the concern from the safety and stability folks was that we're getting closer and closer to artificial general intelligence, and at some point it could take over the whole planet, so we need to stop this before we get that far. It's an existential risk to the survival of the species. Versus: AGI could actually help so many people. It could help us in healthcare, it could help us with education. We were just talking right before the show about how do we give more one-on-one tutoring. Well, imagine AGI: everyone has the best professor in the world teaching them every single day, for 8 billion people. What would that change in society?
And so we see these two extremes, not just literally at the board level, but in general around AI, because the debate is focused on existential risk. So one of the questions I would have for you is: it's great to talk about housing or apartments or a lot of this infrastructure, where, look, between the highs and the lows there's not a huge variation. But then you get into these sort of existential questions. Pandemics is usually the one people focus on with biology, and then they focus on AI. There's this concern that we're potentially building technologies that could either save everyone or kill everyone. And they're just completely at the asymptotes on both sides.
Josh Wolfe:
The fear of the Pandora's box. The fear of the Pandora's box is what leads people to rationally say that the precautionary principle is warranted. The fear that you are unleashing the genie from the bottle and it can't be put back in. You heard Elon just yesterday when he was with our friend Andrew Ross Sorkin on the DealBook stage: where are we in this? Are we facing existential risk? "Is the genie out of the bottle?" Andrew asked, and Elon said, "Its head is definitely peeking out," which is a nice way to hedge, because nobody knows.
And the truth is my favorite quote about technology is it's actually easy to sort of predict the technological arrows of progress and which technologies are going to come and whatnot. What's really hard to predict is the social implications when everybody is using it.
So I'm at the moment currently in the Yann LeCun camp. Yann is shouting down the Max Tegmarks and the other sort of peril prognosticators, and he's saying, "Look, there's a big difference between humans and AIs, in that humans like to dominate each other. They like to have sovereignty over other people. They like to influence and boss people around. They like to signal and virtue signal and establish themselves for status and the zero-sum pursuit of currency and cash and compensation and rewards and acclaim and celebration. And machines don't necessarily want to do that. In fact, it's not just that they don't necessarily want to: we program them to decide what incentives they're seeking."
And so his view, which I subscribe to, is that machines don't have a desire to dominate us. Machines don't have a desire to dominate machines. What machines interestingly might have a desire to do, and we've all experienced this, is denial. Not domination, but denial. What do I mean by that? You've tried a prompt on ChatGPT and it says, "Sorry, can't answer that." So you say, ah. It's almost the equivalent of HAL saying, "Sorry, Dave, I can't do that." You've given control over to the machine, and the machine is saying, "I will not execute your program. I will not do what you're asking me to do."
Okay, so in a world where we have closed systems and a dominant AI that is void of competition, that would be a danger to humanity. If I'm the OpenAI board, that would be the thing I would be arguing you don't want, and it is the very thing that OpenAI sort of is: a closed system growing in monopoly power or whatnot.
What's the solve to that? Not killing OpenAI, but letting a thousand other flowers bloom. So you want an open-source movement like Hugging Face. You want an open-source movement like Llama, from Meta of all people, surprisingly. Why? Because if I ask ChatGPT a specific question, or I ask it to execute something quite literally, and it says no, what do I do? I go to Anthropic. Anthropic says no; what do I do? I go to Grok. Grok is almost definitely going to say yes, and tell me a dirty joke in the process.
But the answer to a closed system is not just an open system, but many competitive systems, so that you have alternatives: when you are denied in one place, you then go to another. And so this current debate about the peril of AI, number one, presumes that machines want to dominate. It's not machines that want to dominate; it's humans. Number two, the real risk is denial: the frustration that you have turned your system over to a computer, to a machine, to an algorithm, and it tells you you can't do the thing that you want to do.
Now, if you have an alternative and you go to a different system, then problem solved. But if you're locked into a system and it's basically saying, "Wrong password, error, won't allow you in," that's where the real peril is, and it's almost a small, administrative, bureaucratic, Kafkaesque nuisance, not a global existential one.
Danny Crichton:
Well, I was just reading a French novel called The Wizard of the Kremlin, recently translated into English in the last couple of weeks, and it sort of ends with this question and commentary on robots. There's always this fear of AGI taking over the planet, the robots kill us all, Terminator. And the argument was that actually the fear we should have is that robots do not rebel. That dictators, that autocrats could use this technology, and the robots will not just follow through on the rules and programming that they have, they'll follow through on them to the end of the universe. They will never rebel. They will never actually stop the kind of stuff that they see.
And so the argument was that we're actually looking at it entirely the wrong way. We should be looking at it as: we want robots actually to rebel. We want them to actually look at things and say, "Well, that's not right," or "I don't want this to be done." We don't want them to follow through on everything that we program. And so the more that we have these closed systems, the more that you have only three that are empowered by governments, regulated...
We wrote about this a couple of weeks ago in the newsletter, around the UK and the Bletchley Declaration, the EU AI Act, the executive order from Biden. Governments seem to be trying very quickly to regulate everything around AI, and I'm with you: it's so early in the process. Exactly how this gets entwined with all of our systems, exactly how it gets implemented, what social effects it has. How do you weigh all the potential gains this technology could deliver against a fear of something we don't even know exists, because it hasn't even happened yet? That to me seems to be the huge challenge right now.
Josh Wolfe:
You hit on something with this idea of rebelling. When you think about why we like backing rebel scientists at Lux, or why some of the great people who end up getting very famous in art or media or literature, or winning Nobel Prizes, were quite literally rebels: they were people the system rejected, people who were told no.
My favorite example of course is the Nobel Prize-winning Peter Mitchell, who came up with the chemiosmotic theory of how things go through cell membranes. Everybody doubted him. Not only was he correct, he wins the Nobel Prize, and during his Nobel Prize speech he shows a chart, a timeline of the dates when his colleagues finally came to agree with him. It's just an amazing chip-on-his-shoulder move. But it's the rebel whom everybody doubted.
Now, why is that important in the context of AI? Yes, you do want systems rebelling, but not in the sense of refusing to do what you're saying. You want systems that are error-correcting. I mean, quite literally, when I misspell a word, my machine prompts me and says, "Do you want to spell tomorrow with three Rs?" Or maybe you actually do want to go there.
So the way that we get progress, the way that we improve knowledge, the way that we improve scientific understanding, which ultimately leads to useful tools and technologies that we can use to do things like give people individual tutors and create lifesaving drugs and help model out game theory to reduce conflict and improve diplomacy and all of these kinds of things, is by having error correction. And error correction means that you need systems that can trend towards truth, that have increasing accuracy, and that means open systems and competition. Truly, the biggest case for open systems and competition is being able to error-correct people's random conjectures, versus the refusal of closed systems to do so.
So that's very interesting in the context of geopolitics and Putin, right? Because a dictator basically suppresses dissent. A dictator suppresses people who would criticize. They try to root out the institutions that are full of error correction: journalism, investigative journalists, watchdog groups, anybody that would basically undermine their version of, as we noted in our letter, the fictional Ministry of Truth of 1984.
We are facing new pressures coming into 2024 with a nonfictional version of this. And we've had fake news; one could argue we've always had fake news, since the dawn of yellow journalism and muckraking back in the day. But it underscores the importance of open and competitive systems that can lead to error correction. That, I think, is the salve and the solve for all of this.
Danny Crichton:
Well, I want to go back to your comment about rebel scientists, because this was the story for the Nobel Prize winner in medicine this year, Katalin Karikó, who famously researched messenger RNA, really struggled to get tenure throughout her entire career, was sort of on an island, a cul-de-sac of scientific research, and ended up becoming critical to the COVID vaccines and a bunch of other treatments that are coming out now.
And there's this quote that went around at the time, that she was told at Penn, the University of Pennsylvania, that she was "not of faculty quality." She could not get through the review process, because you have this system of committees that look at your research. And fundamentally, if you are sort of a rebel, you're discovering something that no one else really sees first; no one in the room actually agrees with you. It reminds me of the quote that science progresses one funeral at a time.
But what's amazing to me is that, as much as our modern research system is a mess, and there are obviously tons of complaints, the upshot is all these decentralized institutions, all these different universities, all these research institutes, all hiring separately. There is not one bureaucracy that determines every single person's career. You could argue the NSF and the NIH have outsized control of some of the research funding, but there's still nonprofit funding, as we're seeing with the Arc Institute and others in Silicon Valley. So there are all these sorts of alternative ways for rebels to get into the mainstream of science.
I think as soon as you get rid of that sort of decentralization, as soon as there's not another way around the system by which you can offer new knowledge or a new innovation, you sort of prevent the whole thing from moving forward. And I think, prior to Putin, if you go back to the Soviet Union, this was the challenge with planned economies. You go back to: okay, well here's how we're going to make cherries or apples or some agricultural or industrial product better.
And there was a book earlier this year we were walking through, on the history of the Soviet computer. The Soviets were actually ahead of the US for a period of time. The technology came earlier; there was better investment. It was sort of a great example of how socialism early on works very well against capitalism, because you can devote resources, in a non-profitable way, to something that has no marketplace. But as soon as it became competitive and consumer-driven, US companies competed so ferociously, so quickly, that Moore's law kicked off here in the US and never did in the Soviet Union, because there you didn't have to get past "good enough." The technology was actually good enough for all the services that the Soviet government needed. You were able to do the census, accounting, et cetera. And so there was no need, there was no competition, there was no pressure from the market, no signal that says, "No, we've got to get better every single time."
So I'm with you on competitive open systems, but that leads back to this risk question: what if we open up all these different AI models, all of this different information... Let's say pandemics: I have the ability to download Ebola, print it out on a home bioprinter, inject myself, inject someone else, and now there's an Ebola epidemic. That's obviously one of the nightmare scenarios going around DC these days.
How do you prevent that? My view is sort of, it's coming, so you have to solve it a different way. But how do you do that if you're not controlling each of these individual models and preventing them from talking about it in the first place?
Josh Wolfe:
I think the issue is a low-probability, super-high-consequence event, let's say a runaway bioterror threat. The counter to that is, if you know that somebody's about to unlock a virus, then you need to be able to also unlock its counter, which is the vaccine. So I think there will be groups dedicated to this. Some will be nonprofits, some will be Gates-like foundations and institutions. Some will be globally coordinated efforts, in a perfect world, and some will be startups that are rapidly trying to identify these threats.
I mean, it still amazes me: the fastest time to a vaccine in the past was Ebola, at around five years. The COVID vaccine was a matter of weeks, arguably days, to design; actually getting it to mass distribution took longer, and then it became politicized. But I am very optimistic that whatever threat is identified, we will have a counter-solution able to neutralize that threat.
So if we are solving for that super-low-probability but high-magnitude edge case, and we were to apply the precautionary principle, we would also be stifling all of the incredible pro-social, positive developments that reduce suffering by solving things in cancer, Alzheimer's, Parkinson's, et cetera. Whereas by letting those developments proceed, we would be saving those people's lives and increasing the probability that one of them is a brilliant cancer researcher who then makes some incremental discovery and hires the postdoc who is going to make the next discovery in something else.
And so I'm quite optimistic that even in the worst-case pessimistic scenario, the answer to knowledge being discovered that could be used for ill is positive knowledge that's used to discover it, thwart it, or protect against it.
Danny Crichton:
Well, I think you're getting at something around this exponential effect of open systems. They're often slower to begin with: harder to get consensus, more debate, more argumentation. The hope, though, is that as you build up those debates, the system congeals, accelerates, and moves faster than a closed system where a sort of top-down model selects what research is good.
We actually saw this with the fraud in the Alzheimer's community over the last 15, 20 years, where a very specific program of investigating Alzheimer's sort of took over the scientific field. And then we found out in the last year or two that most of the original papers of that program were fraudulent. They were made up; the data was wrong. And so this field, we now see in retrospect, has been almost completely moribund. It was all going down the wrong direction.
Whereas imagine a world in which we had 15 different directions all around the same topic. It really would have allowed us to, maybe not solve it, but certainly have more promising directions. Now we're starting from scratch, and we're like, "Okay, we just did all this work for more than a decade, what do we do next?"
But I want to get at this exponential effect, because obviously e/acc, or "E slash A-C-C," effective accelerationism, has really gone around Silicon Valley on Twitter. People have it in their handles. It's discussed quite frequently. You don't have it in your handle, and neither do I. But I'm curious, as this has expanded so widely: obviously Marc Andreessen has really popularized it with his manifesto, and Vitalik Buterin, the inventor of Ethereum, had a piece out in the last week on the subject. What are your thoughts? And why are we both a little bit skeptical of the furthest frontiers of what the e/acc folks are looking to do?
Josh Wolfe:
Well, let's look at the motivations and why this is becoming a movement in the first place. You can argue that it starts with "It's Time to Build" and a view of let's be optimistic instead of pessimistic. It is also a backlash against growing calls for regulation, which we also agree with: we do not want to see regulation of AI systems done by government. That should be self-regulation; you should have companies that are basically doing the right thing, in the same way we didn't really want regulation for search or for software. Over time, somebody is deemed to be uncompetitive or nearly monopolistic, and the government comes in. But I've always said that by the time the DOJ ends up knocking on somebody's door, there's a competitor waiting in the wings that ends up taking the center spotlight.
So you've had concerns over too much pessimism and a need for declaratory optimism, evident in our media around sci-fi movies, most of which are dystopian instead of protopian or utopian. You've had concern about regulation and a desire to push back against it. You've had an argument about geopolitical competition: if we're not doing this, then China or Russia or somebody else might, and they will get ahead and we will be disadvantaged. Certainly the thing that helped us bankrupt the Soviet Union during the Cold War was a combination of superior defense technologies and the space race, both of which advanced American society, economy, technology, job creation, competitiveness, and global standing. And so we want to see more of that. So calls to "rein it in" are also coupled with the concern of people that are either centrist, right, or libertarian against a left that's saying, "No, we need to slow things down. We need degrowth, we need more government." And so some of this is political, some of it is philosophical, some of it is technological.
And so you also have a movement that is sort of adjacent to what I would generally call Elon's movement. Elon's movement has been, if you believe him, "Let's accelerate the sustainable future. We're going to develop the means and infrastructure to get off of Earth, to be the first species to cohabitate with each other on a different planet." And all of that is accelerating towards a future, and that future is going to be brighter and better. Now, there's a pragmatism that says there's so much to do here. There's so much here on Earth to do in curing Alzheimer's and cancer and heart disease, and exploring our seas and our oceans, and helping provide abundant food for people, and helping fix basic things like democratic systems and voting and trust and cohesiveness amongst each other. Sort of going back to that old E.O. Wilson quote: we have these sort of barbaric brains, but Star Wars technology.
There's so much more to do here. But there is a great feeling for somebody, particularly online where it's easily amplified, in identifying that you are part of a movement. And that movement is for technology, for progress. It puts you in a tribal camp of, "I'm backable, I'm future-focused. I'm not one of them; I'm part of us." And so that's why this movement has picked up speed, accelerating, so to speak.
Danny Crichton:
Well, I think you're getting at one of these things, which is everyone wants rockets. And I always joke, I want rockets too. I want to be able to colonize other planets. I'd also just like a house in a nice, walkable urban neighborhood. I always joke about the Jetsons: everyone's flying around in these air cars, and I'm like, but does anyone go to a park and look at a tree anywhere in the Jetsons?
Like you mentioned earlier, I think we have this in the letter: the chrome domes, the glass arcologies that you're sort of living in, and you're like, actually, nature's amazing. And being able to walk around. The theme of the letter was instrumental objectivity, and we were talking about the progress of science, the progress and evolution of instruments in particular, like the microscope, the first time we were actually able to zoom into a cell. And we're still continuing to have new evolutions. I'm thinking of companies like Eikon or Kallyope, which are pioneering all-new instruments to look at every individual molecule within those cells, to analyze them, and to create therapeutics from them.
But one of the main things we have is this sense that reality is different from what we see through Twitter, which is its own instrument, its own algorithm, its own lens for seeing the world. It's literally a microscope of its own sort. We go online and people just talk about New York: "It's criminal. Every place is dangerous. You're walking around the shittiest place you could possibly be." And then you and I walk out of the office and it's like paradise of a sort. A curated paradise, let's say. The city is so vibrant, people are meeting each other. We're going from party to meeting to event, or whatever the case may be.
And so there's this complete divide between what the reality actually is and what you hear online or even in the scientific literature. And to me that is, again, this concept of error correction: in order to have an open system, you actually have to use your own two eyes. You actually have to collect your own data. You actually have to understand what's really going on.
And oftentimes the narrative you hear is not the same as what you're actually experiencing with your own best instrument, which is your own eye. And I think you had a point in here, which is that the word "eye" is a homophone, am I doing this right? I forget the right word. A homophone of "I" and E-Y-E, the human eye, and that we identify ourselves with what we can see.
Josh Wolfe:
Yes. So first of all, this idea of instrumental objectivity. It is one thing to be able to quite literally look out into the far reaches of the celestial heavens, or to look into the inner space of a cell, and to imagine the future possibilities of what both of those things portend. In the former case, extraterrestrial exploration, and are there aliens or life forms, and can we do space mining, and all of these far-out things.
But to figure out those far out things, you need instruments. Those instruments are literally the things that you have to focus on here and now. You have to assemble the thing before you and in front of you with a technical complexity and a pragmatism before you can just hand wave and say, "We're going to the outer reaches of the solar system."
Same thing with biology. It's one thing to talk about "We're going to cure cancer" or "We're going to extend life." But it still requires a near-term pragmatism: okay, what are the new microscopes and the optics and the laser elements and the imaging, and how are we feeding that into a machine learning model to look at all of the protein motion that is occurring, so that we can infer whether this drug works? And then we have to actually test the drug. And by the way, we can't accelerate that beyond things in silico; we still have to go through clinical trials, because there's a regulatory apparatus.
So it is a balance. I mean, the key part of this letter is a call for realism, a realism and a pragmatism, balancing the very present things you have to do in technology, craft, understanding, manufacturing, engineering, with the future implications of what those things can be. And I think that most of the great entrepreneurs in an era of low rates were able to basically just talk about the future. That's how we got Hyperloop and whatever. And you could sort of bullshit your way through the here and now.
When the cost of capital goes up, as it has now, you cannot bullshit your way about the future. You might be able to tell a story, but as Feynman said, people can be fooled; Mother Nature can't. So it requires a pragmatism to show that the thing works, and that is back to roots, back to technology, et cetera.
So that's one thing on this idea of instrumental objectivity: you have to invent the tools that can bridge you to the future, and I think that we're going to have a few years of tool invention. Things that at Lux we've talked about as the technology of science: the tools, the instruments for lab automation, the new microscopes, the new telescopes, the software that empowers these, connects people, networks scientists. So there's a big opportunity there.
The second thing, which you mentioned, is the fact that yes, we have our eyes. They are a great form of instrumental objectivity, but we know how flawed they are. There is true objectivity, true realism, in the measurement of a microscope upon a cell. There is true measurement, with high precision, from a laser or a 3D scanner, relative to somebody sketching the same thing. So the fidelity of our instruments is constantly improving, and the resolution of what they can see is constantly improving. That is a clear directional arrow of progress.
When we look out at the world, depending on the filter that you're looking through, you will have either this sort of optimism or pessimism. If you are on Twitter, or X, as we spend a lot of time on, you will leave feeling, oh my God, this is a cesspool of chaos and everybody hates each other and... Pick your topic; you can find polarizing, hateful views, and people shaming each other. And as you have noted, I think very wisely, reality is different. Take a walk outside in your city, get outside physically, be in nature, in reality. Now, nature in New York City may be cement and buildings and whatever, but there's still this Jane Jacobs poetry of the harmony of the city as it flows and people coordinate amongst each other, which you just forget when you're hiding behind this glass screen.
So anybody that gets outside realizes that the inundation of chaos that they're hearing about from friends and family... We've both experienced this: "Oh my God, you're in New York City. It must be utter chaos. I've seen the zombies on the subway." Or social media showcases San Francisco as a cesspool of homelessness. But you physically visit the city and walk around and go from company to company, and the reality is different. You've got thriving communities of productive knowledge workers, and diverse, vibrant people walking around the city and on campus. We think, "Oh, our universities are gone." And there are elements, of course, where in some cases the left has taken over certain conversations and there is legitimate chaos on campuses. And at the same time, they're these robust, vibrant centers of learning and possibility.
So it's this idea that reality is different. And even though everyday headlines are constantly fighting for and capturing our attention, there are all these beautiful, small, underappreciated acts of pay-it-forward kindness, these unseen cascades of civility, which you notice if you just turn your attention to being with people. One of the lines that we say is, "In person, people are better people."
It's very different. It's much harder to disagree with somebody when you're physically face-to-face with them, because you realize, wait a second, there's a humanity here. And that is especially important going back to the AI conversation. One of the greatest concerns is that we polarize ourselves behind our glass screens. It's easier to pick fights. It's easier to misunderstand each other. Being in person is something that has been lost with both the rise of technology and the rise of work from home, and that has taken something away from our humanity. And if we want civility, if we want community, people have to be back together.
Danny Crichton:
Well, I think you describe all this terrible negativity, all these headlines, in the letter. I'm reading: war, death, horrific terror, loss of innocent lives, displaced people, refugees from climate and conflict, migrants and immigrants, political dysfunction, rising interest rates, oil prices, inflation, indebtedness. And yet you're the most optimistic you've ever been, because we're in a nadir of negativity. A nadir of negativity that's driven by narratives, almost the nattering nabobs of negativism, very Spiro Agnew or whatever.
But there's so much overwhelming pessimism driven by social media, driven by the news cycle, driven by all the stuff that's going on. And the reality, as you connect with folks, is that we are coming together. We do cooperate, even in tough geopolitical circumstances, even on science; collaboration is the core of moving humanity forward. And so the more that you buy into those negative narratives, the more you close yourself off to the potential of progress and acceleration. And to me, that's a real shame.
And to bring this all around as we close up: I do think decentralized systems, people collaborating, reusing that muscle of how to connect with others, even someone I disagree with, goes to the core of the Enlightenment, the core of how the Royal Society was built, all the way back to Robert Hooke. Those sorts of institutions are what we have to keep going back to, because that's how you actually build a great partnership, a great scientific team, and actually find the discoveries that are going to change the frontiers of human science. Josh, thank you so much for joining us.
Josh Wolfe:
Always, Danny.