Riskgaming

The nightmare specter of designer bioweapons and the people trying to stop them

Description

Ever since the invention of CRISPR technology about a decade ago, biologists have gained increasing power to discover new DNA sequences, cut and mash them up, and then print them in ever larger volumes through biomanufacturers. That freedom and openness has opened the door to a long-awaited Century of Bio, with scientists bullish on the potential to discover cures for long-resistant diseases.

On the tails side of the coin, though, there are fears that the open nature of these tools affords a rogue scientist the means of inventing and distributing well-known or completely novel pathogens that could threaten the lives of millions. It may sound like the premise of a bad Hollywood B-movie, but it's a top security threat that experts at the White House and in the intelligence and defense communities are rapidly trying to solve.

Today, I have Kevin Flyangolts of Aclid joining us. Aclid is using artificial intelligence to identify what new sequences of DNA might do, scaling up screening efforts that could give biomanufacturers the ability to verify their customers' intentions in a more thoughtful and comprehensive way.

Kevin and host Danny Crichton talk about the recent history of bio, the rise of biohacking, the differences between bioweapons, cyberweapons and financial crimes, why we need new approaches to biosecurity, whether executive, legislative or industry approaches might work best, and whether designer bioweapons are as dangerous as many are making them out to be.

Finally, a note: in line with the launch of our first riskgaming scenario on the Lux Capital website, Hampton at the Cross-Roads, we have officially condensed the "Securities" podcast name into just "Riskgaming," which I think captures in one word the risks and opportunities that come from science, technology, finance and the human condition. Same show, more focused name and a great future.

Produced by Christopher Gates

Music by George Ko

Transcript

This is a human-generated transcript; however, it has not been verified for accuracy.

Danny Crichton:
If I had to lay out the lay of the land right now on biosecurity: biosecurity is like the great ominous threat of DC today. When I think about the CCP committee in the House, they just hosted a hearing on biosecurity. There's a Biosecurity Commission underway right now, doing a bunch of work, figuring this out. There is a specter of this fear, partly motivated by COVID-19 and what took place, lab leak or not. This general background of almost like a Richard Preston novel, like Contagion or The Hot Zone: we're suddenly going to have this big leak, and yet we have no tools to use to fight back against bioterrorism, no biosecurity, no way to know what's going on. And so there's just this palpable fear going on in the industry. And so, Kevin, I'm just curious as we get going: how did you get into this field? Because for a lot of folks it's effective altruism, but my understanding is, it was not EA for you.

Kevin Flyangolts:
It was not. No. I actually found out about the effective altruism community afterwards, and it was mostly-

Danny Crichton:
Oh.

Kevin Flyangolts:
... from people telling me that, "How did you get into this? Are you part of the effective altruism community?" And a couple of times it got mentioned, and then I realized, "Oh, this is actually a great community for me to recruit, find people that want to work on this stuff."

Danny Crichton:
Right.

Kevin Flyangolts:
But no, I actually was just interested in biotech, wanted to work on something totally new. I was in the tech industry before, was an engineer, had a lot of experience in product and software, but didn't have anything to do with biotech. I was in blockchain for a little bit, then did consumer apps, was in FinTech, but never anything with bio or health. And because I wanted to do something new, I was looking for where I could have the biggest impact. And it just so happened that biotech was going through this sort of wave, where software was becoming really meaningful, there was a ton of data being generated. And not a whole ton of software engineers. Basically, what was going on with FinTech and health tech over the last decade or so, I felt like was happening in bio in 2020, 2021. And so around that time also, the pandemic obviously made people pay attention-

Danny Crichton:
Right, right, of course.

Kevin Flyangolts:
... and it got a lot of people super interested. And I think really shortly after that, tech bio got coined as a term.

Danny Crichton:
So let's talk about the explosion of bio, because I think that is the predicate before we understand the biosecurity world. Up until the last decade, let's call it, having access to the ability to produce novel pathogens, DNA synthesis, whatever, was limited to a very small group of people. The equipment was expensive; it was major labs, it was the National Institutes of Health in Bethesda, Maryland. It was the CDC, oftentimes national government labs, and then obviously a lot of the academic research institutions.
But then over the last 10 years, we've had this kind of explosion, this Cambrian explosion where every lab is able to do this now. The cost of doing this has dropped dramatically, in the same way that the cost of putting a payload into orbit has dropped exponentially thanks to SpaceX. Companies like National Resilience and others have been able to mass produce biological agents, which means that the field has a lot more customers now, and we don't always know who those customers are. In the past, there were only a couple of them, and you would probably know who these people were, physically in the real world. But today there are tens of thousands of people who have a legitimate reason to make DNA, or other forms of biological material. So everyone in agriculture, everyone in the healthcare system, everyone in pharma, everyone in a traditional bio wet lab as you were getting at, now has access to this.
And so we've massively created a new problem, which is, how do we know who those folks are and what they're doing with those particular strips of DNA?

Kevin Flyangolts:
Totally. I think a lot of what happened in genome reading, genome sequencing essentially, is happening now on the writing side of the space, the synthesis of DNA. So I know a lot of people are familiar with the Human Genome Project, which was back in the '90s, when we figured out how to sequence the entire human genome. It took millions of dollars to do it, took many, many years. And then today we have a hundred-dollar genome, a thousand-dollar genome, where you can get your entire DNA sequenced and find all the different parts of your genes and the different mutations that you have.
And something similar is happening within DNA writing, too. We figured out how to read, so we had this whole corpus of information to build from. And now, very recently, we figured out how to write efficiently, so we have this whole corpus of experiments that we can run: how can we mutate something to make something useful? Like the materials, or chemicals, or foods that you mentioned. It's not quite where it is with sequencing. Today, sequencing is broadly useful to just the individual, 23andMe, other companies that are doing it for pathogen screening within clinical samples, or looking at cancer.
I think we'll get there with writing. We're already seeing some of that, where you have individuals that are biohacking, so to speak, and they're making something that maybe nobody else finds useful, or that only a very small niche community is interested in. And they're synthesizing DNA because they're doing their own experiments. There are a lot of exciting things happening, and as you said, that means there are a lot of actors where you just don't know who your customer is, and you don't know what they're doing it for. And it doesn't make it any easier that what they're essentially sending you is just binary code. You're getting some information about what they want to chemically synthesize, but what the function or the purpose of that sequence is, is unknown. The manufacturers themselves don't know it. They really just have to figure it out based on what you're providing them, and it's not that much: your shipping address, your name, and the sequence you want to synthesize.

Danny Crichton:
When we enter into the biosecurity world, we're leaving a world in which security is focused on a physical object. So if we look at nuclear weapons, only a couple of states can produce them. It takes a lot of parts, it takes a lot of advanced manufacturing, to produce a warhead capable of producing a nuclear cloud. You get into bio, you get copies.
And the closest corollary, to my view, is something like ghost guns in 3D printing. Where, look, a printer doesn't really know what you're printing, or what the design is for. If I have a plastic printer, and I give it the schematic for a gun, or the individual parts of a gun, I can't actually manufacture the whole thing at once, but I can do the individual parts. I'm not going to prevent you from making a tube, and so it's very hard to actually piece this together. So we're seeing these emerging threats. Ghost guns would be an example, and then in bio, because we've gotten the cost of producing copies of code so low, we can do some of the same thing.
And so the nightmare scenario is, you look at something like COVID-19. I was just looking this up: 30,000 base pairs in the COVID genome, and the thing is, 30,000 letters is roughly 6,000 words, about a couple dozen pages of a printed book. And that can just be in an email. You could send that over, and once I have that code, I have the ability to actually mass produce it at scale, by sending it in to a manufacturer. And that to me is where Aclid comes in, which is: how do I know, if you're making this, what it does? Let's say I invent something new, that no one's ever seen before. You know COVID, so you can easily screen out COVID or Ebola, or some of these well-known diseases that have been sequenced and are available in public databases. But what happens when someone changes that? How do you identify that? I'm curious, how do you think about figuring out that functionality, as part of the company you're building today?

Kevin Flyangolts:
Somewhat similar to how you look at cybersecurity threats. You identify the exploit, and then you look for viruses with certain signatures, you look for malware that's exploiting something within the computer system. And so we're doing something similar for biology, where we're looking at specific domains, or specific binding sites that have similarity to things like ricin, or some dangerous virus. A lot of it is right now based on, "I know something that's dangerous, I know something that's harmful. How similar is the thing that you're trying to make, to that thing that's harmful?"
We do a lot of work and build a lot of infrastructure to make that search really fast, but just to give you some context, there are hundreds of millions of sequences at this point that we're searching through. And that's just the sequences that we know of; that's not all the sequences that are out there. There are thousands of species out there, millions, that we don't know of, that we haven't sequenced before and that aren't in our database. Some of which are potentially harmful... A lot of which are probably harmful. And then we do some structural search, functional search. We annotate the data that we have, we curate it, and we're able to compare not just on the sequence level, "Does this A and this A, and this T and this T, match?", the base pairs of DNA, but also: are there functional characteristics or structural characteristics of this sequence that look similar to something that's pathogenic?
And this is constantly evolving. Over the last few months, LLMs in bio have become much, much more prevalent, and so we're able to use that technology to help address some of the newer sequences, more novel and potentially dangerous. How can we take what these models have learned, and use that as a way to detect function, or detect characteristics of pathogenicity or virulence?
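The screening idea described above, "how similar is the thing you're trying to make to something known-harmful," can be sketched in miniature. This is a toy illustration only: the k-mer/Jaccard comparison, the eight-letter window, and the tiny inline "database" are all invented for the example; production screening relies on alignment tools, curated databases, and the structural and functional analysis Kevin describes.

```python
# Toy sequence-similarity screen: compare a query DNA sequence to a small
# "database" of known-harmful sequences by k-mer overlap (Jaccard similarity).
# Real pipelines use far more sophisticated methods; this shows the core idea:
# "how similar is what you ordered to something we know is dangerous?"

def kmers(seq: str, k: int = 8) -> set[str]:
    """All overlapping substrings of length k."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def similarity(query: str, reference: str, k: int = 8) -> float:
    """Jaccard similarity between the k-mer sets of two sequences."""
    q, r = kmers(query, k), kmers(reference, k)
    return len(q & r) / len(q | r) if q | r else 0.0

def screen(query: str, database: dict[str, str], threshold: float = 0.5):
    """Return (name, score) for every reference the query resembles."""
    hits = [(name, similarity(query, ref)) for name, ref in database.items()]
    return [(name, s) for name, s in hits if s >= threshold]
```

An exact copy of a flagged reference scores 1.0 and is caught; an unrelated sequence scores near zero and passes. The hard cases, as the conversation notes, are sequences that are functionally similar without being letter-for-letter similar.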

Danny Crichton:
I think Google DeepMind, with their launch of AlphaFold, was sort of the first big headliner. And obviously, there have been a dozen-plus since, including, just as we're recording this, Evo and a couple of other major bio models launching, where for the first time ever, we're able to go through those millions of sequences. It used to be manual, you used to do this with actual human labor, and just like we're doing this in text and audio and video, in the bio world we can do the same thing. Because we are working with text. You're working with ATCG, the building blocks of DNA, or uracil in the context of RNA, and so you're able to actually do a lot of work.
I'm curious though, we're trying to mostly use these models to create therapeutics. We're trying to find proteins, we're trying to find other forms, whether it's particular binding sites, that can solve disease. Is it as hard to identify things that are harmful, because you're sort of doing the same thing, or is it easier in terms of that stack? Because it's easier to see something that's a negative to the body, as opposed to something that actually solves the problem?

Kevin Flyangolts:
It's a little bit of a rat race. There's sort of both sides, trying to edge over the other. Right now, we don't really have too much going on the pathogen side, like somebody trying to make more dangerous viruses. At least, it's not in the public eye. It's not like in cyber, where we know there's these group of hackers that are always trying to make exploits. There's probably someone, but I don't know who they are, and it's definitely not a large community of people.
There are, though, a lot of people more on the white-hat hacking side, trying to figure out, "How do you break systems, so that we can fix them?" That's an interesting paradigm, similar to what you mentioned with printers, where you can give it a design and it doesn't really know what you're doing. LLMs are the same way. You can tell it, "I want to make this binding site in this domain that's similar to ricin," but the LLM has no opinion about whether you're making a toxin that's going to harm someone, or whether you're making a new therapeutic against that toxin.
And so in a lot of ways, it's pretty easy to take the existing therapeutic for COVID, for example, and just flip it from, "I want to make something that's going to cure COVID," to, "I want to make something that's actually going to have a lot of toxicity." And this is actually built into a lot of AI models already, because when you're doing drug discovery, a large part of it is toxicity: how much the molecule you're going to put into somebody's body is also going to affect that body, not just cure or treat whatever target you're aiming at. And it's part of all phase one clinical trials. "What is your toxicity?" Not even thinking about curing or treating, just, "What is the toxicity to the body if I take it, and does it cause any harm?"
And so, many of these models are already built to optimize for less toxicity. That's just a flip of a switch in your objective function, to make that model optimize for the exact opposite. And there was actually a paper on this in Nature that made some big waves, maybe a year or two ago, where the researchers did exactly that. They took popular models that were used for drug discovery and flipped a switch: instead of optimizing for less toxicity, optimize for more toxicity. And they supposedly found new nerve agents that had not been discovered before.
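The "flip of a switch in your objective function" point can be illustrated with a toy generate-and-score loop. Everything here is made up for illustration: the fake toxicity score stands in for a learned predictor, and no real discovery model is this simple. The point is only that the same search code serves both objectives, differing by a single sign.

```python
# Toy illustration of flipping an optimization objective: a selection loop
# that normally picks the candidate with the LOWEST predicted toxicity can
# be inverted by negating the score it minimizes.

def predicted_toxicity(candidate: str) -> float:
    # Stand-in for a learned toxicity predictor: a deterministic fake
    # score in [0, 1) derived from the candidate's characters.
    return sum(ord(c) for c in candidate) % 100 / 100

def best_candidate(candidates: list[str], maximize_toxicity: bool = False) -> str:
    # Normal drug-discovery setting: pick the LEAST toxic candidate.
    # Flipping one boolean inverts the search toward the MOST toxic one.
    sign = -1 if maximize_toxicity else 1
    return min(candidates, key=lambda c: sign * predicted_toxicity(c))
```

The dual-use asymmetry the paper demonstrated lives entirely in that one `sign` variable: the model, the data, and the search loop are otherwise identical.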

Danny Crichton:
It's interesting you bring up the paper, because I wrote a strong rejoinder to it. It was really focused on chemical weapons, which is a little bit different from biological weapons, although I think the principle holds the same, which is that Mother Nature has already produced amazing biological weapons. Ebola kills a majority of folks, although that number has actually gone down as we've gotten better with the protocols, similar to COVID; I think it was like 70% 20 years ago, and now it's actually under 50%, based on some of the stuff that I saw in the last year. But nonetheless, the paper was talking about how, compared to VX nerve gas, we can actually create even better nerve agents that kill faster and easier, et cetera. And I said, "Well, look, how much VX nerve gas do you need to kill someone?" And it's like, "Well, it's like two sand grains' worth of VX nerve gas, and now we have a new design that kills you with one sand grain." And I was like, "Is this really a fundamental qualitative difference?"
And that piece actually got picked up quite a lot; I gave a briefing to the Senate after they reached out. And I was just like, "Look, we have to be very careful in structuring this, because in many cases we already have the worst possible thing." Now, what gets interesting here, and I think this is where Aclid potentially could be a real solution, is that it's one thing to have a different nerve agent that's just a little bit better, because that doesn't really change anything. But what if you can construct it more easily, or in a way that's not visible? So, for instance, with VX, the signatures on a building used to create VX nerve gas are visible from space. It's actually very hard to hide, which is why we know it was used in Syria, and a couple of other countries over the years. We can actually identify, from satellites, based on the kind of constituent components in the lab, that that is what you're trying to do.
But could you potentially use AI to create different pathways to the same agent, pathways that are more hidden, or that use more off-the-shelf components and fewer telltale exterior architectural elements, so that it's harder to spot? That to me is where it gets much more complicated, and that is the more interesting part of the story: the different routes to the same end goal. And to me, where Aclid comes in is really saying, "Look, there are millions of sequences. Maybe they all work on the same binding sites, maybe they all have the same toxicity." But if you don't know that they are the same in the first place, then this new sequence I found that no one knows could get through a biomanufacturer's process. And we're going to be able to catch it, we're going to be able to filter that, almost like a spam filter. You called it antivirus, which I kind of find ironic, given the context. True antivirus software. But you're going to be able to make that comparison in a way that just looking at the letters does not.

Kevin Flyangolts:
We don't even know what we're looking for, in some cases. It's not like you necessarily know to look for X, Y or Z. Sometimes you're looking for certain patterns, or you're looking for some indications. And then the problem becomes more of, "Okay, I see these indications, I see that there's potentially a pattern of misuse. What do I do with that?" And we're helping with that piece, as well. Talking to customers, and when I say customers, I mean end users that are purchasing DNA. Talking to them, getting information about what it is that they're trying to build. Do they have a real facility? Do they have a biosafety form? Have they thought about this from a security perspective? And just trying to get a sense of what it is that they're doing. So that pattern, while it may look like misuse, might actually be totally legitimate research.
And I think a good analogy here is finance, where you have many, many transactions going through banks every single day. Many of them are flagged for potential money laundering or fraud, but many of those flags aren't actually that; they're false positives. You transfer $10,000 out of nowhere from one bank account to another; that might get flagged as potential money laundering. You set up a new account, and all of a sudden there's $100,000 in it; that might be flagged as money laundering. And so we have a similar problem here, where there are sequences being ordered that have some remote similarity to something pathogenic, or that have clear similarity, clear indication of something pathogenic. But where, in both cases, the researcher has a totally legitimate use for it.
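The transaction-flagging analogy can be sketched as a couple of threshold rules. The dollar thresholds and fields here are invented for illustration; the takeaway is that simple rules inevitably flag legitimate activity (false positives), which is why a human review step, like the customer outreach described above, has to follow.

```python
# Toy version of rule-based transaction flagging: anything crossing a
# threshold gets flagged for review. Flags are signals, not verdicts;
# resolving false positives is the follow-up investigation's job.

def flag_transfer(amount: float, account_age_days: int,
                  threshold: float = 10_000) -> list[str]:
    """Return the reasons (possibly none) a transfer gets flagged."""
    reasons = []
    if amount >= threshold:
        reasons.append("large transfer")
    if account_age_days < 30 and amount >= threshold / 10:
        reasons.append("large deposit into a new account")
    return reasons
```

A sequence-screening version of the same pattern would swap dollar amounts for similarity scores, but the review workflow around it is the same.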

Danny Crichton:
I think the analogy with know-your-customer laws, and know-your-customer software, in finance, is a good one. In which, you know, there are different layers. So in some cases, this is just word matching in FinTech KYC. My husband was selling a board game called Cuba to a friend, and in Venmo wrote, "For Cuba," and $30. And then it got flagged, and they sent a whole KYC/AML SWAT team over, like, "Oh my God, are you trying to break sanctions?" It's like, "No, we're selling a board game literally called Cuba." But the word Cuba, or I'm sure Iran, you can imagine the kind of keywords in the memo line of your checks that trigger the review.
And then there's the actual behavior, and this is where we went from kind of a linear 1.0 world to a kind of AI-driven, modeling-driven 2.0 world, where you went from, "Okay, it doesn't mention Cuba, but it mentions Fidel," and somehow that gets through because that wasn't a keyword. Now we look at patterns and we say, "Okay, well, why do you keep structuring a transaction this way? That must be something where you're cheating the system." I think we're seeing the same sort of pattern in DNA where, look, 20, 30 years ago, even today, you can go to the CDC... If you're a qualified individual with a BSL-4, a biosafety level 4, lab, you can request Ebola. You can request a serious pathogen for research purposes, in very prescribed circumstances.
And so you can go to the catalog, basically the Sears Roebuck equivalent at the CDC, and be like, "I would like Q fever, and I'm going to do an investigation in my lab, and be safe," or whatever. Now the challenge is, in our Cambrian explosion around bio, you have dozens of manufacturers who can go do this. As you're pointing out, I think it's called the infinite monkey theorem, about monkeys typing on keyboards? Given the combinatorics of DNA, there might be millions of different ways to get to the same outcome in terms of proteins, based on the actual DNA sequence that underlies them.
And so you have to go from the 1.0 world of, "We're looking for a direct match: here's the Ebola genome, make sure that doesn't get printed," to, "Well, what does Ebola actually do, as a virus? This is the site it's using, this is how it works." In the COVID case, we all know about the spike protein; the spike protein is a key aspect of COVID and how it functions. Can we identify that functionality? So even if we don't know what the exact code is, if we're able to properly fold it using artificial intelligence and machine learning, we're actually able to detect exactly what you're trying to do.
And to me, that's the evolution that's taking place with your company, and broadly in the industry. And that's important because we were just talking with the National Security Commission on Emerging Biotechnology, the NSCEB, which is driving a lot of work and had a major report out in December around protecting the industry. Because our capabilities for manufacturing have jumped so far forward, but our filtering and our safety mechanisms have not caught up to match the new level of capability.

Kevin Flyangolts:
And I think one of the issues is that virulence research has been really behind. A lot of virulence research is literally just, "Well, we see this virus, or we see this bacteria, and we're going to take one gene out and see what happens. If the virus or the bacteria becomes more pathogenic, or becomes less pathogenic, we know that gene is responsible for some type of pathogenicity. If the virus or bacteria just ceases to replicate, we know it's probably something to do with replication."
And the ability to write DNA at large has given us the chance to not just take genes and remove them, but actually take totally new genes, put them into a cell that you know is not harmful, and see: does that potentially do something bad? Does that make it harmful, all of a sudden? And then you start to get a little more information on not just, "Does this gene correspond to an increase or decrease in pathogenicity?" but, "Does this gene actually correspond to endowing or transferring pathogenicity?" Which is what we ultimately care about. There are tons of genes in Ebola and anthrax, in organisms or pathogens, that don't do anything. If you put them in another organism, or in another cell in any other context, they would have very little effect.
And so the ones that are really the most problematic, the ones that can actually transfer their harmful effects, are the ones we need to figure out. And there's a lot of work there, because it's somewhat gain-of-function research, to some extent. And gain-of-function is a very loaded term. Everything you do-

Danny Crichton:
It became very controversial, in the last year or two. Yes.

Kevin Flyangolts:
Ultimately, everything you do in synthetic biology is gain-of-function, right? Anytime you're taking a unique sequence-

Danny Crichton:
Well, don't say that. You're going to trigger every person on the edge. But it's true. Yes, yes.

Kevin Flyangolts:
And so, we have to separate what that means, but then also there's the other aspect of... Well, let's say we took some sequence that exists within Ebola, or some sequence that exists within a known pathogen, and we put that in an organism that wasn't normally pathogenic, just to see: does this sequence make this organism pathogenic? That's also gain-of-function in the potentially negative sense, of making something that didn't use to be harmful, harmful all of a sudden. But some of this research needs to be done to some extent, because we need to understand how these things work. And while these AI tools are great, and they're going to give us a lot of information that we haven't discovered yet, none of their predictions are experimentally verified. And we don't have a really good way to go from, "This is what the AI tool predicted as the structure and the function of this thing," to, "Does this actually work?"

Danny Crichton:
Let me ask you, I mean, another way to position this question. When we look at your analogy in FinTech, or financial services, a lot of the rules come from the executive branch. So OFAC, the Office of Foreign Assets Control, is one of a medley of offices in charge of tracking assets and making sure they're going to the right places; they're among those who enforce KYC and AML, Know Your Customer and Anti-Money Laundering laws. The equivalent in FinTech might still be something like FINRA, sort of an industry-run mechanism for self-regulation, where the industry realizes that it's bad to have brokers who don't know what they're doing, or brokers who are lying to you. It lowers trust in the system, and everyone kind of loses.
When you look at the bio world, do you believe that this is an industry that needs legislative action, where there's an office that is tracking everything and putting rules in place, whether that's the National Institute of Standards and Technology, the National Institutes of Health, or the CDC? There are a bunch of places where it could be located. Or do you think this is one where, with the right software, whether that's the stuff you're building or others', plus industry coordination, the major manufacturers (there's not an infinite number of them, there are a couple of major ones) could get together and say, "Look, the industry needs to work together on a set of principles to minimize harm. We're going to follow these procedures to make sure that we don't have that random leak that causes an outbreak, and all this loss of trust with the public"?
Is that viable? Which path do you think we're on, and which one do you think we'll end up with?

Kevin Flyangolts:
Right now, we're pretty much on the path of: there's a bunch of companies that have come together, realizing this is a risk both for their reputations and brands, but also for their business. If this industry were undermined, there would be less demand for gene constructs, and many businesses would be wiped out. There are billion-dollar businesses that would cease to exist, or become much, much smaller, because of that.
So one of the things that's happening today is, there are already companies that care about this. There's a consortium of companies called the IGSC, the International Gene Synthesis Consortium. They include many of the top manufacturers in the world: Twist Bioscience, IDT, even some companies in China, like BGI, are a part of it. And they see this as really a risk to their business and a risk to their reputations, and so they're doing something. But the problems that come up are, because there's no regulation, because there's no legislative body responsible for this, when they tell a customer, "Hey, we found something pathogenic, and we're worried about sending this to you. Can you give us some additional documentation?"
What that documentation is, is still up in the air. There are no standards around what authorizes someone to get access to something, and there's also no leg for many of these manufacturers to stand on to say, "We don't want to ship something to you," because there's no regulation or law that says they can't. And customers naturally get a little angry, and a little frustrated, with their vendors when they tell them no. And the vendor can't tell them anything other than, "This is just not within our policy."
So some type of authority in the space would be helpful. It could come from a regulatory body, and then be implemented or enforced throughout, by industry. You see this in cyber, where there are some regulations like HIPAA, and then there's industry led standards like ISO, and SOC 2. And the way that it gets implemented is that you have auditing firms that will validate that you are actually complying with these standards, and then there are larger companies, typically enterprises, that know they're liable or vulnerable, if they don't follow them. And so then they impose this onto all of their vendors. If there was something like that within synthetic biology, that could also work. It doesn't necessarily have to be a regulatory agency like FINRA, or the SEC, that's receiving suspicious activity reports and constantly monitoring. As long as there's some way to incentivize this across the industry, I think that could work.
And then the last piece here is that industry typically does not have the bandwidth, or even the resources, or the know-how to investigate whether somebody has a legitimate use. And so there must be some mechanism for companies to hand that off to somebody else, where they can focus on getting the requirements, getting all the information that they can, and then pass that information off to somebody else to investigate. To see patterns, to see if there's something potentially wrong here. If a researcher has ordered multiple pathogenic genes, not just one, that might raise cause for alarm. But all of this can't happen within industry alone; industry doesn't typically share that kind of information with other players. There probably has to be either a nonprofit, or some government agency, that's receiving those reports whenever something meets a certain threshold.

Danny Crichton:
I think financial crimes is a good model to think about, and the cybersecurity analogy also really works. It's about sharing information: that kind of fusion center collects as many pieces of data as possible, because any one party might only have one piece of the puzzle. As you just pointed out, if I have three different genes I'm trying to create, which together form the dangerous construct I'm after, I'll probably order them from several vendors. Those vendors don't coordinate, so there's no way for any one of them to know what I was trying to assemble. Someone out there has to actually go do this work.
Now, where I do think it's a little different between bio and cyber: in cyber, my attack, or at least my goal, can be very directed. I want to commit a crime, I want to move my money illegally across borders, whatever the case may be. That's harmful in different ways, but fundamentally, I have control over that outcome, and it's what I'm looking to do.
Where bio is different, and this is what I always try to emphasize in a lot of my conversations, is that bio is uncontrollable. Once you create a virus and put it out into the world, you can't take it back. One of the reasons the US gave up bioweapons research unilaterally is that just because you have a weapon doesn't mean you can use it. It's actually not very strategic to use in war, because the same virus that attacks an enemy can easily attack your own soldiers, unless you have a vaccine in advance. And if you have a vaccine in advance, well, then it's not a very good virus anymore for offensive purposes. So you get into this Catch-22, because it's uncontrollable.
And so the big fear I hear oftentimes in DC is this bespoke synthetic virus designed to assassinate a single individual. It's almost like a Hollywood script: can you identify a single gene, or a single site, that's unique to one person, maybe a political leader, and target it? When it comes to the broader population, there's oftentimes a lot more cooperation, because no country is incentivized to harm its own population along with others. We so often treat conflict as a zero-sum game, but bio is a little different: COVID hit China just as much as it hit the US, ultimately, and no one benefits from having a lot of their own population in hospitals, suffering from disease. That lack of control makes bio just not as useful as, say, a cyber weapon, where we do see foreign adversaries of the United States use those tools very effectively for economic warfare and more.

Kevin Flyangolts:
The one thing I'll say there is that while it's very hard to make a virus that only targets one individual, it is somewhat easy to make a toxin that only targets one individual. You can get ricin, you can get anthrax, you can get botulinum, and you can get it in a very concentrated amount and target a single individual with it. And while synthetic biology doesn't make the toxin itself new, there's no novel botulinum that's somehow more targeted, it does create a new pathway to getting it.
So in the past, if you wanted to get anthrax toxin, botulinum toxin, or ricin, you needed access to the actual sample. You needed to find anthrax in the soil, culture it, grow it. It's a really expensive process, it's pretty hard, and it's also much easier to track, because there aren't a lot of places where you can find some of these things. With DNA, that's a lot harder to track. If I can make ricin in a small lab in my kitchen, that becomes more of a problem, because now somebody has access to a lethal weapon with very, very little way to trace it back to anyone. That's where I think there's a slight difference: yes, both can do harm, but synthetic DNA also gives you a new pathway to something that used to be pretty hard to acquire.

Danny Crichton:
So you're at something like two minutes to midnight. I don't know what the bioweapons equivalent of the Bulletin of the Atomic Scientists' clock for nuclear weapons is, but you're somewhere a couple of minutes from midnight. And I tend to land where you just did: there are not a lot of fundamentally new capabilities here. When I think about the Washington, DC consensus right now and how much focus is on synthetic biology, yes, there are a lot of new capabilities, new ways of doing things. But again, Mother Nature is really, really impactful. Terrible viruses are already available today, and we could construct them before; it was just expensive. So we're lowering the bar, in some ways.
But that doesn't necessarily fundamentally change the game. I always try to keep the balance: everything didn't just change in the last three years. We should get better about it; we want more resilience, in the same way that, in cybersecurity, we've had to get better with the tools we use to protect our systems. But it's not like we shouldn't have had those systems in the first place. And the conversation does sometimes go in that direction, where people say, "We should just shut down AI/ML research around bio," which is something a policymaker actually said to me. And I thought, this is my big fear. We talk about tail risks, but I also talk about tail opportunities. And there are huge opportunities: all kinds of therapies that are coming, built on technologies and tools we didn't have access to before. So minimizing the bad while emphasizing and accentuating the positive is what we're trying to do in the bio industry. And Kevin, it sounds like you're on that formula as well.

Kevin Flyangolts:
And I 100% agree with that. Despite what we're actually working on, the main thing we want is for this industry to thrive and for there to be more innovation. There should be access to AI/ML models. We should be open-sourcing a lot of synthetic biology research. It's more about the fact that, in the last 10 years, we didn't have any safeguards. This industry operated with really just some guidelines and trust that people were doing the right thing. And I think for the most part, people were, but the capabilities have gotten better and the bar has gotten lower, as you said. It's not that we should shut everything down; it's that we need to put in the right safeguards. When the internet took off in the 1990s, there was no cybersecurity, nobody cared. You could create password databases with passwords stored in plain text. Now, nobody does that. And so-

Danny Crichton:
Ah, some people might still do that, but yes.

Kevin Flyangolts:
But I think the point is that we should have something similar here.

Danny Crichton:
Well, I'm so excited for your mission, and your future success. Kevin, of Aclid, thank you so much for joining us.

Kevin Flyangolts:
Thanks so much for having me.
