Photo by Mohammed Haneefa Nizamudeen via iStockPhoto / Getty Images
With hundreds of thousands on organ waiting lists, science fiction is ready to become science fact
When I left TechCrunch at the end of 2021, one of my last features was a deep dive into the U.S. organ procurement market. It focused on how UNOS, the government-contracted (and often heavily criticized) network for organ procurement, was using new technology to deliver organs faster to transplant centers with the hope of offering the gift of life to one fortunate patient. America’s waiting lists remain gargantuan: roughly 100,000 people are waiting for a kidney and the freedom it offers from regular dialysis visits.
Three years later, I’ve come full circle. I’m so excited to write about Lux’s investment in eGenesis, where we led a $191 million Series D round to commercialize the best option we have for procuring more organs: xenotransplantation. This is our biggest initial check in Lux’s 20-year-plus history, and it’s exactly the kind of intrepid bet that we need more of in the venture capital industry.
The first successful organ transplants were performed roughly seven decades ago, and today, they are a mainstay at many large hospital systems. These awe-inspiring feats of medical science have only been slowed by a dearth of organs, which must be procured from a human donor within a very tight timeline (known as ischemic time). Organ donations are rarely planned, and so the entire system is designed to operate in crisis mode, responding to a sudden availability and delivering that organ into a patient within the strict time limits imposed by biology.
Until now, solving this shortage has involved science fiction. Scientists have approached the problem from two directions: xenotransplantation (procuring organs from other mammals, often porcine donors) and lab-grown organs (using modern tissue engineering to construct an organ de novo). Both routes are fraught. With xenotransplantation, the human body’s immune system actively works to reject the new organ, sensing a foreign object that must be fought. Lab-grown organs have struggled both to match the functionality of natural organs and to scale without biocontamination.
The invention of CRISPR-Cas9 a decade ago paved the way for xenotransplantation to make the transition from science fiction to science fact and take the lead in this technological race. With precise gene editing, organs can be tailored for compatibility with the human immune system while limiting the growth of possible infection vectors. Scientists had made a breakthrough, but it’s one thing to evaluate an organ in the lab, and another to transplant it into a person who relies on their organs for life.
Earlier this year, eGenesis’s technology was used to transplant a kidney from a porcine donor into a 62-year-old patient at Massachusetts General Hospital, a landmark milestone in the pursuit of xenotransplantation. That xenotransplantation was the first performed through an Expanded Access study authorized by the Food and Drug Administration, a pathway that allows experimental therapies to reach patients with life-threatening conditions and no alternative treatment options.
Early experimental results have been promising, and now, the goal with this Series D funding is to scale eGenesis’s study further in the pursuit of a permanent solution to America’s kidney organ shortage. As with any new clinical therapy, the company will be assiduously collecting data to ensure robust safety and efficacy for all patients involved.
From a Riskgaming perspective, xenotransplantation brings up a whole panoply of interesting questions over the next decade, some that mirror the recent debates around GLP-1 drugs like Ozempic. How much can the cost of transplantations be reduced with a more reliable source of organs? If we don’t have to operate in a crisis mode with shifts of transplant surgeons on call in the event that an organ suddenly arrives, could we make the transplantation system vastly more efficient? The typical kidney transplantation in the United States costs $442,500 — an extraordinary sum.
Without a shortage to be worried about, can we preventatively transplant organs into healthier patients so that they are more likely to have better long-term outcomes? In order to get prioritized on an organ waiting list today, a patient must be evaluated as the most critically in need, which generally means in the worst health. How much could we improve outcomes if patients didn’t have to wait to be on the cusp of death to get the treatment they need?
Finally, how do we bring such therapies to everyone who needs them globally? Transplantation surgeries are among the most complex in medicine, and also involve extensive pre-op and post-op evaluations. The vast majority of the planet can’t offer these treatments, but with a more reliable source of organs, do we have the ability to bridge this inequality?
Those are questions I get to intellectualize in the years ahead. Right now, though, a team of brilliant scientists is working arduously to bring this important technology to a wider population. I can’t wait for them to succeed, and to prove once again that science fiction can and will become science fact.
In addition to Lux, eGenesis’s Series D had participation from existing investors ARCH Ventures, Khosla Ventures, Farallon Capital Management, Alta Partners, Fresenius Medical Care Ventures, and Leaps by Bayer as well as new investors DaVita, Eisai Innovation, NATCO Pharmaceuticals, and Parkwood Corporation.
Japan’s Sakana AI raises nine-figure Series A
While we are talking about massive fundraises, I have previously covered Tokyo-based Sakana AI, which is increasingly becoming a national AI champion for Japan, in “Sakana, Subways, and Share Buybacks” as well as a podcast. Lux led the $30 million founding seed round for the company, and now, it has raised more than $100 million from NEA and Khosla Ventures, with participation from Nvidia. Nikkei ran a front-page story on the round.
What’s new? The coolest demonstration has been something the Sakana team has dubbed the “AI Scientist,” which was profiled by Nature. The idea is to use a large language model to automate all aspects of conducting science: reading and interpreting the existing scientific literature, identifying potential experiments, executing those experiments, and then interpreting the results and writing up a final paper. These are early but inspiring days for the potential of AI to radically improve the productivity of science.
Podcast: Silicon Valley’s secret industrial spy war
Silicon Valley couldn’t be farther from the confines of Langley or Fort Meade, let alone Beijing or Moscow. Yet, the verdant foothills of suburban sprawl that encompass the Bay Area have played host to some of the most technically sophisticated espionage missions the world has ever seen. As the home of pivotal technologies from semiconductors to databases, artificial intelligence and more, no place has a greater grip on the technological edge than California — and every nation and their intelligence services want access.
Zach and I talk about Silicon Valley’s history in industrial espionage, the tricky mechanics of intercepting and disabling chip shipments to the Soviet Union, why the U.S.S.R. was so keen on learning the market dynamics of computing in America, the risks for today’s companies around insider threats, Wirecard and Jan Marsalek and finally, some thoughts on Xi Jinping and how China’s rollup of the CIA’s mainland intelligence network affected his leadership of America’s current greatest adversary.
The Orthogonal Bet: Bio Trajectories and the Importance of Long-Term Thinking
In this episode, Lux’s scientist-in-residence Sam Arbesman speaks with Adrian Tchaikovsky, the celebrated novelist of numerous science fiction and fantasy books, including his Children of Time series, The Final Architecture series and The Doors of Eden. Among many other topics, Adrian’s novels often explore evolutionary history, combining “what-if” questions with an expansive view of the possible directions biology can take, with implications for both Earth and alien life. This is particularly evident in The Doors of Eden, which examines alternate potential paths for evolution and intelligence on Earth.
Sam was interested in speaking with Adrian to learn how he thinks about evolution, how he builds the worlds in his stories, and how he envisions the far future of human civilization. They discussed a wide range of topics, including short-term versus long-term thinking, terraforming planets versus altering human biology for space, the Fermi Paradox and SETI, the logic of evolution, world-building, and even how advances in AI relate to science fiction depictions of artificial intelligence.
In shameless self-promotion, I chatted with Jason Scharf of the Austin Next podcast on a Texas-scaled sequence of topics, including media economics, xenotransplantation, Riskgaming and the future of venture capital. Be sure to check it out on Spotify and Apple.
Sam enjoyed Lev Grossman’s brand-new novel The Bright Sword: A Novel of King Arthur, in what’s being dubbed “The first major Arthurian epic of the new millennium.” Kiersten White at The New York Times enjoyed it, writing “Story lines veer from mundane to absurdly fantastical in the blink of an eye. Supernatural contests against devils and the Green Knight contrast with desperate, messy knife fights with humans. Climactic battles happen far before the end of the book, leaving the reader wondering what could be left. (Turns out, quite a bit.)”
I really enjoyed this preprint paper on arXiv from Jieyu Zheng and Markus Meister, “The Unbearable Slowness of Being.” “Human behaviors, including motor function, perception, and cognition, operate at a speed limit of 10 bit/s. At the same time, single neurons can transmit information at that same rate or faster. Furthermore, some portions of our brain, such as the peripheral sensory regions, clearly process information at least a million-fold faster. Some obvious questions arise: What sets the speed limit on human behavior? And what can that teach us about the neural mechanisms of cognition?”
Sam enjoyed Erik Hoel’s new essay in The Intrinsic Perspective on “Curious George and the case of the unconscious culture.” “But it seems to me the more fundamental shift is that, at every economic and social scale, the workings of our conscious minds play less of a role. The growing high strangeness I sense is that culture is draining of human consciousness, and therefore of sense itself.”
Finally, I enjoyed Ted Chiang’s essay in The New Yorker on “Why A.I. Isn’t Going to Make Art.” “We are entering an era where someone might use a large language model to generate a document out of a bulleted list, and send it to a person who will use a large language model to condense that document into a bulleted list. Can anyone seriously argue that this is an improvement?”
That’s it, folks. Have questions, comments, or ideas? This newsletter is sent from my email, so you can just click reply.
Forcing China’s AI researchers to strive for chip efficiency will ultimately shave America’s lead
Right now, pathbreaking AI foundation models follow an inverse Moore’s law (sometimes quipped as “Eroom’s Law”). Each new generation becomes more and more expensive to train as researchers exponentially increase the number of parameters and overall model complexity. Sam Altman of OpenAI has said that training GPT-4 cost more than $100 million, and some AI computational specialists believe the first $1 billion model is either in development now or soon will be.
As semiconductor chips rise in complexity, costs come down: transistors are packed more densely on silicon, cutting the cost per transistor during fabrication and lowering operational costs for energy and heat dissipation. AI today inverts that miracle. To increase the complexity (and therefore, hopefully, the quality) of an AI model, researchers pack in more and more parameters, each of which demands more computation for both training and usage. A 1-million-parameter model can be trained for a few dollars and run on a $15 Raspberry Pi Zero 2 W, but Google’s PaLM, with 540 billion parameters, requires full-scale data centers to operate and is estimated to have cost millions of dollars to train.
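The parameter-to-compute relationship above can be sketched with a common rule of thumb: training compute scales as roughly 6 × parameters × training tokens, and inference as roughly 2 × parameters per generated token. This is a back-of-the-envelope approximation, and the token counts below are illustrative assumptions, not figures from this piece:

```python
# Rough scaling sketch: compute cost vs. parameter count.
# Assumes the standard ~6*N*D estimate for training FLOPs (N parameters,
# D training tokens) and ~2*N FLOPs per generated token at inference.

def train_flops(params: float, tokens: float) -> float:
    """Approximate total training compute in FLOPs."""
    return 6.0 * params * tokens

def inference_flops_per_token(params: float) -> float:
    """Approximate compute per generated token in FLOPs."""
    return 2.0 * params

tiny = train_flops(1e6, 1e9)      # toy 1M-parameter model, 1B tokens (assumed)
palm = train_flops(540e9, 780e9)  # PaLM-scale: 540B params, ~780B tokens

# Compute grows with the product of parameters and data, so the PaLM-scale
# run needs about eight orders of magnitude more FLOPs than the toy model.
print(f"{palm / tiny:.1e}")
```

Because cost is driven by the product of parameters and data, shaving parameter counts (as the efficiency researchers described below have done) compounds directly into cheaper training and cheaper per-token inference.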
Admittedly, simply having more parameters isn’t a magic recipe for better AI end performance. One recalls Steve Jobs’s marketing of the so-called “Megahertz Myth” to attempt to persuade the public that headline megahertz numbers weren't the right way to judge the performance of a personal computer. Performance in most fields is a complicated problem to judge, and just adding more inputs doesn't necessarily translate into a better output.
And indeed, there is an efficiency curve underway in AI outside of the leading-edge foundation models from OpenAI and Google. Over the past two years, researchers have discovered better training techniques (as well as recipes for bundling those techniques together), developed best practices for reinforcement learning from human feedback (RLHF), and curated better training data to improve model quality even while shaving parameter counts. Far from surpassing $1 billion, training new models that are equally performant might well cost only tens or hundreds of thousands of dollars.
This AI performance envelope between dollars invested and quality of model trained is a huge area of debate for the trajectory of the field (and was the most important theme to emanate from our AI Summit). And it’s absolutely vital to understand, since where the efficiency story ends up will determine the sustained market structure of the AI industry.
If foundation models cost billions of dollars to train, all the value and leverage of AI will accrue and centralize to the big tech companies like Microsoft (through OpenAI), Google and others, which have the means and the teams to spend lavishly. But if the performance envelope reaches a significantly better dollar-to-quality ratio in the future, the whole field opens up to startups and novel experiments, while the leverage of the big tech companies would be much reduced.
The U.S. right now is parallelizing both approaches to AI. Big tech is hurling billions of dollars at the field, while startups are exploring and developing more efficient models given their relatively meager resources and limited access to Nvidia’s flagship chip, the H100. Talent — on balance — is heading, as it typically does, to big tech. Why work on efficiency when a big tech behemoth has money to burn on theoretical ideas emanating from university AI labs?
Without access to the highest-performance chips, China is limited in the work it can do on the cutting-edge frontiers of AI development. Without more chips (and in the future, the next generations of GPUs), it won’t have the competitive compute power to push the AI field to its limits like American companies. That leaves China with the only other path available, which is to follow the parallel course for improving AI through efficiency.
For those looking to prevent the decline of American economic power, this is an alarming development. Model efficiency is what will ultimately allow foundation models to be preloaded onto our devices and open up the consumer market to cheap and rapid AI interactions. Whoever builds an advantage in model efficiency will open up a range of applications that remain impractical or too expensive for the most complex AI models.
Given U.S. export controls, China is now (by assumption, and yes, it’s a big assumption) putting its entire weight behind building the AI models it can, which are focused on efficiency. That means its resources are arrayed for building the platforms to capture end-user applications — the exact opposite of American policymakers’ goal. It’s a classic result: restricting access to technology forces engineers to be more creative in building their products, exactly the intensified creativity that typically leads to the next great startup or scientific breakthrough.
If America were serious about slowing the growth of China’s still-nascent semiconductor market, it really should have taken a page from the Chinese industrial-policy handbook and simply dumped chips on the market, just as China has done for years in industries from solar-panel manufacturing to electronics. Cheaper chips, faster chips, chips so competitive that no domestic manufacturer — even under Beijing’s direction — could have effectively competed. Instead, we are attempting to decouple from the second-largest chip market in the world, turning a competitive field where America is the clear leader into a bountiful green field of opportunity for domestic national champions to usurp market share and profits.
There were, of course, other goals beyond economic growth for restricting China’s access to chips. America is deeply concerned about the country’s integration of AI into its military, and it wants to slow the evolution of its autonomous weaponry and intelligence gathering. Export controls do that, but they are likely to come at an exorbitant long-term cost: the loss of leadership in the most important technological development so far this decade. It’s not a trade-off I would have built trade policy on.
The life and death of air conditioning
Across six years of working at TechCrunch, no article triggered an avalanche of readership or inbox vitriol quite like “Air conditioning is one of the greatest inventions of the 20th Century. It’s also killing the 21st.” It was an interview with Eric Dean Wilson, the author of After Cooling, about the complex feedback loops between global climate disruption and the increasing need for air conditioning to sustain life on Earth. The article was read by millions and millions of people, and hundreds of people wrote in with hot air about the importance of their cold air.
Demand for air conditioners is surging in markets where both incomes and temperatures are rising, populous places like India, China, Indonesia and the Philippines. By one estimate, the world will add 1 billion ACs before the end of the decade. That’s good for measures of public health and economic productivity; it’s unquestionably bad for the climate, and a global agreement to phase out the most harmful coolants could keep the appliances out of reach of many of the people who need them most.
This is a classic feedback loop, where the increasing temperatures of the planet, particularly in South Asia, lead to increased demand for climate resilience tools like air conditioning and climate-adapted housing, leading to further climate change ad infinitum.
Josh Wolfe gave a talk at Stanford this week as part of the school’s long-running Entrepreneurial Thought Leaders series, talking all things Lux, defense tech and scientific innovation.
Lux Recommends
As Henry Kissinger turns 100, Grace Isford recommends “Henry Kissinger explains how to avoid world war three.” “In his view, the fate of humanity depends on whether America and China can get along. He believes the rapid progress of AI, in particular, leaves them only five-to-ten years to find a way.”
Our scientist-in-residence Sam Arbesman recommends Blindsight by Peter Watts, a first contact, hard science fiction novel that made quite a splash when it was published back in 2006.
Also recommended: a deeply reported investigation into Mohammed bin Rashid Al Maktoum, and just how far he has been willing to go to keep his daughter tranquilized and imprisoned. “When the yacht was located, off the Goa coast, Sheikh Mohammed spoke with the Indian Prime Minister, Narendra Modi, and agreed to extradite a Dubai-based arms dealer in exchange for his daughter’s capture. The Indian government deployed boats, helicopters, and a team of armed commandos to storm Nostromo and carry Latifa away.”
Sam recommends Ada Palmer’s article for Microsoft’s AI Anthology, “We are an information revolution species.” “If we pour a precious new elixir into a leaky cup and it leaks, we need to fix the cup, not fear the elixir.”
I love complex international security stories, and few areas are as complex or wild as the international trade in exotic animals. Tad Friend, who generally covers Silicon Valley for The New Yorker, has a great story about an NGO focused on infiltrating and exposing the networks that allow the trade to continue in “Earth League International Hunts the Hunters.” “At times, rhino horn has been worth more than gold—so South African rhinos are often killed with Czech-made rifles sold by Portuguese arms dealers to poachers from Mozambique, who send the horns by courier to Qatar or Vietnam, or have them bundled with elephant ivory in Maputo or Mombasa or Lagos or Luanda and delivered to China via Malaysia or Hong Kong.”