This newsletter now sponsored by bubble tea
I’m on vacation the next week and a half in the Republic of China — catching a glimpse of the inauguration of Taiwan’s next president William Lai, but mostly just eating and drinking my way from night market to night market. Normal newsletter columns will return in a few weeks, and we have the Riskgaming podcast all set and locked in the meantime (if you aren’t subscribed, what are you waiting for?).
Thanks as ever for reading.
Podcast: The soon-to-be-solved protein problem that will accelerate drug discovery
We’ve known for decades that one of the key mechanisms of biology — and of life itself — is the binding of molecules to proteins. Once bound, proteins change shape and thus their function, allowing our bodies to adapt and change their molecular machinery as needed for survival. The challenge that remains unsolved is to predict — across billions of potential proteins and a similar number of molecules — how those proteins change and how they might interact with each other.
The fervent hope of many scientists and entrepreneurs is that artificial intelligence, coupled with experimental and synthetic datasets, may finally unlock this critical piece of the biological puzzle, ushering in a new wave of therapeutics.
My guest today is one of those science entrepreneurs, Laksh Aithani, the co-founder and CEO of Charm Therapeutics. He’s made cancer the focus of his work and, through Charm and his team, is building expansive datasets to develop AI models that can predict the 3D shape of proteins.
Alongside my Lux colleague Tess van Stekelenburg, we explore protein folding’s past, present and future, the utility and risks of synthetic data in biological research, how much money and time future drug discovery might require, what individualized medicine might look like decades from now, and how new grads can get into the field as the century of biology kicks off.
🔊 Listen to “The soon-to-be-solved protein problem that will accelerate drug discovery”
Lux Recommends
- One blockbuster piece this past week comes courtesy of Australia, where a Chinese spy has defected and spilled the tea on the country’s overseas intelligence and disruption operations. “The unit is called the Political Security Protection Bureau, or the 1st Bureau. It is one of the Chinese Communist Party's (CCP) key tools of repression, operating across the globe to surveil, kidnap and silence critics of the party, particularly President Xi Jinping. ‘It is the darkest department of the Chinese government,’ Eric said. ‘When dealing with people who oppose the CCP, they can behave as if these people are not protected by the law. They can do whatever they want to them.’”
- Our former Lux editorial associate Ken Bui recommends Francis J. Gavin’s lead essay for the Texas National Security Review, “Cracks in the Ivory Tower?” “Erasmus loathed conflict, loved peace, and preached tolerance. He was, however, a moderate in an extreme age, disliked and mocked both by the reactionaries within the Church and the reformers from without. It goes without saying that our current world could use more figures in Erasmus’ mold.”
- Following up on our podcast episode from two weeks ago (“Margaret Mead and the psychedelic community that theorized AI”), our scientist-in-residence Sam Arbesman recommends Benjamin Breen’s new essay, “LLM-based educational games will be a big deal.” “Discussions of LLMs in education so far have tended to fixate on students using ChatGPT to cheat on assignments. Yet in my own experience, I’ve found them to be amazingly fertile tools for self-directed learning. GPT-4 has helped me learn colloquial expressions in Farsi, write basic Python scripts, game out possible questions for oral history interviews, translate and summarize historical sources, and develop dozens of different personae for my world history students to use for an in-class activity where they role-played as ancient Roman townsfolk.”
- Nami Matsuura has a nice piece in Nikkei Asia on how “Robot sommeliers and baristas go to work in labor-starved South Korea.” “Last month in Seoul, [Doosan Robotics] began testing a barista robot that can pour 80 cups of coffee an hour and make latte art in partnership with a local cafe chain. It also tested a cooking robot that can run six deep-frying baskets at the same time at a high school, serving 500 people in two hours. … South Korea leads the world in robot density, with 1,012 robots for every 10,000 workers as of 2022, the International Federation of Robotics reports. The figure is well above second-ranked Singapore's 730, and double or triple the numbers in Germany, Japan, China and the U.S.”
- Finally, Sam recommends the always cerebral Paul Ford’s sardonic new essay in Wired, “Generative AI Is Totally Shameless. I Want to Be It.” “Hilariously, the makers of ChatGPT—AI people in general—keep trying to teach these systems shame, in the form of special preambles, rules, guidance (don’t draw everyone as a white person, avoid racist language), which of course leads to armies of dorks trying to make the bot say racist things and screenshotting the results. But the current crop of AI leadership is absolutely unsuited to this work. They are themselves shameless, grasping at venture capital and talking about how their products will run the world, asking for billions or even trillions in investment. They insist we remake civilization around them and promise it will work out. But how are they going to teach a computer to behave if they can’t?”
That’s it, folks. Have questions, comments, or ideas? This newsletter is sent from my email, so you can just click reply.