Riskgaming

America’s 248th

Photo by rozbyshaka via iStockPhoto / Getty Images

Happy U.S. Independence Day!

No column this week due to the holidays — we’ll be back next week.

Three Podcast Episodes for the Long Weekend

Design by Chris Gates.

Given all the road trips this week, we figured a few extra Riskgaming podcast episodes would help bring the family together around AI’s impending takeover of the planet, the complex international relations of subsea cables, and the future of biological science. Something for everyone!

First up, I chatted with Eric Newcomer of Newcomer and Reed Albergotti of Semafor on the recent dust-up surrounding Perplexity, an AI company that got into trouble in recent weeks for swiping original journalism and then summarizing those articles for free on the web. The three of us talk about the coming AI wave, how it will influence the quality and business of news media, and why journalists need to embrace the future rather than protect the past.

🔊 Listen to “Is AI killing journalism? Pitchforks, Perplexity and reporters yelling ‘Boo!’”

Second, I chatted with Eurasia Group’s geotechnology analyst Scott Bade on a whole slew of Riskgaming topics. They include the recent riots in New Caledonia, France’s Pacific territory, that led Emmanuel Macron’s government to shut down TikTok there; the increasing contest over prized subsea cable lines across the Pacific Ocean; the rise of data centers as an economic development strategy in the Middle East and elsewhere; and finally, AI and whether it is leaving behind the Global South. Scott will be in NYC in two weeks to host a Eurasia Group x Salesforce conference on Global AI Leadership.

🔊 Listen to “‘The commons are under attack’ from TikTok and subsea cables to data centers and elections”

Design by Chris Gates.

Third and finally, our scientist-in-residence Sam Arbesman is back with another episode of his miniseries The Orthogonal Bet, this time with longtime Nature editor Philip Ball. Ball not only works on the frontiers of biology, but is also a passionate science communicator, recently publishing the well-reviewed book How Life Works: A User’s Guide to the New Biology. Sam and Philip talk about the emerging understanding of the machinery of life, and how complexity science has ushered in new ways of seeing the most fundamental chemical processes by which all organisms survive and thrive.

🔊 Listen to “The Orthogonal Bet: Unveiling the Complexity of Life: A Conversation with Philip Ball on ‘How Life Works’”

Lux Recommends

  • For a holiday weekend flick, Deena Shakir (and yours truly) loved Inside Out 2, the follow-up movie to Pixar’s 2015 hit. It’s a brilliant, witty, spastic and fun film about growing up and the emotions that come with maturation. For Pixar, the film has now grossed nearly $1.1 billion globally at the box office, turning around a narrative that the animation studio had lost its mojo during and after the pandemic.
  • Sam greatly enjoyed Siobhan Roberts’s look at the 50th anniversary of the Rubik’s Cube, which has offered a tantalizing challenge for math theorists and competitive solvers alike. “There are many paths to solving the Cube. During his lecture, Dr. [Tomas Rokicki] zeroed in on a specific number: What is the minimum number of moves necessary to solve even the most scrambled positions? Dr. Rokicki set out to calculate this quantity, known as God’s number, in 1999. In 2010 he found the answer: 20. He had the help of many talented people, particularly Herbert Kociemba, a German hobbyist cuber and programmer known for his namesake algorithm. The feat also benefited from a lot of computer time donated by Google, and another algorithm that took advantage of the Cube’s symmetries, reducing the number of necessary calculations by a factor of 48, and in turn reducing the necessary computing power.”
  • Our associate Dev Gupta recommends a new working paper on arXiv, “Pandora’s White-Box: Precise Training Data Detection and Extraction in Large Language Models.” “In this paper we develop state-of-the-art privacy attacks against Large Language Models (LLMs), where an adversary with some access to the model tries to learn something about the underlying training data. Our headline results are new membership inference attacks (MIAs) against pretrained LLMs that perform hundreds of times better than baseline attacks, and a pipeline showing that over 50% (!) of the fine-tuning dataset can be extracted from a fine-tuned LLM in natural settings.”
  • Sam recommends a new essay by Pradyumna Prasad and my friend Jordan Schneider (author of the great ChinaTalk newsletter and podcast) on “When RAND Made Magic in Santa Monica.” “As the Cold War intensified, the mission became the sell. The aim of RAND, as the historian David Hounshell has it, ‘was nothing short of the salvation of the human race.’ The researchers attracted to that project believed that the only environment in which that aim could be realized was independent of the Air Force, its conventional wisdom, and — in particular — its conventional disciplinary boundaries.”
  • Shaq Vayda enjoyed a reflection by Bruce Booth of Atlas Venture on “A Molecular Biologist’s Advice For Life.” “In the human body, once you’re a dendritic cell, it’s generally not possible to de-differentiate back to a pluripotent HSC. Life is the same way: with time, life terminally differentiates you. It’s often easy or encouraged to specialize quickly in one’s career. But I’d advocate against this: try to resist locking into your ‘fate’ by staying broad and remaining intellectually curious.”
  • I recommend Neil J. Young’s Coming Out Republican: A History of the Gay Right as a fascinating reminder that the polarized politics in the United States are almost always more complex and profound than is visible on the surface.
  • Finally, Sam recommends Paul Kedrosky’s fascinating look at the history and usage of the word ‘delve.’ “Its current overuse in LLM-generated text reveals a related phenomenon. The term's multi-layered meanings, coupled with its slightly elevated rhetorical tone, make it an attractive choice for AI systems trying to sound authoritative and academic. In this way the frequent appearance of 'delve' in AI writing serves as a linguistic marker, a reminder of LLMs' tendency to default to words that imply depth and thoroughness, often at the expense of more varied and precise language choices.”

That’s it, folks. Have questions, comments, or ideas? This newsletter is sent from my email, so you can just click reply.
