Riskgaming

The Future of AI

A Lux-saber battle with our own Grace Isford, Lan Jiang and David Yang. Photo by Ben Hider.

At the 2024 Lux AI Summit in NYC, ambitious visions for the future tempered by growing fears

We hosted the second-annual Lux AI Summit this week in New York City, bringing together several hundred researchers, engineers and builders to exchange knowledge at the frontiers of AI, from product development and protein folding to server infrastructure and modern war. We posted a recap yesterday, but I wanted to highlight some of my personal favorite moments.

Sen. Mark Warner with Lux’s Josh Wolfe. Photo by Ben Hider.

We started off with Senate Intelligence Committee chairman Mark Warner, who connected his background in tech and venture capital (in a previous life, he was the co-founder of the company that became Nextel) to his concerns today at the center of the intelligence world. On AI and electoral interference:

To a degree, at least in terms of, say, deepfakes, AI has been the dog that didn't bark. We didn't see it.

When might we see it?

I believe with almost certainty, particularly [from] Russia, the use of AI disinformation [and] misinformation will ramp up as we get closer to the election.

But it’s actually what happens after November 5th that concerns him most.

My particular fear is in the 72 hours, 96 hours after the election, if you suddenly see a figure that may not be a presidential candidate, but somebody [who] is represented as an election official, appearing to tear up ballots or stuff ballots, you could see violence in the streets.
Clem Delangue of Hugging Face and Vipul Ved Prakash of Together. Photo by Ben Hider.

The future of AI and policy was on the minds of many speakers, including Vipul Ved Prakash of AI production cloud Together and Clem Delangue of machine learning community platform Hugging Face. Vipul highlighted America’s massive energy infrastructure problem:

I think we need more power and more data centers. That's pretty clear. Like right now, it's way [too] difficult to find anything above 15 megawatts in North America. All these data centers have already been reserved and will not provide enough capacity for building and serving models. So I think we need more power.

I also think the GPU power envelope is quite off. […] These systems are consuming 10 times more power [and] require 10 times more cooling. It's pretty hard to scale, just the physics of it; it becomes hard to remove the heat. So I think we have to have more efficient silicon that does this.

Clem also focused on the need for more efficiency in AI itself in order to scale:

I also think that we can do some things to make AI more energy efficient today. I think this movement of only using and focusing a lot of our efforts on large generalist models is a mistake in many aspects.

You don't need to take a private jet to go to work. In a similar way when you're doing like a specialized, customized use case, as I mentioned, you don't need a model that is going to tell you about the meaning of life. You can actually use a smaller model that is going to take less energy to train, take less energy to run.

The world is a bit biased right now, and a lot of the investment goes towards large, very energy-intensive models and directions. I think, as a field, we can take a different direction and focus on specialized, customized, smaller models that give us a more credible path to continuing to build AI capabilities without ruining the planet.
Cristóbal Valenzuela of Runway with Lux’s Grace Isford. Photo by Ben Hider.

Talking about Hollywood and the movie industry, Cristóbal Valenzuela, CEO and founder of generative-video platform Runway, emphasized that user education is key for augmenting existing industries with AI:

Many people in Hollywood think that's the case: you go into Runway, you type ‘movie,’ hit enter, and you get a movie, and that's it, and the movie is incredibly well-done and well-produced. And it's like, we're out of a job, you know?

And so a lot of the literacy and education we're doing with studios and production teams and creatives is showing them that this is not a binary tool that creates copyrighted content with no control. You actually have a lot of control, and if you don't, then you're actually probably not going to make anything good.
NYU’s He He (Left) on frontier research in AI. Photo by Ben Hider.

One of the most important aspects of building generative models is getting human feedback, which when coupled with algorithmic reinforcement learning can optimize a model for best results. But He He, an assistant professor at New York University and an affiliate of the CILVR Lab, the Machine Learning for Language Group, and the Alignment Research Group, noted that who is doing the human feedback matters a great deal — and this is overlooked by too many companies:

In our work, we find two problems with the proxy human feedback. So first, they're very noisy, either because of general human cognitive bias or because they just lack the context and incentive of real users that aim to complete certain tasks.

Another problem we saw is that these proxy human annotators are not necessarily representative of your user population. So as a result, the preference or supervision signal you get from these humans would be biased, and from a research perspective, I feel there should be more work trying to extract some implicit supervision signal from user feedback, because it's really expensive to ask users to provide annotations. Their interest is really in completing the task.
NYU’s Rahul Satija alongside Alex Rives of Evolutionary Scale. Photo by Ben Hider.

While text, audio and video take the lion’s share of media attention around generative AI, it’s other areas like bio that are proving to be the most electrifying right now. Rahul Satija, professor of biology at NYU within the New York Genome Center, noted the massive potential to unlock new discoveries in bio in the years ahead:

I like that the theme of this, which is on the back wall, is ‘Impossible to Inevitable,’ which I think really highlights the idea of this virtual AI cell.

What an incredible opportunity that would be: half of my lab at NYU in the New York Genome Center is focused on doing experiments. We take cells, we perturb them in very specific ways, and then we run these very intricate measurement technologies to figure out what's happening. This costs hundreds of thousands to millions of dollars. We have PhD students, postdocs feeding the cells, making these measurements, and we're doing individual experiments at a time.

The idea that we could do this at scale on a computer is absolutely transformative and incredibly exciting at the same time.
West Point dean and brigadier general Shane Reeves in conversation with retired general Tony “T2” Thomas, former commander of SOCOM. Photo by Ben Hider.

Finally, we wrapped up the summit with a conversation on AI and modern war with dean of West Point and brigadier general Shane Reeves:

Pick your area. If you want to talk about trying to solve the contested logistics problem in the Pacific; if you want to talk about targeting in a really complicated urban environment in Gaza or southern Lebanon; if you want to talk about collecting all the ubiquitous data that's all over the battlefield in Ukraine and trying to process it into the drone revolution that's taking place; or you talk about what Israel is probably trying to do right now with the Iron Dome, with missiles coming into Tel Aviv.

At this moment, it's all going to be reliant on artificial intelligence … and what we have known — and we know — is that technology always wins. Technology always finds its way into the battle space — that's never not happened.

There's a long history of this. The one I always go to is famously Pope Urban II, who at the end of the 11th century realized crossbows were very effective, and they were good at piercing armor, and it was upsetting the societal norms of the time. So what did he do? He's like, ‘We're going to ban crossbows,’ right? And anyone who used crossbows would be excommunicated.

So what does everybody do? Well, let's get a bunch of crossbows, right? And this has happened repeatedly, aircraft, submarines, balloons — you pick it — the technology finds its way into the battle space.

From the floating molecules of the cell to, well, Pope Urban II — we covered it all. We doubled the Summit in size in one year — and we still couldn’t fit everyone inside. We’ll try to double it again – come join the whole Lux team next year!

An almost complete assemblage of the Lux team. Photo by Ben Hider.

Podcast: From Satellites to Submarines: The Power of Open Source Intelligence in Global Conflict

Design by Chris Gates.

We did a quick-hit news episode with our occasional Riskgaming columnist Michael Magnani, in which we dissect the explosive rise of legalized sports betting in America and its far-reaching consequences. We then pivot to broader geopolitical topics, including the role of open-source intelligence in modern warfare and how technology is changing the defense landscape. Then we wrap the episode up with a look at Japan’s election results and the shifting political dynamics that could alter the balance of power in the Indo-Pacific.

🔊 Listen to “From Satellites to Submarines: The Power of Open Source Intelligence in Global Conflict”

The Orthogonal Bet: Artificial Life and Robotic Evolution

Design by Chris Gates.

In this episode, Lux’s scientist-in-residence Sam Arbesman speaks with ⁠Tarin Ziyaee⁠, a technologist and founder, about the world of artificial life. The field of artificial life explores ways to describe and encapsulate aspects of life within software and computer code. Tarin has extensive experience in machine learning and AI, having worked at Meta and Apple, and is currently building a company in the field of Artificial Life. This new company — which, full disclosure, Sam is also advising — aims to embody aspects of life within software to accelerate evolution and develop robust methods for controlling robotic behavior in the real world.

Sam wanted to speak with Tarin to discuss the nature of artificial life, its similarities and differences to more traditional artificial intelligence approaches, the idea of open-endedness, and more. They also had a chance to chat about tool usage and intelligence, large-language models versus large-action models, and even robots.

🔊 Listen to “Artificial Life and Robotic Evolution”

Lux Recommends

  • I really enjoyed Claire L. Evans’s short essay on “What's a Brain?” “Cognition isn’t reserved only to vertebrates with language, reason, or self-awareness. There are more primitive cognitive subsystems within us, around us, and all along the ladder of evolutionary time. Studying them is the purview of an emerging interdisciplinary field in biology: ‘minimal cognition’ or ‘basal cognition.’”
  • One popular book that many of us at Lux have been reading (including Brandon Reeves and Tess Van Stekelenburg) is Annie Jacobsen’s Nuclear War: A Scenario. Sam recommended this one back in April, and it’s still making shockwaves with us (and also represents our first repeat Lux Recommends in more than five years).
  • I have recommended articles in the past on the crisis of sand — we’re running out of it, or at least, the kinds of sand that we need for cement and other key infrastructure. But Wesley Crump at Practical Engineering has a counter take and tells the critics to, well, pound sand. “I tried to track down the original source of this idea that we can’t use rounded grains in concrete, but got nowhere. Beiser cites an article from the UN, which itself cites a 2006 paper about using two types of desert sand from China in concrete. But that paper doesn’t mention the roundness of the particles at all. They didn’t include any measure of the shape of the grains in their study, and they didn’t make any suggestions about how that particular property of the desert sand may have affected the results of their tests.”
  • Sam enjoyed this little romp in fake zoo sightings in “Is That a Panda? Or a Dog in Disguise?” “You’re running a zoo. But visitors are clamoring for something different, more exotic. You aren’t lucky enough to possess Moo Deng, the popular pygmy hippo from Thailand. In fact, you haven’t been able to get a hold of anything impressive or cute or charming enough to impress the masses on TikTok. What to do? Have you considered simply disguising a prosaic animal as something more exciting?”
  • Finally, I’m always a fan of a good Lunch with the FT, and last week’s with Signal’s Meredith Whittaker was great. “More than a decade later, as president of the Signal Foundation, she remains a privacy absolutist — committed to the idea of end-to-end encryption and the need for pockets of digital anonymity in an industry fuelled by the monetisation of data — despite political pushback from governments all over the world.”

That’s it, folks. Have questions, comments, or ideas? This newsletter is sent from my email, so you can just click reply.
