Securities

The AI Arms Race That Wasn’t

Photo by Austris Augusts on Unsplash

Arms races are endemic to much of modern software. AI applications are (mostly) not.

If competition is the great accelerant of technical innovation, then arms races are the cauldrons of infinite advancement. Arms races force continuous evolutionary progress, driving all participants to seize the disruptive initiative with acute alacrity. While arms races can be evenly matched for a time, one side is often dominant and must continually defend its territory and leadership against a constant fusillade of asymmetric attacks by others.

Development of artificial intelligence appears to be an obvious arms race. Hackers use AI to crack computer systems, and so we need better AI cybersecurity to fight back. Computational propagandists will generate endless synthetic deepfakes, forcing us to build resilient AI systems to identify this junk and flush it out of our social systems.

I argue, though, that AI applications by and large aren’t arms races, a distinction with some extremely important implications for the development of the field in the years ahead.

Broadly, most software is used by “friendly” users, in the sense that the user is ultimately on the same side as the software. Word processors, photo editors, integrated development environments, and even games: all of our apps are designed to help us, and ultimately, we want to work with them to finish our tasks or just play.

Almost all software developed prior to the internet was in this vein, but as software transitioned from single-player experiences installed on our own desktops to the wider web, it increasingly encountered the arms race dynamic. For instance, all networked software must now account for cybersecurity, constantly mutating and improving as hackers and state actors find new holes and flaws in code.

Arms races don’t just show up in cybersecurity, though; they also appear in core functionality. Social networks like Facebook, Twitter, and YouTube must continuously adapt their algorithms to fight new forms of spam, traffic gaming, and illegal content distribution. Wikipedia needs editorial and moderation systems that fight off the sulfurous vultures at PR agencies. It’s even harder at Google, where there are massive incentives to rank higher in search results through search-engine optimization (SEO). One estimate places the annual SEO market at about $80 billion, all devoted to gaming rankings and improving a site’s performance on Google.

This was a jarring lesson for me. Back at Stanford, I took the graduate course on information retrieval, and it’s incredible how quickly you can build a modern search engine using off-the-shelf, open-source tools. Using Apache Lucene on my personal computer, I designed a search engine in a single academic quarter that could peer into a corpus of millions of documents in multiple languages with pretty good results.
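To give a sense of just how off-the-shelf that experience is, here’s a minimal sketch of the indexing-and-search loop in Lucene (this assumes Lucene 8/9-era Java APIs; the field name, sample documents, and query are illustrative stand-ins, not my actual course project):

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.queryparser.classic.QueryParser;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.store.ByteBuffersDirectory;

public class MiniSearch {
    public static void main(String[] args) throws Exception {
        var analyzer = new StandardAnalyzer();
        var dir = new ByteBuffersDirectory(); // in-memory index; a real corpus would use an FSDirectory on disk

        // Index a handful of documents (a real corpus would stream in millions).
        try (var writer = new IndexWriter(dir, new IndexWriterConfig(analyzer))) {
            for (String text : new String[] {
                    "Apache Lucene is an open-source search library",
                    "Arms races force continuous evolutionary progress" }) {
                var doc = new Document();
                doc.add(new TextField("body", text, Field.Store.YES));
                writer.addDocument(doc);
            }
        }

        // Query the index and print the score and stored text of each hit.
        try (var reader = DirectoryReader.open(dir)) {
            var searcher = new IndexSearcher(reader);
            var query = new QueryParser("body", analyzer).parse("search library");
            for (ScoreDoc hit : searcher.search(query, 10).scoreDocs) {
                System.out.println(hit.score + "  " + searcher.doc(hit.doc).get("body"));
            }
        }
    }
}
```

That’s essentially the whole single-player loop: analyzer, index, query parser, ranked results. None of it fights back.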

A few weeks later, I joined the Google+ search team, working with Frances Haugen (who, coincidentally, would become the ‘Facebook Whistleblower’ a decade later, and whose new book is arriving soon). Suddenly, I went from ranking millions of documents on my personal computer to working with a massive search team trying to highlight social content from hundreds of thousands of users, many of whom were actively trying to sabotage the rankings and boost their own profiles.

I had, inadvertently, entered into a software arms race. While indexing and searching my own documents, I had no reason to fight my own software — it was a single-player experience, and I wanted my search engine to succeed because it was fundamentally serving me. There was also a point where my custom Lucene instance was “good enough” – it found what I wanted and it didn’t really need any further improvements. But online, Google+ search needed to outwit the smartest optimizers on the planet, and had to do that meticulously in real time.

Unsurprisingly, companies fighting an arms race have to invest aggressively in maintenance and innovation to stay competitive. Google’s search quality goes up and down over time as the company gets ahead of and falls behind the actors trying to thwart it. That focus keeps the company’s systems and management limber — get lazy, and a few weeks later the search results will quite literally be junk. But it also means that Google search takes on an outsized level of executive attention, and not just because it’s the major profit center of the company. It’s always possible to slip up and lose whatever lead you might have had against spammers.

Fast forward to the current battle of AI chatbots like ChatGPT from OpenAI (and, by extension, Sydney from Microsoft) and Bard from Google. Are these AI chatbots and other AI applications in an arms race? And if so, where and with whom?

Given that chatbots are almost exclusively single-player interfaces right now, the experience harkens back to the software of yore. Users ask questions, and they get answers — no one else is involved. While many tech writers have pushed chatbots to do illegal activities or offer up confidential information, that doesn’t suddenly foist an arms race dynamic on AI. You can just as easily use Microsoft Word and Adobe Photoshop to commit illegal acts, given their capabilities. Avoiding negative publicity is not the same as an arms race.

There is a potential arms race among these applications in trying to sway the corpus for the models that underpin these bots, in much the way an SEO firm might try to spin a website for Google’s search engine. But with OpenAI and others sucking up the entire written output of humanity in their quest for complete dominance, it seems hard to imagine that some edits on a site (or even extensive edits across thousands of sites) would radically change a bot’s output. This is even more true as the black box of these chatbots has become ever closer in shade to Vantablack. Without knowledge of how these models are constructed, it’s hard to glean precisely what it would take to influence them.

Here’s where it gets analytically challenging: there’s certainly competition between chatbots and other AI applications. OpenAI, Google and others want their chatbots to be better and more useful to users than their competitors’ products. It’s easy to look at this market structure and argue that AI right now is indeed a cauldron of infinite advancement.

Instead, we should look at the marginal utility for users of those infinite advancements. Will every additional improvement to an AI application ultimately lead to better user outcomes? No, since there’s often a plateau of capability: once an AI app surpasses a threshold of competence, it’s “good enough” for users, and any further gains will be mostly superfluous. This isn’t about reaching that pinnacle of intelligence, AGI, but rather the reality that AI apps at some point soon will just do what we expect them to do.
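To put a toy model on that intuition (my own illustrative framing, not anything drawn from these products themselves): suppose user utility u(q) rises with an app’s quality q but saturates.

```latex
% Illustrative assumptions: utility u(q) increases in quality q,
% with diminishing returns, saturating at some ceiling \bar{u}.
\[
  u'(q) > 0, \qquad u''(q) < 0, \qquad \lim_{q \to \infty} u(q) = \bar{u} < \infty
\]
% Then for any tolerance \varepsilon there is a ``good enough'' quality q^{*}
% past which every further gain is worth less than \varepsilon to the user:
\[
  \forall \varepsilon > 0 \;\; \exists\, q^{*} : \quad
  u(q) - u(q^{*}) < \varepsilon \quad \text{for all } q > q^{*}
\]
```

The arms race case is precisely the one where the saturation assumption fails: an adversary keeps moving the threshold, so utility never plateaus.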

That has huge implications for the long-term value that can be created by companies around AI, and for whether models will tend toward open or closed approaches. It’s very hard for open-source software to compete in an arms race, since the constant workload and investment required to sustain a competitive edge isn’t conducive to the decentralized development model of open software. But if the goal is merely to reach a threshold of competence, then open-source AI models absolutely have a chance to dominate the market in the years ahead.

The most obvious analogy is to Wikipedia, an openly edited encyclopedia that also runs on open-source software. It’s eminently possible (and I dare say likely) that model building and tuning will happen in a fully open and democratized way like Wikipedia in the years ahead. This becomes even more plausible in a multi-model world with domain specificity, where decentralized networks of experts could optimize the models for their own fields.

There will still be categories where AI exhibits an arms race. Deepfake detection and other authenticity verification tools will be battling the purveyors of these media. Big companies could theoretically be built in such sectors, since their agility and steady investment will offer them a moat that allows them to capture value.

The bulk of AI use cases, though, don’t have that arms race dynamic; “good enough” technology will be all that most users need. In these markets, it’s going to be much harder for a for-profit company to build a monopoly, since open-source solutions will likely be cheaper, more flexible to integrate via APIs, and more extensible than proprietary options. Companies may be the first to cut through the thicket of challenges to deliver apps to the marketplace, but as learnings seep out, it becomes easier for those behind them — including open applications — to catch up.

Investing in AI requires very precise attention to the competition dynamics between users and products, as well as between products themselves. Where marginal outperformance leads to cumulative market advantages, there are outsized profits to be seized. But where marginal utility diminishes, expect a plateau of capabilities and more openness in the technology. The cauldron of infinite advancement requires a very specific alchemy, and it’s not one definitionally shared by all software. In fact, much like the alchemy of the medieval era, it’s relatively elusive and rare.

“Securities” Podcast: “It subverts the structure even of other stories that are told about creation”

Danny Crichton, Eliot Peper and Sam Arbesman on the "Securities" podcast

We’ve featured it multiple times in the Lux Recommends section, but Tomorrow, and Tomorrow, and Tomorrow by Gabrielle Zevin remains a popular novel around the Lux offices (I glimpsed Josh Wolfe reading a copy recently). It’s also been a smash hit, becoming Amazon’s book of the year for 2022 while also securing a major film adaptation. At its core, the novel is a bildungsroman of two video game creators who tussle, make up, and learn from each other over two decades of designing and playing their worlds.

Zevin’s description of the creative process inspired us to do a podcast on some of the themes of the novel and how they relate to our own creative lives. Joining me in this week’s episode of the “Securities” podcast was novelist Eliot Peper (last seen on the podcast in Speculative fiction is a prism to understand people) and our own scientist-in-residence and multi-time book author Sam Arbesman.

We talk about the building of virtual worlds, the hero’s journey of creation, the uniqueness versus repetitiveness of producing art, whether video games are entering the literary zeitgeist, why the book garnered such popular success, and finally, narratives of individuals versus groups.

🔊 Take a listen here

Lux Recommends

  • Ina Deljkic is excited about the prospects of extraterrestrial intelligence from the news last week that a basic building block of life was discovered in outer space. “Scientists have discovered the chemical compound uracil, one of the building blocks of RNA, in just 10 milligrams of material from the asteroid Ryugu… The finding lends weight to a longstanding theory that life on Earth may have been seeded from outer space when asteroids crashed into our planet carrying fundamental elements.”
  • Bilal Zuberi is interested (as I was in today's column) in the future evolution of AI apps, pointing to a post by Paul Kedrosky and Eric Norlin of SK Ventures talking about “AI, Mass Evolution, and Weickian Loops.” “For example, and we can push this analogy too far, we know that in biology over-fast evolution leads to instabilities; we know that slow-evolving species tend to do better than fast-evolving ones, in part because the latter respond too readily to transient stimulus, rather than exploiting their ecological niche.”
  • I recommend an interview with … myself. After years of plugging Five Books interviews, I finally got to sit in the hot chair on a topic I know and love: industrial policy. Here are the 5 best books on industrial policy in a lengthy overview of the field.
  • Sam Arbesman recommends Peng Shepherd’s novel, The Cartographers, which has been wildly successful and is described as “an ode to art and science, history and magic.”
  • Finally, I particularly enjoyed Marty Baron’s landscape of the future of objectivity and the press. It’s a tough subject, but he crafts a very nuanced view on how objectivity and reporting can fit together.

That’s it, folks. Have questions, comments, or ideas? This newsletter is sent from my email, so you can just click reply.
