Riskgaming

The Unreasonable Middle

Photo by atosan via iStockPhoto / Getty Images

Our reasonable desire to compromise is a terrible first instinct

One of the great ironies of modern life is that compromise is heralded as a key virtue in business and politics, yet the very same people who praise it would never select the middle seat on a plane. In fact, the only options in my flight app for default seat preference are window or aisle — middle isn’t even an option (now there’s a product manager who knows their customer).

We intuitively understand that not all compromises are ideal, but repeatedly across Riskgaming runthroughs, I’ve seen players on the cusp of making an ambitious and uncompromising decision only to be negotiated off the ledge by other players.

Take our recent experiences hosting Experimental Automata in New York, Los Angeles and Washington, DC. About 150 people played the game, ranging from senior biotech executives and diplomats to startup founders and venture capitalists. The game is simple: groups of players debate a series of policy questions and then vote for one of three possible options. Each player has two metrics they are trying to increase — think U.S. fiscal strength or European research competitiveness — and every policy option affects these indicators positively or negatively.

I designed each debate to offer two reasonable extremes and then one compromise option. For instance, one debate centers on what to do with patient healthcare data in the age of AI. The options are to create an anonymized and universal global databank of all medical records to maximize AI growth; to keep the status quo of national record systems; and finally, to use privacy-preserving technologies (“blockchain”) so that there are no centralized repositories.
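
For the mechanically minded, here’s a minimal sketch in Python of how a single debate might resolve. The option effects, indicator values and player objectives below are hypothetical placeholders of mine, not the game’s actual data; the sketch only illustrates the tally-the-votes-then-move-the-indicators loop described above.

```python
from collections import Counter

# Shared indicators; each player privately wants two of them to rise.
# (Starting values and labels are made up for illustration.)
indicators = {
    "US fiscal strength": 0,
    "European research competitiveness": 0,
    "AI growth": 0,
    "patient privacy": 0,
}

# Hypothetical effects of the three options in the healthcare-data debate.
OPTION_EFFECTS = {
    "global databank":         {"AI growth": +2, "patient privacy": -2},
    "status quo":              {},  # the compromise: nothing moves
    "privacy-preserving tech": {"AI growth": +1, "patient privacy": +1},
}

# Hypothetical player objectives: two indicators each.
PLAYER_GOALS = {
    "Player A": ("AI growth", "US fiscal strength"),
    "Player B": ("patient privacy", "European research competitiveness"),
    "Player C": ("AI growth", "European research competitiveness"),
}

def resolve_debate(votes):
    """Tally the private ballots and apply the winning option's effects."""
    winner, _ = Counter(votes.values()).most_common(1)[0]
    for name, delta in OPTION_EFFECTS[winner].items():
        indicators[name] += delta
    return winner

def score(player):
    """A player's score is the sum of the two indicators they care about."""
    return sum(indicators[name] for name in PLAYER_GOALS[player])

# Two of three players default to the middle, even though other options
# would have moved their own indicators.
winner = resolve_debate({
    "Player A": "global databank",
    "Player B": "status quo",
    "Player C": "status quo",
})
print(winner)                               # -> status quo
print({p: score(p) for p in PLAYER_GOALS})  # nobody's indicators moved
```

Run it and the compromise wins the tally while nobody’s indicators move, which echoes the dynamic described above.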

In all of our runthroughs, players chose number two: the status quo. Not only that, but in one runthrough, the vote was actually unanimous across a dozen players, all of whom had conflicting indicators and should have been advocating for other options.

Players noted that it was socially easiest to advocate for the middle option. Even though many of them realized that one of the other options was better for their own objectives, they felt they would never be able to persuade others to go along with a more extreme option. This was obvious in our data: across the 15 debates in our first scene of the game, 12 groups decided on the middle option. In addition, some players were worried about what might happen next in the game, and felt that choosing a compromise option would prepare them best for what was in store.

We noticed this pattern as hosts, and so starting in the second scene, we began encouraging everyone to consider the options beyond the compromise choice more deeply. Our push, and perhaps players’ growing comfort with the game, helped: across the later scenes, players averaged six middle-option decisions, cutting the rate in half. Nonetheless, the middle always remained the most popular choice.

While the gameplay for Experimental Automata heavily emphasizes discussion, every player votes privately on the policy options, allowing for defection from any group consensus that may have emerged. We rarely saw defection, though, even after reminding players that no one would know how they voted. That bonhomie is what led to unanimous decisions that were clearly the wrong outcome for many players. In other words, the middle seat was chosen over and over again.

The optimistic view is that this arc of cooperation, negotiation, debate and, ultimately, consensus is a positive outcome. At a time of great passions, it’s reassuring that people role-playing characters with opposing objectives can actually come together and reach a conclusion. Isn’t that the best example of the democratic spirit?

Yet, a darker view is that the drive toward blinkered consensus is an outright weakness, the cause of the malaise and stagnation we see in so many countries around the world (thinking of you, Germany). Rather than matching the tremendous challenges facing us today with equally ambitious strategies, we instead develop a set of milquetoast proposals that get smashed together into an incoherent compromise that solves none of our problems.

Occasionally, I saw more aggressive action. In Experimental Automata, there is a global climate change crisis triggered by the exponential growth of AI technologies and their immense energy demands. In two of our runthroughs, the winning vote was to aggressively geoengineer the Earth’s atmosphere — perhaps indicating just how optimistically enthusiastic our tech-centric audience is about the prospect of testing the frontiers of science.

In those cases, though, the vote in favor of geoengineering wasn’t so much the result of a sober debate about the benefits and dangers of radically altering the environment; it was simply the cool option to choose. “I just wanted to see what happened,” one player told me. That’s understandable — we’re playing a fictional scenario about the future. Experimental is in the game’s title, after all.

Yet, I keep coming back to the key lesson of venture capital, which is that an uncompromising vision of the future is the only path toward building an enduring company of excellence. Of course founders must be agile and flexible, adapting to the changing needs of the marketplace and the dynamics of strategic competition. Yet, without some inviolable kernel — an opinion about engineering, product, pricing, customers or something else — young and even mature startups become unmoored, fashionably pivoting around. Uncompromising doesn’t imply stagnation.

I have designed several games with the goal of pushing players toward holding the line on their character’s needs, but nearly everyone wants to compromise. Hampton at the Cross-Roads, one of my early designs, pits the character of Chief of Naval Operations Admiral Reid against all the other players, emulating the tension between federal power and the needs of state and local politicians and businesses.

Then I ran some demos of the game, and I found that Reid always surrendered in the first scene of the game. Frustrated in one instance, I sternly reminded the player that they were role-playing a four-star flag officer and were compromising the Navy’s readiness in order to appease the owner of a local bar called McKinley’s. I then watched agog as the player compromised anyway, and I ended up having to redesign the game to not require any of the players to have a backbone.

I’m still curious about how to offer an ordeal to players that would force them to hold out against the demands of their peers under incredible pressure. Maybe that doesn’t make for the type of congenial experience that is our general fare, but it would develop our players in a unique way. Until then, enjoy that middle seat.

“We have an addiction to prediction”

Design by Chris Gates.

This week, since Laurence Pevsner and I are traveling, we have our independent Riskgaming designer Ian Curtiss interviewing Graham Norris. Norris has a doctorate in organizational psychology and, for the past decade, has consulted with all kinds of businesses around the world on his theories of foresight psychology.

🔊 Listen to “‘We have an addiction to prediction’”

Below is a condensed and edited clip of the episode.

Ian Curtiss: With Riskgaming, a big part is getting people to experience a potential future where they have to make decisions and gain a deeper understanding of the trends and the trade-offs of a scenario.

Inevitably what happens when we build these games, though, is they become these complex systems themselves where all sorts of things can happen. You get different players, and they bring their own individual personalities to the game. And almost every single time there's something that happens in the game that's like, "Oh, wow, I didn't expect that. I've never seen that before." Something emerges. So even with this perspective of "I'm trying to figure out the future," even in a game scenario that I've written, I'm surprised. And this is a three-hour scenario of the future, let alone actual reality.

There are so many things that can happen. How do you go about getting people to work through that, just that complexity?

Graham Norris: I mean, we have what I describe as a psychological allergy to uncertainty, which repels us from thinking about the long-term future. There is research showing that we find uncertainty very, very stressful. There was a study conducted at University College London where they gave people electric shocks. Some people knew they were going to get an electric shock, some people knew they would not, and for others it was 50-50. Those who knew they weren't going to get a shock were pretty relaxed. The most stressed were those who didn't know whether they were going to get a shock or not.

The games that you make are a fantastic way to experience uncertainty in a safe way in a safe space and try to learn from it. The learnings are the fuel for your imagination in understanding the future, and that's how you base your decisions.

I mean, what's your takeaway on how people are psychologically grappling with the uncertainties and how they're dealing with it?

Ian Curtiss: It’s funny you ask. I'm in Tokyo right now and ran a game here in Japan, and it was the most unique game that I've seen yet. When I read out the players on the outcomes and how the game went, they said, "Oh yeah, this sounds like a very Japanese experience." It was an incredibly balanced, steady game. Very thoughtful, strategic, timely investments were made, versus other games I've seen where people come in with guns blazing, throwing out investments left and right in the first scene.

So people come into these environments of uncertainty heavily biased by their culture, the society that they grew up in, or the silo of education that they've risen through the ranks in. Policy-minded folks play one way, the tech executives play another way. The British games that I've seen played out one way, and now the Japanese games another, and the American games another way.

Lux Recommends

  • Tess Van Stekelenburg enjoyed Thomas Wolf’s essay on “The Einstein AI model.” “This perspective misses the most crucial aspect of science: the skill to ask the right questions and to challenge even what one has learned. A real science breakthrough is Copernicus proposing, against all the knowledge of his days -in ML terms we would say ‘despite all his training dataset’-, that the earth may orbit the sun rather than the other way around. To create an Einstein in a data center, we don't just need a system that knows all the answers, but rather one that can ask questions nobody else has thought of or dared to ask. One that writes 'What if everyone is wrong about this?' when all textbooks, experts, and common knowledge suggest otherwise.”
  • Jordan Schneider has been pumping out great articles on U.S.-China dynamics as the publisher of ChinaTalk. Two recent ones are a piece on the AI agent Manus by Lily Ottinger and one from September on “Innovation and AI in China’s Biotech Sector” by Nicholas Welch and Angela Shen. “Without the ability to outsource to these Chinese CROs and CDMOs, however, US drug development costs are projected to rise. The greatest impact would be on manufacturing capacity, with slowdowns in clinical trials delaying innovation pipelines and raising costs. Small, mid-size, and virtual biotech companies in the US and Europe would be especially affected, as their business models commonly rely on outsourcing proof-of-concept drugs until they reach clinical stages.”
  • Our scientist-in-residence Sam Arbesman recommends Ian Webster’s beautiful visualization of Earth at different periods of history.
  • Two blockbuster pieces on spycraft and the rise of Wirecard’s Jan Marsalek from The Financial Times and The Insider in “‘Let’s hire an ISIS suicide bomber to blow him up in the street!’: Europe’s most wanted man plotted my murder — and that of my colleague.” “The surveillance team rented an Airbnb across the street from my Vienna address, set up a constant surveillance camera to film me coming and going, and even bribed a corrupt employee of Swissport in order to gain access to airline booking data, thereby allowing them not only to know in advance when and where Roman and I were traveling, but also to place their agents next to us on flights. They used actual spy glasses to record me getting in and out of the plane — and even to spy on Roman texting me from inside the cabin.”
  • Finally, for those who love intense economic reading, here’s a full review from Robert L. Axtell and J. Doyne Farmer on “Agent-Based Modeling in Economics and Finance: Past, Present, and Future” for the Journal of Economic Literature, either in working paper PDF or paywalled American Economic Association format. “ABM has enriched our understanding of markets, industrial organization, labor, macro, development, public policy, and environmental economics. In financial markets, substantial accomplishments include understanding clustered volatility, market impact, systemic risk, and housing markets. We present a vision for how ABMs might be used in the future to build more realistic models of the economy and review some of the hurdles that must be overcome to achieve this.”

That’s it, folks. Have questions, comments, or ideas? This newsletter is sent from my email, so you can just click reply.
