Riskgaming

Why immersion — and not realism — is critical for wargaming

Design by Christopher Gates

Despite centuries of experience designing and playing war games, there is still very little rigorous research on how to evaluate what makes a good game. What’s the design goal? How much should (or even can) a game reflect reality? Are tighter or looser rules more likely to lead to productive learning? Is having fun important? That lack of rigorous analysis has historically stymied the wargaming profession, but a new generation of researchers wants to push the field forward.

Today, with both Danny Crichton and Laurence Pevsner on vacation, we bring back our independent Riskgaming designer Ian Curtiss to host David Banks. David is senior lecturer in wargaming at the Department of War Studies at King’s College London, where his research focuses on the empirical evaluation of war games and how the craft can evolve in the years ahead. He is also the academic director of the King’s Wargaming Network.

Ian and David discuss the antecedents of wargaming, firming up the foundations of the field, why realism isn’t as useful a metric as engagement, why balancing play and realism is so challenging, how to consider internal validity in games, and why it’s important not just to evaluate a game as a whole, but also its constituent parts.

Produced by Christopher Gates

Music by George Ko

Transcript

This is a human-generated transcript; however, it has not been verified for accuracy.

Ian Curtiss:

David, thank you so much for joining us.

Just to kick us off, tell me a bit more about your background, where you came from, and how you got so deep into the wargaming field.

David Banks:

Okay. Well, I think I'll do it in reverse chronological sequence: I'll tell you where I am now and then how I got here.

I'm a senior lecturer in the War Studies Department at King's College London. Specifically, I'm the wargaming lecturer: I teach two different MA courses on wargaming, which I'm happy to talk about, and I also serve as the academic director of the King's Wargaming Network, which is a collection of scholars at King's.

It's an organization founded before I arrived, a group of PhD students, people who have finished their PhDs since they were initially part of the organization as founders, and faculty at King's who are all using or studying wargaming in a meaningful way in their research.

My training is in international relations theory. I got my PhD at George Washington University in 2015. I'm Irish, which you may or may not be able to tell from my accent because it comes and goes, but I lived in the United States from basically 2004 to 2020, and I taught at American University in Washington, D.C. for about seven years as an IR scholar.

My research is IR, I've published in IR, and I have a book that will hopefully come out next year on, I don't want to say classic IR, but classic IR themes: diplomacy.

I came into wargaming originally by using games in my classroom, because I thought they'd be a great tool for giving students an experience of politics that they otherwise wouldn't get.

And then in 2015, I was approached by a colleague of mine at AU called Ben Jensen, who is very prominent in the professional wargaming world in the United States. He works very closely with the Marine Corps University. He's at Minerva, in the Pentagon. He's an officer in the US Army.

He approached me and said, "Hey, you're the guy who does games in the classroom?" And I was like, "That's right, that's me." He said, "Would you like to help design games for a research project on cyber security?" It was a very indirect road into that.

And in doing that project, I read the wargaming classics like Peter Perla, Phil Sabin, my predecessor. I read what was out there and I said, "Okay, this seems to be how the method works. Let's build a game."

It's very common in the wargaming world: I'm a hobby wargamer, I play hobby wargames in my spare time and have done since I was a boy. So I built a game, did some research about cyber security, and that kickstarted things.

And then I did some other smaller things on commission and then I got the job at King's and now, this has become my core research focus and specifically wargaming as a methodology.

So not just building and using games, which I do do, but I'm more interested in: how do we know they work? How do we know they're working well or poorly? How can we evaluate them? How can we compare them with one another so that we can produce better games, produce better syntheses from those games, and have a little bit more confidence in the analytic results that these games produce?

As a final point on this, because you're asking what my road into this was: I think my road in some sense is the standard road into academic wargaming, which is to say I was something else first.

Every academic wargamer out there that I'm aware of, with perhaps one exception, was not a wargamer first but an IR scholar, at least most of the people I know, who then used wargaming in some way and found themselves becoming a wargame scholar as well. That really speaks to the fact that, as an academic field of inquiry and as a method, from the academic perspective, it's very much in its infancy.

There isn't such a thing as a wargaming academic discipline or an academic method that you could train in, the way you can become a statistician or an ethnographer. That's not really a thing that exists in academia. But hopefully, over the next few decades, through the work that's being done by myself and my colleagues at King's, and by the various academics spread out across the world, a lot of them in the United States, we're going to see it develop as an academic tool.

Ian Curtiss:

The fact that it's so new in so many ways is so interesting to me because it's not new at all.

Gaming as a practice of decision-making has existed for thousands of years, wargaming, kriegsspiel, for 200 years, and yet, like you said, there's so much that we don't know about it that we run on either assumptions or just, "Well, it worked really well this last time, so I think it should work again this time." These sorts of if-then assumptions.

Could you break down for me a little bit where we're at now? I guess in the sense of the level of confidence in the methodology, let's call it that.

David Banks:

Yeah. Well, I think to that first point about its trajectory.

It is a really interesting field to be in as an academic because as you say, it's been around for a long time. Obviously, we just talked about play in general. Play here as a human activity precedes most of the structures that organize our society. Play predates trade, it predates capitalism, it predates money, predates probably religion. It's one of the fundamental features of human activity that is eternal across time and space. There's no culture in which there isn't play.

There's a very famous book called Homo Ludens, Man the Player, which essentially makes this point. It's a natural feature of humanity that we play, although usually we do more of it when we're children than when we're grownups, which is something we can talk about later on, about how seriously people take it sometimes.

As you say, wargaming as a distinct tool used to study planning, or strategy, or analysis, or education is, depending on how you date it, somewhere between 200 and 250 years old, and it has been used extensively, especially in militaries. Some militaries in particular: the Prussian military, especially the US Navy, on and off; other militaries, the British, the French, the Russians, the Germans, persistently. And yes, academically, like you say, it's really very much in its infancy.

As I said, when I first did my research in 2015, when I was approached to participate in building games for research, I went and found the books I thought I needed to read. I did a methodology minor: my PhD's in IR, but my minor's in methods.

I was very used to that idea of reading long, dense books on here's how you're going to do a statistical test. I assumed those books existed, and they sort of did. Like I said, I read Peter Perla, I read Phil Sabin, I read Rubel, I read Ed McGrady.

There was enough there to read that I felt like, "Okay, I've got a sense of this method. I'm probably doing it wrong, but let's get going. Research can never be perfect." I've got enough to have a sense of what I'm doing.

But I also had this feeling that I bet if I went back and really read this and really got to grips with it, I'd get to the core foundation of it, which is essentially the experience I had with statistics.

I was doing statistics for years as a graduate student, and then finally when I prepared for my comprehensive exams, I was like, "Okay, I really have to sit down and really understand the fundamental assumptions of this method."

And when I did, they were there. You may not agree with them and you might find the assumptions of any method questionable, which is neither here nor there, but they were there. It was like, "Okay, if you're doing an OLS regression, here's the seven assumptions you're making about reality." You could see them written out, in English and in an equation, and you're like, "Okay, I get it. I get how you're making claims about reality using this method. So if I want to be cynical about it, I know how I can be cynical about it. I know when I can say, 'Okay, that's not how these things work. I don't agree with that assumption.'"
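
For readers who want the concrete reference point being gestured at here, one common textbook statement of the classical linear regression model and its assumptions looks roughly like the sketch below. The exact list varies by textbook (five to seven items is typical); this is just one standard formulation, not a quotation from the episode.

```latex
% One common textbook formulation of the classical linear regression model
% and its assumptions (exact lists vary between five and seven items).
\begin{align*}
  &y_i = \beta_0 + \beta_1 x_{i1} + \dots + \beta_k x_{ik} + \varepsilon_i
    && \text{(1) linearity in parameters}\\
  &\{(y_i, x_{i1}, \dots, x_{ik})\}_{i=1}^{n} \text{ drawn as a random sample}
    && \text{(2) random sampling}\\
  &\operatorname{rank}(X) = k + 1
    && \text{(3) no perfect multicollinearity}\\
  &\mathbb{E}[\varepsilon_i \mid x_{i1}, \dots, x_{ik}] = 0
    && \text{(4) zero conditional mean (exogeneity)}\\
  &\operatorname{Var}(\varepsilon_i \mid x_{i1}, \dots, x_{ik}) = \sigma^2
    && \text{(5) homoskedasticity}\\
  &\operatorname{Cov}(\varepsilon_i, \varepsilon_j \mid X) = 0 \text{ for } i \neq j
    && \text{(6) no autocorrelation}\\
  &\varepsilon_i \mid X \sim \mathcal{N}(0, \sigma^2)
    && \text{(7) normality of errors (for exact inference)}
\end{align*}
```

The point being made is not about the specific list but that it exists: you can read it, reject a line of it, and know exactly what you are rejecting, which is the kind of explicit foundation the wargaming literature has historically lacked.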

What I found in reading the wargaming literature is that I couldn't quite get that sense of a deep foundation. There were a lot of essentially very plausible, credible, logical statements, but there wasn't that really fundamental, deep, deep foundation, that bedrock you could land on and say, "Okay, if I accept the following assumptions, the conclusions of this method can be accepted." That wasn't there for me.

I think from the academic side, that literature appeared initially. Peter Perla's very influential book, The Art of Wargaming, of course, came out, I think, in 1990, and that started a brief debate about epistemology: how we know knowledge is being produced.

And there was a flurry of activity for about 10 years, a few articles being published, and then it stopped. I think what might've happened there was a question of people saying, "Okay, we've got enough to be able to work with. That's sufficient foundation for a practitioner to be able to go, 'Okay, I've got to get on with the process of making these games.'"

Because what I would say here is that I think part of the reason it hasn't developed further is not because practitioners don't worry about these questions or don't have the ability to answer them; it's that their incentive structure is very different.

I've done some research that I'm hoping to get out for publication this year. I interviewed a bunch of wargamers, 25 professional wargamers, about how they deal with the things I had concerns about, like bias in the way you've designed your game and selected subject matter experts. I had all these standard methodological concerns.

And what was very heartening to hear when I spoke to a lot of them is that they had them too, and they also had a lot of very adaptive and sophisticated responses to these problems.

What they didn't have is that transmission belt of an academic setting, where there was an incentive for them to write down their protocols and distribute those protocols.

There is some of this stuff already taking place inside the professional wargaming world but as a practice, it doesn't require you to explain these things and it doesn't incentivize the people who are thinking about these things to publish around these ideas. They're too busy getting ready to build the next game.

I think now that academics like myself are entering this space, we can bracket anything about training or intelligence to one side, and we have the time and the incentive structure to ask and try to answer these kinds of questions.

And in doing so, it's a very exciting academic space, because so many of these questions are fundamentally unanswered. We have a lot of plausible answers, potential answers, so it's not like no one's thought about them, but in terms of evaluating those answers substantively, being able to really choose between them, that work very often hasn't been done.

Ian Curtiss:

You spoke to the cycle of research and the role of it. Because it's a practitioner's tool, it's not just theory: you make a game because you have a decision that you need to make, so you don't have time to sit too long on the game and think about it and iterate. Once you run the game with a certain number of people, they need to go make the decision, or they have to go run the next game for the next big decision, whatever they're preparing for. In the end, it comes down to funding for research into understanding what this tool is and how we know whether it's good or not, because there's always another decision to be made.

It's fascinating. You're at the Wargaming Network where you guys are actually figuring this out. So you're just right at the center of it all. What is your take on the defense industry's funding of it as a tool in the US?

The GAO, the Government Accountability Office, just came out with a report about wargaming and how it could be improved for applications and so forth. And I'm always shocked when I hear, "Oh, we have one of the first studies ever done that measures the impact of wargaming on certain learning outcomes." It's like, "Wait, we're just now doing this? We've been wargaming for 80 years or whatever it is, and the DOD has never thought to fund learning-outcomes research?"

I'm just curious to hear your take on that and the system of funding in support of the research.

David Banks:

Yeah. I'm not as familiar with the US funding ecosystem as I am with the U.K. one, which is obviously a bit smaller, but I should say the MOD, the Ministry of Defence over here, is very behind wargaming at the moment, and their Dstl, the Defence Science and Technology Laboratory, which is the U.K. equivalent of DARPA, has been expanding its wargaming activity considerably over the last few years. I might be misspeaking here, but I think I'm roughly right in saying it has about doubled in size.

As well as this, the MOD, under STRATCOM, has set up a different body called the Defence Wargaming and Experimentation Hub, which is there to help unite and localize or centralize wargaming activity in the U.K. defense sector, not to replace anything, but to help make sure it's being coordinated in the most efficient way. For example, it has a huge wargaming space where you can play in a secure environment. So they're taking it very seriously.

I think now, in the U.K. MOD, they've started to really put money behind it. Some of what's being requested now is a bit more of exactly what you're describing: okay, what is our bang for our buck here? How do we know?

I'm currently assisting on a project exactly around that: how do we evaluate impact or return on investment? How do you know it's working?

And I think this is a really tricky thing. There are probably two or three things we should flag very quickly here. One is the purpose of the game.

We have to be very careful here. Very often, we play a game and we take a personal experience out of that game.

But really one of the things we need to be doing when we establish or evaluate the value of a game is, and this is something that is very much in the literature of professional wargamers as well, what's the game built for? There's no point in playing a game and saying, "That wasn't realistic." Of course, it wasn't realistic. It's a game.

I really don't buy into the concept of realism as a key metric or indicator because as I've said so many times to my students, "I've got these wargames on my shelf. Some of them have eight pages, some of them have 65 pages. But if you play an eight-page rule book about World War II and you play a 65-page rule book about World War II, those two games have a lot more in common with each other than they do with World War II."

This thing about realism versus non-realism I don't think is a very effective measure.

I think a better measure is what was the game supposed to be doing? Did it do the thing it was supposed to do? If there was a particular thing we're trying to get out of it, did we get the thing?

I think the other part, which is sometimes related to this, is that when we ask that very specific question of a game, there's the bigger question of, well, what is the game really: an analytic game, or an educational game, or an entertainment game, which is typically not what the MOD or the DOD is worried about.

But games, in principle, have this core meta purpose, and very often, when you play a game, it can do all three. You can play an educational game that's also fun, that might also give you some sense and understanding of some sort of dynamic, make you a better researcher or understander of that kind of thing. But that may not be its core purpose.

We have to be very careful when we evaluate things that we're not looking for the wrong thing. People can often walk out of a game and say, "Oh, I was really bored. That game sucked. I was bored." Now, there are other reasons you don't want your players to be bored, and I think there's actually a reason you always want players to be engaged, but if we bracket that for a second: if the game's not there for entertainment, then their happiness or boredom is not really an essential measure of what's good or bad.

Or if people walk out afterwards and say, "Oh, that was really unfair." That's a very common one in classroom settings, where I might design a game which is politically realistic, and part of political realism is that it's not fair. So some team might go, "This is ridiculous. Everyone's ganging up on us over and over again." And you're like, "Well, that's the actual strategic situation."

You learn what it's like to have to deal with being ganged up on. That's the lesson, what it's like to be without allies in a region or something, but it may not have been a very fun game. We have to be quite careful what it is we're actually looking for.

I think that's why some of these studies are quite interesting now, because they ask: how do we, as an organization, know we're getting what we want?

I think there's good work on gaming and education broadly, but on wargaming and education, there's really not very much at all. That's part of that question.

I don't really have the answer to that because as I said earlier, these are studies that are actively taking place.

I think what is encouraging is that they're happening now, and so that does mean we have to start thinking, "Okay, well, what's a metric of evaluation? How can I know something has happened?" It can actually be quite difficult to assess those things, but there are ways.

Ian Curtiss:

Two things that you spoke to that I'm curious to touch on.

One is the sense that a game has to be a serious thing: we're discussing something, this is a real-life situation, we need to be serious. "If you're having fun, then you can't be learning" is oftentimes an assumption people subconsciously have: "If I'm having fun, surely I'm not learning anything, or I'm not appreciating the subject sufficiently." I'm curious for you to speak to that.

It's so interesting that this methodology is so frequently used in war, in the defense industry, which is one of the most serious professions one can have, and yet this methodology, so frequently pushed aside because it supposedly can't be a serious thing, is most frequently used by the most serious professionals on earth. What is that about? How is that still an argument used in other industries or sectors?

David Banks:

I think there's possibly two or three different things going on at the same time, but we can talk about one of them.

One thing here is that you're right about fun. The term I tend to use now, in an effort to make it sound a little bit more expansive or scientific, is engagement. Because you can be playing a game that you're really into, but you may not exactly be having fun. You might actually be getting quite stressed or worn out or exhausted, but it doesn't mean you don't still want to participate. You're still in it and you want to be in that space.

For anyone who's listening, and for yourself: if you've ever played a game which has really pulled you in, we have lots of different words for this, immersion, the magic circle, all these kinds of ideas, but you'll know that experience. You sit down and say, "Okay, let's play Risk," or whatever it is, and then you look up and you go, "Oh my God, four hours have passed."

That thing of engagement that pulls people in, that I think on one level embarrasses people because then they go, "Well, this is childish. This is for kids. I used to play Risk when I was a kid. What am I doing playing this as a 40-year-old man? What a silly thing to do." I think people are just inherently embarrassed by that.

And they'll also assume that if they're having a good time, this can't be producing something meaningful.

Now, I think in fact, this is the bit that wargaming needs to be a little bit less concerned about because you'll hear some people use the term conflict simulation or serious game, and I just don't think it's necessary. I don't think... Instead of hiding from the fact that games are engaging, I think we should just say, "Yeah, that's right."

In my estimation, that's one of its key distinguishing factors as a method. Not only does it do this, I would contend it must do this in order to produce educational or analytic outcomes. You require the players to be sucked in, because then they're playing authentically to the game. You want them to be playing their best version of themselves so the game can generate those interesting outcomes or teach those interesting lessons.

But I don't think there's anything to be ashamed of. I think that's just a function of the society in which we live, in which we are expected, as St. Paul says, to put away childish things. At some point, we're just expected to not be children anymore.

I bet you anyone who says to you, "Oh, I think games are for children," will then also explain to you how they have a fantastic golf session on a Sunday. What kind of games do you think count?

There are very few people I know who don't watch or participate in something that is a game. That would be one part of it.

But back to your military bit, which is, well, why does the military like this so much if at the same time they have a mild allergic reaction to it?

I think one of the reasons is that games, and this is true not just for manual analog games but also, indeed even more so, for digital games, are very good at modeling kinetic things in a consistent way, in a way where we have a lot of validity in the results.

If you're trying to model something that's kinetic, that is, say, material, physical and exclusively physical, gaming or simulation is a great way to do that.

For anyone listening to this who's ever played a video game like Microsoft Flight Simulator: you can learn how to fly playing that game, and it's still a game, but we have such confidence about the physical realm. We've got such confidence about the internal validity of the models we use to model that realm that we can actually build a really effective facsimile of that reality, to the point where you might actually be able to land a plane in a crisis if you had to, having played Microsoft Flight Simulator. It's not totally made up.

I think for militaries, that's one of the reasons they've used it: especially when you're talking about the operational or the tactical level, those kinetic, physical factors really come to the fore. They're not the only thing taking place. Your rules of engagement might strongly affect who you're targeting, and the morale of your soldiers is something that's very hard for us to model consistently, but a lot of it is weapons hitting targets at range, and we know the range, we know the explosive impact, we know how shrapnel works.

I think militaries quite understandably embraced it because they knew that the models, especially of kinetic elements, were quite on the mark. They're quite accurate.

In fact, at the Naval War College, one of the reasons you'll hear for why the Navy has embraced this gaming method so much, even more than the Army has, is that they don't even have to deal with terrain features, because the sea is a flat plane. So you really are just dealing with the weapon platforms. It is just the speed of the platform, the ship or whatever it is, and then the trajectory of the weapons that are being fired, and so on and so forth. So they could model this stuff really accurately.

Also, I think for militaries, war is a contest. There's that concept of "I'm trying to take this hill on my left," while you might also have a whole plan where you're on the reverse slope of that hill waiting for me to come over the ridge.

It's an effective tool for the kinds of problems that militaries face, especially at that tactical and operational level.

I think we can talk about this in greater detail, but where gaming starts to become much more questionable, where there are things we have to take much more seriously and get serious about discussing, is once you leave the kinetic realm behind. Because we don't have as much confidence as social scientists, and I'm speaking as a social scientist here: how do I know I'm looking at a democracy? How do I know I'm looking at the national will? How do I know I've identified interest groups correctly, and the way they work together?

Because we have so many different ways of not just modeling these behaviors, but measuring these things. My measure of a functioning democracy might be quite different from your measure of a functioning democracy, and I might fold that into my game in one way and you might fold it into your game in another way, and then the games are going to be producing very different kinds of activity and results.

I think we don't have that common language of validity that we have for the physical realm, which is why it gets more tricky when you start to move into the strategic space, for example.

But I think militaries have embraced it because they realize this is actually a pretty effective way to quite cheaply, comparatively speaking, test some of their theories about how a doctrine change might work, how a weapon system might work, that kind of stuff.

That's certainly how it was used historically by, say, for example, organizations like the US Navy.

Ian Curtiss:

This gets to the second point I was going to raise, which was your comment about immersion and people's ability to get immersed in a scenario. It's one of the cheapest ways possible to get people to actually think about a future scenario. I'm curious to get your take on the validity of the modeling or the theory behind a scenario. How important is that versus just getting people to think about it?

So many people are stuck, in a very practical sense, in the here and now: "My deadline's in two hours. I've got to finish that." "Oh, I forgot to call Sally the other day." "Oh, I've got to go pick up my kids." Whatever it is.

To spend a solid two hours thinking about a potential future scenario forces you to wrestle with it, to feel the pain, the cortisol, the stress, the dopamine, the learning drugs of our biological systems, around a potentiality. So in the end, we're already asking, well, would it work that way? Would it not? We're already questioning ourselves and the system, I think, naturally. That's my take on it. And so the value of it is worth more than killing oneself over building the perfect model. I'm curious to get your take on that.

David Banks:

Yeah. We can talk about building the perfect model in terms of...

For me, on the perfect model: firstly, we don't have a consistent evaluative metric to identify how I would know my model is perfect. But as a general North Star, the way I always put it to my students, just so you know you're roughly going in the right direction, is that you always ask yourself, "What's the purpose of my game?"

You should be able to articulate that purpose reasonably clearly, and then that can help you know, "Hang on a second. Am I going off-piste?"

Now, on going off-piste here, we can be specific about what I mean by this. For example, adding design features to a game that really don't serve any function for your game. Say I want to ask a question about how armored divisions can move from these kinds of terrain to those kinds of terrain over a hundred miles or something.

If I start adding in all this kind of crunchy detail about place names and so on, why am I doing that? Now, we might be doing that for immersive reasons, but in terms of the game mechanics, it's like, "Am I adding a bunch of rules about having to stop and get petrol every few hours?" If that's what I want to study, if that's part of my question, then yes. If not, I'm just adding complexity.

It's actually quite useful. Knowing your purpose helps you know when to cut and when to add. It can also help you identify, "Okay, I might need some mechanic inside my game which helps the game move as a game," something that turns the gears of the gameplay but might not actually have a logic outside of that. And so you can identify artifacts inside your game, like, "Hey, we've got a mechanism in here which is not based on anything real, but it's necessary to make the game work," and be able to pull apart your model like that.

I think that's about the purpose thing.

On your point about immersion, I think we have a few different words for this, and my PhD student Evan D'Alessandro's entire research project is about immersion, which is why I was eager to be his supervisor, because it's such an important part of it.

I think the argument that's shared by both professional and academic wargamers at the moment is that the immersion is what makes it a better method than the alternatives under certain conditions. The argument is that if you make people play as the general in charge of the nuclear arsenal, or whatever it is, and you immerse them in that experience, especially if they have some sort of prior expertise in that area, they're going to make choices like the real thing. That's the argument.

They're going to make a decision in this game, and that's actually going to be reflective of the decision that the real decision-maker would make. And the immersion, the fact that the game is pulling them in, is what's doing that, more than if you had, for example, just asked them a survey question: "Hey, if you were the general in charge of the nuclear arsenal, would you escalate?"

The point is the pulling them into the game. When we think about immersion, we do have a good sense, a good language, of the things that pull you in. A lot of different things can pull you in. For example, adding the place names to the map.

I run a diplomacy game which is set in 1914, and I have six different countries. I have the flags of those countries; each team has a little flag on their table. You've got the Austrians, you've got the Russians. They're time-specific: it's the Austrian flag circa 1914, and people are waving those flags around. They have a little mallet, a gavel they can bang on the table when they're chairing the meeting, and people in their fifties are banging that like maniacs. That sort of stuff can help pull people in.

But I think the other thing that can help pull them in, which I mentioned earlier, is the game itself. Part of what immerses you is that you're in a competitive scenario with somebody else.

To your question about why you don't just ask them the question: that's one of the big differences. It's in fact, for me, one of the reasons you want games to run for at least a few turns. Because, and I'm paraphrasing Thomas Schelling, the economist, and I'm getting the quote slightly wrong, the one thing that nobody can ever imagine is something that never occurred to them.

That's what gaming gives you. You can play a game of chess against yourself in terms of testing out strategies, but you can't actually surprise yourself in a game of chess, or at least I can't. I can't go, "Oh my God, I didn't even realize I moved the bishop there." Of course I did. I just moved the bishop there.

But when you're playing against somebody else, you might not spot that they move the bishop there, or they might not spot that you moved the bishop there, and it does change what happens next, and you can't anticipate that future. They might play better than you expected or worse than you expected, but they won't play exactly like you expected.

That's where a lot of the unique and interesting stuff comes out of the game. When you ask somebody what's going to happen in the future, what would happen in this scenario, they give you 60 or 70 possible outcomes, but the game collapses it into a single decision. And then, furthermore, it moves on, and there's another set of decisions that have to be made around this new reality, and it starts to go down paths that people can't anticipate. I think that's what can be really valuable.

We, the Wargaming Network, built a game for NATO. In building the game, we adopted a design approach used by two people who were at RAND at the time, Becca Wasser and Stacie Pettyjohn. They built a rigid game, a game with all the rules in the box, like a board game you'd buy in a store, by first building looser games, games more like Dungeons & Dragons, people talking around a map. We did the same: we built those looser games first, then took the results of those games to turn into a slightly more rigid game, and took the results of that to build a very rigid game.

Now, the reason I mention this is that when we played the very loose game, we had a bunch of experts in the room, foreign policy experts on China, the US, Asia, climate security, the U.K., Europe, space, a bunch of different domains. Of course, they're all experts in their specific things, but they're not experts in all of these things, and it's the way they interact that generates really interesting stuff.

There's a lot of stuff that comes out of the game that's not usable, silly outcomes. You're like, "Oh, that wouldn't really happen," or, "This is an odd line of activity that you're choosing." But occasionally, they would propose things and then they'd adjudicate them themselves as a group.

One that always stuck with me is an instance in that game where the US team recognized Taiwan, maybe 10 years in the future from now, without Taiwan being invaded. Now, the reason they recognized it in this specific scenario is that in this specific game, as other events had taken place, South Korea had developed nuclear weapons autonomously in reaction to North Korean threats.

And the US team, which was populated by US specialists, people who've studied US foreign policy for 20 years, were like, "Okay, in this instance, under these circumstances, it is plausible that the United States might choose to recognize Taiwan," inside the game scenario, because they want to send a signal to the region: "Hey, don't panic, don't go nuclear. We're here, and in order to prove that we're here, we're going to do something very costly and risky to show you how much you, our allies, i.e. Japan, can trust us. Don't follow Korea down this road."

Now, what was so interesting, as I said, is that in that game it's not a likely outcome. It's not something we expect to see.

As an event, as a type of event, it was highly implausible. But in that specific instance in the game, given those preceding conditions, this was a plausible outcome.

What it gave us here is what I call a synthetic causal mechanism: a non-real causal route towards an outcome that, prior to it happening in the game, nobody in the room anticipated. That's what was interesting. Nobody comes into the room going, "Well, I know what's going to happen down the road."

I suspect if you'd asked most of those people ahead of time, "Do you think there are any conditions in which the US would recognize Taiwan short of an invasion?" they would've said, "No." But the game creates a circumstance in which something you never thought of now becomes plausible or viable.

Ian Curtiss:

This is complexity theory: you're building complex systems where emergent outcomes occur that nobody thought could occur. Nobody knows how it developed per se, but we get to a point where all of a sudden something seems like the best course of action, and nobody ever thought that it would be the best course of action.

David Banks:

You'll actually hear stories of that in other wargames, where people will come into the room and say, "I'm a nuclear policy specialist. Under no circumstance in this game am I ever going to authorize nuclear weapons use." And then six hours later, they find themselves authorizing nuclear weapons use. I think it's exactly that.

Now we have to be very, very careful about, okay, what's in the game? How much of this is an artifact of the game? The game is forcing them down certain roads.

There's interesting research by a scholar called John Emery about RAND nuclear wargames in the fifties and sixties: depending on how they were designed, the ones that were more mathematical and crunchy tended to have more nuclear escalation and nuclear outcomes than the ones that were a little bit more human, for want of a better phrase, in their design and allowed for more negotiation. So it's a model, and garbage in, garbage out. You've got to be very careful about that.

But if you are able to identify that, and this is part of my interest in evaluation, if you can identify ahead of time the places in your model you're confident about, "Hey, I think we've modeled this bit really well," and you can say upfront, "We ourselves think this part of our model is not very good," it can be useful when you start to produce your results. Instead of going, "Hey, here's what happened in our game, take it or leave it," you go, "Okay, some things happened in our game that I actually feel quite comfortable defending as an activity or a decision, and other things happened that, yes, they happened in our game, but I would go ahead and discount those."

That's part of my own research project at the moment: ways of being able to report game results that disaggregate the game into its constituent pieces, individual decisions, events, and outcomes, and to be much more willing to say, and much clearer about, which bits of those are good. Rather than handing over the game and saying, "Believe it or don't believe it," you go, "I think you should believe these bits quite a bit and maybe not believe these bits at all."

And in the game we built for NATO, the report, which we've just finished, says exactly that: "Hey, these bits are good. These bits are not so good. So don't read this or play this game and walk away going, 'Aha, now we know X,' because we ourselves, as the designers, have said this game cannot tell you X. It doesn't discuss it, or anything that happens around X is actually a bit of modeling we had in there for some reason, maybe to help grease the wheels of the game, rather than to generate useful data. But other stuff we are more comfortable and confident about, and you should take those bits more seriously."
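
As a purely illustrative sketch of what such a disaggregated report could look like in practice (this is not the format the Wargaming Network actually uses; the field names, confidence labels, and example findings below are hypothetical), the idea is simply to attach the designers' confidence to each individual decision, event, or outcome rather than to the game as a whole:

```python
# Hypothetical sketch of a disaggregated wargame report: each finding is
# tagged with the designers' confidence in the piece of the model that
# produced it, so readers know which results to take seriously.
from dataclasses import dataclass
from enum import Enum


class ModelConfidence(Enum):
    HIGH = "defensible: take this finding seriously"
    LOW = "weak or game-artifact modeling: discount this finding"


@dataclass
class Finding:
    kind: str                # "decision", "event", or "outcome"
    description: str         # what happened in the game
    producing_mechanic: str  # the part of the game model that generated it
    confidence: ModelConfidence


report = [
    Finding("decision", "US recognizes Taiwan to reassure regional allies",
            "alliance-signaling adjudication", ModelConfidence.HIGH),
    Finding("event", "Climate crisis arrives at a fixed point late in the game",
            "scripted pacing timer", ModelConfidence.LOW),
]

# The report leads with what the designers are willing to defend.
for finding in report:
    print(f"[{finding.confidence.name}] {finding.kind}: "
          f"{finding.description} ({finding.producing_mechanic})")
```

The specific confidence assignments above are invented; the design point is only that results are handed over piece by piece rather than "take it or leave it."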

Ian Curtiss:

Yeah. I want to circle back.

You spoke to people coming in with assumptions and so forth, and that's one other thing we always talk about. An important part of it is that people bring in their subconscious assumptions. They're thinking consciously about things they've read books about, they've studied, they've done master's degrees and PhDs in, whatever it is, but then all of a sudden you put them in the game and they realize they have really strong emotions about some of these topics, and their emotions override their conscious thinking. It really leads to some interesting agency-theory analysis for the games.

But we're running long. What's coming down the pipeline for you, and can you speak to one of the most exciting games that you've developed, worked with, or supported? Whatever comes to mind.

David Banks:

Sure.

Well, the pipeline. I have this research about subject matter experts. I actually completed all the primary research, the interviews, over a year ago, and now I've just been trying to really write up the motivating literature and theory around it, but it's essentially ready to go. It's the classic thing of tinkering with something when you should just actually get it out.

But that's fundamentally, as I said, about how professional wargamers deal with problems of, say, bias (I'm using the language of bias; maybe validity would work just as well). How do you know you've got the right model? And in particular, it's about subject matter experts: how do you know you've got the right people in the room, and how do you know you're getting the best out of them when they're in the room? Because we do know, as you said, that people suffer from biases. Even experts suffer from bias.

There's a large literature here about expert political judgment and how poor it is essentially.

That's been really interesting because it's really getting in under the hood of a lot of the existing best practices that professional wargamers are doing. And so I'm actually quite hopeful that when it gets published, it'll be useful for them as much as it would be for academics in terms of, "Hey, this is what you guys are doing."

One of the really useful lessons I learned from that research, which we used when we were building our next game, is that when people get really engaged and emotional in the game, like you said, a very obvious but, for me, very useful and simple thing you can do is just periodically pause and say, "Hey, I just want to remind everyone what the purpose of this game is, why we are actually in this room, because right now you're in gamer mode. You're busy trying to make the US win, but actually, I need you to remember that you're a SME who we hired to do something and we need you to..."

That's a very useful little facilitation trick. Every now and again, you just sober everyone up and go, "Hey, it's more important for us that you play authentically than that you play to win right now," because often they won't adjudicate harshly against their own team or something because they're trying to win.

That's been very useful and so hopefully, that's going to be a very interesting project.

That's a precursor. It's combined with two other research projects I have right now about evaluation: how I can identify the strengths and weaknesses of a game model, using as the initial way in this tension, as I mentioned, between representation and playability. So I'm trying to come up with ways that you can essentially evaluate your own game along those dimensions, as a representative model and as a playable model, and then look through it and say, "Okay, I think our game has got this level of quality," before the game is run.

It's a way of trying to diagnose the quality of your game design prior to it actually starting to generate data. And so that can help you establish ahead of time for the reader, as I said, "Hey, you can go ahead and ignore a bunch of this stuff because this part of the game we know isn't so great, but please pay close attention to this stuff."
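
To make that concrete, here is a minimal, hypothetical sketch of a pre-run self-evaluation along those two dimensions. The scoring scale, threshold, and mechanic names are invented for illustration and are not David's actual instrument:

```python
# Hypothetical pre-run design review: score each mechanic on how well it
# represents the real-world thing it stands for and how playable it is,
# then flag weak spots before the game generates any data.
from statistics import mean

# Scores run 1 (weak) to 5 (strong) on each dimension.
mechanics = {
    "escalation ladder": {"representation": 4, "playability": 3},
    "economic tracker":  {"representation": 2, "playability": 5},
    "migration events":  {"representation": 3, "playability": 4},
}

for name, scores in mechanics.items():
    flag = "  <- revisit or caveat in the report" if min(scores.values()) <= 2 else ""
    print(f"{name}: representation={scores['representation']}, "
          f"playability={scores['playability']}{flag}")

print("overall representation:",
      round(mean(m["representation"] for m in mechanics.values()), 2))
print("overall playability:",
      round(mean(m["playability"] for m in mechanics.values()), 2))
```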

That's all hopefully going to feed into a project that I do hope to get started, I keep saying next year, but probably in late 2025: a wargaming book.

I do want to write a book about wargaming methodology. There are some books out there, but I think we're in the middle of this great debate now, so it's a great time to get in again.

In terms of game designs, what's exciting is that there are more professional-style games coming out onto the market. They're sometimes harder to get your hands on, but they're there.

RAND has a game called Hedgemony, which is about strategic decision-making.

Sebastian Bae's game Littoral Commander is about operational and tactical warfighting at the coastal level.

The Air War College has a new Air Force wargame out. UK Fight Club has Take That Hill!, which is an infantry wargame.

Dstl now has a strategic wargame called CONTESTED.

Now, I mention all of these games because they're examples of professional-style wargames. A lot of them have a commercial DNA, because commercial wargaming is where all the design innovation is, and a lot of professional wargamers are also commercial wargame players and sometimes designers. So it's no surprise there's a lot of cross-pollination.

But they are really good examples if listeners are thinking, "I want to play a wargame that the professionals are playing." Most of the titles I mentioned are ones you can find copies of and purchase.

Most recently, we at King's have built games ourselves, both as individual members and now as an organization: the Wargaming Network has twice built games for NATO. The more recent one we built, a game called Horizons, is one I'm very proud of.

We did this seminar-style game, which is a game where you have a lot of SMEs talking. We had an initial idea. Well, what happened is NATO approached us, NATO ACT, I should be clear, NATO ACT in Norfolk, Virginia, and said, "We are going to run some games set in 2045. We'd like you to run some complementary games set in 2045."

And for a lot of different reasons, we thought that that wouldn't work but the fundamental thing was, "Well, we don't really have access to the stuff you're looking at, so we're probably going to build a bunch of games that are just not going to communicate with your games at all."

We discussed it for some time and then agreed that actually a better idea was to build a game that starts in 2024 and ends in 2045, and then we can compare our game results with your games' starting points. It's a complement rather than a comparator.

We ran this game for NATO 14 times in total, and we've also run it a number of times since.

We had this initial design, which was a seminar game, very loose. That fed into a free kriegsspiel, a slightly more rigid game, think Dungeons & Dragons: some of the rules are quite crunchy, some of the rules are very open. And then that fed into a final rigid game called Horizons, which starts in 2024 and ends in 2045.

It's a game in which you play one of NATO, the United States, the E.U., Russia, or China, and it's about sub-threshold strategic competition. These countries can never go directly to war with one another, but they can go to war in a lot of other places on the map. It deals with strategic competition, economic development, political influence in both countries and regions, populism, elections, revisionist states, migration, and then technological developments, multi-domain operations, and, at the back of it all, a looming climate crisis that, if the players do not address it early, comes for them at the end.

It's a game I'm very proud of, because it has design... All games have limitations. Rigid games really suffer from limitations because they have to be everything. They're God in a box.

But I'm very proud of how our team, myself included, working together, was able to build a game that really gets at a lot of the types of politics going on in the world, and we're hoping to get it up on Game Crafter in the next few months. There'll be announcements; if you're interested, follow all of our KCL Wargaming comms on Twitter and LinkedIn and so on and so forth. It's going to be available for purchase.

We're happy because we'll be able to add it to that stable of professional-style wargames I've already mentioned that people can purchase and play at home or in a classroom.

Ian Curtiss:

That's awesome.

Okay, great. Yeah, I'll definitely keep an eye out for that. I've heard many things about the game. I'm excited to get my hands on it. That's awesome, David.

Well, thank you so much for joining us. This has been awesome and fascinating and so intriguing. There are lots of opportunities for more research to be done. Everybody agrees it's a good thing; we are all working out how to make it a better thing.

David Banks:

Yeah.

Ian Curtiss:

It's awesome.

David Banks:

If I could add a final note on this: for those of you listening to this podcast who are academics and are looking for a research program, this is a phenomenally exciting space to be in, because so many of these questions need to be answered, and it can't be done by a single person. Somebody has to come along and answer these individual, discrete questions: what does immersion do, how do organizations learn (which is what one of my other PhD students, Boukje Kistemaker, is working on), and so on.

It's an exciting place. Almost any question you have to ask is waiting to be answered. And so it means the sky's the limit. We don't know what the answers are to so many of these questions, so when we finally get them, it's going to be a real surprise.

Ian Curtiss:

Yeah. Thank you, David. Appreciate it.