Something is rotten in the state of the internet. Social networks that were once meant to be entertaining diversions have become riven with vituperative political combat that leaves all but the most blinkered acolytes running for the safety of a funny YouTube channel. Bots swarm through the discourse, as do trolls and other bad actors. How did we let such a crucial communications medium become enshittified and can we build something else in its stead?
Joining host Danny Crichton and Riskgaming director of programming Laurence Pevsner is Renée DiResta. She is a leader in the field of internet research and is currently an Associate Research Professor at the McCourt School of Public Policy at Georgetown. She’s written recently on the surges of users migrating from one internet platform to another, as well as on the future of social platforms in the age of personal agentic AI.
Today, the three of us talk about how social networks like X, Reddit, Bluesky and Mastodon are each taking new approaches to mitigate some of the dark patterns we have seen in the past from social media. We then talk about how the metaphor of gardening is useful in the course of improving the internet, and finally, how private messaging spaces are increasingly the default for authentic communication.
Produced by Christopher Gates
Music by George Ko
Transcript
Danny Crichton:
Renee, thank you so much for joining us.
Renee DiResta:
Thanks for having me.
Danny Crichton:
So Renee, there's so much going on in the social media world, I think. When I look at your research, you cover disinformation, the future of federation and social networks. So many different topics, and they're all on the front page of every magazine, every newspaper around the world. You must be overwhelmed these days, both with requests from people like us, and just trying to keep up to date with everything that's taking place in your research area.
Renee DiResta:
It's really busy. I mean, I think that's because right now we're in some exciting transitional times. Yeah, and some weird transitional times too. Exciting, but you know. No, I think it's fantastic that people are paying attention to these topics. You're starting to see the public pay much more attention to the role that big tech plays in our lives, and the sometimes negative consequences that brings.
Danny Crichton:
When you think about 2025, I mean, there are sort of eras of social media. Going into the 2000s and into the 2010s, we had sort of a burst of moderation, then moderation sort of being pulled back. When you look at 2025 and the world that you're looking at today, what are the top two or three factors you're thinking about, in terms of your research and the things that people should be focused on in social?
Renee DiResta:
I am thinking about two big areas that we might call the future of internet research. The first is middleware, and that is a term that refers to third-party providers enabling users to interface with, in this case, social media a little bit differently. A third-party provider might act kind of as an agent for you. Meaning, let's say you're on a site and you want to curate your feed in a particular way, you could potentially use a middleware provider to do that for you.
Where maybe you subscribe to The New York Times feed or the Fox News feed or whatever it is. You can see this really having an impact in curation and moderation, because there's so much dissatisfaction on that front and I think that giving more control to users is really important. And the thing that I love about working on middleware right now, is that we have a platform, Bluesky that's really giving users an incredible amount of control. So it's sort of like... Whereas the old conversation about middleware focused on trying to prevail upon big tech to do it, now we have new tech and I feel like that move towards decentralized spaces is going to be really transformative.
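To make the middleware idea concrete, here is a minimal sketch of a third-party curator acting on a user's behalf. All of the data shapes and policy fields here are hypothetical illustrations, not any platform's real API.

```python
# Minimal sketch of a "middleware" feed curator: a third-party service
# that filters and re-ranks a platform timeline according to rules the
# user chose. Hypothetical data shapes, for illustration only.
from dataclasses import dataclass, field

@dataclass
class Post:
    author: str
    text: str
    likes: int
    source_feeds: list = field(default_factory=list)  # e.g. ["nytimes", "foxnews"]

@dataclass
class UserPolicy:
    subscribed_feeds: set   # feeds the user opted into (empty = everything)
    muted_terms: set        # words the user never wants to see
    boost_authors: set      # voices the user wants ranked higher

def curate(timeline, policy):
    """Filter and re-rank a raw timeline on the user's behalf."""
    visible = [
        p for p in timeline
        if not (set(p.text.lower().split()) & policy.muted_terms)
        and (not policy.subscribed_feeds
             or policy.subscribed_feeds & set(p.source_feeds))
    ]
    # Rank by engagement, with a flat boost for authors the user trusts.
    return sorted(
        visible,
        key=lambda p: p.likes + (100 if p.author in policy.boost_authors else 0),
        reverse=True,
    )

timeline = [
    Post("alice", "New essay on federated moderation", 12, ["nytimes"]),
    Post("bob", "outrage outrage outrage", 900),
]
policy = UserPolicy({"nytimes"}, {"outrage"}, {"alice"})
print([p.author for p in curate(timeline, policy)])  # -> ['alice']
```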
The other big area, which I'll just mention more briefly, is agentic AI, so AI that operates autonomously on behalf of a person. One of the interesting questions we're going to be faced with is: what is a person acting in a particular sphere? Is the thing that you are talking to on social media a bot or a human? Maybe you don't need to know which person it is, but you might want to know if it's human or not. So there are different degrees of what we call attestation: where you are indicating that something is operating on your behalf, where you are indicating that you are who you say you are, like you're an individual person, or where you're just indicating that you are human and you're anonymously engaging.
So I just kind of reference this in the context of social media, but it's also really going to impact the ways that we engage with systems like public comment, or verification for online services. A lot of services have moved to voice ID. So we should be thinking about ways that we can credential who we are that are still privacy-protecting, and not necessarily solely controlled by the government.
Danny Crichton:
So let's go to the first one. When you think about social media history, let's just use Facebook as an example: everyone uses the same app. We all got the same algorithm. That algorithm was determined by a team of machine learning engineers and systems engineers over in Menlo Park, and we all got either the For You feed or sort of the following feed, and there wasn't a lot of choice. With Bluesky and a lot of these new federated models, you can essentially download whatever algorithm you want. You can even design your own.
And I think what's interesting here is, one, there's no expectation that billions of people are all going to customize their algorithms. But what I've seen emerge out of Bluesky is this idea of different classes of algorithms. You can almost download from a library of, say, 20 or 30 choices. So as a researcher myself and as a writer, there are tools that allow me to find only the posts that have links to articles, which I can curate almost as an RSS feed. I can find the ones that don't have any links to articles and are just discussions. I can also find better ways to search. And so you have this sort of nuanced flexibility that I think opens up these apps in a very different way.
And then part two is that customizing the experience in that way can also be negative. The positive is obviously that I get control of my software in a unique way, open source is available. But I can also create filter bubbles that weren't there before, that I didn't really have the option to control as much. How do you balance those positives and negatives, when it comes to opening these platforms up and allowing people to get into the protocols themselves, to choose what they can see and what they don't?
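The feed classes Danny describes, links-only versus discussion-only, reduce to very small filters. Here is a sketch with made-up post data, not Bluesky's actual feed-generator interface:

```python
# Two complementary custom feeds: posts that share an article (an RSS-like
# view) and posts that are pure discussion. Illustrative data shapes, not
# the AT Protocol feed-generator API.
import re

URL_PATTERN = re.compile(r"https?://\S+")

def links_only_feed(posts):
    """Keep only posts that link out to an article."""
    return [p for p in posts if URL_PATTERN.search(p["text"])]

def discussion_only_feed(posts):
    """The inverse feed: conversation with no outbound links."""
    return [p for p in posts if not URL_PATTERN.search(p["text"])]

posts = [
    {"author": "alice", "text": "Great piece on federation https://example.com/essay"},
    {"author": "bob", "text": "Hot take: community gardens beat walled gardens"},
]
print([p["author"] for p in links_only_feed(posts)])       # -> ['alice']
print([p["author"] for p in discussion_only_feed(posts)])  # -> ['bob']
```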
Renee DiResta:
This is one of the real interesting questions: as we move into a realm where this is possible, what do users actually do? And so a lot of what I am trying to do now is move from... We wrote a paper on middleware, myself and Richard Reisman and Luke Hogg of FAI, the Foundation for American Innovation. We tried to write kind of a bipartisan piece on what sort of regulatory environment helps third-party providers actually do this. There has to be some way for them to be incentivized to do it.
Right now on Bluesky, what you're describing in the realm of curation is people who have just decided, out of their own altruistic interest, to go and do that. There are also moderation feeds on Bluesky, so you can use particular types of labelers. And in my early conversations with people who are running labelers, they're doing it out of love for a community. Oftentimes they're really inspired to moderate in the way a particular community wants, to enable that for their community. So there are very, very granular types of labels that can be assigned, which you can then filter out of your feed, which is great. It's also user choice. But they're doing it, again, as volunteers.
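How those labels compose with user choice is worth sketching: a labeler emits labels, and each subscriber decides what a given label does to their own feed. The data shapes below are invented for illustration, not the actual AT Protocol label schema.

```python
# Client-side label handling in the spirit of Bluesky labelers: third
# parties attach labels to content, and users map labels to actions.
HIDE, WARN, SHOW = "hide", "warn", "show"

# The user's own per-label preferences for labelers they subscribe to.
user_prefs = {
    ("communitymod.example", "gore"): HIDE,
    ("communitymod.example", "spoiler"): WARN,
}

def resolve(post_labels, prefs):
    """Return the strongest action any subscribed labeler triggers."""
    actions = [prefs.get((labeler, label), SHOW) for labeler, label in post_labels]
    if HIDE in actions:
        return HIDE
    return WARN if WARN in actions else SHOW

print(resolve([("communitymod.example", "spoiler")], user_prefs))  # -> warn
print(resolve([("unsubscribed.example", "gore")], user_prefs))     # -> show
```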
And then there are a lot of reports already about how overwhelming that experience actually is. So we have the technology to do it. Now we have to create environments where people want to do it. As far as my experience on Bluesky, I joined very, very early. I think they gave us all these user badges; I was number 4,000 or something like that, and I didn't know what to do with it at first. I have to say, when I first landed I was like, "Oh, it feels very politically lefty. I don't know if I fit here."
Danny Crichton:
Right.
Renee DiResta:
It's way to the left of me. And I found it interesting, and then everybody was sharing AI-generated art. This was around the time that Midjourney kind of released, and I was like, "Well, I'm not really so into that." Then they started doing two things. One, you could create lists of people to follow; the starter packs all of a sudden made it easier to figure out who was there, and solved that kind of cold-start problem that so many new entities have. And then the labeled lists... Sorry, the sort of feeds. I'm a really bad gardener. I try, but I kill everything [inaudible 00:06:46]. And so I subscribed to a gardening feed, and that was where I was like, "Okay, I get it. This is the value."
It's not a replacement for Twitter, where I'm going to expect some magical curation algorithm to figure out what I want. It actually puts the power in my hands, and with a little bit of legwork, a couple hours of picking feeds and hiding and moving things around, I can have an incredibly tailored experience. And I think what we're going to see is, as it becomes more politically diverse, as more communities see it as an infrastructural system as opposed to live Twitter, you're going to see a proliferation of different communities that find their niche and really use it as a tool, using the protocol as a way to create environments.
And then from there, I think it's just a question of can we incentivize people to engage constructively, right? Can we build, for example, feeds that surface politically distinct voices in the same feed but are really working on bridging, as opposed to amplifying the most sensational, terrible, hateful content. And my hope is that people will be drawn to feeds like that, but again, now we have to wait and see what users actually do.
Laurence Pevsner:
It's not surprising to me that you're a gardener, because I really liked one of the metaphors you used in your recent piece in Noema Magazine about all of these topics. You talked about how some of these major platforms, like Facebook and Twitter, now X, were walled gardens, and that now, if you look at these more federated options, you can think of them as community gardens. There isn't one person coming from the top and saying, this is how it is and I'm the hedge trimmer. It's more the community coming together and building their own gardens.
But that for me raises the question of, well, what about the defaults? Most people are not you. They're not experts on social media, aren't really digging in and paying that startup cost. Instead, they just download the app, and whatever feed they originally get is kind of what they stick with. I remember even I, and I think of myself as a fairly sophisticated user, whenever I would go on Twitter, I saw that there was the tab that was just the people I followed, and I knew intellectually that was probably what I wanted-
Danny Crichton:
Right.
Laurence Pevsner:
Was just to follow and see the tweets from the people that I intentionally wanted to see. And yet, because it defaults you to the algorithmic feed, so often I was still looking at that instead. So how do we think about this? Okay, we're moving to a world where you can customize, but most people won't. How do you think about that?
Renee DiResta:
The onboarding is such a key part of it. Every now and then there's some political upheaval, or something happens on Twitter or something happens on Facebook. There was, I think, a 3 million-user influx the day that Zuck dropped his "the fact-checkers are biased" speech and went on Rogan about it. So you see people moving, and I allude to this in the piece that you're talking about, which was called "The Great Decentralization," in Noema. What I argued was that people used to move to social platforms because of features, and increasingly you see them moving because of vibes. And so, what is the onboarding experience there?
When you see these new users come over, the 3 million or so who popped over a couple weeks ago, I think you also saw a big influx when TikTok was temporarily down for like 48 hours-
Laurence Pevsner:
Yes, everyone was going on RedNote and-
Renee DiResta:
Yeah.
Laurence Pevsner:
Yeah.
Renee DiResta:
Yeah. And so some people went to RedNote, but other people came to Bluesky, and there was this interesting phenomenon where people were like, "Okay, I think this is supposed to be like Twitter, but I can't figure out what to do." And that's because it's not immediately obvious. The team is so new, they're focused on building the infrastructure, building out the protocol. Not necessarily on the onboarding and growth experience that you would see from a bigger company, like Threads for example, where there was already a social graph to build on, recommendations to build on. So teaching people that in this place, you pick from these things over here, you can moderate in this way.
It's actually really difficult to figure out who's using feeds and who's using the moderators. I imagine Bluesky has that visibility internally, but even though so much of the architecture is transparent, and you can use tools like Clearsky to see all sorts of different things about it, the question of adoption of those things is a little bit opaque right now. So I hope to be studying that over the next couple of months, and just having a lot of user conversations about why are you here and what are you doing.
Danny Crichton:
The internet is not new anymore, right? It's old. And so we have these affordances we expect, from Facebook, from Twitter, from Reddit, from Myspace even, going all the way back. When you have a new piece of software with features we've never seen before, there isn't that sort of obvious experience, the affordances, the idea of, okay, I know exactly what I have to do. I've been here before. I need to link my address book, I need to connect with my first 10 friends, etcetera. Facebook and Twitter spent years building teams whose goal was to onboard folks, to get them to 20, 50, a hundred of their friends, which would lock them in permanently, with content that was able to bring them back daily.
The question I have for you, though, is this: you're talking about cultivating this kind of community garden, and the flip side of that is the weed-killer aspect. In every community garden, you've got the good stuff, the tomatoes, the flowers you wanted. But you also have the weeds that come in, and the equivalent would be the trolls that show up. And one of the complaints, as people have migrated from Twitter or X onto Bluesky, has been, well, the trolls have come along as well.
And so one of the questions I have is, obviously software affords people the ability to do what they want. They can post anything in that little box. We've trained people over, I think, 10 to 15 years to say, we know how to hack the human mind. We know how to hack your psychology. We know how to get a rise out of you. If I say this, if I say that, I know you're going to respond, and you're going to respond in this negative way. And I want to ask the follow-up to your earlier comment: how do we create systems and incentive structures within these networks that say, look, if you stay in the positive realm, if you're constructive, there's a really positive path for you. The community garden is happy, you all get to be part of the potluck. Versus you show up, spray weed killer on everyone's plots, and everyone is dying from biotoxins in the bloodstream, or whatever the case may be. I'm stretching your analogy as far as it'll go.
Renee DiResta:
No, actually it's really funny you're doing that, because whenever you write a long essay, you have to pick one central metaphor and stick with it. And I had gone with that one because of my real community garden experience. Twice now, I've had this happen... I don't know if you've ever grown winter squash, but they take forever to ripen, like a butternut squash or something. You're going to leave them on the vine for six months, and inevitably someone will come and steal them. And so you've invested all this time.
Danny Crichton:
Yes, yes, yes.
Renee DiResta:
It's taken from you and you're like, nobody is in charge of this. There is nobody who is going to keep this from happening. There's no walled garden, right?
Danny Crichton:
Right, exactly. Look, I don't know if you've lived in New York near a community garden, but I will say the only thing worse than the co-op board is the community garden governing board: allocating plots, deciding who gets which plots that have sun.
Renee DiResta:
[inaudible 00:13:45], no, it's all its own thing.
Danny Crichton:
Oh, so much shit.
Laurence Pevsner:
And none of that keeps out the rats, by the way.
Danny Crichton:
Exactly.
Renee DiResta:
And there are rats, turns out you have to trellis your squash.
Danny Crichton:
Yes.
Renee DiResta:
And there is a metaphor there. I wound up actually cutting some of that from the essay, because I felt like it was too much Renee's revealed traumas as opposed to... But no, in all seriousness, that is the trade-off. And this is the thing where... There's a saying for people who do trust and safety work, that the problem with social media is people. We look for technological solutions to human problems. And the reality is some people are antisocial and awful. Some people will steal your damned squash. And so you wind up in this position where you're like, all right, well, how do you create norms where that's frowned upon?
Social networks change. When there were like 4,000 people, it was a very different feeling than when there are millions of people. But it's got a very rapid block culture, meaning people don't spend a whole lot of time engaging with trolls. The culture is sort of just block and move on. And the way that their block function works really limits things. You don't have people continuing to have conversations in the replies. It's just sort of, you're done. That's not in your feed, you're not going to see it anymore. On the flip side, though, what this means is that sometimes you can't see stuff that you might want to, right? Like the question of who is responsible for dealing with doxing.
And the platform that I think has been the most effective, from a federated standpoint, is actually Reddit, which has this sort of centralized governance that sits up at the top and handles terrorism, CSAM, the illegal stuff. Then there are certain types of content, like pro-anorexia content, that are what Daphne Keller calls "lawful but awful." The First Amendment legally doesn't apply of course, and even culturally, values-based, there's no prohibition on pro-anorexia content. But most people kind of agree that we don't want to see it, that we don't want it served to our kids.
And so you do see platforms like Reddit that have that top-level governance, and then a lot of the power for the day-to-day moderation is in the hands of the local mods, who again are volunteers. And this is where you get at that trade-off of the professionalized experience versus people who just do this because they care. With Reddit, there are cat picture groups where it's against terms of service to post a dog picture in it. [inaudible 00:16:14], okay, you violated the rules.
Laurence Pevsner:
Right, right.
Renee DiResta:
We're just going to boot you. And you can sit there and scream about censorship if you want to, but nobody cares, because we're all there because we agree that in this community, we do these things. These are our rules. So Reddit has this very powerful federated governance structure. And Bluesky is interesting right now because it is still largely this one central instance, as opposed to Mastodon, which is smaller, but where the server administrator is controlling a lot of the moderation. What's the term people use sometimes? Agreeable tyrants. Basically, you sort of opted into this particular overlord on that server, but again, you're doing it because your values aligned.
So the question we come to is at what level is ideal for having that kind of moderation, and that sort of experience of creating a pleasant environment where people want to be.
Laurence Pevsner:
Maybe my favorite subreddit on Reddit is the AskHistorian subreddit. Have you ever been on this, where-
Danny Crichton:
Yeah.
Laurence Pevsner:
What's incredible about it, for listeners who don't know, is they have some of the most extreme content moderation policies. You go to the page and it's like: thread deleted, thread deleted, thread deleted, comment deleted, comment deleted, and it looks like a ghost town. But their policies are all about, well, you have to be a historian who responds with a really thorough answer to these questions. You have to have sources, and if anything doesn't meet these standards, sayonara. And people love this, because it makes for a very high-quality discussion where you get to engage with real historians.
Now, you wouldn't want that kind of moderation on say, just the pictures subreddit or the funny subreddit, but it makes sense there. One big difference though with Reddit versus Bluesky versus Twitter and a lot of the other platforms, is Reddit is a mostly anonymous space. Most people have usernames that don't identify who they really are. You can voluntarily identify yourself, but it's actually not the norm. Versus somewhere like Bluesky, like Twitter, most people, not all people, but most people are trying to identify as themselves.
And so that kind of gets us to this question of impersonation, and also the agentic question that we were talking about at the beginning. Even if it is you and not just a made-up bot, you're sending your own bot to act on your behalf. Is that okay? Maybe the question is, does this federation not work as well when you don't also have anonymization to go with it?
Renee DiResta:
So Reddit is interesting because it has persistent pseudonymity. You're not using a throwaway alias for every post, and people who do want to use a throwaway alias, in some of the more sensitive medical-topic subreddits, will say, this is a throwaway. That persistent pseudonymity, where you have the little number that appears alongside your account, conveys that you're not a complete newbie. There are some subs that will say, you must have achieved a particular rating before you're even allowed to post here. And if you don't like that, well, too bad. Go post in other places, get your comment cred up, and then come back.
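Mechanically, that kind of gate is simple; the hard part is the community norms behind it. A hypothetical sketch, with invented thresholds and field names:

```python
# Reddit-style participation gating under persistent pseudonymity: an
# account's accumulated reputation unlocks posting rights. All numbers
# and field names here are made up for illustration.
def can_post(account, sub_rules):
    if account["karma"] < sub_rules.get("min_karma", 0):
        return False, "Build up comment cred elsewhere, then come back."
    if account["age_days"] < sub_rules.get("min_age_days", 0):
        return False, "Account too new for this community."
    return True, "Welcome."

newbie = {"name": "throwaway123", "karma": 4, "age_days": 1}
ok, msg = can_post(newbie, {"min_karma": 250, "min_age_days": 30})
print(ok, msg)  # False: the gate filters out drive-by accounts
```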
One of the things that we wrote about in the middleware paper that I did with FAI, that I mentioned earlier... do you guys remember Klout? Remember the startup Klout?
Laurence Pevsner:
Yeah.
Renee DiResta:
Yeah, right.
Laurence Pevsner:
Yeah.
Danny Crichton:
Yeah.
Renee DiResta:
So taking you back to, was it 2011-era, 2015-era internet? That was interesting because it was, in a sense, not technically but basically, a middleware provider. It was like, here is your third-party identity and expertise credentialing software. So for those who are under 40 or whatever, the way that Klout worked was that you could actually assign expertise to your friends. And it became kind of a joke, right? I remember I had a friend, and we were having a conversation about poutine, which I am not a fan of. After this fight, it was sort of an inside joke. He gave me a +K in poutine, which then actually shows up on your profile. This was maybe the quainter, more pleasant days of the internet, but it was just a funny way to indicate that if this person is talking about this thing, they probably have some expertise in it.
Which, as you're alluding to with your historian example, people actually want. Not everybody wants the damn hot take from the weird blue check who is an expert in hurricanes, wildfires, politics, voting machines and UFOs. But that's what we get when the determinant of what is surfaced is popularity or a large follower count, as opposed to systems you can envision that surface expertise in some way, kind of akin to how Reddit does it. My hope is that we get to something a little bit more like that, because again, it's community-driven. It's not bestowed from on high by some centralized, opaque private power that controls "the algorithm."
It's a much more democratized process. It's going to be adversarially manipulated, we all know that. But it is an interesting way to move into that.
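A rough sketch of the +K idea as a ranking signal: peers endorse each other in named topics, and a feed sorts by topic-scoped endorsements rather than follower count. Names and weights here are illustrative, not Klout's actual scoring model.

```python
# Topic-scoped endorsements as a curation signal, in the spirit of
# Klout's "+K". A real system would weight, rate-limit, and audit
# endorsements to resist the adversarial manipulation mentioned above.
from collections import defaultdict

endorsements = defaultdict(lambda: defaultdict(int))  # user -> topic -> count

def plus_k(giver, receiver, topic):
    endorsements[receiver][topic] += 1  # giver identity could weight this

def rank_for_topic(posts, topic):
    """Order posts by the author's earned expertise in this topic."""
    return sorted(posts, key=lambda p: endorsements[p["author"]][topic], reverse=True)

plus_k("friend", "renee", "poutine")
posts = [{"author": "random"}, {"author": "renee"}]
print([p["author"] for p in rank_for_topic(posts, "poutine")])  # -> ['renee', 'random']
```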
Danny Crichton:
So we go from a world of Facebook and Twitter, where a universe of a billion people is all sort of flat. Everyone's sort of equal in that world; you can talk to anyone, you can reach out. At some point there are celebrities, and so there are sort of VIP accounts, and there's a whole world of knowledge we've learned from leaks and stuff about how that world works. I am not a celebrity, so I have no access to these tools, but I hear from friends. But then in the last couple of years, we've seen this migration to private messaging apps, channels, WhatsApp groups, etcetera, which are not public, which are in some cases undiscoverable through any sort of affordances on any of these products.
How do you balance this, the public social versus the private social, and are we getting the balance right and are the tools in good shape or do you see it going one way or the other?
Renee DiResta:
One of the things that became unpleasant about Twitter for a lot of people, I think, was that the gladiatorial arena aspect came to define the experience, for celebrities and normies alike. You made one wayward comment and all of a sudden you were the main character of the day. You remember the "I enjoy drinking coffee with my husband" tweet.
Danny Crichton:
Right, yes.
Renee DiResta:
[inaudible 00:22:21] privilege [inaudible 00:22:22]. Think about the people who don't have husbands or coffee or gardens or whatever the hell it was. And it's ridiculous and we laugh at it, but also it's kind of real. Or God forbid you post something and one of these nut-picking accounts picks you up and tries to turn you into an avatar for all that is evil about the other political tribe. That dynamic is bad. And then you have the dynamic of the cancel-mob type vibes across the political spectrum: how do we silence this person and push them out of the conversation?
And I think it did lead to a lot of people moving to smaller, more intimate social media experiences or... For younger kids in particular, there's... I have my public Insta, but then I have my real one that's private for friends. And there's an innate sense of, you have your small group chats with the people that you actually are going to discuss controversial ideas with, share articles, get into the kind of conversations we have with friends in real life, versus the public performance internet.
And the public performance internet is important for shaping public opinion, and there are people who choose to be in that arena for those reasons. But I think that people increasingly see a divide between real, authentic conversation and performative social media.
Danny Crichton:
Let's wrap that back into agentic AI, because one of the biggest challenges obviously is adversaries, either domestic or overseas adversaries going onto the internet, or maybe your commercial adversaries, who are trying to shape public opinion, using automated accounts, bot networks, etcetera, to make stuff go viral that wouldn't otherwise go viral. Basically, all this inauthentic content. And one of the reasons that I enjoy private social media is that generally I know who I'm talking to. I know the friends, I've met them in real life. I know they're real people. They're flesh and blood, unless somehow they perished and someone took their phone number and is somehow remarkably capable of reproducing the same text and same jokes that they did before.
In this world, we see the Turing test sort of being broken, particularly on text. And I think of a tweet, where there's only 20, 25 words. It is really hard to identify AI versus non-AI content in that kind of context, 'cause there's just so little information to work with in any authentication or, as you said, attestation algorithm, versus video or audio or other kinds of higher-entropy media.
How do you start to evaluate how agentic AI influences public social media, and is that sort of a long-term dampener on the industry?
Renee DiResta:
It's a really interesting question. Some of it, there are demonstrably AI accounts that are avowedly AI, like influencers on Instagram that are AI models, AI creations, who actually wind up with fairly large followings. And you do see people engaging with them. There's the rise of, again, the chatbot ecosystem, like Character.AI and Replika and things like that. So some people really don't care, or enjoy engaging with a machine for various reasons. Maybe you can tell it things you wouldn't say to a real person.
And I think that there will be AIs that will identify themselves proactively. Twitter has that little bot thing that you can attach to automated accounts, stuff like that if you want to proactively disclose. So the question is really like how do you avoid the manipulative kind? And the thing that is really challenging is that it's very hard to gate things out at this point, because they are more and more sophisticated, harder and harder to detect.
The question is, is the platform going to want to invest in playing this cat-and-mouse game with adversarial accounts? I mean, candidly, some of these social platforms don't do a very good job dealing with bullshit click-farm stuff. How often do you go to an AI slop page on Facebook and realize that there are 32,000 comments and probably 30,000 of them are from some random click farm, right? You click on the profile, it basically doesn't exist. And if you look at Meta's transparency reports, you'll see how many millions of accounts they are taking down.
I do think that platforms are going to eventually try to create a space with proactive attestation to get in. So you're not compelled to do it to participate. You're sort of choosing to enter a space where everybody else has also chosen to certify themselves, if you will. And the question is, if you have to do that by showing identity documents, people don't want to do that. That's a very high cost. Why would you? I mean, we all see these data breaches and things like that. I remember Parler, sort of the right-wing social network, they would let people upload their ID and then they would get, I think it was a red badge that just said that you had uploaded your ID and you were real. But then they got hacked or they had some sort of data breach, some sort of leak maybe, and it turned out that a lot of those images of people's driver's licenses were just out there in the world all of a sudden.
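One privacy-preserving direction can be sketched in a few lines: a trusted issuer verifies a person once, offline, then signs an opaque random token. A platform verifies the issuer's signature and learns only "this account is backed by a human," never the underlying documents. This is a simplified illustration; deployed schemes use blind signatures or zero-knowledge proofs so that even the issuer cannot link tokens back to users.

```python
# "Prove you're human without handing over your ID": an issuer signs an
# opaque token after a one-time identity check; platforms verify the
# signature and learn nothing else. Simplified sketch, not a production
# protocol (no blinding, revocation, or replay protection here).
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

issuer_key = Ed25519PrivateKey.generate()
issuer_pub = issuer_key.public_key()

# Issuance: after the offline check, sign a random token with no identity in it.
token = os.urandom(32)
signature = issuer_key.sign(token)

# Verification: any platform holding the issuer's public key can check it.
def is_attested_human(token, signature):
    try:
        issuer_pub.verify(signature, token)
        return True
    except InvalidSignature:
        return False

print(is_attested_human(token, signature))           # True
print(is_attested_human(os.urandom(32), signature))  # False
```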
So the question of what you credential with is, I think, a really important one. And again, do you need to identify that you're you, or do you just need to identify that you're human? Right now, because of things like AI being able to pass the Turing test, you're seeing people start to think about what is possible. I think we're still in the early phases of: do people feel a need for it yet? What is the legitimacy? What are the areas where you'll want to authenticate? I don't know if you guys have ever been voice authenticated by a system before, but I had this very jarring experience.
Danny Crichton:
Yeah, yeah, yeah. You may not even realize it 'cause it's actually very natural.
Renee DiResta:
Right. [inaudible 00:27:47].
Danny Crichton:
They've already pre-done it. And if you call Delta or some of the airlines or banks, they will do it automatically.
Renee DiResta:
Yeah. And I remember, when I was at Stanford, I called my HSA and it voice authenticated me, and I was like, I say my name on podcasts a couple times a month.
Danny Crichton:
Right, right, right. If you're a podcast host, they have your voice.
Renee DiResta:
Yeah.
Danny Crichton:
There's no way...
Renee DiResta:
And it's not even famous people or public people.
Danny Crichton:
No.
Renee DiResta:
It's, you made a TikTok once, and boom, there it is. I think that we need to be taking this seriously, and I understand that the idea of government doing it is really creepy to people. So I think we've got to sort out what types of credentialing systems move through government, what moves through foundations or private platforms, where this stuff lives, and what degree of information you provide in order to access particular systems. And how do you maintain your privacy, so that we can continue to have a pseudonymous internet and things like that?
Danny Crichton:
When we talk about this issue... I mean, we first met you in person at our Riskgaming session in Washington D.C. We were hosting this game, DeepFaked and DeepSixed, which focuses on AI election security, and this was right before the election in November. One of the threats that comes out of that is actually a voice hack, in which, I think it's the Russians or maybe the North Koreans, probably the Russians the way the game is designed, download clips of local pastors from YouTube, from different churches, make a voiceprint of them, and then deliver phone calls that say, "Hey, this is Reverend so-and-so from the local church. I just want to make sure you're voting on Tuesday. It's really important for the issues we care about."
And you'd have to correlate information from the telephone services to catch it, because none of this is public. Someone is able to download these voiceprints, and one of the arguments we made is that the risk is actually not famous podcasters. It's not like a Joe Rogan, who has a huge audience, where if Joe Rogan called you on the phone, you'd be surprised and be like, this is highly unlikely. It's actually someone local in your community. And the example I identified was churches, 'cause they tend to broadcast online. There's actual data availability. It's hard to do that with your boss or someone else.
To what degree, when you start to think about these deepfakes... We passed the election; we survived 2024 from a deepfakes perspective. There were always all these attacks, but Senator Mark Warner, who was in that room you were at as well, also said there just wasn't as much as we sort of expected. Do you think this changes, either in '26 with the midterms, or in '28 and going forward? Do these deepfakes, and this ability to do attestation, become more and more important in terms of the public dialogue around elections?
Renee DiResta:
I do. Not even just elections, though. Again, my most creepy experience to date was the health system's voice validator. Do you guys know that example of the guy in Hong Kong, working with a financial firm, who received-
Danny Crichton:
Oh, yes, yeah.
Renee DiResta:
... Telling him to transfer something like the equivalent of $50 million or something like that. And he did what he was supposed to do, got on video, and they had a video-
Danny Crichton:
Yes.
Renee DiResta:
Video deepfake of the CEO as well. Yes, and he wound up wiring the money. Look, my friend Katie Harbath has this phrase, "panic responsibly," right? You don't want to say, "Oh, the sky is falling and everything's going to be terrible." For the 2024 election, I think it was important to say: these are the risks, this is what could happen. Recognizing that this could happen, how do we think about mitigating it if it does, or making the public aware that there's potential there? I think that's responsible risk management. Let's just educate as many people as we can about this.
It's going to continue to become more democratized and easier; that's been the trend. We've got new models, like LLMs, that are incredibly cheap, cheap to train, open source. And so there are going to be fewer guardrails on what's possible in the future. Things that maybe OpenAI wouldn't let you do, like generating photorealistic images of real people, for example, you're going to see become very, very possible. It already is, candidly. It's not necessarily evenly distributed, but it's out there.
And so this is why I do think, again, you can't really put that genie back in the bottle. There's no way to regulate model outputs, because of the myriad uses, and it'd be an incredibly difficult thing to do. So then the question is, how do we let people who want to say, I am real [inaudible 00:32:01]. And again, nobody's saying the government is forcing you to do it. There's no compulsion here. But we're going to need better ways to authenticate, whether that's for participation in social systems, or companies, for example, saying, okay, you need some sort of credential in order to prevent voice or face spoofing like what happened to the Hong Kong guy.
So I think it's important to be aware of the trends and the risks, and then to decide rationally and communally if we need to, what we consider appropriate and reasonable for mitigation.
Danny Crichton:
We need to know who everyone is in the community garden. And if you've ever been in New York City, you know no one ever knows who all is at the community garden. But I think we've stretched that analogy as far as it will go. Renee, thank you so much for joining us.
Renee DiResta:
Thank you for having me.