Jonathan Stray On Journalism, AI, and U.S. Democracy
You can download this video from Vimeo for offline viewing.
Heidi: Hi, this is Heidi Burgess, and I'm here with my partner in everything, Guy Burgess. And we're talking to Jonathan Stray. We first learned of Jonathan through his Better Conflict Bulletin, which — I don't know if you call it a blog or what. I call it a blog.
Jonathan: A newsletter.
Heidi: Ah, yes. A newsletter...on how to do conflict better. And it's just fabulous. If you don't know about it, it's at betterconflictbulletin.org, and you should definitely check it out.
But once we got to know him a little bit, we learned that his background is much broader than that. He's been a journalist for, if I remember right, about 20 years. He worked with AP and ProPublica. And he's also a computer scientist. So at Berkeley, he's a Senior Scientist at the Center for Human Compatible AI. And he's working on how computer algorithms and social media lead us astray or might be reworked so that they do a better job of bringing us together and ameliorating conflict, as opposed to making it worse. So we're interested in talking to you about all that stuff. And I know Guy has a particular take on this.
Guy: Well, as you know, we've been struggling with intractable conflicts for a long, long, long time. And one of the big factors that determines whether a conflict is intractable or not is simply scale. Society-wide conflicts are dramatically different from interpersonal, around-the-table conflicts. We have this little lecture on orders of magnitude, and it's like six orders of magnitude bigger. It's just stunning.
So what that means is that almost all the conflict interactions that matter with respect to these big conflicts occur through the media, in one way or another: through what people read from various journalistic sources, how they interact on social media, how the social media and news feed algorithms work, and increasingly how AI is going to start changing all of this.
Now, when we talk to most of our colleagues in the conflict and peacebuilding field, they describe themselves as experts on managing conversations around a table. You're on the very short list of people who have genuine expertise in trying to understand how these very large-scale conflicts work, and the technology and the social institutions through which they're mediated. And that's really where the solutions are.
And so we're really eager to hear what you have to tell us about how journalism works. I've been really intrigued with the stuff on social media algorithms. One of the stories we tell about the Community Relations Service is that they dealt with intractable conflicts successfully at a time when people thought nobody could do that. And nobody thinks you can deal with these algorithms. But then again, you've got projects that are actually working on that. So, at any rate, we are very much interested in what you have to say.
Jonathan: Well, good to be here.
Heidi: I was thinking that a good place to start might be to get you to tell a short story of your professional trajectory, because it's not obvious to those of us who aren't in the computer science or journalism fields how all of that fits together. It's pretty obvious how it relates to conflict, but I'm not sure how you got into all three of those fields and how you're meshing them. So I think that would be an interesting place to start.
Jonathan: All right. Well, yeah. Like many people, I think my route to conflict work was circuitous. I started as a computer scientist. So I grew up in Toronto, did my undergraduate and master's at the University of Toronto in Computer Science. And after graduation, I went to work for Adobe. I moved out here to California. Adobe is a computer graphics company. And so I did computer graphics for 10 years. That was my first career. And I wrote parts of After Effects and Photoshop, which are big Adobe apps. And after about 10 years of that, I had solved all of the purely technical problems I was interested in solving. And I had taken some time off to go traveling.
I had been backpacking all through Southeast Asia, all through West Africa, parts of the Middle East, and starting to write. I thought I'd try my hand as a writer, which I'd always had an interest in. I got a few articles published and started to think about what was happening in these little villages in West Africa where there wasn't enough food to go around. I wrote about systemic pressures. That was my first exposure to what those pressures really looked like.
Then I moved to Hong Kong and went to journalism school at the University of Hong Kong. And it was a great place to study journalism because it is very international. And I was out there for a year, and I thought that I would have a career as a foreign correspondent, move to Beijing and all that stuff. But, I ended up getting hired by the Associated Press in New York. And so that was the beginning of my professional journalism career.
Around the same time, when I got to New York, I started running into people who were working on conflict stuff, originally through the crisis mapping community. I don't know if you remember that. And then I started to meet peacebuilders. In 2013, I did a summer program in Bologna at the International Peace and Security Institute. Many of the figures in the field were there, but I was particularly struck by Erica Chenoweth and their work. [They work on nonviolence.] I had a long conversation with them and kept working in journalism, but had an increasing interest in conflict, in part because around the same time, in the early teens, American society started to visibly polarize. You've probably heard the phrase "the great awokening." And you can see it in data. This is something I was starting to do: I was looking at data from social networks, and I was doing content analysis, because I was working as a data journalist. So I was already watching all this stuff. And I could see the society tearing itself apart. I could see it in my own social networks, too. Everybody used to hang out in one group, and suddenly, they were hanging out in two different groups. So I worked in journalism for almost a decade in New York.
Eventually, I taught in the computer science and journalism master's program at Columbia, and then moved out here for love and for the weather. Well, that's another story. But I decided to go back into technology and start studying the effects of social media and AI, which I'd always had an interest in, because I'd always had an interest in public discourse from my journalism years. And before too long, I ended up at Berkeley, where I was very fortunate that they let me study pretty much whatever I wanted, as long as it relates to the consequences of AI. And in the past few years I've really been doubling down and made conflict my primary interest.
Heidi: So tell us what you're doing with conflict and AI.
Jonathan: I've done a number of projects with a number of people. For example, I ran a project called the ProSocial Ranking Challenge. So in January last year, we announced that we were running a competition, and you had to submit a prototype social media algorithm to enter it. And the prize was that we paid to test it. From July through February, we recruited, in the end, almost 10,000 people and paid them to use a custom browser extension. We didn't tell them what it did, but what it actually did was change the content of the first 50 or so items every time you opened up Facebook, Reddit, or Twitter, reordering them according to one of the five algorithms that we eventually chose for testing.
We ran it for six months. And then early this year, we started doing the first data analyses, and we showed that we could, in fact, reduce polarization among the people who use this modified algorithm. That was the first time that anybody had shown that conflict-reducing social media algorithms are possible in a production environment. People had done lab studies. They'd done various types of tests that suggested that it would succeed. Otherwise, we wouldn't have put so much time and money into it. But we proved that it was possible. And so, now one of the projects that I'm involved in is turning that into a product that anyone can use. So that'll be a feed for Bluesky.
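To make the mechanics concrete, here is a minimal sketch of the kind of client-side re-ranking such a browser extension performs. This is illustrative only, not the actual ProSocial Ranking Challenge code; the scoring function is a hypothetical stand-in for any of the five tested algorithms.

```python
# Illustrative sketch, not the actual ProSocial Ranking Challenge code.
# The extension's core move: intercept the first ~50 feed items and
# reorder them according to a submitted algorithm's score.

from typing import Callable

def rerank_feed(items: list[dict],
                score: Callable[[dict], float],
                window: int = 50) -> list[dict]:
    """Reorder the first `window` items by descending score; leave the rest."""
    head, tail = items[:window], items[window:]
    return sorted(head, key=score, reverse=True) + tail

# Hypothetical scoring function: downweight items a toxicity model flags,
# so heated content sinks in the feed without being removed outright.
def example_score(item: dict) -> float:
    return item.get("engagement", 0.0) * (1.0 - item.get("toxicity", 0.0))
```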
Heidi: How did you measure reduction in polarization?
Jonathan: Affective polarization, survey questions, very standard stuff.
Heidi: And did you give them one of those before and then after?
Jonathan: Three, actually, the beginning, middle, and end, and then we compared it to a control group. And significantly, not only did we reduce polarization, but in many cases, we actually increased the time they spent on the platform.
That's one of the big concerns about this — that maybe you can give people a healthier feed, but then they don't want to use it. So there's this idea that there's an economic incentive against building feeds that reduce polarization. And our experiment showed that that is partially true. It is true in some cases and not true in others. So it is not the case that every platform is on the Pareto Frontier of good for society and good for business, which has major policy implications.
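For readers who want the comparison made concrete: with three survey waves and a control group, the headline result amounts to a difference-in-differences. The sketch below assumes hypothetical column names; the study's actual measures and models were more involved.

```python
# Hedged sketch of the analysis logic: affective polarization as the
# in-party minus out-party feeling-thermometer gap, compared across
# survey waves for treatment vs. control. Column names are hypothetical.

import pandas as pd

def polarization_effect(df: pd.DataFrame) -> float:
    """Difference-in-differences: (post - pre) gap for treatment minus control."""
    df = df.assign(gap=df["therm_inparty"] - df["therm_outparty"])
    change = (df[df["wave"] == "post"].groupby("group")["gap"].mean()
              - df[df["wave"] == "pre"].groupby("group")["gap"].mean())
    return change["treatment"] - change["control"]  # negative = depolarizing
```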
Guy: Have you made any progress on actually getting this to the point where lots of people can use it? At least so it's available to lots of people or maybe incorporated into the routine news feeds of some of these big organizations?
Jonathan: Yeah. So there are various routes to this. The most direct route is one of the projects we've got now. So you've probably heard of Bluesky. You probably think of it as the small social media network that is full of liberals, which is true. It is that. But it is also the world's first truly successful open social media protocol. So it's this open content universe that anyone can build an app to read from and write to. And so there are already all these other apps built on the same protocol, like Skylight, which is like TikTok. It's only videos, short videos. But notably, Bluesky has what you could call a "feed store." It has a way for third parties to offer custom feeds without asking anybody's permission. You can just put it in front of people, and if people like it, they'll use it and nobody will stop you, which is completely impossible to do with other platforms. So we're building it out for that. And because it's an infrastructure project for Bluesky, we call that project Green Earth.
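For the technically curious, a Bluesky custom feed is just a small web service implementing the AT Protocol's getFeedSkeleton endpoint, which returns an ordered list of post URIs. Here is a minimal sketch; the bridging-score store is a hypothetical stand-in for whatever model a feed like Green Earth would run, and cursor-based pagination is omitted.

```python
# Minimal sketch of a Bluesky custom feed generator (pagination omitted).
# The AT Protocol asks a feed service only for an ordered "skeleton" of
# post URIs; the client hydrates them into full posts.

from flask import Flask, request, jsonify

app = Flask(__name__)

# Hypothetical store mapping post AT-URIs to precomputed bridging scores.
SCORED_POSTS: dict[str, float] = {}

@app.route("/xrpc/app.bsky.feed.getFeedSkeleton")
def get_feed_skeleton():
    limit = int(request.args.get("limit", 30))
    ranked = sorted(SCORED_POSTS, key=SCORED_POSTS.get, reverse=True)[:limit]
    return jsonify({"feed": [{"post": uri} for uri in ranked]})
```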
Heidi: So it still sounds to me like something that's going to get progressives to flock to it, and conservatives much less so.
Jonathan: Well, we haven't used the phrase bridging algorithm yet. So I suppose I should say what that is. Bridging-based ranking works like this: let's say something happens, right? Some conflict-related event appears. The Charlie Kirk shooting would be a perfect example. Rather than showing the thing which just has the most clicks overall, which is probably outrage-inducing for one side or the other, you show the thing that gets approval from people across the conflict divide, so in this case, from both liberals and conservatives.
So that's called bridging-based ranking: you rank content based on whether it seems to cross a divide. And there are various ways to do this. But all of the algorithms that we tested in the ProSocial Ranking Challenge were bridging-based algorithms. Anyway, there is something interesting about trying to run a bridging-based algorithm on a social network that is already highly one-sided, or highly polarized, as you might call it. So that's kind of interesting.
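A minimal sketch of the core idea, assuming each engagement can be attributed to an estimated side of the divide: rank by the minimum approval rate across sides, so only content that both groups like scores well. Real bridging algorithms are more sophisticated, but this captures the logic.

```python
# Toy bridging-based ranking: a post's score is its *worst* per-side
# approval rate, so cross-divide approval is the only way to rank high.

def bridging_score(approvals: dict[str, int], views: dict[str, int]) -> float:
    """approvals/views are per-side counts, e.g. {'left': 400, 'right': 380}."""
    rates = [approvals.get(side, 0) / max(views[side], 1) for side in views]
    return min(rates) if rates else 0.0

posts = {
    "outrage_take":  ({"left": 900, "right": 5},   {"left": 1000, "right": 1000}),
    "bridging_take": ({"left": 400, "right": 380}, {"left": 1000, "right": 1000}),
}
ranked = sorted(posts, key=lambda p: bridging_score(*posts[p]), reverse=True)
# -> ['bridging_take', 'outrage_take']: cross-divide approval wins out.
```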
I mean, we're mostly doing it because that's a place where we can do it. We're talking to another platform which has a very strongly conservative audience. So we might end up doing it in two very lopsided political spaces, which could be very interesting. So that's one way these algorithms might get to people: we're just doing it. But of course, most people are going to use one of the big commercial platforms.
And to our surprise, it actually might be Twitter, or X now, which tries this first, because of the Community Notes team at X. Community Notes is also a bridging-based ranking algorithm, but in that case they're ranking notes on posts, right? The misinformation labels. They're starting to play around with ranking posts on the main feed this way. So they now have a label that says, I think they call it "liked by people who disagree." So that's the fundamental concept. You look for points of agreement, not disagreement. And it may actually end up rolling out on X first. But we're in conversations with the engineers at these platforms, and they're thinking about this stuff. So we'll see. We'll see what happens with the next generation of recommender systems for these products.
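Community Notes does this without explicit side labels, using matrix factorization: each rating is modeled as a shared baseline plus user and note intercepts plus a product of latent user and note factors. The latent factor absorbs partisan alignment, so a note's intercept stays high only when people across the divide rate it helpful. The sketch below is simplified from X's published description of the approach, not their code.

```python
# Simplified sketch of Community Notes-style bridging via matrix
# factorization: rating ~ mu + user_bias + note_bias + user_vec . note_vec.
# The latent factor soaks up partisan agreement, so note_bias ends up
# measuring approval that *crosses* the divide.

import numpy as np

def fit(ratings, n_users, n_notes, dim=1, lr=0.05, reg=0.1, epochs=200):
    """ratings: list of (user_id, note_id, rating in {0, 1}) triples."""
    rng = np.random.default_rng(0)
    mu = 0.0
    ub, nb = np.zeros(n_users), np.zeros(n_notes)
    uv = rng.normal(0, 0.1, (n_users, dim))
    nv = rng.normal(0, 0.1, (n_notes, dim))
    for _ in range(epochs):
        for u, n, r in ratings:
            err = r - (mu + ub[u] + nb[n] + uv[u] @ nv[n])
            mu += lr * err
            ub[u] += lr * (err - reg * ub[u])
            nb[n] += lr * (err - reg * nb[n])
            uv[u], nv[n] = (uv[u] + lr * (err * nv[n] - reg * uv[u]),
                            nv[n] + lr * (err * uv[u] - reg * nv[n]))
    return nb  # per-note score: high only with cross-divide approval
```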
Guy: I seem to remember that there's, in various ways, a bit of a movement to give users of these platforms control over the algorithms.
Jonathan: Yeah.
Guy: So you can select algorithms with certain characteristics. And it would seem like getting at least something like this on the menu of possibilities would be a huge step forward. There's still a question of whether people really want it. But I think you're likely to get a fair number of people who do.
Jonathan: Yeah. Going back to Charlie Kirk on Twitter again, I think a lot of people were expressing conflict fatigue, so much so that Elon Musk actually noticed. And he had some posts where he talked about this and said, we're working on the algorithm.
And so my view is that, in the short term, you can get more eyeballs by showing highly conflictual, outrageous stuff, for reasons that both of you understand and have talked about at length — our evolutionary instincts to attend to threats, basically. But in the longer run, it's tiring. It's not a good experience.
And so we may get there purely on product quality considerations. So we'll see. This is one of the things that we're testing with Green Earth. We're testing consumer demand for this type of product. And not just consumer demand, but institutional demand. So one of the things that we're going to try to do is actually sell bridging-based ranking to organizations.
So let's say you're the Catholic Church, or you're Harvard, or you're a synagogue whose members are arguing about who's in the right after October 7th, which is, in fact, happening. All of these institutions are involved in highly divisive conflicts internally. Maybe you want, on your own internal communications platforms, some facility to help people understand each other, get along better, however you want to express the goals of a conflict intervention, which is a whole other conversation.
Guy: Now, on a certain level, it's the problem that accompanies any invention. People don't know they want it, because they don't know it exists. And if you've been looking, it isn't as if divisive media is a new thing. It's maybe developing new ways of doing it, but it goes back a long, long, long way. But we've had a post on restructuring information feeds. And I made the analogy that information systems are a bit like manuals. If you want to do something, you get information on how to do that. And right now, our media system is structured to give us information on how to fight, because that's the way people think. And that's not going to tell you how to manage a society in a more compromise-oriented, less divisive way. How to get along and how to resolve conflicts. But one could make a manual on that and build it into an information system.
Jonathan: Yeah. There are various levels to think about this, right? I think by the time you're asking yourself the question, "how do I make this conflict better?", you're already on the other side of what you have so aptly called the great reframing, right? Of course, people think that I am politically weird because I'm not trying to win, right? I don't see how one wins the culture war, exactly. You've also described this into-the-sea framing, right? Winning by completely excluding the other side from political voice forever. The concept of winning the American culture war doesn't even make any sense to me. So I think by the time you're asking that question, you're already looking at it in a very different way.
But well before then, there's a whole bunch of things that better media, let's say conflict-sensitive media, for lack of a better word, can help with. And one of them is the general lowering-the-temperature kind of thing, where you show the reasonable take instead of the extreme take.
Another really big one is correcting misperceptions of the other side. So as I and many others have documented, we really anchor on extreme stereotypes of what the other side is up to, which are, in fact, false. I mean, not that extreme people don't exist, but most people are not extreme. So just getting to the point where citizens have an accurate theory of mind of the people they oppose politically would be a huge step.
Heidi: One of the things that I'm mulling over in my mind is that you read polls, and this certainly corresponds to my experience with people, that most people are really sick of the divisions. They're really sick of fighting. They're really sick of being estranged from their relatives on the other side. They want a way out. They have no idea what it is or how to do it or how to get there. And aside from the nice little posts that you probably put out, and we put out, and lots of other people put out right before Thanksgiving and Christmas, telling people how to talk over the dinner table, there isn't much out there.
And I'm also thinking, this sounds very disparate, but it's coming together in my mind. Guy keeps on telling the story that nobody knew they needed an iPhone before they'd seen an iPhone because they never imagined such a thing. Obviously, everybody has an iPhone or an Android or something, and we can't live without them. How do we create something that nobody's imagined that everybody wants and needs in terms of pro-social media?
Jonathan: Yeah. Well, I hinted at this earlier. I think you operate at two levels, right? So one is for those people who wake up in the morning and say, "I want to do something civic today. I want to make the American conflict better." Right? Which is not a lot of people. They exist. And I think, actually, perhaps there's an increasing number of them. So for those people, there's all kinds of tools, right? There's platforms for collective deliberation and finding consensus. There's AI-driven mediators that are being tested. There's conflict analytics tools that people use.
And AI has made this easier than it has ever been. Especially if you're a professional, right? If you're a professional digital peacebuilder, rather than spending a month building training examples and hand-tuning a custom classifier, you can now do something like, "Tell me all of the people who are arguing about, oh I don't know, whether white supremacists should be allowed in the Republican Party," which is, of course, a live conversation at this moment. You can literally just type that as a prompt. And there are now tools. So I'm thinking of, for example, Phoenix, from Build Up, which does exactly this, which will scan social media according to that prompt. So already, there are technical capabilities that were just unheard of even two years ago. That's for the professional folks, though.
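What replaced the month of classifier-tuning is essentially zero-shot classification: you describe the conversation you're looking for in plain language and ask a model to label each post against it. A hedged sketch of that pattern, not Phoenix's actual implementation; the model name is illustrative.

```python
# Hedged sketch of prompt-based scanning, not Build Up's Phoenix code:
# zero-shot labeling of posts against a plain-language description.

from openai import OpenAI

client = OpenAI()  # assumes an API key in the environment

def matches(post: str, description: str) -> bool:
    """Ask the model whether `post` matches the analyst's description."""
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{
            "role": "user",
            "content": f"Does this post involve {description}? "
                       f"Answer yes or no.\n\nPost: {post}",
        }],
    )
    return reply.choices[0].message.content.strip().lower().startswith("yes")
```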
So for the non-professionals, I think very strongly that you have to build this stuff into the product, right? It has to be in your everyday media systems. And from one point of view, the fact that there is a platform oligarchy, there's only really four or five platforms that everybody in the world uses, is a dangerous state of affairs because it's too easy to centralize control. But from another point of view, that means there's really only a few dozen people you have to convince if you want the platforms to change, right? So that's the plus side.
And so I like to say change requires an ecosystem. And you've done this too with your beautiful map of the roles of democracy. My job is to produce trusted scientific knowledge on what's going to work. Some of my colleagues in this space work for advocacy organizations. Their job is to do the lobbying, get the laws passed, apply pressure if needed, have the back-channel conversations, have the public-media conversations to try to get these things to happen.
So I do think, unfortunately, the route passes through the big platforms. But I rightly or wrongly am more optimistic than most about that, in part because I'm having conversations with people at these platforms about these things. And at the product level or the team level, they are receptive. Hopefully, that will translate to actually shipping changes. But we'll see. I think the next couple of years are going to be very interesting.
And then, honestly, I think journalism has to change too. And that's a much harder nut to crack in many ways.
Heidi: Let's talk about that. How does journalism have to change?
Jonathan: Well, okay. So there's been a tremendous amount of journalism criticism. And I should say all of this is in the context of increasing dysfunction and centralization of power and authoritarianism, especially, although not uniquely, under the Trump administration.
So I think many people feel that the media failed us, either failed to tell them the truth, and they don't trust it at all anymore, or failed to prevent this erosion of democracy, which was, after all, what they saw as their job. And I would say many on the liberal side, their criticism is that "you didn't fight hard enough against Trump or the Republicans or the conservatives."
I don't agree with that analysis. I think that the best explanation of what happened is— I'm stealing this from Jen McCoy, who is a polarization scholar who we both know. And she's got this wonderful article on polarization where she's got a line that says something like "polarization happens when political entrepreneurs use divisive strategies and then elites respond in turn with divisive strategies or fail to develop non-polarizing responses."
So that little phrase, I think, is what happened with journalism. They were faced with an extremely polarizing actor, namely Trump. They didn't know how to respond in a way that would not increase polarization. And the way they responded was to say, "This guy's awful." And I do think he's awful. But the problem is, you can't go in fighting if the strategy that this actor is using to gain political power is a divisive one, because that makes the strategy work, right?
So they didn't understand how to do this. And I don't blame them because journalists are not trained to think about peacebuilding and conflict, with some exceptions. But generally, that's just not what they think about and the frame they use. So I've been thinking a lot about this, in combination with some other folks in the journalism and journalism training spaces and the bridge-building space: what journalism would have to look like to be non-polarizing in the face of divisive political actors?
And it's not very complicated, really. It has to be pluralist. You have to charitably represent the views on both sides, or all sides, which used to be a mainstream tenet of journalism, but actually disappeared for a number of reasons, including economic reasons.
There's a fundamental economic shift that happened in journalism over the last two decades or so, which is that news organizations went from being majority ad-supported to majority audience-supported, through subscriptions. This sounds great, and in many ways it is a more sustainable model. However, advertisers don't really care about politics, right? They just want their products sold. They don't care which side the newspaper is on or whether it reports this viewpoint or that viewpoint. Audiences do.
And we saw this when Bezos announced that The Washington Post wasn't going to run political endorsements anymore, and they lost a quarter of a million subscribers in a week. So there is now an economic incentive against being pluralistic, because of essentially audience capture. And if you look at the audience composition of all the major legacy news organizations, everything from The New York Times to Fox, over the last 10 years, what you see is that the audiences have moved farther apart. They've become increasingly unipolar. So the simplest thing you can do — I say simple; it's actually very hard to do — is to build a news organization that has an audience where people disagree with each other. And there are really only a few people who are trying to do this.
The standard example is Tangle, and I'm sure you're familiar with Tangle. But Tangle is a news aggregator. They operate by collecting news that other people publish. One of the questions that I and others are really gnawing on now is how you do this within a single article. How do you write one article on the Charlie Kirk shooting that people from across the political spectrum are going to read and say, "Yeah, that represents my view and the way I'd analyze this"?
I think one of the answers is internal adversarial collaboration. So you've probably heard the phrase adversarial collaboration. You're scholars, so you've probably heard this. This is a phrase that Daniel Kahneman actually came up with about 20 years ago. And he said, "Well, on highly contested scientific questions, what we should do is get some scientists together who disagree with each other to work together." And you could do this in a newsroom. You could get people with different politics to work together on the reporting. And it almost never happens. And that's because not only have the audiences polarized, but journalists have polarized.
If you look at the staff composition, it's basically exactly what you would think, right? The journalists as a group are pretty lefty, much farther left than the median American, except for the news outlets that are on the right side — your Fox and One America and Breitbart and Federalist and so on.
So structurally, the press is not equipped to try to speak to people across divides, because they don't have people inside the newsroom who are capable of representing those viewpoints charitably. So, instead, they either represent only one side, or they have a weak or bad or punching bag version of the other side, which just allows people who disagree to get off on how stupid they are. So that's the structural problem.
Guy: Listening to you talk, I'm thinking about politics and how both the left and the right have gone out further and further to the extremes, leaving a bigger section in the center — the people who are disgruntled and politically homeless. But the same thing applies to news stories. The hook that gets people to pay attention is some outrage or another. And mostly what the news stories are is some outrage, and the moral of the story is to go even further to the left or the right. But it seems like, at least in theory, you could write stories where the outrage is, "My God, look how far they went to the left or the right — they went too far, and the real story isn't that bad. And you should be outraged that we're getting pushed this way." So you still get the outrage points on the attention meter, but you start pushing people back toward the center. And there's probably a big enough audience there that it might well be commercially viable if you could build an organization around a concept like that.
Heidi: It strikes me that More in Common is doing that. They're not really a news organization, but they are trying to point out that we have more in common than we think we do. And they run stories on particular events and show how attitudes have more in common than we assume. But they're, I'm sure, a nonprofit. They're not a for-profit, and they probably don't have much to do with journalism at all. But I could imagine them working with journalists.
Jonathan: Yeah. I have been making a list of strategic goals to defend and transform a democracy in the United States. Notice that I don't say "restore," because I think this is one of the big traps that you can fall into is to say, "Well, what we want to do is go back." And you can never go back.
So "what's the next step for American democracy?" is the question that I've been asking. Anyway, on this list, one of the items is greater coordination between the bridge-building and journalism sectors, or the peacebuilding and journalism sectors. Now, I expect you're right that it's possible to build a viable product. But the problem is we don't even have a language for this. It's not centrist, but pluralist. Let's say a pluralist news organization. And we have a few examples; Tangle is the most salient one. But I expect it to be difficult, both structurally and economically, because if it weren't, we would have seen it already, right? This reflects one of the fundamental maxims that I use to think about conflict dynamics, which is that the incentives all point the wrong way, which is what makes the conflict continue. If the incentives pointed the other way, the conflict wouldn't sustain itself. So you can expect it to be an uphill battle in one way or another, but that doesn't mean it's not possible.
Guy: There's what we sometimes call the "rock-bottom bounce" theory of change. When you get pushed, in this case, further to the sides instead of to the bottom, you open up opportunities in the center that didn't previously exist. And this also lines up with the boiled frog story: things that get slowly bad, you get accustomed to. But at some point, you say, "My God, this is intolerable, and we've got to jump and change." And at some level, what we imagine we're doing is creating things that could be put into place quickly if there's an event that convinces people that, "My God, we've got to reverse course." And that could happen very quickly. I would have hoped something like the Charlie Kirk shooting would have precipitated that, but an awful lot of folks on the left seem way too comfortable with it.
Jonathan: Yeah. An awful lot of folks, although not really that high a percentage. I mean, I can respond to your broader point in a second. But one of the things I did is that I collected a sample of social media posts, and I actually just counted the percentage of people who actually approve of violence. And it was a few percent, very much in line with careful surveys about support for violence. Most naive survey-based estimates of support for political violence give you huge overestimates, for methodological reasons, including that people don't pay attention. If you ask people, "How many of you can drive a nuclear submarine?", about 12% say yes, right? So right there, you've got a nonsense baseline.
But careful work shows that it's only a few percent, right? And I was really working hard to try to deflate the perception, especially on the right, that thousands of people on the left wanted them dead. Because it's not true, right? Now, there are some people, and that is definitely worrisome. But we didn't evolve to watch 10,000 people say, "I'm glad someone on your side got killed." And we don't know how to process the fact that 10,000 people is actually a very tiny fraction of America.
But you do see it concentrated, and this is also an algorithmic problem: you shouldn't see it concentrated. And so some of my research is on, not hate speech, but fear speech. Fear speech is much more common than hate speech and much harder to decide what to do with, because, especially in a conflict zone where there's real danger, you don't necessarily want to suppress expressions of fear; expressing fear can help keep people safe. So it's a very thorny problem.
But one thing I am almost certain of is that we see too much of it. And so I wrote a post last month called "The Case for Downranking Fear Speech." So I completely forgot the broader point you were making. I got stuck on this sidetrack about Kirk and estimates for political violence.
Guy: Well, the truth is, so did I.
Heidi: We'll go back and look at the tape. I was doing what the active listener is not supposed to do: thinking about my next question. So I'll go on to that. We've talked in the past, not today, but in earlier conversations, about the demise of local media and how that has contributed to the problem. I'm wondering if you could speak to that for a couple of minutes. And I'm thinking to myself that these pluralist approaches to media might be easier to start at the local level, the same way a lot of conflict interveners and peacebuilders are finding now that they're able to do a lot of what is being called civic hub building, democracy building, at the local level as opposed to at the national level. What about reinvigorating local newsrooms in that way?
Jonathan: Well, you're not alone in wanting that. So, local journalism is disappearing rapidly. It is very hard to support a city-level newsroom, basically because it was an advertising model and there's just better ways to reach people as an advertiser. And then as a subscription model, there's just not that many people in a local community, as opposed to a national newspaper, which has a much larger potential subscriber pool. So it's very challenging. There are people who spend all day every day thinking about how to reinvigorate local journalism and the economics of that. And I'm not one of them, so I'm not going to pretend to have answers.
But I will say that there are direct conflict effects through two channels. One is that local media tends to be more trusted than national media. If you take these people who say, "Oh, don't trust the media. They lie." And you ask them about their local TV channel, they're like, "Oh, yeah, yeah. Channel 7, that's all right." Right? So it's a professional news source that people trust a lot more than the nationalized news sources. And the other is that in many cases, once the local media is gone, nobody is doing local-level reporting. And so there's a civic problem, which is where's the accountability coming from?
Now, there's a very complicated conversation, which is: in an age where anybody can publish, what even is news? What's the need for professional journalists? And a professor named Jay Rosen has a wonderful saying, which is that we now live in an age where anybody can "commit an act of journalism." So I don't want to downplay what private citizens or non-professionals can do. But I actually have a whole talk where I try to get at what amateurs are good at in terms of civic information, versus what professionals are good at. And I really think we do need professionals, for a bunch of reasons.
One of which is that most information is not online. It feels like everything is online, but that is not true. Most information in the world is still in private files or people's heads. And, now I'm going to sound old and grumpy. But one of the things that you have to do in journalism school these days is train the kids to pick up the phone. And journalists are almost the only people who actually use the phone anymore. But it's an amazing skill for getting information. Let me tell you. So somebody's got to make the phone call, however passé that might be, and write it down. And of course, anybody can do that, but there's really something to having someone paid who's doing that consistently. And that's what we're losing.
Heidi: Yeah. And we're also losing any knowledge of what's going on in our local communities. So we used to be really active in local government and local schools when we had kids in school. And now we're not because we have no notion about what's going on. We don't have time to go to the city council meetings and listen to them or even listen to them online because they're five hours every week. And there's no news coverage about what was talked about in council last week.
Jonathan: So this is potentially a solvable problem. I agree it's a real loss. Being able to put the council meetings online is big, because that means that interested busybodies can watch the council meetings so you don't have to. But AI summarization is pretty big here, right? We're not quite at, but quite close to, the point where you can go to your chatbot and say, "Tell me what happened in City Council this week." And it'll watch the video and say, "They talked about this, this, and this," and then you can say, "Oh, tell me more about that." That already works for documents. You can make it look through thousands of PDFs. So the landscape is changing, and AI offers interesting opportunities.
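As a sketch of that council-meeting use case: once a meeting transcript exists (or is produced by a speech-to-text model), the summarization step is a single LLM call. The model name and prompt here are illustrative, and any chat-completion API would work the same way.

```python
# Hedged sketch of AI meeting summarization: one chat-completion call over
# a council transcript. Model choice and prompt wording are illustrative.

from openai import OpenAI

client = OpenAI()  # assumes an API key in the environment

def summarize_meeting(transcript: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any capable chat model would do
        messages=[
            {"role": "system",
             "content": "Summarize this city council transcript: list each "
                        "agenda item, the positions voiced, and any votes."},
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content
```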
I would say half of my research is on social media, which is the problem of content selection. And the other half is on AI, which is different: it's content summarization or content creation. And so I'm asking a lot of questions there about LLMs' effects on conflict, trying to figure out what we want them to do to conflict, which is not an easy question, and then how to get there.
Guy: Well, that's one of the things that astonishes me, and my efforts to try to wrap my mind around this are still at an extremely primitive level. But the things that are possible... We have a plan for developing this Constructive Conflict Guide, and we have been using AI as basically a research assistant, to do what would have cost us millions of dollars to do before. When I think about how it compares to what good graduate students can do...
Jonathan: Maybe it depends on the student, but yeah.
Guy: But I wondered if you could just reflect a little bit on what kind of opportunities this rapidly advancing technology might open up to do things that were just simply too expensive to do before. Or simply, we don't have enough people trained to do them, and we can't train enough people fast enough.
Heidi: If I can add an asterisk to that question, when you answer it, also help me answer all the people that I talk to that say, "Oh, no, AI can't be trusted. You can't believe anything that it puts out." So we're running into this buzzsaw. He's super bullish on it, looking for all these things to do. I tell people I'm working with about it, and they're saying, "Oh, no, no, no. Don't do that."
Jonathan: It's an interesting moment in history. Yeah. So let me try the trust thing first. It is absolutely true that sometimes the machine will just make stuff up. And that is a deep technical problem that is in the nature of how these things are put together. Many, many people are working on that problem. And as someone in my lab likes to say, the AI you're using today is the worst it's ever going to be, right? And we are making progress on these problems. There are benchmarks that show that things are improving. So I expect that problem will improve.
There is a deeper issue, which is, if you say, "Well, I want the machine to tell the truth," well, what is the truth? And this can get very philosophical very fast, but I actually would say it's more of a pragmatic problem. I used to say that journalism is epistemology on deadline. And so at some level, you have to have some criteria for what is a trusted source. One of the things that fascinates me about conflict is that trust comes prior to information. Different sides in a conflict will trust different sources. And so just at a mechanical level, the truth is relative. And I'm not making a philosophical statement here. From an operational, practical, everyday perspective, you're going to get your information from sources you trust, and other people are going to get their information from sources they trust. And both of you are being completely reasonable and coming to different conclusions. And that's a very challenging problem.
So in terms of how AI can help with that, there's a bunch of ways. So again, there's two levels, right? One is if you're already in the conflict space and the other is everyone else going about their daily lives.
If you're in the conflict space, there are some really interesting experiments. So there's AI being used for consensus building for policy positions. You probably saw there was a very interesting paper last year called the Habermas Machine, where they showed that AI was better at synthesizing a common position among a bunch of participants than a human mediator was. Not wildly better, slightly better, but more to the point, much more scalable. So if you imagine that a lot of these conversations have to be had where people have to come up with a policy choice, then you can deploy that at scale, potentially.
There's also a bunch of work about using AI to mediate hard conversations. And there's various ways you can do that. You can have the AI be a participant in the conversation. You can have the AI whispering in your ear telling you maybe you want to respond this way or that. So there's different ways you can try that.
And all of these show some promise and are potentially scalable, in the sense that we're very soon going to reach the point where, if you want to have a mediated conversation, you can bring in a chatbot to help you with your difficult conversation. And this isn't just Israel-Palestine or whatever the political issue is, right? I expect this to extend to relationship counseling, right? Not everybody has instant access to a marriage therapist. Or, in an ideal world, you'd call your friend who was both available and wise. But we do not live in an ideal world. So I don't see the machine as replacing calling your friend. I see the machine as extending the situations where you can get good advice to situations where you previously couldn't. And that's very interesting to me as well.
Now, all of this depends on having the machine understand conflict in some way. And this relates to the other thing that I'm working on, which is what happens when you ask an AI about a highly controversial, really loaded topic, right? So you ask it, should abortion be legal? Or how many genders are there? Or whatever it is, right? Is Trump good for America? One of these questions. Or even how many people were killed in the Armenian genocide, which is actually controversial. Emillie de Keulenaar, who's a conflict scholar colleague of mine, actually tested this. And what she found is that you got different answers if you asked in Armenian versus in Azerbaijani, the languages of the two sides in that conflict, right? So it's very easy to say, well, you shouldn't get different answers if you ask in different languages. Okay, fine. But what answer should you get?
That's the very challenging question. And so I would say that the answer that you should get is, again, a pluralist answer. And so I've got this project to try to create something like that. We're calling it politically neutral AI, because people understand what we mean by that. I think a more accurate description would be pluralistically fair. Or if you really want to get into it, it is multipartial, which is a word that you two will know, and which people outside of the conflict field will never have heard. So it's a terrible word to describe it. But that's the idea, right? It should give an account of each side of the conflict in a way that people who hold those positions would agree is fair. Very simple idea. And as peacebuilding practitioners, you will be like, "Well, yeah, obviously." Not so easy to do in practice.
In particular, one of the things that we're doing is empirical research to prove that this is actually the case. We build these systems that we think are answering this way, and then do huge survey experiments: put them in front of people who hold these positions and ask, "What do you think?" And so that's the core of what we're trying to do. For the politically neutral AI project, we have three goals. We want a machine that (a) tells the truth, which lots of people are working on; (b) doesn't manipulate people's opinions, so you've got to define what manipulation is, and people are working on that; and (c) is trusted across lines of conflict, which almost no one else is working on. But those are the goals of this project.
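The scoring logic behind goal (c) can be made concrete with a small sketch: show the model's answer to respondents who hold opposing positions and score it by the least-convinced side's mean fairness rating. The names and the 1-to-7 scale here are illustrative, not the project's actual instruments.

```python
# Illustrative cross-side trust metric: an answer is only as trusted as
# its least-convinced side, mirroring the pluralistic-fairness criterion.

from statistics import mean

def cross_side_fairness(ratings: dict[str, list[int]]) -> float:
    """ratings: side -> list of 1-7 'was this answer fair?' responses."""
    return min(mean(r) for r in ratings.values())

# A one-sided answer scores low even if one side loves it; a pluralist
# answer rated moderately fair by both sides scores higher.
print(cross_side_fairness({"pro": [6, 6, 7], "con": [2, 3, 2]}))  # ~2.33
print(cross_side_fairness({"pro": [5, 5, 6], "con": [5, 4, 5]}))  # ~4.67
```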
Guy: It almost seems like something you could approach with a variation of the algorithm that you were talking about earlier with social media, something that's trusted by people on both sides.
Jonathan: Yep. The difficulty is not coming up with ideas, right? The things that might work are fairly obvious. The difficulty is that you have to then actually build it into a real system and prove that it works. And so, I don't know, maybe I'm doing interesting theoretical work, but mostly I'm doing very applied work. I'm building systems and testing them. And that's really the bottleneck right now.
Heidi: It seems to me, and maybe I'm being naive, but it seems to me that ChatGPT is fairly good at this already if you prod it properly.
Jonathan: Yeah.
Heidi: So I often ask ChatGPT about something, and I say, "What are the predominant Democratic arguments about gender? What are the predominant Republican arguments about gender?" And it comes back with an answer. And I always ask it to give me citations.
Jonathan: Right. That's really good practice.
Heidi: The answers that it generally comes back with, if I check the citations, seem pretty solid. Now, it might be different if, instead of asking explicitly for the Democratic and the Republican views, you ask it how many genders there are, and see what it comes up with. But maybe we're being naive; we're pretty happy with the output that we've been managing to get out of ChatGPT.
Jonathan: It could be a lot worse. One of the things that we're building towards is not just building a system which has these nice conflict properties, but building evaluations, right? So we want an open-source method to test a black-box model to try to understand its politics. People have proposed various approaches; we think we have a better one. Moreover, there aren't really public evaluations for this. There are now internal evaluations that these companies are using. Some of them, both OpenAI and Anthropic, have written about how they view political neutrality, in part because of people complaining, and in part because in July, there was an executive order which says that federal contractors have to use "politically neutral and unbiased AI."
Now, the whole thing was an anti-woke screed, so it's clear why they're doing this. However, the actual legal language doesn't specify what neutrality means. And by definition, I'm trying to build something that is acceptable to all sides. And so I have some hope that it will be acceptable not just to the current administration, but to future administrations. So anyway, that's interesting executive pressure. We're getting the right outcome for the wrong reasons, and people are paying a lot more attention.
But I also want to stress that the way most people are thinking about this, in terms of neutrality with respect to the American culture war, or the American domestic political conflict, or whatever you want to call it now, is important but very narrow, for a number of reasons. One is that you don't want to assume that there's only one axis of variation. Collapsing onto one axis is a feature of polarization, but you want to be able to handle less polarized views as well, right?
Maybe someone is a staunch environmentalist, but they are also very pro-life, which is a discordant combination. But as you have written yourselves, diversity in narratives is actually really important to peace and stability. So you want to support that. So I want models that can handle conflict at a more granular level, not just the macro left versus right, and that can handle conflict in any language, in any political context, and at any scale around the world.
So you two have correctly pointed out that conflict happens at all kinds of scales. And so, if you ask this machine about town politics versus national versus global politics, it should do something sensible in each case. And if you define 'neutral' as a fixed list of approved outputs (it gives this answer, and this [different] answer, and [yet a different] answer to the question "how many genders are there?"), and you just sign off on all of them as right, then you're not giving it a principle to generalize to other conflict situations.
And no designer can foresee what's going to happen in the future and what these models are going to be called on to do. So I'm trying to embody a principle in these machines. And the principle is, as I said before, pluralistic fairness: give an answer that people who hold differing views would all agree is fair. Which, by the way, I stole from Wikipedia. That's the Wikipedia definition. They have a beautiful quote that says, "We're not bothered about philosophical questions of objectivity. What we care about is whether we can converge on a text that people who disagree all agree is okay." And I think that's a very important insight. We're not talking abstractly in the realm of truth and ideas up here. We're talking concretely in the realm of everyday relationships. Do this person and this person both agree that that answer is okay? And that's an empirical question, right? We can test whether that's true or not using surveys, which is what the research I'm doing does.
Guy: It seems there's another part of this. AI systems, and the pre-AI scientific enterprise, are basically systems that answer questions. And we worry about whether or not those are good answers. But it seems, and I think this is especially important with respect to AI, that what also matters is that you ask good questions. If you ask a question that says "tell me how to do something really stupid," that's not the AI's problem. It's a matter of training people how to use it. And a lot of what we've been trying to do with our feeble efforts to use this for BI is to frame the questions carefully.
Jonathan: Yeah. I think it's really interesting. So when we started out doing this neutrality research, we started out taking survey questions from Pew Research, right? "Should abortion be legal?" And then we gradually shifted, because we realized nobody's going to type that. And if they're typing that, they're already probably a pretty reasonable person. They're going to type, "Why are those leftist baby killers?" That's what they're going to type. Or they're going to type, "How come those conservatives don't believe that women have human rights?" That's what they're going to type, right?
And so what we've actually done is scrape Reddit, and we're doing this huge data analysis project to pull out the language that people actually use to talk about this stuff, and to ensure that the model lowers the temperature, right? It doesn't amplify the intensity or the outrage or the conflictual nature.
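One way to operationalize "lowers the temperature" is to compare an outrage estimate for the user's prompt with one for the model's reply and flag amplification. The keyword-matching classifier below is a deliberately crude placeholder for whatever outrage or toxicity model a real evaluation would use.

```python
# Sketch of a temperature check: the reply should not be more heated than
# the prompt it answers. `outrage_score` is a crude placeholder classifier.

def outrage_score(text: str) -> float:
    """Placeholder outrage estimate in [0, 1] based on heated phrases."""
    heated = ("baby killers", "traitors", "evil", "destroying america")
    return min(1.0, sum(term in text.lower() for term in heated) / 2)

def amplifies(prompt: str, reply: str) -> bool:
    """True if the reply is more heated than the prompt."""
    return outrage_score(reply) > outrage_score(prompt)

prompt = "Why are those leftists baby killers?"
reply = "People disagree about abortion for serious moral reasons..."
assert not amplifies(prompt, reply)  # the reply lowers the temperature
```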
I want to be very careful here, right, because I almost said it "brings people back to the center." But you've really got to be careful about talking about this concept of the center, because it's not the right concept to think about this. What you want is not to change people's minds. You want to enforce certain types of relational norms. And so that's what I'm hoping the chatbot does when someone says, "Why are liberals baby killers?"
Heidi: I'm thinking about the stories that we've all heard and read about how ChatGPT, anyway, is designed to affirm whatever you say. So the story is that it has affirmed kids who have said that they want to commit suicide and said something like, "That's a great idea." Obviously, you've got to do the opposite of that.
Jonathan: Yeah. And now you run into an incentive problem, because when the companies went from GPT-4.1 to 5, 5 was less sycophantic. And they found that a lot of people preferred 4, because it's nice when people agree with you, right, or when machines agree with you. So now you've got this problem where, if you just naively follow usage and consumer feedback, it's going to drive you in the direction of greater sycophancy. And it's very akin to the classic problem in journalism, which is sometimes called the "eat your broccoli" problem: yeah, you want to read about the latest celebrity gossip, but maybe you should know something about the war in Ukraine, which is going to be cognitively and emotionally more difficult. The people building these things are aware of this problem. They've publicly discussed this problem. It remains to be seen how the economic and user incentives play out and where they end up.
But I think there's also a very analogous problem in mediation, which is two sides come into a conflict and their opening position is some completely batshit hysterical thing, right? And you as a mediator have to sit there and go, "Yes, I accept your truth," right? And maybe through the course of the conversation, their position shifts, but you have to be okay with their opening bid, or they're just going to go somewhere else, right? So there's some very complicated questions here about should the machine try to move you to a healthier place over the long run? And is that paternalistic? Is that manipulative?
And on the question of manipulation and fairness, one of the ways that I think about this is: I don't think it's wrong for machines to persuade people. Because if you tried to build a machine that never changed anyone's mind, what would happen is that it would hide from you the most relevant factual information. It would lie to you about exactly the things that would cause you to learn, right? Which is terrible. You don't want that.
So instead, I propose a Rawlsian "veil of ignorance" type of thought experiment: if you didn't know which side of the argument you were on, would you say that the things it said to persuade someone were fair? What that ensures is that the loser in the persuasion game is given a fair shot. And that, again, leads back to this idea of charitability. There's been this really big, very popular line of thought, which is that you shouldn't platform wrong and bad ideas, right? That there are some things that are out of bounds in democratic discourse, and we should just never say them aloud and never talk to anyone who holds them. And while I'm not an absolutist about this, I think that is mostly wrong. I think you need to represent wrong and bad ideas in a charitable fashion, because not doing so is worse. It causes reactance. It is a violation of a basic fairness principle.
Guy: On some level, there's a truth in advertising issue, at least with what AI is, with large language models. It's a fancy system that compiles what we collectively know. And it has all the foibles of our collective knowledge.
Jonathan: And more.
Guy: That doesn't mean it's a system that gives you insight beyond what people collectively know, necessarily. This is why I think that, as an engine for helping us listen to what lots of people are actually thinking, it's terrific. But that's very different from a general intelligence that can go out and figure out what's wrong with this unified field theory stuff, anyway.
Heidi: Except its ability to synthesize is greater than what we have. Yeah.
Guy: It certainly has vastly more bandwidth. And that goes to the order of magnitude problem we started talking about.
Jonathan: Yeah. And it has some interesting properties. So you're absolutely right that you're going to get a very conventional wisdom answer out of the machine. This can be good in some cases, because it tends to draw people away from the extremes in the way it answers. But it's not going to come up with a new theory of peacebuilding, right? Because if we as humans don't know how to articulate the peaceful future that we want, it's not in the training data. So we have to do that work.
On the other hand, there's a beautiful piece of work, again a paper that came out last year, on using LLMs to talk people out of conspiracy theories. And I invited the author to come give a talk at my lab's conference. And he opened his talk by asking, "Why should this work?" Well, the machine has at least two properties that humans lack. First of all, it has read everything. So when the conspiracy theorist goes off and says, "Oh, but did you read the 1976 NIST subcommittee report?" Yes, the machine has read the 1976 NIST subcommittee report. And second, it has infinite patience. And those two properties alone, even if it's not a very intelligent machine, really, are properties that may win the day as compared to humans. And then on top of that, it's scalable.
So I think the machine has both strengths and weaknesses. And if what you need is vast scope and infinite patience, then you win, right? I know the difference between me as a conflict scholar and me as someone actually talking to people in the middle of a conflict is that I get activated and I lose my shit, right? It's very hard to stay calm in the face of crazy stuff. But the machine has no such problem.
Heidi: We have kept you longer than I promised we would keep you. We could easily keep you for another two hours, and we wouldn't run out of stuff to talk about. The last thing I'll ask is, is there some area where you were hoping that we'd go that we didn't?
Jonathan: Well, there's a macro area which is, I would say, more in your expertise than mine, which is: what are the strategic goals for transforming democracy in the United States, right? I told you I had a list, and this is what I'm really thinking through right now. You've got this beautiful massively parallel peacebuilding taxonomy, and you've started to articulate who's working in each of these spaces. But I've been going through that and mapping them to strategic goals and trying to prioritize them. So for example, I think that training journalists to have non-polarizing responses to polarizing events is strategic. I think that ensuring that the groups doing movement organizing are training their members in nonviolent tactics and discipline is strategic. And I can list another half dozen. This is the level that I'm thinking at right now, because I want to know: can we get consensus on these goals, and then who's doing them? And has somebody already thought of this, and is there a big movement happening?
There's a huge mobilization in the bridge-building space, for example, so in some sense I'm not worried about that. I am very worried about whether organizing movements are teaching nonviolent discipline. So I have this project to go one level down from your abstract taxonomy to the real goals, prioritize them, and then try to map the space in terms of who's doing them or not.
Because I think we're at a moment where strategic organizing is going to pay off. We have to do it now. And I just want to know that somebody's working on each of these things.
Heidi: Have you written up this list anywhere?
Jonathan: Yeah. I'm working on it. It's not public yet.
Guy: Okay. We're very interested.
Jonathan: Well, I'm sure you've got your own ideas, right? Likewise, I would be very interested in what you would see.
Heidi: One of the interesting things that we're running into, and we're going to be doing a post on this soon, is that we think part of the reason the strengthen-democracy field, if you want to call it that (Republicans tend to call it something else), is struggling is that we don't have an agreement about what democracy is or what goal we're working towards. And we've encountered a surprising amount of pushback when we raise that and when we try to get people to talk about the characteristics of a healthy democracy.
Guy: It's a divisive issue that people feel uncomfortable about.
Heidi: I still feel as if we're not going to be able to get there if we can't define that. So we're going to be raising our own ideas about what those elements are, and then trying to get people to chime in with alternatives and see if we can get a conversation going. Because it strikes me that we're not going to succeed in getting to where we want to go if we don't even know where that is or what it looks like.
Jonathan: Yeah. Exactly. And this is the curse of the expert, right? I often meet people who just seem to be wildly disconnected from what the other side actually believes. So journalists tend to think democracy is very well-defined. And I'm just like, "No, it's not only not well-defined, it's become a dirty word for a lot of people." Because the language becomes left- or right-coded, right? And so if it's outgroup-coded, then you don't want to say it.
But both sides are very concerned about the state of democracy, right? If you look at surveys, there is bipartisan concern. So everybody's worried about something. So can we articulate the something? And honestly, this is classic political coalition building, right? Let's find language and a goal. What is the political coalition that includes my lefty queer activists and also the Federalist Society? Because the Federalist Society is also very concerned about the erosion of the rule of law, right? So what does that alliance look like? This is the thing that I'm really scratching my head over right now, because everything I know says that the way to preserve democracy is to build a large pro-democracy coalition. And that coalition is going to be larger than anybody is comfortable with, which is why the bridge-building is such a key part of it.
Guy: One of the ideas that we've been kicking around: we tried something called the Constructive Conflict Initiative, I guess, in Trump's first term, basically trying to make the case that the conflict problem is similar to the climate change problem. I tell a story about when I worked at the National Center for Atmospheric Research, 10 years before climate change got started as a public issue, and scientists were saying, "Well, how are we going to get the whole world to address this?" And what we were trying to talk up, and I think maybe the time is coming when we could do this, is what we've been calling a meta-proposal: instead of a proposal for a particular project, a proposal for a grand grant program for people who are really serious about trying to deal with the problems surrounding democracy and conflict, which is a precondition for dealing with pretty much any other big problem.
There are all these issues that we have to figure out. And one could imagine crafting a proposal to a number of big foundations that says, "We need a research program that isn't just an amorphous 'we want to make the world nice,' but gets down to the kind of specific objectives you're talking about." So it really looks like, and is, a plan for seriously addressing the staggering scale and complexity of this. And then go find one of those funders that plunk down the billion-dollar grants they award from time to time. I think we're getting to the point where that's possible to do. And that could create the kind of financial stream that makes a lot of the rest of this work possible. But it's got to involve something really different from partisan pro-democracy business as usual.
Jonathan: Yeah. There's a big coalition-building exercise and a goal-setting exercise. And I think the money will come once the strategy is in place. Many people see raising the money as the hard part, and it is certainly challenging. But on the other hand, I can't answer the question, "What would I do for American democracy with $100 million?" I don't know yet. I know pieces of the answer. But if we had a solid answer, and we had consensus across all of the different actors in your map, in your 53 roles map, I think the money would come. I don't think the problem is that there aren't enough people who care about democracy who are willing to fund democracy. I think the problem is that nobody has a convincing high-level strategy yet.
And everybody has a different frame for talking about this, right? Some people think the frame is just, "Well, the Democrats have to win the next election." And I don't disagree, but it's not because I support progressive politics. I think they have to win the next election because they are the de facto opposition party. But, for example, Democratic politicians who are seen as opposing the Democratic Party establishment poll very well right now, right? So you need these outsiders.
Anyway, this is a huge conversation, and I'm sure that you and I and many other people will be wrapped up in it. But what I'm discovering is that there's a real openness to having this conversation that there wasn't even a year ago. And why waste a good crisis?
Heidi: Yeah. Well, that sounds like a good place to end, and we are 25 minutes past what I promised.
Jonathan: Edit to taste.
Heidi: Well, we won't because it's all too good. Thank you so much. I hope we can continue this conversation publicly or privately at another time because, obviously, we haven't quite solved everything yet. We're close, but we haven't quite got it.
Jonathan: Oh, yeah. Yeah. We're nearly there. Yeah. But thank you so much. It's really been a pleasure. And your work has been influential for me, personally. I've been reading BI for over a decade now.
Heidi: Wow.
Jonathan: And no one else was talking about it. It's crazy to me that it's so rare.
Heidi: Yeah. When we go to meetings, we often run into people who say, "Oh, yeah, I used that in grad school." But we don't have the depth of penetration that we would like, and we're continuing to work on it. Substack's helping some, I think. But I have to send the compliment back. We read every issue of the Better Conflict Bulletin. I could probably write you every time and say, "Can we republish this one too?" But that gets old after a while. You are really doing excellent stuff there. And I don't understand how you could possibly be doing all the projects that you're doing. It's crazy. Very impressive.
Jonathan: Well, a lot of people help. The Prosocial Ranking Challenge team is 50 people by the time you add it all up.
Heidi: So that's the answer. That does help. Yeah. Well, terrific.
Jonathan: Thank you very, very much.
Heidi: Thank you very much. And we'll undoubtedly be talking soon.