The Burgesses Talk With Jonathan Stray on Journalism, AI, and U.S. Democracy


Newsletter #408 - December 10, 2025

Heidi Burgess and Guy Burgess

 

On November 17, 2025, Guy and I talked with Jonathan Stray, creator and primary author of The Better Conflict Bulletin. Jonathan is also currently Senior Scientist at the Center for Human-Compatible AI at the University of California, Berkeley. Trained as a computer scientist, he first worked at Adobe and then moved into journalism for many years, working for the Associated Press and ProPublica. While he has not personally done peacebuilding (outside of his writings), he has worked with a large number of peacebuilders, and he seems to understand conflict dynamics, and how to manage them effectively, as well as anybody — better than most. He is doing fascinating work on adapting AI for good purposes, particularly depolarization and conflict transformation. We talked about all of that, and more.

 

Watch/Read Full Discussion

 

 

As we explained at the beginning of our conversation with Jonathan, we've been studying and writing about intractable conflicts for a long time. One of the big factors that determines whether a conflict is intractable or not is scale. Society-wide conflicts are dramatically different from interpersonal, around-the-table conflicts. The number of people involved is six or seven orders of magnitude bigger. (For comparison, the difference between the speed of a walking human and the speed of the International Space Station is only about four orders of magnitude, which means the jump in scale to society-wide conflict is a hundred to a thousand times bigger than even that enormous difference!)

This means that almost all of the conflict interactions that matter in these big conflicts occur through the media; we simply can't get our information directly. What people read, listen to, or watch, how they interact on social media, how social media and news feed algorithms work, and now how AI works, all determine what we come to believe is "true" or "false," "good" or "bad." And as we all know, much of this technology is driving us apart, making polarization much worse than it was before all this "tech" existed.

Now, when we talk to most of our colleagues in the conflict and peacebuilding fields, they describe themselves as experts on "managing conversations around a table." Jonathan is one of the few people who have genuine expertise in trying to understand how these very large-scale conflicts work, and the technology and the social institutions through which they're mediated. And that's really where the solutions are. So we were particularly interested in learning what he had to say about all this.

We first asked Jonathan to describe his professional journey: from computer scientist at Adobe, to world traveler (in the course of which he met peacebuilders and got interested in conflict), to journalism school in Hong Kong, to data journalism at the AP and ProPublica, and finally to putting it all together as Senior Scientist at the Center for Human-Compatible AI at the University of California, Berkeley, and creator of the wonderful Better Conflict Bulletin.

Jonathan then talked about a few of his AI projects. One was the ProSocial Ranking Challenge, a competition in which entrants submitted prototype social media ranking algorithms designed to draw people together rather than pull them apart.

The prize was that we paid to test it. So from July through February, we recruited almost 10,000 people and paid them to use a custom browser extension. We didn't tell them what it did, but what it actually did is it changed the content of the first 50 or so items every time you opened up Facebook, Reddit, or Twitter, according to one of the five algorithms that we eventually chose for testing.

We ran it for six months. And then early this year, we started doing the first data analyses, and we showed that we could, in fact, reduce polarization among the people who use this modified algorithm. That was the first time that anybody had shown that conflict-reducing social media algorithms are possible in a production environment. ... And so, now one of the projects that I'm involved in is turning that into a product that anyone can use.

Right now they are developing a feed for Bluesky, because Bluesky has an open social media protocol that no other social media platform has. That means that you can use their protocol to write your own app with custom feeds. So, as Jonathan explained, "you can just put it in front of people, and if people like it, they'll use it and nobody will stop you, which is completely impossible to do with other platforms."

Jonathan also said that they might be able to release something on X (formerly Twitter). Following Charlie Kirk's assassination, he observed,

a lot of people were expressing a conflict fatigue, so much so that actually Elon Musk noticed. And he had some posts where he talked about this and said, "we're working on the algorithm." [more on that below]

And so my view is that, while in the short term, you can get more eyeballs by showing highly conflictual, outrageous stuff for reasons that both of you understand and have talked about at length — our evolutionary instincts to attend to threats, basically. In the longer run, it's tiring. It's not a good experience. And so we may get there purely on product quality considerations. 

He said that they are testing both consumer demand and institutional demand for what he calls "bridging algorithms." He explained:

Bridging-based ranking is when some conflict-related event appears. The Charlie Kirk shooting would be a perfect example. So, rather than showing the post which just has the most clicks overall, which is probably outrage-inducing for one side or the other, you show the post that gets approval from people across the conflict divide, so in this case, from both liberals and conservatives. That's called bridging-based ranking. ... All of the algorithms that we tested in the pro-social ranking challenge were bridging-based algorithms. 
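To make the idea concrete, here is a minimal, hypothetical sketch (ours, not code from Jonathan's project) of the difference between engagement-based and bridging-based ranking. The two group labels, the "minimum of per-group approval rates" scoring rule, and the re-ranking of only the top 50 feed items are illustrative assumptions drawn from his description, not the actual challenge algorithms.

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    text: str
    approvals: dict = field(default_factory=dict)  # group -> likes/approvals from that group
    views: dict = field(default_factory=dict)      # group -> how many people in that group saw it

def engagement_score(post: Post) -> float:
    """Conventional ranking: total engagement, no matter which side it comes from."""
    return sum(post.approvals.values())

def bridging_score(post: Post, groups=("liberal", "conservative")) -> float:
    """Bridging-based ranking: reward posts that people on *both* sides approve of.

    Taking the minimum of the per-group approval rates means a post that only
    one side likes scores near zero, no matter how viral it is overall.
    """
    rates = []
    for group in groups:
        seen = post.views.get(group, 0)
        if seen == 0:
            return 0.0  # no evidence from this group yet, so it can't "bridge"
        rates.append(post.approvals.get(group, 0) / seen)
    return min(rates)

def rerank_feed(posts, score=bridging_score, top_n=50):
    """Re-order the first top_n items of a feed (roughly what the challenge's
    browser extension did), leaving the rest of the feed untouched."""
    head, tail = list(posts[:top_n]), list(posts[top_n:])
    head.sort(key=score, reverse=True)
    return head + tail
```

Under engagement_score, a post that outrages one side can dominate the feed; under bridging_score, that same post drops toward the bottom, while a post that draws approval from both liberals and conservatives rises to the top.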

So Jonathan and his team are testing whether there is a market for such algorithms. He thinks there is, both among individuals and among institutions.

So let's say you're the Catholic Church or you're Harvard or you're a synagogue whose members are arguing about who's in the right after October 7th, which is, actually, happening. All of these institutions are involved in highly divisive conflicts internally. Maybe you want, on your own internal communications platforms, some facility to help people understand each other, get along better, however you want to express the goals of a conflict intervention.

We talked about the limitations of testing the algorithm on Bluesky, which has a very liberal clientele. He agreed that it might have been better to test somewhere else, but Bluesky is the only platform with an open protocol. But, to his surprise, he thinks their first "normal" (non-test) user might be X. In addition to Elon Musk's concern about "conflict fatigue," Jonathan said that the Community Notes team at X is also developing a bridging-based ranking algorithm right now.

They're ranking notes on posts, the misinformation labels. They're starting to play around with ranking posts on the main feed this way. So they now have a label that says, "liked by people who disagree." So that's the fundamental concept. You look for points of agreement, not disagreement. And it [his bridging algorithm] may actually end up rolling out on X first. But we're in conversations with the engineers at [all] these platforms, and they're thinking about this stuff. So we'll see what happens. 

Heidi agreed that people are sick of division, but she observed that most do not know a "way out," or haven't thought about the possibility of using a different kind of social media algorithm. "How do you get people to adopt something like this?" she asked. Jonathan said it depends on who you are talking about.

For those people who wake up in the morning and say, "I want to do something civic today. I want to make the American conflict better."  Which is not a lot of people, but they exist, and, perhaps, there's an increasing number of them. So for those people, there's all kinds of tools available. There's platforms for collective deliberation and finding consensus. There's AI-driven mediators that are being tested. There's conflict analytics tools that people use. And AI has made this easier than it ever has been. ...there are technical capabilities that were just unheard of even two years ago. 

But for the non-professionals, I think very strongly that you have to build this stuff into the product, right? It has to be in your everyday media systems. And from one point of view, the fact that there is a platform oligarchy, there's only really four or five platforms that everybody in the world uses, is a dangerous state of affairs because it's too easy to centralize control. But from another point of view, that means there's really only a few dozen people you have to convince if you want the platforms to change, right? So that's the plus side. 

Rightly or wrongly, I am more optimistic than most [people are] about that, in part because I'm having conversations with people at these platforms about these things. And at the product level or the team level, they are receptive. Hopefully, that will translate to actually shipping changes. But we'll see. I think the next couple of years are going to be very interesting. 

Jonathan then added that "journalism has to change too. And that's a much harder nut to crack in many ways."

"How?" I asked. He replied:

Many people feel that the media failed us, either failed to tell them the truth, and they don't trust it at all anymore, or failed to prevent this erosion of democracy, which was, after all, what they saw as their job. And I would say many on the liberal side, their criticism is that "you didn't fight hard enough against Trump or the Republicans or the conservatives."

I don't agree with that analysis. I think that the best explanation of what happened is— I'm stealing this from Jen McCoy, who is a polarization scholar who we both know. She's got this wonderful article on polarization where she's got a line that says something like "polarization happens when political entrepreneurs use divisive strategies and then elites respond in turn with divisive strategies or fail to develop non-polarizing responses."

So that little phrase, I think, is what happened with journalism. They were faced with an extremely polarizing actor, namely Trump. They didn't know how to respond in a way that would not increase polarization. And the way they responded was to say, "This guy's awful." I agree, he is awful. But the problem is, you can't go in fighting, if the strategy that the actor is using to gain political power is a divisive one, because that makes his strategy work, right?

So they didn't understand how to do this. And I don't blame them, because journalists are not trained to think about peacebuilding and conflict, with some exceptions. But generally, that's just not what they think about and the frame they use. 

So I've been thinking a lot about this, in combination with some other folks in the journalism and journalism training spaces and the bridge-building space: what would journalism have to look like to be non-polarizing in the face of divisive political actors?

And it's not very complicated, really. It has to be pluralist. You have to charitably represent the views on both sides, or all sides, which used to be a mainstream tenet of journalism, but actually disappeared for a number of reasons, including economic reasons.

There's a fundamental economic shift that happened in journalism over the last two decades or so, which is that news organizations went from being majority ad-supported to majority audience-supported subscriptions. This sounds great and in many ways it is a more sustainable model. However, advertisers don't really care about politics, right? They just want their products sold. They don't care which side the newspaper is on or whether it reports this viewpoint or that viewpoint. Audiences do.

We saw this when Bezos announced that The Washington Post wasn't going to run political endorsements anymore, and they lost a quarter of a million people in a week. So there is now an economic incentive against being pluralistic, because of, essentially, audience capture. If you look at the audience composition of all the major legacy news organizations, everything from The New York Times to Fox over the last 10 years, what you see is that the audiences have become farther apart. They've become increasingly unipolar. So the simplest thing you can do — I say simple. It's actually very hard to do — is to build a news organization that has an audience where people disagree with each other. 

And there's really only a few people who are trying to do this. The standard example is Tangle, but they are a news aggregator. They operate by collecting news that other people publish. One of the questions that I and others are really gnawing on now is, "How do you do this within a single article?" How do you write one article on the Charlie Kirk shooting that people from across the political spectrum are going to say, "Yeah, that represents my view," talking about the way they've analyzed this.

I think one of the answers is internal adversarial collaboration. This was a phrase that Daniel Kahneman came up with about 20 years ago. He said, "Well, on highly contested scientific questions, what we should do, is we should get some scientists together who disagree with each other to work together." And you could do this in a newsroom. You could get people with different politics to work together on the reporting. And it almost never happens. And that's because not only have the audiences polarized, but journalists have polarized too. ... So structurally, the press is not equipped to try to speak to people across divides, because they don't have people inside the newsroom who are capable of representing those viewpoints charitably. ...That's the structural problem. [Heidi notes that Tangle does do this — one of the many reasons we like them.]

We went on to talk about the difference between local media and national media, how local media is more trusted, and is essential to the democratic workings of local entities — cities, counties, or school districts, for example. But local media is much harder to fund, and is therefore going extinct across much of the country. 

We also talked more about AI — what it is good at, what it isn't, what its benefits are, and what dangers it poses. Jonathan is working on a conflict-sensitive AI.

 One of the things that fascinates me about conflict is that trust comes prior to information. Different sides in a conflict will trust different sources. And so, just at a mechanical level, the truth is relative. And I'm not making a philosophical statement here. From an operational, practical, everyday perspective, you're going to get your information from sources you trust, and other people are going to get their information from sources they trust. And both of you are being completely reasonable and coming to different conclusions. And that's a very challenging problem.

As an answer to that, he's working to build what he calls a politically neutral AI. 

It should give an account of each side of the conflict in a way that people who hold those positions would agree is fair. Very simple idea. And as peacebuilding practitioners, you will be like, "Well, yeah, obviously." Not so easy to do in practice. ... that's the core of what we're trying to do. In particular, for a politically neutral AI project, we have three goals. We want a machine that A) tells the truth, which lots of people are working on. B) it doesn't manipulate people's opinions. So you've got to define what manipulation is. And people are working on that. And C), it is trusted across lines of conflict, and almost no one else is working on that. But those are the goals of this project. 

We are out of space here.  Check out our full discussion to read more about all the fascinating projects Jonathan is working on!

 

Read/Watch our full interview with Jonathan.

 

Subscribe to the Newsletter


Please Contribute Your Ideas To This Discussion!

In order to prevent bots, spammers, and other malicious content, we are asking contributors to send their contributions to us directly. If your idea is short, with simple formatting, you can put it directly in the contact box. However, the contact form does not allow attachments.  So if you are contributing a longer article, with formatting beyond simple paragraphs, just send us a note using the contact box, and we'll respond via an email to which you can reply with your attachment.  This is a bit of a hassle, we know, but it has kept our site (and our inbox) clean. And if you are wondering, we do publish essays that disagree with or are critical of us. We want a robust exchange of views.

Contact Us

About the MBI Newsletters

Two or three times a week, we (Guy and Heidi Burgess, the BI Directors) share some of our thoughts on political hyper-polarization and related topics. We also share essays from our colleagues and other contributors, and every week or so, we devote one newsletter to annotated links to outside readings that we found particularly useful, relating to U.S. hyper-polarization, threats to peace (and actual violence) in other countries, and related topics of interest. Each newsletter is posted on BI and sent out by email through Substack to subscribers. You can sign up to receive your copy here and find the latest newsletter here or on our BI Newsletter page, which also provides access to all the past newsletters, going back to 2017.

NOTE! If you signed up for this newsletter and don't see it in your inbox, it might be going to one of your other email folders (such as promotions, social, or spam). Check there, or search for beyondintractability@substack.com. If you still can't find it, first go to our Substack help page, and if that doesn't help, please contact us.

If you like what you read here, please ....

 

Subscribe to the Newsletter