What Does Claude Think About Future AI Policy? A Reader Asked -- and Shared the Results

Newsletter #449 — April 27, 2026

Guy Burgess and Heidi Burgess

This is Part 3 of our Claude/Common Ground Series. On April 9, we published a newsletter entitled "Are Our Intractable Conflicts Really So Intractable? Claude (AI) Doesn't Think So." There we reported on Claude's response to an inquiry about how "a truly representative group of Americans" might answer the six questions in our Common Ground Exercise, as applied to three highly contentious topics: climate, DEI, and immigration in the United States. As we explained in Newsletter 444, Claude pointed out that Americans agree on many more aspects of these conflicts than is generally recognized, while acknowledging that deep differences on all three topics remain. We reported on those differences and, more interestingly, on what Claude thinks we can do to address those differences constructively, in a second newsletter.

Now we want to share a letter sent to us by Girish Sastry, an independent AI researcher, who wrote that he pasted the same prompt that we used in our climate, DEI, and immigration queries (drawn from our Common Ground Exercise) into Claude, this time asking it about frontier AI policy. He shared Claude's response, noting that "it seems like a good start to me."  

When we wrote him back, asking if we could share his letter, he added that he "only skimmed it [Claude's response] and there are a few things that were a bit 'shallow' and it definitely stereotyped the various stakeholders." But, still, he gave us permission to share it.

It seems to us that it would be impossible to do this exercise without stereotyping to some degree: this topic (like the others we investigated) is so complex that the answers to these questions are necessarily highly nuanced, and that nuance can be lost in general descriptions such as this. However, particularly for those of us who are not experts in AI, but who are worried or confused about it, this seems to us to be a useful overview of the leading disputes and possible ways to address them. And we thought it was a useful illustration of how our Common Ground Exercise (and Claude's version of it) is adaptable to many different topics. So, we think Claude's answers on frontier AI policy are worth sharing. We also think that Claude's answers offer a better place to start than the apocalyptic (or occasionally utopian) rhetoric that generally characterizes AI articles. If we want to solve problems, rather than just complain about them, this is worth looking at.

Common Ground Exercise: Frontier AI Policy


1. Where is there broad agreement?

Despite loud disagreements, there's more consensus across the spectrum than the discourse suggests:

  • AI is consequential. Almost no one across the spectrum thinks frontier AI is hype that will fizzle. Skeptics, boosters, doomers, and accelerationists all agree the technology matters and that decisions made now will shape the next few decades.
  • Some uses should be off-limits. Child sexual abuse material, bioweapon synthesis assistance, and non-consensual intimate imagery generation are nearly universally condemned. Disagreement is about how to prevent these, not whether to.
  • Transparency about AI-generated content has broad support. Labeling synthetic media, disclosing when people are talking to bots in certain contexts, and watermarking are popular across coalitions, even if implementation details divide people.
  • Children deserve special protection. Across the spectrum, people support stronger guardrails around minors' interactions with AI systems, whether the concern is mental health, sexual content, manipulation, or developmental harm.
  • The U.S. should not simply cede the frontier to China. Even most people who want strong regulation agree they don't want a world where the most capable systems are built exclusively under the Chinese Communist Party. Disagreement is about what "competing" requires.
  • Concentrated power is a risk worth taking seriously. Left, right, and libertarian critics all worry, in different vocabularies, about a small number of actors — whether labs, governments, or both fused together — wielding disproportionate control over a transformative technology.
  • Workers will be affected and deserve some response. Whether the answer is retraining, redistribution, slowing deployment, or something else, almost no one openly argues that displaced workers should simply be left to fend for themselves.


2. Where is there clear disagreement?

  • How dangerous are frontier models, really? Estimates of catastrophic and existential risk range from "negligible, this is sci-fi" to "the most likely cause of human extinction this century."
  • Who should regulate, and how much? Positions range from "no new regulation, existing law suffices" to "international treaty with hard compute caps and licensing of training runs."
  • Open weights: liberation or proliferation? Some see open-weight models as essential for competition, research, and resisting corporate/government capture. Others see them as handing dangerous capabilities to anyone, irreversibly. [Open-weight models are AI systems (particularly large language models—LLMs) that share their trained parameters (called "weights") publicly.]
  • Is the China race real or manufactured? Some treat U.S.–China competition as the central fact organizing all AI policy. Others see it as a narrative leveraged by labs to escape oversight. 
  • Should AI labs be trusted as partners or treated as adversaries? Some view leading labs as the most safety-conscious actors and natural collaborators with government. Others view them as conflicted parties whose safety claims should be treated like tobacco companies' health claims. 
  • What about jobs and economic disruption? Disagreements run from "AI will create more jobs than it destroys, as past tech did" to "this time really is different and we need UBI [universal basic income] or worse." [Not sure what "worse" means.]
  • Bias, fairness, and content moderation. What counts as a biased model, whose values it should reflect, and whether refusals are safety or censorship — all fiercely contested. 
  • AI consciousness and moral status. A small but growing fringe on multiple sides takes seriously that advanced models might warrant moral consideration; most dismiss this.

3 & 4. Factual disagreements and joint fact-finding

Several disagreements are at least partly empirical and could, in principle, be narrowed by shared investigation:

  • Capability trajectories. How fast are models actually improving on tasks that matter? Shared benchmarks, pre-registered forecasts, and independent evaluations (rather than lab self-reports) could narrow this. The catch: evaluations themselves are contested, and some dangerous capabilities are hard to measure without risking harm.
  • Marginal uplift from models for bioweapons, cyberattacks, etc. This is genuinely studiable through controlled red-teaming with appropriate safeguards. Some such studies exist; more independent ones with agreed-upon methodology would help.
  • Labor market effects. Economists can and do study this. Joint fact-finding here is feasible, though lagging indicators mean we'll often be debating projections, rather than data.
  • Open-weight risks vs. benefits. Both sides make empirical claims (e.g., "anything in an open model is already in a search engine" vs. "fine-tuning unlocks meaningful new capabilities") that are testable.
  • Compute and energy demands. The physical footprint of frontier AI is measurable. Disagreements here are mostly about projection and significance, not current facts.
  • Election and information-environment effects. Whether AI-generated content meaningfully shifted recent elections is studiable, and early evidence has surprised some on both sides (effects so far appear smaller than feared, though this could change). 

A serious joint fact-finding effort would probably need: independent evaluators with model access, pre-registered methodology, participation from researchers trusted by different camps, and a willingness on all sides to update. The track record here is mixed but not hopeless.

5. Value disagreements within the range of legitimate disagreement

Many disputes aren't really about facts — they're about values that reasonable people weigh differently:

  • Risk tolerance for transformative technology. How much potential downside is acceptable in exchange for potential upside? Reasonable people who agree on the facts can disagree here based on temperament, philosophy, and what they think is at stake. 
  • Precaution vs. permissionless innovation. Whether the burden of proof should fall on those deploying powerful new systems or those wanting to restrict them is a longstanding values question, not unique to AI. 
  • Centralization vs. decentralization. Whether it's better to have a few well-resourced, accountable actors or a diffuse, hard-to-control ecosystem reflects deep priors about power, trust, and resilience. 
  • National vs. cosmopolitan framing. Whether "winning" against China is the right frame, or whether AI governance should be global from the start, reflects values about nationalism, international cooperation, and moral circles. 
  • Whose values should models reflect? Reasonable people disagree about whether models should aim for neutrality, reflect majority views, reflect the values of their developers, or be customizable. None of these is obviously correct.
  • Acceptable concentrations of economic power. How much market dominance by AI companies is tolerable is a values question that long predates AI. 
  • How much weight to give speculative future harms vs. concrete present ones? Both matter; how to trade them off is contested and not purely factual.

These disagreements can coexist in a pluralistic society. People can and should argue them out without treating opponents as enemies.

6. Disagreements some consider morally intolerable

Here the exercise gets harder. Different participants will name different things as beyond the pale:

  • From safety-focused participants: Knowingly building or deploying systems that pose meaningful extinction or mass-casualty risk; treating such risks as an acceptable cost of competition or profit. 
  • From civil liberties participants: Building systems designed for mass surveillance, social scoring, or political repression; participating in such systems for any government.
  • From accelerationist participants: Restrictions that would, in their view, lock in incumbent power, prevent life-saving applications, or cede the future to authoritarian regimes — these are sometimes framed as morally intolerable too.
  • From labor and equity-focused participants: Deploying systems that knowingly displace workers without any social response, or that systematically disadvantage already-marginalized groups. 
  • From democracy-focused participants: Deploying systems that meaningfully degrade the information environment around elections or enable targeted manipulation at scale. 
  • From religious and humanist participants: Building systems that claim or simulate personhood in ways that degrade human dignity, or that are used to replace human relationships in domains (care, education, intimacy) where that feels like a category error.

How to constructively oppose things one finds intolerable, without collapsing the conversation:

  • Be specific about what is intolerable and why. "All AI development" is too broad to engage; "training runs above a certain capability threshold without third-party safety evaluation" is something one can argue about.
  • Distinguish opposition to actions from opposition to people. Engineers at frontier labs include people deeply worried about the risks; treating them as the enemy forecloses coalitions that might actually shift behavior.
  • Use the tools democracy provides. Legislation, litigation, shareholder action, journalism, whistleblower protection, professional norms, public pressure — these are legitimate even when the disagreement is fierce.
  • Be honest about uncertainty. Almost everyone in this debate is operating with significant uncertainty about how the technology will develop. Acting with appropriate humility — including about one's own moral certainty — keeps the conversation open to course correction.
  • Preserve the ability to talk to people you think are wrong. This is the hardest one. The history of technology policy suggests that the people who ultimately shape outcomes are often those who maintained relationships across the divide.

A note on what this exercise reveals: frontier AI policy doesn't divide cleanly along existing political lines. You find safety-focused people on the left and right; accelerationists on the left and right; civil-libertarian skeptics across the spectrum. That's actually a hopeful feature — coalitions are still forming, positions are still movable, and the conversation hasn't yet calcified into pure tribal signaling. The window for genuine common-ground work on this topic may be unusually open right now, even if it won't stay that way forever.

Guy and Heidi's Closing Note

While we think that there are many in the conflict field who could have produced a better list of suggestions for addressing this topic more constructively, Claude's list is a pretty good start. We also like the fact that Claude (and other AI systems) makes this kind of thinking much more accessible than it was in the past — all you need is an initial query (like our Common Ground Exercise) that gets AI systems to start asking more constructive questions. Without such assistance, it would be prohibitively time-consuming for most people to get to this point.

We particularly agree with Claude's closing note. If, indeed, the issues don't divide cleanly along existing left-right lines (which rings true to us), that makes constructive engagement, and even cooperation, more likely than it would be if the conflict were divided along traditional party lines. At the same time, Claude listed a lot of factors that are "beyond the pale," as it put it, for substantial numbers of people. So navigating these issues constructively, even with all the skills of our field brought to bear, is going to be very challenging. A first step, however, is understanding what the issues are, and this summary seems to help with that considerably.



Please Contribute Your Ideas To This Discussion!

To keep out bots, spammers, and other malicious content, we ask contributors to send their contributions to us directly. If your idea is short, with simple formatting, you can put it directly in the contact box. However, the contact form does not allow attachments. So if you are contributing a longer article, with formatting beyond simple paragraphs, just send us a note using the contact box, and we'll respond via an email to which you can reply with your attachment. This is a bit of a hassle, we know, but it has kept our site (and our inbox) clean. And if you are wondering, we do publish essays that disagree with or are critical of us. We want a robust exchange of views.

Contact Us

About the MBI Newsletters

Two or three times a week, we (Guy and Heidi Burgess, the BI Directors) share some of our thoughts on political hyper-polarization and related topics. We also share essays from our colleagues and other contributors, and every week or so, we devote one newsletter to annotated links to outside readings that we found particularly useful relating to U.S. hyper-polarization, threats to peace (and actual violence) in other countries, and related topics of interest. Each newsletter is posted on BI and sent out by email through Substack to subscribers. You can sign up to receive your copy here and find the latest newsletter here or on our BI Newsletter page, which also provides access to all the past newsletters, going back to 2017.

NOTE! If you signed up for this Newsletter and don't see it in your inbox, it might be going to one of your other email folders (such as Promotions, Social, or Spam). Check there, or search for beyondintractability@substack.com. If you still can't find it, first go to our Substack help page, and if that doesn't help, please contact us.

If you like what you read here, please ...

Subscribe to the Newsletter