Steven, the Dean Ball quotes you cite from X and TechCrunch were just the preview. He went much further on Ezra Klein's show today (March 6). Three things that connect directly to your argument here:
First, on the legal definition of surveillance you're worried about: Ball walked through the statutory gap in detail. "Surveillance" under the law doesn't include commercially available data. The government can legally buy your location data, browsing history, purchase records, and analyze them. One intelligence agency alone collects so much data annually it would need 8 million analysts to process it all. AI eliminates that constraint overnight. Ball: "AI gives them that infinitely scalable workforce. Thus, every law can be enforced to the letter with perfect surveillance over everything."
Second, on why the administration is taking this stance now (your question near the end): Ball confirms the Trump administration itself agreed to the same usage restrictions in summer 2025. The conflict only began after Emil Michael's Senate confirmation. Ball says Michael's objection "is not so much to the substance of the restrictions but to the idea of usage restrictions in general."
Third, your cynicism about another company stepping in was, of course, correct. But Ball was blunt about why: "I'm not skeptical that Sam Altman and Greg Brockman, having given $25 million to the Trump Super PAC, have better relationships in the Trump administration."
Full breakdown with sourced quotes from the episode: https://theaiblindspot.substack.com/p/a-country-of-stasi-agents-in-a-data
Thanks, yeah. Dean's analysis of all this has been great, as has Zvi Mowshowitz's.
I also found this explanation of the Pentagon's word games very useful: https://www.astralcodexten.com/p/all-lawful-use-much-more-than-you
I guess we'll see where this lands on Friday but it's been a really interesting story to follow.
Thanks for this, I'll be watching as well. My biggest takeaway from Leo's 'Situational Awareness' was that race dynamics were already at play in the push toward AGI/ASI, and that the idea of nationalizing development so that we could 'win' against China was a poorly imagined take on how the government might intervene.
To me, it should be apparent that as the singularity approached, responsible parties in the USGov would be compelled to 'nationalize' AI development by seizing control of the major actors, before an ASI could seize control of the Gov.
Flash forward to 2025/6, and the 'government' is somewhat short of responsible parties, and will be until an election cycle or two pass. But still, I did not predict that they would ask for the worst use case as their first move! It is becoming clear that this administration will not recognize the danger of ASI in time to do something reasonable about it (although they might have time to do something really rash and possibly kinetic).
Worrying about China getting ASI before we do totally misses the point, right? What we should worry about is ASI.
In a way, Leo, the Pentagon and other accelerationists remind me of that Onion article about the toddler plotting to eat a Tide pod: "there's nothing you can do to stop me."
https://theonion.com/so-help-me-god-i-m-going-to-eat-one-of-those-multicolo-1819585017/
Steven, your framing of AI-enabled authoritarianism as the deeper structural risk beneath this contract dispute is the analytical contribution that most commentary has missed. The Tension Transformation Framework would push this insight one level further.
What you're describing — surveillance at scale with fewer humans involved, power asymmetries that undermine democratic accountability — is precisely the scenario where identity determines everything. The same AI capability deployed by an Architect-identity government (one that asks "how do we build accountability structures for this?") produces categorically different outcomes than the same capability in Victim-identity hands (one that asks "how do we protect our power from challengers?").
Your concern about the chilling effect on investment is well-founded, but the deeper chilling effect may be on the institutional norms that constrain how AI gets deployed. Anthropic's guardrails weren't just contractual protections — they were an attempt to build identity-level commitments into the technology itself. The Pentagon's demand isn't just to remove the restrictions; it's to assert that no private actor has standing to impose principled constraints on state power. That's an identity claim, and it's the one that matters most.
Your closing hope — that the administration backs down — is the Reformist outcome. The Creative response would require the administration to ask a different question entirely: not "how do we get unrestricted access?" but "what governance architecture makes powerful AI safe enough to actually use effectively?"
Pretty serious stuff. I'm working on an op-ed about this; can I quote you?
Hi! Sure, feel free - no need to ask to quote me on my public writing.