6 Comments
Thomas Bertels

Thank you for this - for making the topic really tangible and for making a compelling case.

Steven Adler

You’re very welcome! Glad it was helpful.

The AI Founder

The a16z framework, which treats basic disclosures ('who built this model?') as the ceiling for meaningful regulation, is a concrete example of the problem you're diagnosing: 'federal framework' sets an expectation of rigor that the contents don't deliver. The 40 AGs objecting to preemption without replacement is actually a market signal, not just political noise — state laws exist partly because enterprises operating in those states face liability for AI decisions, and they need legal clarity that a disclosure-only federal framework doesn't provide. The harder question: do you think the 'federal framework' framing has already won the narrative battle, or is there still a window where a different framing takes hold? Thinking about AI product liability from the builder angle at theaifounder.substack.com.

The AI Founder

Your reframe of 'federal framework' as a demand function — it's only real if it answers liability standards and mandatory safety requirements — is a useful litmus test. The a16z transparency-only framework (requiring 'Who built this model?' but not safety benchmarks) is a telling data point for what industry groups actually mean when they say they want regulation. What I find interesting is the enforcement asymmetry: 40 state AGs can object, but preemption means their only remaining tool is litigation, which is slow and uneven. Following how the federal vs. state tension plays out from the product side at theaifounder.substack.com. If the $75M in AI-favorable political donations didn't exist, do you think substantive federal AI legislation would have been achievable in this Congress, or was that window always structurally closed?

The AI Founder

Greg Brockman donating $25M to a Trump Super PAC, plus $50M in commitments to a pro-AI PAC, is the kind of detail that should appear in every AI policy conversation and mostly doesn't. Your argument that 'federal framework' functions as strategic ambiguity rather than regulatory intent is compelling, but it leaves the harder question unanswered: which industry players actually oppose the specific answers on liability and mandatory safety requirements, versus which ones are just opposing state regulatory fragmentation? Those are different coalitions with different incentives, and conflating them lets the second group hide behind the first. What's your read on whether there's any version of federal AI legislation that OpenAI would actively support rather than just tolerate?

Sam Chase

This is a great overview and something I wish the public knew more about. I found some relevant polling showing that Americans are generally more concerned than excited about AI, but have mixed feelings about trusting the U.S. government to regulate the space: https://www.pewresearch.org/short-reads/2025/11/06/republicans-democrats-now-equally-concerned-about-ai-in-daily-life-but-views-on-regulation-differ/. I think most people would be shocked to learn how little has been done.