A high-dollar campaign over artificial intelligence governance has pushed past think tank panels into a bare-knuckle fight where strategy, law, and electoral math converge to decide whether Washington will override state rules or let them run until Congress catches up. The stakes are concrete: roughly $150 million already funds dueling coalitions that disagree less about whether to regulate AI than about who gets to do it, how fast it should happen, and how much protection consumers deserve while innovation stays on track. This is no abstract debate: the outcome will shape compliance budgets, venture bets, chip exports, and how quickly safety requirements touch actual products. The battleground no longer sits solely in committees; it stretches across the National Defense Authorization Act (NDAA), an executive order under discussion, and a string of hotly contested races where AI has jumped to the top tier of issues.
the stakes and the timeline
Congress sits at an inflection point, floating whether to slip preemption into the NDAA and force a decisive answer about who holds the pen. That maneuver would do more than shave duplicative red tape; it would codify whether state experiments can continue in the absence of national law or whether Washington will set uniform boundaries now and test the details later. The White House, meanwhile, has explored an executive order that could knock out selected state rules in domains judged inherently interstate, such as model deployment or cross-border compute. Either lever would reset the policy map in one move, determining whether New York- or California-style disclosures become de facto standards or stepping stones to a single national rulebook.
Timing now drives leverage. As states advance their own laws, every new filing requirement and fine schedule changes companies’ risk calculus and adds political cost to sweeping preemption. Supporters of state authority argue that early enforcement yields evidence about what works, and that documented harms and demonstrated compliance feasibility will inform federal drafting. Preemption advocates counter that delay carries its own price: diverging mandates raise costs, complicate audits, and weaken competitiveness against rivals that can scale under one regime. The NDAA clock compresses those arguments into a narrow window, inviting late-night amendments with outsize consequences for the direction of AI governance and the speed of rulemaking.
the coalitions at a glance
Two highly organized networks now anchor the fight. On one side sit Public First, a bipartisan initiative led by former Representatives Chris Stewart and Brad Carson, and Americans for Responsible Innovation (ARI), which Carson co-founded. Their case marries oversight and pragmatism: keep states in play while Congress remains stalled, require transparency and risk disclosures for high-impact systems, and build a factual record to inform national law. Fundraising projections for the 2026 cycle run to at least $50 million, drawing from donors aligned with AI safety and effective altruism, plus employees at safety-focused labs, notably Anthropic. The pitch is simple: states act as laboratories, generating enforcement data that can later be consolidated into durable national standards without sacrificing protection in the interim.
The counterweight is Leading the Future (LTF), launched with $100 million from a group that includes Marc Andreessen, OpenAI cofounder Greg Brockman, and Perplexity. LTF blends federal and state Super PACs with nonprofit advocacy to elect pro-innovation candidates, defeat champions of state-heavy regulation, and shepherd a national framework that overrides conflicting state rules. The America First Policy Institute (AFPI) bolsters the effort with an agenda linking AI deployment to energy and permitting reforms, while Meta’s new state and national political committees add corporate heft. Together, they argue that patchwork compliance will push investment offshore, cost jobs, and squander momentum against China—risks they say justify a single, predictable rulebook grounded in interstate commerce.
what each side wants on policy
Anti-preemption strategists frame their agenda around substantive safeguards first, jurisdiction second. They promote tougher export controls on advanced chips and compute, stronger transparency for labs, and mandated risk assessments and disclosures for frontier or high-risk systems. They also back more funding and authority for the National Institute of Standards and Technology (NIST) to mature both testing and reporting. Their case leans on tangible harms—scams that empty bank accounts, risks to minors from generative content, national security exposure from model misuse—and cites public demand for guardrails, pointing to polling in which 97% of Americans say they want AI protections. State laws provide the template: New York’s RAISE Act, championed by Assemblymember Alex Bores, and California’s Transparency in Frontier Artificial Intelligence Act (SB 53) require safety documentation and accountability, with hefty penalties for noncompliance.
Pro-preemption advocates invert the order: establish uniformity now, adjust stringency through federal rulemaking, and keep compliance costs from spiraling across fifty jurisdictions. They argue that AI systems, models, and data flows are inherently interstate, making the Commerce Clause an obvious legal basis for one framework. Their platform pairs rapid deployment with streamlined permitting and energy policy, asserting that reliable power and predictable rules directly support productivity and jobs. The machine behind this message borrows the crypto playbook: targeted spending in state races, legislative scorecards, grassroots mobilization, and rapid-response media that paints fragmentation as a competitiveness tax. Build American AI, LTF’s advocacy arm, invokes the internet era as precedent: national coherence, not a collection of state regimes, powered scale and success.
money, networks, and the electoral front
Both sides now operate like national campaigns, not issue clubs. Public First and ARI balance Super PACs with nonprofit policy work, amplifying research that spotlights harms while asserting independence from industry capture. Critics counter that effective altruism–aligned donors tilt toward overregulation; the coalition replies that industry-led preemption would dilute safeguards and shield risky deployment. Across the field, LTF and allied committees fuse war chests, candidate pipelines, and media production to compress complex regulatory tradeoffs into digestible frames: jobs versus patchwork, China versus drift, innovation versus red tape. The speed of professionalization underscores how quickly AI has moved from niche concern to the mainstream of political strategy and voter messaging.
Those networks already shape ballots. Public First and ARI are backing candidates who favor oversight and protect state authority, positioning statehouses as early guardians against emergent harms. LTF has targeted figures it labels anti-innovation and championed pro-preemption contenders who promise uniform rules. The congressional bid by Alex Bores, the RAISE Act author, turned into an early litmus test when an LTF Super PAC engaged his race, signaling that AI policy will not stay relegated to tech committees. In pivotal states, mailers and TV spots now tie AI directly to kitchen-table themes: wages, energy prices, and the risk that American firms fall behind. As primaries accelerate, these narratives create incentives for candidates to pick sides on preemption rather than punt.
cross-pressures, overlaps, and where authority might land
The politics do not cleave neatly along party lines. Inside the right, an illustrative tension sits in plain view: Chris Stewart co-leads Public First’s oversight push yet also serves on AFPI’s AI team, which endorses national preemption. That dual role reflects a broader divide between national security conservatives who favor robust safeguards and pro-business conservatives who prioritize rapid deployment and deregulation. Cross-pressures also surface in boardrooms, where companies tempted by a single national rule worry that weak federal standards could invite backlash or litigation. Meanwhile, states push forward—New York and California foremost—creating disclosure regimes and penalties that function as experiments, but also as leverage in federal negotiations over how strong national rules should be.
Despite sharp contrasts, there is overlap: both camps concede AI’s centrality to competitiveness, defense, and the labor market, and both expect federal action. They diverge on theory of change. One side backs iterative, bottom-up learning that federal law can later consolidate, using state enforcement to build an empirical record. The other seeks top-down clarity to guide long-term infrastructure and hiring, betting that coherent national policy accelerates safe deployment more effectively than a moving target. With the NDAA and a possible executive order in play, a near-term decision could settle the locus of authority. If preemption rides through, Washington would consolidate control; if not, states would continue to fill the vacuum until a comprehensive statute passes. Either path would carry distinct tradeoffs for speed, strength, and public trust.
next steps for an unsettled map
The most practical way forward involves sequencing rather than stalemate: define a federal floor with real teeth—transparency baselines, incident reporting, risk assessments for frontier models, export controls—and leave room for states to act above it until enforcement data justifies tightening or harmonization. That approach would respect interstate realities without erasing laboratories of democracy, and it would signal to companies that evidence, not rhetoric, will calibrate obligations. In parallel, boosting NIST funding and authority to formalize testing, benchmarks, and documentation would turn today’s guidelines into verifiable obligations. Finally, Congress could task agencies with mapping preemption narrowly to clearly interstate concerns while preserving state capacity against clear consumer harms, a compromise likely to reduce fragmentation without inviting a race to the bottom.
