$150 Million AI Policy Battle Moves to Campaign Trail
AI & Technology

Competing Super PACs Target 2026 Midterms Over AI Regulation

The debate over artificial intelligence regulation is shifting from Congress to electoral politics, with two opposing groups planning to spend at least $150 million combined in the 2026 midterms. Leading the Future, backed by OpenAI President Greg Brockman, venture capital firm a16z, and Palantir co-founder Joe Lonsdale, will spend up to $100 million supporting candidates who favor federal AI regulation over state-level laws. The group argues that a patchwork of state-by-state regulation would undermine America's ability to compete with China in AI development.

Former Representatives Chris Stewart (R-Utah) and Brad Carson (D-Okla.) are countering with bipartisan super PACs aiming to raise $50 million to support pro-regulation candidates. Carson argues that most Americans are anxious about AI and that the tech industry's "accelerationist YOLO agenda" will not resonate with voters. Their PACs will back candidates who support strong export controls on AI chips to China and who believe states should retain regulatory authority, particularly in the absence of federal standards.

Unlike the crypto industry's 2024 campaign spending, which largely ran unopposed, the AI policy battle features organized opposition on both sides. Leading the Future has already announced plans to spend against New York State Assemblymember Alex Bores, who championed state AI regulation and is running for Congress. The groups will compete in House and Senate races, with the pro-regulation side also considering state legislative and gubernatorial contests.

Read the full Punchbowl News article

Commentary

Industry Political Spending Precedents: The $150 million combined spending represents a significant escalation in tech industry political involvement. For comparison, pharmaceutical companies spent approximately $135 million on federal elections in 2020, while the oil and gas industry spent around $140 million in 2022. The tech sector's willingness to match or exceed these figures for a single policy issue (AI regulation) signals how critical regulatory outcomes are perceived to be for business models and competitive positioning.

The Crypto Blueprint and Its Limits: Crypto's Fairshake super PAC spent roughly $100 million in 2024 with notable success, helping elect pro-crypto candidates and defeating some regulatory hawks. However, crypto faced a fragmented opposition with limited resources. The AI battle differs fundamentally: well-funded, bipartisan opposition led by former lawmakers with political credibility. This dynamic more closely resembles traditional lobbying conflicts (e.g., renewable energy vs. fossil fuels) than the one-sided crypto push.

Federal Preemption as the Core Issue: The substantive policy disagreement centers on whether states should be allowed to regulate AI independently. Leading the Future argues that a patchwork of state laws creates compliance burdens that disadvantage American companies against Chinese competitors operating under unified national policy. The opposing view holds that federal regulation is unlikely to materialize quickly, and in its absence, states must protect their citizens. This mirrors historical debates over environmental regulation, consumer protection, and data privacy.

The China Competition Frame: Both sides invoke U.S.-China AI competition, but with opposite conclusions. Pro-industry groups argue that regulatory fragmentation weakens American competitiveness. Pro-regulation groups argue that allowing unfettered AI development or weak export controls on advanced chips enables China's AI capabilities. This framing battle matters because national security arguments tend to be politically potent and can override other considerations.

Voter Attitudes Toward AI: Carson's claim that Americans are "anxious" about AI, rather than uniformly opposed to it, is consistent with polling data. Most surveys show majorities concerned about job displacement, misinformation, privacy violations, and AI safety risks, while simultaneously believing AI will have positive impacts. This ambivalence creates a political environment where both pro-innovation and pro-regulation messaging can find receptive audiences, making campaign spending potentially decisive in close races.

State Regulatory Momentum: Multiple states have active AI legislation. California's SB 1047 (which would have required safety testing for large AI models) was vetoed in 2024 but demonstrated legislative appetite for AI regulation. Colorado enacted a comprehensive AI consumer-protection law in 2024, while New York, Illinois, and others have bills in various stages. The industry's campaign spending represents an attempt to change the political calculus before state regulatory frameworks become entrenched and difficult to preempt.

Export Controls and National Security: The pro-regulation PACs' focus on AI chip export controls reflects ongoing policy debates about Nvidia's sales to China. The Trump administration has taken a more permissive stance than the Biden administration on certain chip exports, arguing that overly restrictive policies hurt American companies without meaningfully slowing Chinese AI development. This creates a cross-cutting political dynamic where some Republicans favor stronger controls while some Democrats support industry positions.

Electoral Targeting and Effectiveness: Leading the Future's decision to target Alex Bores specifically signals a strategy of making an early example to deter other would-be champions of state-level AI regulation. Whether this succeeds depends on whether the spending can credibly be portrayed as decisive in electoral outcomes. If candidates perceived as anti-industry lose in races where these PACs spent heavily, it creates a deterrent effect for other lawmakers considering similar positions.

The Absence of Consensus: Unlike some tech policy issues where broad agreement exists (e.g., breaking up illegal robocall operations), AI regulation features genuine disagreement about fundamental questions: appropriate speed of deployment, acceptable risk levels, role of government oversight, and balance between innovation and safety. This lack of consensus makes these electoral campaigns more consequential, as they will influence which perspective gains political ascendancy.

Long-Term Regulatory Trajectory: Whichever side succeeds in 2026 could lock in a regulatory approach for years. If pro-industry candidates prevail and pass federal preemption without substantive regulation, states lose their ability to experiment with AI governance. If pro-regulation candidates win and states continue legislating, companies face the compliance complexity and costs the industry fears. Given AI's rapid development trajectory, the regulatory framework established in the next few years may prove difficult to revise once entrenched.