Why AI Regulation Will Come From the States, Not Congress

If you're waiting for the US Congress to pass comprehensive AI legislation, don't hold your breath. While federal lawmakers continue to hold hearings, publish white papers, and make vague commitments, state legislatures across the country are actually passing laws. The future of AI regulation in America is being written in Sacramento, Albany, Austin, and Springfield — not Washington, DC.

This isn't surprising to anyone who's watched American governance. Congress is slow by design, and AI is moving too fast for the federal legislative process. State legislatures are smaller, more agile, and often more willing to take bold positions. The result is a patchwork of state-level AI laws that will shape how AI is developed, deployed, and used across the country.

The States Leading the Charge

Several states have emerged as leaders in AI regulation. California, as usual, is at the forefront, with legislation addressing AI transparency, deepfakes, and automated decision-making. Colorado has passed one of the most comprehensive AI laws in the country, focusing on high-risk AI systems and requiring impact assessments. Illinois has been forward-thinking on AI in employment, with laws governing AI use in hiring and workplace surveillance.

  • California: Multiple bills addressing AI-generated content labeling, automated decision-making transparency, and AI in healthcare.

  • Colorado: Comprehensive AI governance framework requiring risk assessments for high-stakes AI decisions.
  • Illinois: BIPA (Biometric Information Privacy Act) and AI Video Interview Act set early precedents for AI regulation.
  • New York City: Local Law 144 regulates AI in employment decisions, requiring bias audits.
  • Texas: Established an AI advisory council and has introduced legislation on AI in government services.

Why This Patchwork Approach Has Precedent

This isn't the first time states have led on technology regulation. California's CCPA (California Consumer Privacy Act) became the de facto national standard for data privacy because companies couldn't afford to treat California users differently. The same dynamic is likely to play out with AI regulation. When California or Colorado passes a law, companies often adopt it nationwide rather than maintaining different practices for different states.

This "California effect" has been powerful in privacy, environmental, and automotive regulation. There's every reason to believe it will work the same way for AI. Companies building AI products will design for the most restrictive state requirements, effectively making state law the national standard.

The Federal Vacuum

The lack of federal AI legislation isn't just about Congressional dysfunction. It's also about genuine disagreement on fundamental questions. Should AI regulation be sector-specific (different rules for healthcare, finance, etc.) or horizontal (same rules for all AI)? Should regulation focus on the technology itself or its applications? How do you regulate AI without stifling innovation? These are hard questions, and Congress hasn't found answers.

The Biden administration's Executive Order on AI was a significant step, but executive orders are limited in scope and can be reversed by future administrations. True legislation requires Congressional action, and the prospects for comprehensive AI legislation in the current political environment are slim. The states aren't waiting.

The Compliance Challenge

For companies operating nationally, the state-by-state approach creates real compliance challenges. Different states may have different requirements for AI transparency, bias testing, data handling, and user consent. Maintaining compliance across 50 different regulatory frameworks is expensive and complex.

This complexity may ultimately drive demand for federal legislation. When companies face a genuine compliance burden from state-level patchwork, they'll lobby harder for federal standards that preempt state laws. But that dynamic takes years to develop. In the meantime, companies need to track state-level developments carefully and build flexible compliance systems.

What This Means for the AI Industry

The state-level regulatory approach has both advantages and disadvantages. On the plus side, it creates regulatory laboratories where different approaches can be tested: states that get it right will attract AI companies; states that get it wrong will see them leave. On the downside, it creates uncertainty and complexity for companies operating across state lines.

For AI startups and companies, the practical advice is simple: watch Colorado and California closely. These states are setting the template that others will follow. If your AI product can comply with emerging Colorado and California requirements, you're probably in good shape for whatever comes next. The era of operating in a regulatory vacuum is ending — and that's probably a good thing for the long-term health of the industry.
