
The United States has no comprehensive federal AI statute. What it has instead is the most active patchwork of state-level AI laws, sectoral federal rules, and executive branch policy of any major jurisdiction. The picture in 2026 is also distinctly different from the picture in 2024: a change of administration in January 2025 led to the rescission of Biden's Executive Order 14110, the rollback of EEOC and Department of Labor AI guidance, the publication of a new America's AI Action Plan in July 2025, and a December 2025 executive order on state preemption that has set up a constitutional confrontation between federal and state authority over AI regulation.
For businesses operating in or with the US, the practical compliance question in April 2026 is no longer "what is the US federal AI law" (there is none) but "which combination of state AI laws, sectoral federal rules, and executive branch directives applies to this deployment, and how stable is the regulatory environment likely to be?" This article maps the actual current framework.
Federal executive branch policy
Trump Executive Order 14179 (23 January 2025)
On 20 January 2025, his first day in office, President Trump rescinded Biden's Executive Order 14110 (Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence). Three days later, on 23 January 2025, he signed Executive Order 14179, "Removing Barriers to American Leadership in Artificial Intelligence."
EO 14179 articulates a national AI policy "to sustain and enhance America's global AI dominance in order to promote human flourishing, economic competitiveness, and national security." Key provisions:
- Suspend, revise, or rescind Biden-era AI policies, directives, regulations, and orders that act as obstacles to AI innovation.
- Develop an AI Action Plan within 180 days (deadline 22 July 2025).
- Direct OMB to revise Memoranda M-24-10 and M-24-18 within 60 days to align with the new policy.
- Coordinate through the Assistant to the President for Science and Technology, the Special Advisor for AI and Crypto, and the Assistant to the President for National Security Affairs.
Following EO 14179, the EEOC removed its 2023 AI hiring guidance on 27 January 2025, and the Department of Labor withdrew its November 2024 AI Hiring Framework. Several Biden-era OMB memoranda were replaced by new memoranda focused on accelerating innovation and reducing procedural friction.
America's AI Action Plan (July 2025)
The Office of Science and Technology Policy published America's AI Action Plan in July 2025, organised around three pillars:
- Accelerating innovation: reducing federal regulatory friction, expanding R&D investment, and encouraging open-source and open-weight model development.
- Building American AI infrastructure: prioritising energy capacity, data centres and compute, semiconductor manufacturing, and a skilled workforce.
- Leading international AI diplomacy and security: maintaining export controls on advanced chips, advancing US AI standards globally, and addressing national security risks.
The Action Plan also calls on federal agencies to limit funding to states with "burdensome" AI laws and urges the FCC to evaluate potential federal preemption authority under the Communications Act.
The state preemption executive order (11 December 2025)
On 11 December 2025, President Trump signed the executive order "Ensuring a National Policy Framework for Artificial Intelligence" (often referred to by its earlier draft title, "Eliminating State Law Obstruction of National AI Policy"). The EO aims to establish a uniform federal standard for AI regulation, with several mechanisms:
- AI Litigation Task Force (Section 3): Within 30 days, the Attorney General must establish a task force at the Department of Justice focused on challenging state AI laws on grounds including unconstitutional regulation of interstate commerce, federal preemption, and other constitutional infirmities.
- FTC policy statement (Section 7): within 90 days, the FTC Chairman must issue a policy statement explaining when state laws requiring "alterations to the truthful outputs of AI models" are preempted by the FTC Act's prohibition on unfair and deceptive practices (15 U.S.C. § 45).
- Federal legislative recommendation (Section 8): the Special Advisor for AI and Crypto must prepare a recommendation for a uniform federal AI regulatory framework, with carve-outs for (i) child safety protections, (ii) AI compute and data centre infrastructure (other than generally applicable permitting reforms), (iii) state government procurement and use of AI, and (iv) other topics to be determined.
- Federal funding leverage: the Commerce Department is directed to evaluate state AI laws and may withhold certain federal funds from states with laws the administration deems inconsistent with national AI policy.
The constitutional basis for the EO has been disputed. Several commentators, including the Center for Democracy & Technology, EPIC, and writers at the Yale Journal on Regulation, have argued that preemption of state law is a question for Congress, not the executive branch. Congress has so far declined to enact federal AI preemption: the One Big Beautiful Bill Act proposed a 10-year moratorium on state AI enforcement, but the Senate stripped the provision by a 99-1 vote on 1 July 2025, and preemption language floated for the FY2026 NDAA was not enacted. Litigation challenging the EO's preemption mechanisms is likely as the AI Litigation Task Force begins operating.
State AI laws now in force or imminent
Notwithstanding federal preemption efforts, state AI regulation continues to expand. The most consequential state laws in or near force in 2026:
Colorado AI Act (SB 24-205)
Colorado SB 24-205 was signed in May 2024 as the first comprehensive US state AI statute, addressing algorithmic discrimination in consequential decisions. The original effective date of 1 February 2026 was delayed to 30 June 2026 by SB 25B-004, signed 28 August 2025, to allow more time for guidance development. The Act imposes risk management, impact assessment, and notice obligations on developers and deployers of high-risk AI systems used in employment, education, housing, financial services, healthcare, legal services, government services, and other consequential domains. Colorado provides a rebuttable presumption defence for organisations implementing NIST AI RMF or ISO/IEC 42001 frameworks.
Texas TRAIGA (HB 149)
The Texas Responsible Artificial Intelligence Governance Act was signed by Governor Abbott on 22 June 2025 and took effect on 1 January 2026. The enacted version differs significantly from earlier drafts that proposed an EU-style risk-tiered framework. TRAIGA's final structure focuses on:
- Prohibitions on specific harmful AI practices, including behavioural manipulation, intentional discrimination against protected classes, creation of child sexual abuse material, unlawful deepfakes, and infringement of constitutional rights.
- Intent-based liability: violation requires intentional development or deployment for prohibited purposes, distinguishing TRAIGA from Colorado's impact-based framework.
- A Texas AI Advisory Council with policy and oversight responsibilities.
- Regulatory sandbox program allowing supervised testing of innovative AI systems.
- Preemption of local AI ordinances: TRAIGA nullifies any city or county AI ordinances.
- Exclusive Attorney General enforcement: no private right of action, and a 60-day cure period before the Texas Attorney General may bring an enforcement action.
Penalties include $10,000-$12,000 per curable violation, $80,000-$200,000 per uncurable violation, and $2,000-$40,000 per day for continuing violations.
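For a rough sense of how these bands compound, the sketch below converts violation counts into a minimum-to-maximum exposure range using the figures above. It is illustrative only: the counts, function name, and min/max framing are assumptions for the example, not categories drawn from the statute.

```python
# Illustrative TRAIGA exposure estimate using the penalty bands quoted above.
# Violation counts are hypothetical inputs, not statutory categories of proof.

def traiga_exposure(curable: int, uncurable: int, continuing_days: int) -> tuple[int, int]:
    """Return (minimum, maximum) dollar exposure across the three penalty bands."""
    low = curable * 10_000 + uncurable * 80_000 + continuing_days * 2_000
    high = curable * 12_000 + uncurable * 200_000 + continuing_days * 40_000
    return low, high

# Example: 2 curable violations, 1 uncurable violation, 30 days of continuing violation.
print(traiga_exposure(curable=2, uncurable=1, continuing_days=30))  # (160000, 1424000)
```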
California: CCPA/CPRA, ADMT regulations, and AI-specific statutes
California has the most layered state AI regulatory framework. Key instruments:
- CCPA/CPRA: California's baseline privacy law, as amended by the CPRA, including protections for biometric information and sensitive personal information.
- Automated Decision-Making Technology (ADMT) Regulations: finalised by the California Privacy Protection Agency in 2025. Risk assessment obligations effective 1 January 2026; full ADMT-specific obligations from 1 January 2027. Businesses using ADMT for significant decisions about California residents must conduct risk assessments, provide pre-use notices, and offer opt-out rights subject to specified exceptions.
- SB 942 (AI Transparency Act): originally effective 1 January 2026, delayed to 2 August 2026 by AB 853 (signed 13 October 2025). Requires AI-generated content disclosure for large covered providers.
- AB 2013 (training data transparency): signed September 2024, in effect since 1 January 2026. Requires generative AI developers to post high-level summaries of their training datasets on their websites.
- SB 53 (California Transparency in Frontier Artificial Intelligence Act): signed September 2025. Requires large frontier developers to publish safety frameworks and report critical safety incidents for frontier models trained above specified compute thresholds.
- AB 2655, AB 2839, AB 2355: election deepfake disclosure laws, some facing First Amendment challenges in federal court.
New York
NYC Local Law 144: in force since July 2023. Requires bias audits of Automated Employment Decision Tools (AEDTs) used to evaluate NYC candidates and employees, with annual independent audits and candidate notification. Penalties: up to $500 for a first violation and each additional violation on the same day, and $500-$1,500 for each subsequent violation.
SB 8420-A: signed 11 December 2025, effective 9 June 2026. Requires conspicuous disclosure when synthetic performers appear in commercial advertising in New York. Penalties: $1,000 first violation, $5,000 subsequent.
RAISE Act: New York's frontier AI safety law (the Responsible AI Safety and Education Act), separate from SB 8420-A, requiring large frontier model developers to publish safety protocols and disclose serious safety incidents.
Illinois BIPA
The Illinois Biometric Information Privacy Act (740 ILCS 14) remains the leading US biometric statute. SB 2979 (effective 2 August 2024) restructured the per-scan damages framework into a single-recovery-per-person-per-violation-type model. The Seventh Circuit ruled in April 2026 (Clay v. Union Pacific) that the amendment applies retroactively to pending cases.
Other state laws
More than 20 states have enacted comprehensive privacy laws with AI-relevant provisions, including Virginia (VCDPA), Connecticut (CTDPA), Utah (UCPA), Oregon (OCPA), and others. More than 30 states have enacted deepfake-specific laws. The Tennessee ELVIS Act (effective 1 July 2024) provides voice replication protection. Utah's AI Policy Act (effective 1 May 2024) was the first state AI consumer protection law.
Federal sectoral rules
The US has no horizontal federal AI law, but sector-specific federal rules apply across AI deployments:
- HIPAA: governs protected health information processed by clinical AI, healthcare predictive analytics, and decision support tools.
- FCRA and ECOA: govern consumer reporting and credit decisions, including AI-assisted underwriting and adverse action notices.
- GLBA: governs financial institutions' handling of nonpublic personal information.
- COPPA: restricts data collection from children under 13.
- Title VII, ADA, ADEA: federal employment discrimination laws apply to AI-assisted hiring, promotion, and termination decisions regardless of EEOC guidance status.
- FTC Section 5: covers unfair or deceptive AI practices, the most active enforcement tool for the FTC against consumer-facing AI services.
- SEC, CFTC, OCC, and FDIC: financial regulators apply existing rules on model risk management, algorithmic trading, and consumer protection to AI in financial services.
- FDA: regulates AI-based software as a medical device (SaMD), with specific guidance on AI/ML in medical devices and clinical decision support.
- NHTSA: oversees automated driving systems.
The TAKE IT DOWN Act (signed 19 May 2025)
The Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks Act is the first federal statute specifically targeting AI-generated harmful content. Signed by President Trump on 19 May 2025, it amends Section 223 of the Communications Act to:
- Criminalise publishing or threatening to publish non-consensual intimate imagery, including AI-generated "digital forgeries," with penalties of up to 2 years' imprisonment where the depicted person is an adult and up to 3 years where the depicted person is a minor. The criminal provisions took effect immediately.
- Require covered platforms to establish notice-and-removal processes by 19 May 2026, with 48-hour removal windows after valid notice and FTC enforcement under the FTC Act.
The first conviction under the Act was announced in April 2026 against an Ohio defendant.
Federal voluntary frameworks
The NIST AI Risk Management Framework (AI RMF) remains the most widely referenced US AI governance standard. NIST has continued issuing supplementary guidance, including the Generative AI Profile (AI 600-1) released in 2024. While the AI RMF itself is voluntary, several state laws (notably Colorado's AI Act) provide affirmative defence or rebuttable presumption protection for organisations implementing it.
The US AI Safety Institute, established at NIST under the Biden administration, was renamed the Center for AI Standards and Innovation (CAISI) in June 2025 and continues operating with priorities adjusted to the new administration's framework. International coordination with the UK AISI, Japan AISI, and Korea AISI continues through the International Network of AI Safety Institutes.
The federal-state tension
The most distinctive feature of US AI regulation in 2026 is the active confrontation between the federal executive's preemption agenda and state legislative authority. The December 2025 EO explicitly targets state laws, including California's SB 53 and Colorado's AI Act. State attorneys general, including those in California, Colorado, New York, and Illinois, have signalled willingness to defend state authority. Litigation challenges to the AI Litigation Task Force's interventions are anticipated as the FTC policy statement (due around March 2026) and the federal preemption legislative recommendation move forward.
The constitutional questions are unresolved. Existing Supreme Court precedent on preemption (Murphy v. NCAA, Hines v. Davidowitz) provides limited support for executive preemption absent congressional authorisation. The forthcoming FTC policy statement and any DOJ task force litigation will produce the first formal tests of the framework. For businesses, this creates substantial regulatory uncertainty: the state laws are operational and must be complied with, but the federal preemption claim could change which laws apply on any given day during 2026.
A practitioner's compliance plan
Step 1: Map AI deployments against state geographic reach
List every AI system, identify which US states the system operates in or where the affected individuals reside, and determine which state AI laws apply. The most consequential mappings: Colorado (high-risk AI), Texas (TRAIGA prohibitions), California (ADMT, AB 2013, SB 942, SB 53), Illinois (BIPA), New York (Local Law 144 and SB 8420-A), and Utah (AI Policy Act). A minimal inventory sketch follows.
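One lightweight way to run this mapping is a simple inventory keyed by deployment. The sketch below is illustrative only: the system names are hypothetical, and the state-to-law table compresses the statutes discussed in this article rather than offering legal analysis.

```python
# Hypothetical Step 1 inventory: map each AI system to the states where it
# operates or where affected individuals reside, then surface the state AI
# laws discussed in this article. Illustrative, not exhaustive.

STATE_AI_LAWS = {
    "CO": ["Colorado AI Act (SB 24-205)"],
    "TX": ["TRAIGA (HB 149)"],
    "CA": ["CCPA/CPRA ADMT regulations", "AB 2013", "SB 942", "SB 53"],
    "IL": ["BIPA (740 ILCS 14)"],
    "NY": ["NYC Local Law 144", "SB 8420-A"],
    "UT": ["Utah AI Policy Act"],
}

# Hypothetical deployments and their state footprints.
deployments = {
    "resume-screener": ["NY", "IL", "CO"],
    "credit-underwriting-model": ["TX", "CA"],
}

for system, states in deployments.items():
    laws = [law for state in states for law in STATE_AI_LAWS.get(state, [])]
    print(f"{system}: review against {', '.join(laws)}")
```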
Step 2: Build the federal compliance baseline
Apply federal sectoral rules wherever they intersect with the deployment: HIPAA for healthcare, FCRA/ECOA for credit, Title VII/ADA for employment, FDA for medical devices, NHTSA for autonomous vehicles, SEC/CFTC/OCC for financial services. Note that EEOC guidance has been rescinded but Title VII and ADA continue to apply by statute.
Step 3: Implement NIST AI RMF or ISO/IEC 42001 alignment
Both frameworks are voluntary but offer affirmative defence or rebuttable presumption protection under several state laws (notably Colorado). Both also align with international frameworks (EU AI Act, Japan AI Guidelines for Business, Singapore Model AI Governance Framework, Korea AI Framework Act), reducing duplicative effort for multinational deployments.
Step 4: Build TAKE IT DOWN Act compliance for platforms
If your service is a "covered platform" under the Act, implement notice-and-removal procedures by 19 May 2026 with 48-hour removal capability. Build clear submission interfaces, validation procedures, and documentation for FTC enforcement defence.
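As an operational sketch only (the record structure and field names below are hypothetical, not the statute's required form), the 48-hour clock can be tracked from receipt of a valid notice:

```python
# Minimal sketch of tracking the TAKE IT DOWN Act's 48-hour removal window.
# The record structure is hypothetical; only the 48-hour figure comes from the Act.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from typing import Optional

REMOVAL_WINDOW = timedelta(hours=48)

@dataclass
class RemovalNotice:
    notice_id: str
    received_at: datetime                  # when a valid notice was received
    removed_at: Optional[datetime] = None  # set once the content is removed

    @property
    def deadline(self) -> datetime:
        return self.received_at + REMOVAL_WINDOW

    def is_overdue(self, now: datetime) -> bool:
        return self.removed_at is None and now > self.deadline

notice = RemovalNotice("N-0001", datetime(2026, 5, 20, 9, 0, tzinfo=timezone.utc))
print(notice.deadline)  # 2026-05-22 09:00:00+00:00
print(notice.is_overdue(datetime(2026, 5, 23, tzinfo=timezone.utc)))  # True
```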
Step 5: Track federal preemption developments
Monitor the FTC policy statement (due approximately March 2026), AI Litigation Task Force actions, and any congressional movement on federal AI legislation. State law compliance remains required until preemption is judicially or legislatively confirmed.
Compliance FAQ
Is there a federal US AI law I should comply with?
No comprehensive federal AI statute exists. Federal AI policy operates through executive orders (which can change with administrations), sectoral federal laws (HIPAA, FCRA, ECOA, etc.), and a single AI-specific federal statute (TAKE IT DOWN Act, 2025) addressing AI-generated NCII. Most binding US AI obligations come from state laws.
Does the December 2025 Executive Order eliminate state AI laws?
No. State AI laws remain in force. The EO directs federal agencies to challenge state laws through litigation, FTC policy statements, and federal funding leverage, and to prepare a federal preemption legislative recommendation. None of these mechanisms automatically invalidates state law. Constitutional questions about executive-branch preemption authority will likely be tested in court during 2026.
How is the Trump approach different from the Biden approach?
The Biden EO 14110 emphasised AI safety, civil rights protections, equity considerations, mandatory red-teaming for high-risk models, and federal interagency coordination on AI risks. The Trump EO 14179 emphasises innovation acceleration, deregulation, removal of "ideological bias," and US AI dominance. The two approaches share virtually no substantive overlap beyond the statutory definition of AI. Most Biden-era agency AI guidance has been pulled or substantively revised.
What is the practical effect of TRAIGA on companies?
TRAIGA prohibits the development or deployment of AI systems for specific harmful purposes (manipulation, discrimination, CSAM, unlawful deepfakes, constitutional rights infringement) under an intent-based liability standard. It does not impose a Colorado-style high-risk AI risk management framework on private deployers generally. Companies should maintain internal AI policies and documentation sufficient to demonstrate the absence of prohibited intent, but the operational burden is lower than under Colorado's framework.
How does the EU AI Act affect US companies?
The EU AI Act applies extraterritorially to US companies whose AI systems are placed on the EU market or whose outputs are used in the EU. Article 5 prohibitions have been in force since 2 February 2025; Article 50 transparency and high-risk obligations apply from 2 August 2026 (subject to potential Digital Omnibus delay). US companies serving EU customers must address both the EU AI Act and applicable US state laws.
What should businesses prioritise in 2026?
For multi-state US operators: a state-by-state compliance map covering Colorado, Texas, California, Illinois, New York, and Utah at a minimum. For employers using AI in hiring: continued Title VII/ADA compliance regardless of EEOC guidance status, plus NYC Local Law 144 if hiring in NYC. For platform operators: TAKE IT DOWN Act notice-and-removal by 19 May 2026. For all operators: NIST AI RMF or ISO/IEC 42001 alignment as a portable compliance baseline.
The bottom line
US AI regulation in 2026 is best understood as three concurrent layers operating in tension: an executive branch pushing federal preemption and innovation acceleration, a Congress that has so far declined to enact comprehensive federal AI law, and an active set of state legislatures continuing to fill the regulatory space. The result for businesses is operational complexity that no other major jurisdiction matches. State law compliance is mandatory and consequential. Federal sectoral rules continue to apply. The TAKE IT DOWN Act introduces the first federal AI-specific statutory obligations. Executive policy can change quickly and is currently optimised for innovation rather than constraint.
The durable compliance posture is to build governance programmes against the most demanding state laws (Colorado, California), align with NIST AI RMF or ISO/IEC 42001 as a portable foundation, and treat the federal preemption fight as background uncertainty rather than as an excuse to defer state compliance. Watch the FTC policy statement, AI Litigation Task Force actions, and any congressional movement, but do not bet the compliance programme on federal preemption succeeding. The state regimes are operational. The constitutional questions will take years to resolve. Businesses that build for the harder regime now will be best positioned regardless of how the federal-state tension is ultimately resolved.
Last updated: April 2026. This article is educational content and is not legal advice. US AI regulation is in active flux, including ongoing federal preemption litigation and state law implementation. Consult qualified counsel before making compliance decisions.