Wednesday, November 12, 2025
spot_img
HomeAI - Artificial IntelligenceAI and U.S. Government Regulations: What to Expect in 2026

AI and U.S. Government Regulations: What to Expect in 2026

Federal guidance hardens, state rules arrive, and enforcement shifts from principles to documentation and audits.

The Federal Reset: What Changed in 2025
The biggest headline: in January 2025, President Trump revoked President Biden’s 2023 AI order (EO 14110), undoing requirements for pre-release safety testing and reporting. Days later, the White House issued a new order emphasizing AI “free from ideological bias.” Translation: tone changed toward innovation.

What Still Stands: OMB, NIST, and the Safety Institute
Even with the policy swing, the operating rails stayed. OMB Memorandum M-24-10 (Mar. 2024) set governance for federal AI use—Chief AI Officers, inventories, risk impact assessments, and minimum practice baselines. NIST’s AI Risk Management Framework (plus the Generative AI Profile) gives agencies and vendors a common language for risk identification and controls. NIST also stood up the U.S. AI Safety Institute and its 280-member consortium to push testing methods and shared benchmarks.

Financial and Consumer Regulators: Targeted Rules and Cases
Sector watchdogs are moving with existing authority. The SEC’s 2023 proposal on predictive data analytics (PDA) would require firms to address conflicts when algorithms steer investors—an AI-adjacent rule with wide implications for wealth apps. The FTC is policing “AI washing” and misuse via enforcement sweeps like Operation AI Comply. Expect targeted actions over broad federal AI statutes.

States Step In: Colorado and California
With Congress divided, states are setting rules. Colorado passed the first broad AI accountability statute (SB 24-205), requiring developers and deployers of high-risk systems to use “reasonable care,” run impact assessments, notify consumers of consequential decisions, and report discovered discrimination. Lawmakers later delayed the effective date to June 30, 2026. California’s privacy regulator finalized rules on automated decision-making technologies (ADMT), with obligations starting Jan. 1, 2026, and phased audits and risk assessments to follow.

Sector Guidance You’ll See in Contracts
Regulators are also issuing guidance that, while not statute, shows up in exams and RFPs. New York’s Department of Financial Services published AI-specific cybersecurity guidance for banks and insurers—leadership oversight, third-party vetting, MFA, annual AI training, and deepfake-aware incident plans. Expect more of this “soft law.”

International Coordination and Safety Testing
NIST’s U.S. AI Safety Institute also coordinates internationally through a new network of AI safety institutes. The aim is to align testing methods and avoid duplicative compliance across borders. For national security, NIST convened the TRAINS taskforce to address emerging risks—expect formalized evaluation suites.

What It Means for Builders and Buyers in 2026
• Documentation will be audited. System cards, model provenance, data lineage, and evaluation reports tied to NIST RMF move from “nice to have” to entry ticket.
• Impact assessments become routine. If your system influences lending, hiring, housing, healthcare, insurance, education, or critical services, plan pre-deployment and annual reviews plus notices and appeal pathways (Colorado—and likely copycats).
• Human oversight returns. Expect a right to review for adverse consequential decisions and channels to correct data.
• Vendor diligence expands. RFPs will probe bias testing, training-data IP, model security, red-teaming, and rate-limits for agents using “computer use.”

The Congressional Picture
A comprehensive federal AI law remains unlikely. Expect oversight hearings and targeted bills (critical infrastructure, elections, deepfakes) rather than a single omnibus. In 2026, NIST/OMB guidance and state laws define the runway.

Closing Thoughts
The U.S. is regulating by layers. White House rhetoric may swing, but OMB governance, NIST evaluations, SEC conflict rules, FTC cases, and state statutes set the baseline. Translate that into product: document what the model is, measure how it behaves, explain it to non-experts, and give people notice and recourse when the decision matters.

References

Authors
Serge Boudreaux – AI Hardware Technologies
Montreal, Quebec

Peter Jonathan Wilcheck – Co-Editor
Miami, Florida

Post Disclaimer

The information provided in our posts or blogs are for educational and informative purposes only. We do not guarantee the accuracy, completeness or suitability of the information. We do not provide financial or investment advice. Readers should always seek professional advice before making any financial or investment decisions based on the information provided in our content. We will not be held responsible for any losses, damages or consequences that may arise from relying on the information provided in our content.

RELATED ARTICLES
- Advertisment -spot_img

Most Popular

Recent Comments