Executive Insight
Global AI regulation is moving at full speed — but not in one direction. While Europe locks in the AI Act timeline, U.S. states are experimenting with divergent rules, and Japan is testing copyright boundaries. Meanwhile, Illinois has drawn a red line against AI “therapy,” showing regulators’ willingness to ban entire classes of applications when human risk is deemed too high. For financial institutions, this signals a world where AI oversight must be both adaptive and anticipatory.
Key Updates & Signals
- EU AI Act proceeds as planned: The European Commission confirmed there will be no pause; general-purpose AI rules start August 2025, with high-risk system requirements following in 2026 (Reuters).
- European CEOs push back: Leaders at Airbus, BNP Paribas, Carrefour, and Philips urged Brussels to “stop the clock,” warning that compliance burdens risk stifling innovation (FT).
- California advances AI audits: New CCPA regulations mandate risk assessments, audits, and opt-out mechanisms for automated decision-making systems (Fisher Phillips).
- Colorado considers delay: Governor Polis convened a special session to push compliance with the state's AI Act back to 2027 amid industry pushback (Axios).
- Japan's copyright test case: Yomiuri Shimbun sues Perplexity AI for using 119,000+ articles without permission (NiemanLab).
- Illinois bans AI therapy: The "Wellness and Oversight for Psychological Resources Act" (WOPR) prohibits AI from providing psychotherapy services, with fines up to $10,000 per violation (Washington Post, Axios).
Deep Dive Spotlight
EU AI Act: No Grace Period
The EU Commission has ruled out any delays: high-risk rules will apply in 2026. The new GPAI Code of Practice, effective from August 2025, provides interim guidance on transparency and accountability. For banks, this means building classification, documentation, and audit-readiness into every AI system today, not tomorrow (King & Spalding).
Secondary Spotlight
Illinois AI Therapy Ban: A Regulatory Line in the Sand
- AI cannot provide therapy, generate treatment plans, or detect emotions.
- AI can be used only for administrative support or transcription, with informed consent.
- Enforced by the IDFPR with penalties up to $10,000 per violation.
This law reflects regulators’ growing willingness to prohibit AI entirely where stakes are too high, setting a precedent for other sectors like finance and healthcare (Washington Post).
RiskAI Corner: Practical Takeaway
Cross-Jurisdiction AI Governance Checklist
- Map AI use cases across the EU, U.S. states, and Japan.
- Build workflows to flag high-risk vs. general-purpose AI.
- Anticipate state-level audits & delays in the U.S.
- Add IP & copyright risk clauses to contracts.
- Train teams in AI literacy, audit readiness, and human-rights impacts.
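The first two checklist items — mapping use cases and flagging risk tiers — can start as something very simple, such as a structured inventory. The sketch below is purely illustrative: the tier names loosely echo the EU AI Act's categories, but all class names, jurisdiction codes, and the flagging rule are hypothetical assumptions; actual classification requires legal review, not a lookup table.

```python
from dataclasses import dataclass

# Hypothetical tiers loosely mirroring EU AI Act categories (assumption,
# not the official taxonomy).
RISK_TIERS = {"prohibited", "high-risk", "general-purpose", "minimal"}

@dataclass
class AIUseCase:
    name: str
    jurisdictions: list[str]  # e.g. ["EU", "US-CO", "JP"] (illustrative codes)
    risk_tier: str

    def __post_init__(self):
        if self.risk_tier not in RISK_TIERS:
            raise ValueError(f"unknown risk tier: {self.risk_tier}")

def flag_for_review(use_cases: list[AIUseCase]) -> list[AIUseCase]:
    """Flag anything high-risk or prohibited, or touching the EU,
    for priority governance review (an example policy, not a standard)."""
    return [
        uc for uc in use_cases
        if uc.risk_tier in {"high-risk", "prohibited"}
        or "EU" in uc.jurisdictions
    ]

inventory = [
    AIUseCase("credit scoring model", ["EU", "US-CO"], "high-risk"),
    AIUseCase("internal meeting transcription", ["US-IL"], "minimal"),
    AIUseCase("customer chatbot", ["EU", "JP"], "general-purpose"),
]

flagged = flag_for_review(inventory)
print([uc.name for uc in flagged])
# → ['credit scoring model', 'customer chatbot']
```

Even a toy registry like this forces teams to record jurisdiction and risk tier per use case, which is the raw material for the audit and delay tracking in the remaining checklist items.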
Industry Voices
- “There will be no pause… the law’s deadlines are legally binding.” — EU Commission (Reuters)
- “When everyone uses AI to cheat, it’s no longer cheating.” — Cluely founder, on AI’s disruptive role (Business Insider)
- “More than 70% of bank AI use cases don’t report ROI.” — Evident Insights (Finadium)
Closing
From Europe’s AI Act to Illinois’s AI therapy ban, regulators are tightening their grip on AI. Financial institutions should expect sector-specific restrictions, more disclosure mandates, and even outright prohibitions in sensitive areas. The winners will be those who embed governance as a competitive advantage.
👉 Talk to an Expert: Discover how RiskAI reduces governance overhead by 50% and keeps your institution audit-ready across global regulations.