Bi-Weekly • Banking

C-Suite Briefing: AI Governance & Management

SPECIAL EDITION • 8 September 2025 • Issue 003 • 10-minute read • By RiskAI Team

Bank-relevant AI governance intelligence: what changed, why it matters, and what to do before your next ExCo.

Who Pays When AI Fails? Emerging AI Liability Trends

Make no mistake: AI will be regulated, and AI is moving from a compliance question to a boardroom risk. Regulation will set the rules, but liability will determine whether your safeguards protect you or expose you. For leaders, this is where governance proves its worth. Companies that fail to anticipate liability risks face legal uncertainty, costly disputes, and fractured stakeholder trust.

This piece is the first in a three-part series on AI liability.

  • Today: How liability has evolved and why AI makes it different.
  • November: The unique liability challenges AI creates, and what leaders must do now.
  • December: What Europe's trajectory teaches us about global liability trends.

A further-reading section at the end of the article offers suggestions for readers who want to go deeper.

Executive Summary (The 2-minute version)

  • AI liability is moving center stage. Regulation is only half the story; liability determines whether your governance and safeguards actually protect the organization.
  • Journey from negligence to strict liability. Product liability law has evolved from requiring proof of fault to embracing strict liability, shifting the risk burden from consumers to producers. AI is accelerating this trend.
  • Why AI is different. Three features of AI (autonomy, opacity, and diffuse responsibility) make it hard for traditional liability rules to handle AI harms.
  • Key trends to watch:
    • Shift from fault to strict liability.
    • Ex ante safety merges with ex post liability.
    • Burden of proof rebalanced.
    • Ecosystem accountability.
    • Foreseeable misuse.
    • Continuous update duties.
    • Narrower safe harbors.

Business Impact

AI liability is no longer theoretical; it hits the bottom line. Organizations face:

  • Financial risk from lawsuits, fines, and settlements.
  • Operational strain from stricter update and monitoring duties.
  • Ecosystem exposure via vendor models and third-party data.
  • Insurance gaps as traditional policies fail to cover AI harms.
  • Reputational fallout that erodes trust and market confidence.

AI liability is now a board-level risk.

Firms that act early will turn governance into an advantage; those that wait risk costly disputes and lasting damage.

So what can you do?

Organizations should:

  • Map liability across the AI supply chain.
  • Strengthen governance (logging, monitoring, interpretability).
  • Revisit contracts (indemnities, SLAs, liability caps).
  • Align insurance coverage with emerging AI risks.
  • Track regulatory trends as strict liability expansion is inevitable.

The Background: What is Product Liability and Where Does AI Liability Fit In?

At its core, liability is about accountability and being held responsible when something goes wrong. While the concept has deep legal roots, its business relevance is straightforward: when a product or service causes harm, someone must pay.

Modern product liability law was built to achieve three objectives that still apply today:

  • Protect customers and victims by ensuring fair and consistent compensation, even when they lack information about how a product works.
  • Drive safer innovation by forcing producers to absorb the true costs of harm rather than pushing them onto society.
  • Share risk fairly across the supply chain, so it isn't carried disproportionately by end users.

For businesses, the practical point is that liability shapes incentives, governance, and ultimately the trust customers place in your products and services.

Fault-Based Product Liability

The foundation of product liability lay in fault-based liability. Under this approach, a consumer harmed by a product had to prove four things together in order to hold the producer liable:

  1. Harm occurred.
  2. The producer owed a duty of care.
  3. The producer breached that duty through negligence or fault.
  4. The breach caused the harm.

Hence, harm from a product was not by itself sufficient to establish the producer's liability; the consumer also had to demonstrate that the producer owed a duty of care and was at fault, whether through negligent design, manufacturing errors, or failure to provide adequate warnings or instructions.

Evolution of No-fault/Strict Liability

While this approach seemed theoretically sound, it gave rise to practical challenges, stemming mainly from the fact that consumers had little realistic chance of proving negligence or other fault without access to the producer's internal processes (information asymmetry).

At the same time, since producers profited from placing products on the market, shifting the cost burden onto them was seen as a fairer allocation of risk.

A secondary motivation for the ensuing legal innovation of no-fault liability was to incentivize safer product development: if liability only arose when fault was proven, some risks remained externalized. A stricter liability regime, it was felt, would internalize those risks more effectively and promote safer product development.

These challenges, together with the underlying rationale of fairness and consumer protection, paved the way for the wider adoption of no-fault, or strict, liability regimes. This shift, reinforced by legal milestones such as the 1965 Restatement (Second) of Torts §402A in the United States and the 1985 EU Product Liability Directive, still shapes today's rules, which, by extension, are now applied to AI.
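
To make the shift concrete, here is a minimal, purely illustrative sketch in Python (field and function names are hypothetical, not drawn from any statute) contrasting the elements a claimant must establish under the two regimes:

```python
from dataclasses import dataclass

@dataclass
class Claim:
    """Hypothetical facts a claimant can establish about an incident."""
    harm_occurred: bool        # 1. harm occurred
    duty_of_care: bool         # 2. the producer owed a duty of care
    breach_of_duty: bool       # 3. that duty was breached (negligence/fault)
    breach_caused_harm: bool   # 4. the breach caused the harm
    product_defective: bool    # the product was defective
    defect_caused_harm: bool   # the defect caused the harm

def fault_based_liability(c: Claim) -> bool:
    # All four elements must be proven together.
    return (c.harm_occurred and c.duty_of_care
            and c.breach_of_duty and c.breach_caused_harm)

def strict_liability(c: Claim) -> bool:
    # No proof of fault required: harm, defect, and causation suffice.
    return c.harm_occurred and c.product_defective and c.defect_caused_harm
```

The contrast makes the point: under strict liability, the two elements hardest for a consumer to prove, duty and breach, simply drop out of the test.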

Are these legal instruments sufficient to address AI liability?

While we will cover the unique challenges that AI-driven products pose for liability in the second article, the basic challenges stem from three characteristics of modern AI systems:

  1. The increasing autonomy of AI systems, which heightens the risk of harm arising directly from the product itself rather than through an immediate human action. Traditional liability assumes products are static tools: when a product causes harm, it is because of a design or manufacturing defect, or misuse. AI systems act with a degree of independence, so harm can occur in ways not envisioned by designers, making it harder to fit into existing categories.
  2. The opaqueness of modern AI systems, which makes causal links between product operation and harm more difficult to establish; and
  3. The nature of AI development, which creates a more diffuse structure of responsibility than is typical for non-AI products. Product liability frameworks usually identify a clear producer or manufacturer. In AI, responsibility is spread across data suppliers, developers, integrators, and deployers, making it unclear who bears liability.

Taken together, these three characteristics make it challenging for traditional liability frameworks (fault-based and strict product liability) to adequately address harms arising from AI.

Key Trends Shaping AI Liability

From an organizational perspective, it is essential to understand the key trends shaping AI liability in order to build internal preparedness and align policies and practices in the right direction. We list here seven key trends in product liability which specifically affect harms arising from AI.

1. Shift from fault-based to strict liability.

Jurisdictions have increasingly leaned on strict/no-fault product liability regimes to address technology-related harms, while keeping general liability rules (tort in common law, delict in civil law) primarily fault-based.

This trend is now playing out in AI. Strict product liability is being extended to AI when it is treated as a product, particularly when embedded in physical goods or devices, while fault-based rules continue to govern stand-alone, service-centric, or intangible AI harms. Yet across jurisdictions, we can already see signs of convergence towards broader liability coverage.

2. Ex ante safety and ex post liability convergence.

Regulatory obligations such as safety-by-design, risk assessment, and oversight now sit alongside tort and product liability, effectively tightening lifecycle duties around monitoring, patching, and recall. In practice, this means regulatory compliance increasingly shapes liability outcomes. While this convergence is not unique to AI, the rapid growth of AI-focused legislation that we can expect across jurisdictions over the coming decade makes it an important area for companies to monitor, and provides strong incentives to ensure full compliance with emerging AI laws and regulations.

3. Re-balancing the burden of proof.

Because of AI opacity, there is a trend toward disclosure and logging duties, evidential presumptions, and shifted burdens of proof (even if uneven across countries) to help claimants prove causation or defect. Companies are therefore well advised to ensure that AI interpretability mechanisms are added, monitored, and enforced as part of their AI management and governance requirements.
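
As one illustration of what such mechanisms might capture, the sketch below shows a hypothetical per-decision audit record (the schema is our own illustration, not a regulatory standard). The aim is that, if a decision is later challenged, the inputs, model version, and explanation can be produced as evidence:

```python
import json
from datetime import datetime, timezone
from typing import Optional

def log_ai_decision(model_id: str, model_version: str, inputs: dict,
                    output: str, explanation: str,
                    human_reviewer: Optional[str]) -> str:
    """Assemble one per-decision audit record; all field names are illustrative."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,              # which system produced the decision
        "model_version": model_version,    # ties the decision to a specific release
        "inputs": inputs,                  # the data the model actually saw
        "output": output,                  # the decision or score returned
        "explanation": explanation,        # interpretability artifact, e.g. top features
        "human_reviewer": human_reviewer,  # None if the decision was fully automated
    }
    return json.dumps(record)  # persist to append-only, tamper-evident storage

# Example: a credit decision logged with its explanation at decision time.
entry = log_ai_decision(
    model_id="credit-scoring", model_version="2.4.1",
    inputs={"income": 54000, "debt_ratio": 0.31}, output="declined",
    explanation="debt_ratio above policy threshold was the dominant factor",
    human_reviewer=None,
)
```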

4. Ecosystem accountability replaces single-manufacturer focus.

Courts and contracts increasingly apportion responsibility across developers, data providers, integrators, and deployers, with indemnities and SLAs becoming central risk-allocation tools. A case in point is that nearly all AI-as-a-Service providers license their systems under terms of service containing indemnity clauses and liability caps, reflecting the industry-wide trend of spreading liability contractually across the ecosystem.

5. Foreseeable misuse becomes a design obligation.

An important implication for AI is that producers' design obligations now cover not only foreseeable use but also foreseeable misuse. This is particularly impactful for AI given its dual or multiple uses (i.e., a single AI model can serve many purposes), and it has consequences not only for how terms of use and terms of service are drafted but also, more fundamentally, for the design obligation itself.

6. Continuous update duties.

Software updates and model revisions create ongoing duties; pushing a risky update can be treated like placing a new product on the market. AI systems need frequent updates, not just to improve performance, but also to stay relevant as data and conditions change. This constant iteration makes liability risks even more challenging than with traditional software.
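
One way to operationalize this duty, sketched below under the assumption that every release carries its own risk evidence (all names are hypothetical), is to gate each model version on release-specific checks before deployment:

```python
from dataclasses import dataclass

@dataclass
class ModelRelease:
    """Hypothetical release record: each version is treated like a new product."""
    version: str
    risk_assessment_done: bool     # a fresh assessment for *this* version
    regression_tests_passed: bool  # the update must not reintroduce known harms
    rollback_plan_ready: bool      # forced rollbacks are a realistic scenario

def may_deploy(release: ModelRelease) -> bool:
    # Shipping without release-specific evidence is what turns a routine
    # update into "placing a new product on the market" without due care.
    return (release.risk_assessment_done
            and release.regression_tests_passed
            and release.rollback_plan_ready)
```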

7. Narrowing of "development risk/state-of-the-art" safe harbors in high-risk AI.

Legal instruments such as the 1985 EU Product Liability Directive included provisions exempting producers from liability where they could show that the state of technology and know-how made the risk resulting in harm unknowable at the time the product was placed on the market. The trend has been a gradual tightening of this defense in cases deemed high-risk, where producers carry a heightened duty of care. Coupled with the dual/multiple uses of AI, this presents organizations with real challenges.

Business Impact

For organizations, the emerging AI liability landscape is not an abstract legal debate; it translates directly into financial, operational, and reputational consequences. Banks, insurers, asset managers, and technology-driven enterprises face growing exposure across multiple fronts:

Financial Exposure: Litigation costs, regulatory penalties, and settlement payouts can erode profitability. Even a single liability case linked to an AI-driven decision (e.g., credit scoring, claims handling, trading algorithms) could carry multimillion-dollar consequences.

Operational Disruption: Continuous update duties and compliance-driven monitoring will require new governance processes. Failure to manage these obligations may trigger product recalls, system suspensions, or forced rollbacks of critical AI models.

Ecosystem Risk: Liability no longer rests with a single producer. Firms are increasingly accountable for vendor models, third-party data, and integrated AI services. This amplifies supply chain risk and makes contract management a strategic priority.

Insurance Gaps: Traditional professional indemnity or cyber liability policies may not fully cover AI-related harms. Without tailored coverage, companies risk facing uncovered exposures in high-stakes claims.

Reputational Fallout: Beyond monetary penalties, publicized AI failures erode consumer and stakeholder trust. For financial institutions in particular, where trust is the foundation of market confidence, liability-driven reputational damage can be systemic.

In short, AI liability is emerging as a board-level risk category. Organizations that delay adaptation may find themselves at a competitive disadvantage, burdened by disputes, regulatory scrutiny, and reputational harm, while better-prepared peers convert governance into a differentiator.

So what can you do?

While the next article (November) will dive deeper, here is a sneak peek at where organizations must act now:

  • Map liability across the AI supply chain and understand exactly where risks sit among data providers, developers, integrators, and deployers (a simple register is sketched after this list).
  • Strengthen governance: enforce logging, monitoring, and interpretability to withstand scrutiny and shift the burden of proof in your favor.
  • Revisit contracts, update indemnities, SLAs, and liability caps to reflect AI's unique risk profile.
  • Align insurance coverage and ensure policies explicitly cover AI-related harms, not just generic cyber or professional liability.
  • Track regulatory trends as strict liability regimes are expanding; anticipating changes will be cheaper than reacting under pressure.
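
For the first of these actions, a machine-readable register, sketched below with hypothetical fields, can make per-component ownership, contractual allocation, and coverage gaps explicit:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AIComponent:
    """One entry in a hypothetical AI supply-chain liability register."""
    name: str
    role: str                           # e.g. "data provider", "developer", "deployer"
    accountable_party: str
    indemnity_in_place: bool            # contractual indemnity agreed?
    liability_cap_eur: Optional[float]  # None means uncapped exposure
    ai_cover_insured: bool              # covered by an AI-specific policy?

register = [
    AIComponent("third-party-llm", "developer", "VendorCo",
                indemnity_in_place=True, liability_cap_eur=1_000_000.0,
                ai_cover_insured=True),
    AIComponent("bureau-data-feed", "data provider", "DataCo",
                indemnity_in_place=False, liability_cap_eur=None,
                ai_cover_insured=False),
]

# Surface the gaps: components with no indemnity or no insurance cover.
gaps = [c.name for c in register
        if not (c.indemnity_in_place and c.ai_cover_insured)]
print(gaps)  # -> ['bureau-data-feed']
```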

How RiskAI Helps

RiskAI turns AI liability into a managed risk, mapping exposures across your supply chain, strengthening governance, and ensuring your safeguards actually protect the business.

  • For Banking: Banking AI Governance & Liability Readiness: audit-ready controls, lifecycle evidence, and underwriting artifacts
  • For Insurance: Insurance AI Governance & Liability Readiness: EIOPA-aligned controls and underwriting readiness pack
  • Book a conversation: AI Liability Checklist & governance review (free)

Further Reading

Foundational U.S. Sources

  • Greenman v. Yuba Power Products, Inc., 59 Cal. 2d 57 (1963): Landmark California Supreme Court case adopting strict liability for defective products.
  • Restatement (Second) of Torts §402A (1965): Introduced strict liability for unreasonably dangerous defective products into U.S. law.
  • Restatement (Third) of Torts: Products Liability (1998): Modernized U.S. product liability, distinguishing manufacturing, design, and warning defects.

European Sources

  • Council Directive 85/374/EEC of 25 July 1985: the 1985 EU Product Liability Directive.
  • Directive (EU) 2024/2853 of 23 October 2024: the revised Product Liability Directive (PLDr).
  • Artificial Intelligence and Civil Liability: A European Perspective. Policy Department for Justice, Civil Liberties and Institutional Affairs, Directorate-General for Citizens' Rights, Justice and Institutional Affairs, PE 776.426, July 2025.

Comparative / Academic References

  • W. Page Keeton et al., Prosser and Keeton on Torts (5th ed. 1984): classic U.S. tort treatise with an influential discussion of product liability.
  • Micklitz, H.-W. (1986). Perspectives on a European Directive on the Safety of Technical Consumer Goods.

Ready to Address AI Liability?

Don't wait for liability to become a crisis. Get your free AI Liability Checklist and start building governance that actually protects your organization.

Subscribe to the Newsletter • Get the Free AI Liability Checklist
