SPECIAL EDITION • 8 September 2025 • Issue 003 • 10-minute read • By RiskAI Team
Bank-relevant AI governance intelligence: what changed, why it matters, and what to do before your next ExCo.
Make no mistake, AI will be regulated. AI is moving from a compliance question to a boardroom risk. Regulation will set the rules, but liability will determine whether your safeguards protect you or expose you. For leaders, this is where governance proves its worth. Companies that fail to anticipate liability risks face legal uncertainty, costly disputes, and fractured stakeholder trust.
This piece is the first in a three-part series on AI liability.
A further reading section at the end of the article offers suggestions for readers who want to go deeper.
AI liability is no longer theoretical; it hits the bottom line through litigation costs, regulatory penalties, operational disruption, and reputational damage.
AI liability is now a board-level risk.
Firms that act early will turn governance into an advantage; those that wait risk costly disputes and lasting damage.
Organizations should map their AI liability exposures, strengthen governance, and make sure their safeguards actually protect the business.
At its core, liability is about accountability: being held responsible when something goes wrong. While the concept has deep legal roots, its business relevance is straightforward: when a product or service causes harm, someone must pay.
Modern product liability law was built to achieve three objectives that still apply today: compensating those harmed, allocating risk fairly between producers and consumers, and incentivizing safer products.
For business leaders, the practical point is that liability shapes incentives, governance, and ultimately the trust customers place in your products and services.
The foundation of product liability lay in fault-based liability. Under this approach, a consumer harmed by a product had to prove four things together in order to hold the producer liable: that the producer owed the consumer a duty of care, that the producer breached that duty through fault, that the consumer suffered harm, and that the fault caused the harm.
Hence, harm from a product was not in and of itself sufficient to establish the producer's liability; the consumer also had to demonstrate that the producer owed a duty of care and was at fault, whether through negligent design, manufacturing errors, or failure to provide adequate warnings or instructions.
While this proposition seemed theoretically sound, it gave rise to practical challenges, stemming mainly from the fact that consumers had little realistic chance of proving negligence or other fault without access to the producer's internal processes (information asymmetry).
At the same time, since producers profited from placing products on the market, shifting the cost burden onto them was seen as a fairer allocation of risk.
A secondary motivation for the ensuing legal innovation of no-fault liability was to incentivize safer product development: if liability arose only when fault was proven, some risks remained externalized. A stricter liability regime, it was felt, would internalize those risks more effectively.
These challenges, together with the underlying rationale of fairness and consumer protection, paved the way for the wider adoption of no-fault, or strict liability, regimes. This shift, reinforced by legal milestones such as the 1965 Restatement (Second) of Torts §402A in the United States and the 1985 Product Liability Directive in the EU, still shapes today's rules, which, by extension, are now applied to AI.
While we will cover the unique challenges that AI-driven products pose for liability in the second article, the basic challenges stem from three characteristics of modern AI systems: their opacity, their continuous evolution through updates and retraining, and the many actors involved in building, supplying, and deploying them.
Taken together, these characteristics make it difficult for traditional liability frameworks (fault-based and strict product liability) to adequately address harms arising from AI.
From an organizational perspective, it is essential to understand the key trends shaping AI liability in order to build internal preparedness and align policies and practices in the right direction. We list here the key trends in product liability that specifically affect harms arising from AI.
Jurisdictions have increasingly leaned on strict/no-fault product liability regimes to address technology-related harms, while keeping general liability rules (tort in common law, delict in civil law) primarily fault-based.
This trend is now playing out in AI. Strict product liability is being extended to AI when it is treated as a product, particularly when embedded in physical goods or devices, while fault-based rules continue to govern stand-alone, service-centric, or intangible AI harms. Yet across jurisdictions, we can already see signs of convergence towards broader liability coverage.
Regulatory obligations such as safety-by-design, risk assessment, and oversight now sit alongside tort and product liability, effectively tightening lifecycle duties around monitoring, patching, and recall. In practice, this means regulatory compliance increasingly shapes liability outcomes. While this convergence is not unique to AI, the rapid growth of AI-focused legislation that we can expect across jurisdictions over the coming decade makes it an important area for companies to monitor and provides strong incentives to ensure full compliance with emerging AI laws and regulations.
Because of AI opacity, there is a trend toward disclosure and logging duties, evidential presumptions, and shifted burdens of proof (even if uneven across countries) to help claimants prove causation or defect. This means companies are well advised to ensure AI interpretability and logging mechanisms are added, monitored, and enforced as part of their AI management and governance requirements, as sketched below.
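To make this concrete, here is a minimal sketch in Python of what a decision-level audit log might look like; the function name, fields, and file path are hypothetical illustrations, not a prescribed control or a reference to any specific regulation. The idea is simply that each AI-driven decision is recorded at decision time with its inputs, output, model version, and explanation, so disclosure or discovery requests can be answered later.

```python
import json
import hashlib
from datetime import datetime, timezone

# Hypothetical sketch: one audit record per AI-driven decision, so that
# disclosure/logging duties can be met if a claimant or regulator asks
# how a specific outcome was produced.
def log_ai_decision(model_name: str, model_version: str,
                    inputs: dict, output, explanation: str,
                    log_path: str = "ai_decision_log.jsonl") -> dict:
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_name": model_name,
        "model_version": model_version,
        # Hash the raw inputs so the record is tamper-evident without
        # storing sensitive customer data in the log itself.
        "input_hash": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()
        ).hexdigest(),
        "output": output,
        "explanation": explanation,
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record

# Example: record a hypothetical credit-scoring decision.
log_ai_decision(
    model_name="credit_scoring",
    model_version="2.3.1",
    inputs={"applicant_id": "A-1024", "income": 54000, "debt_ratio": 0.31},
    output="declined",
    explanation="debt_ratio above policy threshold of 0.30",
)
```

The specific fields matter less than the principle: the record is created at decision time, tied to a model version, and retained under the firm's normal records-management controls.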
Courts and contracts increasingly apportion responsibility across developers, data providers, integrators, and deployers, with indemnities and SLAs becoming central risk-allocation tools. A case in point is that nearly all AI-as-a-Service providers license their systems under terms of service containing indemnity clauses and liability caps, reflecting the industry-wide trend of spreading liability contractually across the ecosystem.
An important implication for AI is that producers' design obligations now cover not only foreseeable use but also foreseeable misuse. This is particularly impactful for AI given its dual or multiple uses (i.e., a single AI model can serve many purposes), with consequences not only for how Terms of Use and Service clauses are drafted but, more fundamentally, for how systems are designed.
Software updates and model revisions create ongoing duties; pushing a risky update can be treated like placing a new product on the market. AI systems need frequent updates, not just to improve performance but also to stay relevant as data and conditions change. This constant iteration makes liability exposure even harder to manage than with traditional software.
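As an illustration only, with hypothetical class and field names rather than any mandated process, a simple release-gating check can treat each model revision as a new release that must show evidence of its lifecycle checks before deployment, and keep a release trail so earlier versions can be identified and rolled back.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical sketch: treat each model revision as a new "release" that
# must clear documented checks before it reaches production.
@dataclass
class ModelRelease:
    version: str
    release_date: date
    risk_assessment_done: bool
    regression_tests_passed: bool
    rollback_version: str | None = None

def approve_release(release: ModelRelease,
                    history: list[ModelRelease]) -> bool:
    """Gate deployment: approve only if lifecycle duties are evidenced."""
    if not (release.risk_assessment_done and release.regression_tests_passed):
        return False
    history.append(release)  # retain an auditable release trail
    return True

history: list[ModelRelease] = []
candidate = ModelRelease(
    version="4.1.0",
    release_date=date(2025, 9, 1),
    risk_assessment_done=True,
    regression_tests_passed=True,
    rollback_version="4.0.2",
)
print(approve_release(candidate, history))  # True only when checks are evidenced
```

The design choice here is that the gate is evidence-based and the history is append-only, mirroring the idea that every update is, in effect, a new product placed on the market.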
Legal frameworks, including the 1985 EU Product Liability Directive, made provisions exempting producers from liability if they could show that the state of technology and know-how made the risks resulting in harm unknowable at the time the product was placed on the market (the so-called development risk defence). The trend has been a gradual tightening of this defence in cases deemed high-risk, where producers carry a heightened duty of care. Coupled with the dual or multiple uses of AI, this presents organizations with challenges.
For organizations, the emerging AI liability landscape is not an abstract legal debate; it translates directly into financial, operational, and reputational consequences. Banks, insurers, asset managers, and technology-driven enterprises face growing exposure across multiple fronts:
Financial Exposure: Litigation costs, regulatory penalties, and settlement payouts can erode profitability. Even a single liability case linked to an AI-driven decision (e.g., credit scoring, claims handling, trading algorithms) could carry multimillion-dollar consequences.
Operational Disruption: Continuous update duties and compliance-driven monitoring will require new governance processes. Failure to manage these obligations may trigger product recalls, system suspensions, or forced rollbacks of critical AI models.
Ecosystem Risk: Liability no longer rests with a single producer. Firms are increasingly accountable for vendor models, third-party data, and integrated AI services. This amplifies supply chain risk and makes contract management a strategic priority.
Insurance Gaps: Traditional professional indemnity or cyber liability policies may not fully cover AI-related harms. Without tailored coverage, companies risk facing uncovered exposures in high-stakes claims.
Reputational Fallout: Beyond monetary penalties, publicized AI failures erode consumer and stakeholder trust. For financial institutions in particular, where trust is the foundation of market confidence, liability-driven reputational damage can be systemic.
In short, AI liability is emerging as a board-level risk category. Organizations that delay adaptation may find themselves at a competitive disadvantage, burdened by disputes, regulatory scrutiny, and reputational harm, while better-prepared peers convert governance into a differentiator.
While the next article (November) will dive deeper, organizations must already start acting on the exposures outlined above: mapping liability across their AI supply chain and vendor contracts, strengthening governance and monitoring of models in production, reviewing insurance coverage, and ensuring safeguards actually protect the business.
Worried about AI liability? 🚨 Book a slot with me today, or DM me for your free AI Liability Checklist.
RiskAI turns AI liability into a managed risk, mapping exposures across your supply chain, strengthening governance, and ensuring your safeguards actually protect the business.
Don't wait for liability to become a crisis. Get your free AI Liability Checklist and start building governance that actually protects your organization.