8 results for "artificial intelligence"

Regulating Algorithmic Harms

This paper examines the rise of algorithmic harms from AI, such as privacy erosion and inequality, exacerbated by accountability gaps and algorithmic opacity. It critiques existing legal frameworks in the US, EU, and Japan as insufficient, and proposes refined impact assessments, individual rights, and disclosure duties to enhance AI governance and mitigate harms.

The European Union's AI Act: beyond motherhood and apple pie?

“... we argue there are good reasons for skepticism, as many of its key operative provisions delegate critical regulatory tasks to AI providers themselves, without adequate oversight or redress mechanisms. Despite its laudable intentions, the AI Act may deliver far less than it promises.”

Digital Innovation and Banking Regulation

The EU aims to foster digital transformation across sectors by 2030 through legislation on AI, cloud computing, and crypto-assets. However, compared to ESG, banking regulation lacks a clear framework for managing digital risks and for supervisory assessment. This paper discusses digital innovation in banking, proposing a risk-based Pillar 2 prudential framework and harmonized Pillar 3 disclosures to address this gap.

Comments on the Final Trilogue Version of the AI Act

“This paper provides a comprehensive analysis of the recent EU AI Act, the regulatory framework surrounding Artificial Intelligence (AI), focusing on foundation models, open-source exemptions, remote biometric identification (RBI), copyright, high-risk classification, innovation, and the implications for fundamental rights and employment.”

AI Fairness in Practice

This workbook addresses the challenge of defining AI fairness, proposing a context-based and society-centered approach. It emphasizes equality and non-discrimination as core principles and identifies various types of fairness concerns across the AI project lifecycle. It advocates for bias identification, mitigation, and management through self-assessment, risk management, and fairness criteria documentation.

Tort Law as a Tool for Mitigating Catastrophic Risk from Artificial Intelligence

This paper addresses the inadequacy of the current U.S. tort liability system in handling the catastrophic risks posed by advanced AI systems. The author proposes punitive damages to incentivize caution in AI development, even without malice or recklessness. Additional suggestions include recognizing AI as an abnormally dangerous activity and requiring liability insurance for AI systems. The paper concludes by acknowledging the limits of tort liability and exploring complementary policies for mitigating catastrophic AI risk.

Knightian Uncertainty

In 1921, Keynes and Knight each stressed the distinction between risk and uncertainty. Risk involves calculable probabilities; Knightian uncertainty exists when outcomes cannot be assigned probabilities on any scientific basis. This distinction poses challenges for decision-making and regulation, especially in domains such as AI, counseling caution about rules designed to eliminate worst-case scenarios, which may carry high costs and forgo substantial benefits.

The Divergence of Auditors’ Stated Risk Assessments to Clients’ Use of AI

This study explores how AI affects auditors' judgments on complex estimates. It finds that when clients use AI to generate estimates, auditors' planned responses do not match their stated risk assessments. Auditors tend to plan less (more) audit work when a client's AI-generated estimates were more (less) accurate in the past, raising concerns about audit effectiveness due to automation bias.