This research shows that AI biases often stem from organizational pressures such as cost, risk, competition, and compliance, which shape development before technical factors come into play. These biases reflect broader societal and commercial contexts, with ethical considerations often sidelined. The recommendations focus on assessing the technology's impact and the organizational influences that give rise to AI biases.
This paper examines the rise of algorithmic harms from AI, such as privacy erosion and inequality, which are exacerbated by accountability gaps and algorithmic opacity. It critiques existing legal frameworks in the US, EU, and Japan as insufficient, and proposes refined impact assessments, individual rights, and disclosure duties to enhance AI governance and mitigate harms.
“... we analyse the regulatory necessity in introducing a coercive regulatory framework, and second, present the regulatory concept of the AI Act with its fundamental decisions, core provisions and risk typology. Lastly, a critical analysis points to shortcomings, tensions and watered down assessments of the Act.”
“This paper looks at global and regional efforts to come up with strategies and regulatory frameworks for AI governance. Chief amongst them include the OECD AI Principles; the EU AI Act; and the NIST AI RMF. The common thread among these frameworks or legislations is identifying and categorizing AI developments and deployments according to their risk levels and providing guidelines for ethical and trustworthy AI with considerations for human safety and innovation.”
“This paper provides a comprehensive analysis of the recent EU AI Act, the regulatory framework surrounding Artificial Intelligence (AI), focusing on foundation models, open-source exemptions, remote biometric identification (RBI), copyright, high-risk classification, innovation, and the implications for fundamental rights and employment.”
This paper outlines the need for AI risk regulation due to documented harms caused by AI systems. It cites examples of proposed and enacted laws aimed at mitigating these risks but highlights challenges in quantifying harms. It criticizes a bias towards technocorrectionism and advocates for a broader regulatory approach to address AI's impacts effectively.
This paper addresses the inadequacy of the current U.S. tort liability system in handling the catastrophic risks posed by advanced AI systems. The author proposes punitive damages to incentivize caution in AI development, even without malice or recklessness. Additional suggestions include recognizing AI as an abnormally dangerous activity and requiring liability insurance for AI systems. The paper concludes by acknowledging the limits of tort liability and exploring complementary policies for mitigating catastrophic AI risk.
The paper explores how advanced technologies such as AI bring both potential benefits and new complexity to risk and safety applications. It examines explainability and interpretability within risk science, emphasizing their role in improving the assessment, management, and communication of risks, illustrated with examples from autonomous vehicles. It is aimed at stakeholders navigating technology's impact on risk.