66 results for "ai"

Taking AI Risks Seriously: A Proposal for the AI Act

"... we propose applying the #risk categories to specific #ai #scenarios, rather than solely to fields of application, using a #riskassessment #model that integrates the #aia [#eu #aiact] with the risk approach arising from the Intergovernmental Panel on Climate Change (#ipcc) and related literature. This model enables the estimation of the magnitude of AI risk by considering the interaction between (a) risk determinants, (b) individual drivers of determinants, and (c) multiple risk types. We use large language models (#llms) as an example."

Regulating AI at work: labour relations, automation, and algorithmic management

These papers examine the role of #collectivebargaining and #governmentpolicy in shaping strategies to deploy new #digital and #ai-based technologies at work. The authors argue that efforts to better #regulate the use of AI and #algorithms at work are likely to be most effective when underpinned by social dialogue and collective #labourrights. The articles suggest specific lessons for #unions and policymakers seeking to develop broader strategies to engage with AI and #digitalisation at work.

Rationalizing AI Governance: A Cross‑Disciplinary Perspective

The study emphasizes the need for a better understanding of #ai to avoid policies that may hinder its benefits. It argues for a cross-disciplinary approach to AI #governance and for clarifying its core concepts in order to build trust. The paper addresses two key questions: 1) What is the best way to introduce AI safely so as to maximize well-being and #sustainability in light of its potential #risks? and 2) What specific policy steps should be taken to achieve this?

Regulation Priorities for Artificial Intelligence Foundation Models

This article discusses the need for high-level frameworks to guide the #regulation of #artificialintelligence (#ai) technologies. It adapts the Innovation Trilemma framework from #fintech regulation to argue that regulators can prioritize only two of three aims when considering AI oversight: promoting #innovation, mitigating #systemicrisk, and providing clear #regulatoryrequirements.

Reasonable AI and Other Creatures: What Role for AI Standards in Liability Litigation?

This paper discusses the relationship between standards and private law in the context of #liability #litigation and #tortlaw for damage caused by #ai systems. It highlights the importance of #standards in supporting #eu policies and legislation, particularly in the #regulation of #artificialintelligence, and assesses the role of AI standards in private law, arguing that they help define the duty of care expected from developers and professional operators of AI systems.

Getting AI Innovation Culture Right

This paper discusses the role of public policy in #regulating the development of #ai, #ml, and #robotics, and the potential #risks of different approaches to #governance. It explores the tension between precautionary principles that prioritize risk avoidance and permissionless innovation that encourages entrepreneurship, and advocates for a more flexible, #bottomup governance approach that can address risks without hindering innovation.

Suggestions for a Revision of the European Smart Robot Liability Regime

This article discusses the need for #regulation of #robots and #ai in #europe, focusing on the issue of #civil #liability. Despite multiple attempts to harmonize #eu #tort #law, only the liability of producers for defective products has been successfully harmonized so far. The #aiact, proposed by the #europeancommission in 2021, aims to #regulate AI at the European level by classifying #smartrobots as "high-risk systems", but it does not address liability rules. This article explores liability issues related to AI and robots, particularly those arising when #deeplearning and #machinelearning techniques challenge the traditional liability paradigm.

The (Un)Limited Use of AI Segmentation in the Insurance Sector

This study examines the use of #artificialintelligence (#ai) and #bigdata analytics by #insurers in #belgium for segmentation purposes, i.e. to determine the #claims #probability of prospective policyholders. The implementation of AI and big data analytics can benefit insurers by increasing the accuracy of #riskassessment. However, pervasive segmentation can have negative implications and potentially harm policyholders if their risk is calculated incorrectly. Existing restrictions in #insurance #regulations fall short of protecting policyholders from inaccuracies in risk assessments, which can result in incorrect #premiums or conditions.
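
As a purely illustrative aside, the Python sketch below shows how an error in a segmentation model's predicted claims probability propagates directly into the quoted premium. All figures, the loading factor, and the function names are invented for demonstration and are not drawn from the study.

```python
# Illustrative sketch only: a toy risk-based premium calculation. The numbers
# and the loading factor are assumptions made for this example.

def pure_premium(claim_probability: float, expected_claim_cost: float) -> float:
    """Expected loss for the policyholder: probability x average claim cost."""
    return claim_probability * expected_claim_cost

def quoted_premium(claim_probability: float, expected_claim_cost: float,
                   loading: float = 1.25) -> float:
    """Pure premium plus a loading for expenses and margin."""
    return pure_premium(claim_probability, expected_claim_cost) * loading

true_probability = 0.03    # the policyholder's actual annual claim probability
model_probability = 0.05   # what a miscalibrated segmentation model predicts
average_claim = 4_000      # hypothetical average claim cost in EUR

fair = quoted_premium(true_probability, average_claim)
charged = quoted_premium(model_probability, average_claim)
print(f"fair premium:    {fair:.2f} EUR")
print(f"charged premium: {charged:.2f} EUR (+{charged - fair:.2f} due to model error)")
```

Even this toy setup makes the study's concern visible: any overestimate of the claims probability is passed through, multiplied by the loading, into the premium the policyholder is charged.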