68 results for "ai"

Acceptable Risks in Europe’s Proposed AI Act

This paper critically assesses the proposed #euaiact regarding #riskmanagement and the acceptability of #highrisk #ai systems. The Act aims to promote trustworthy AI through proportionate #regulations, but its acceptability criteria, "as far as possible" (AFAP) and "state of the art," are deemed unworkable and lacking in proportionality and trustworthiness. The Parliament's proposed amendments, which introduce "reasonableness" and cost-benefit analysis, are argued to be more balanced and workable.

Building a Culture of Safety for AI: Perspectives and Challenges

The paper explores the challenges of building a #safetyculture for #ai, including the lack of consensus on #risk prioritization, the absence of standardized #safety practices, and the difficulty of #culturalchange. The authors suggest a comprehensive strategy that includes identifying and addressing #risks, using #redteams, and prioritizing safety over profitability.

How to Evaluate the Risks of Artificial Intelligence: A Proportionality‑Based, Risk Model

"The #eu proposal for the #artificialintelligenceact (#aia) defines four #risk categories: unacceptable, high, limited, and minimal. However, as these categories statically depend on broad fields of application of #ai systems (#ais), the risk magnitude may be wrongly estimated, and the AIA may not be enforced effectively. Our suggestion is to apply the four categories to the risk #scenarios of each AI system, rather than solely to its field of application."