"The EU proposal for the Artificial Intelligence Act (AIA) defines four risk categories: unacceptable, high, limited, and minimal. However, because these categories depend statically on the broad field of application of an AI system (AIS), the risk magnitude may be wrongly estimated, and the AIA may not be enforced effectively. We suggest applying the four categories to the risk scenarios of each AI system, rather than solely to its field of application."
"... we propose applying the risk categories to specific AI scenarios, rather than solely to fields of application, using a risk-assessment model that integrates the AIA (the EU AI Act) with the risk approach developed by the Intergovernmental Panel on Climate Change (IPCC) and related literature. This model estimates the magnitude of AI risk by considering the interaction between (a) risk determinants, (b) individual drivers of those determinants, and (c) multiple risk types. We use large language models (LLMs) as a running example."
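The abstract only describes the model at a high level, so the following Python sketch is an illustrative assumption, not the authors' actual formula: determinant scores are averaged from hypothetical driver values, combined multiplicatively, scaled by the number of interacting risk types, and then mapped onto the four AIA categories via made-up thresholds.

```python
# Hypothetical sketch of a determinants/drivers/risk-types scoring scheme.
# All names, scores, and thresholds below are illustrative assumptions;
# the cited paper does not publish this concrete formula.

DETERMINANT_DRIVERS = {
    # determinant -> individual drivers with example scores in [0, 1]
    "hazard": {"model_capability": 0.8, "deployment_scale": 0.6},
    "exposure": {"user_base": 0.7},
    "vulnerability": {"affected_group_sensitivity": 0.9},
}

# Example risk types for an LLM scenario
RISK_TYPES = ["misinformation", "discrimination"]

def determinant_score(drivers: dict) -> float:
    """Average the driver scores of one determinant."""
    return sum(drivers.values()) / len(drivers)

def risk_magnitude(determinants: dict, n_risk_types: int) -> float:
    """Combine determinant scores multiplicatively (an assumption),
    scaled by the number of distinct interacting risk types."""
    score = 1.0
    for drivers in determinants.values():
        score *= determinant_score(drivers)
    return score * n_risk_types

def aia_category(magnitude: float) -> str:
    """Map a magnitude onto the four AIA categories via example thresholds."""
    if magnitude >= 1.5:
        return "unacceptable"
    if magnitude >= 1.0:
        return "high"
    if magnitude >= 0.5:
        return "limited"
    return "minimal"

m = risk_magnitude(DETERMINANT_DRIVERS, len(RISK_TYPES))
print(round(m, 3), aia_category(m))  # -> 0.882 limited
```

The point of the sketch is the scenario-level granularity: the same LLM could land in different categories depending on the driver scores of a given deployment scenario, rather than receiving one static category from its field of application.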
This report presents the findings and recommendations of Open Loop's policy prototyping program on the EU Artificial Intelligence Act (AIA), in which 53 AI companies used an online platform to provide feedback on selected articles of the AIA. While the majority of participants found the provisions clear and feasible, they identified areas for improvement to ensure the AIA's effectiveness. The report offers the legislator nine recommendations, including revising the taxonomy of AI actors, providing guidance on risk assessment, giving concrete guidance on technical documentation and data-quality requirements, ensuring qualified staff for human oversight of AI, and maximizing the potential of regulatory sandboxes.
"The unacceptable risks are those deemed to contravene Union values, and they are therefore treated as “prohibited AI practices” under Article 5 AIA. The proposed prohibition covers four categories: 1) AI systems deploying subliminal techniques, 2) AI practices exploiting vulnerabilities, 3) social scoring systems, and 4) “real-time” remote biometric identification systems."
"... we contribute both empirically and conceptually to a better understanding of the nexus between AI and regulation and the underlying normative decisions. Comparing the scientific proposals with the proposed European AI regulation illustrates the regulation's specific approach, along with its strengths and weaknesses."