This paper critically assesses the proposed #euaiact regarding #riskmanagement and the acceptability of #highrisk #ai systems. The Act aims to promote trustworthy AI through proportionate #regulations, but its criteria, "as far as possible" (AFAP) and "state of the art," are deemed unworkable and lacking in proportionality and trustworthiness. The Parliament's proposed amendments, introducing "reasonableness" and cost-benefit analysis, are argued to be more balanced and workable.
Private-sector #ai applications can lead to unfair outcomes and a loss of informational #privacy, such as increased #insurancepremiums. Addressing this involves exploring the philosophical theory of fairness as equality of opportunity.
The paper promotes better #riskmanagement and the fair allocation of #liability in #ai-related accidents.
This study explores the use of #ai #foundationmodels, specifically #chatgpt, in #auditing #esgreporting for #compliance with the #eu #taxonomy.
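A minimal sketch of what such an LLM-assisted taxonomy check could look like, using the OpenAI Python SDK. The model choice, prompt, criterion text, and disclosure excerpt are illustrative assumptions, not the study's actual pipeline:

```python
# Hypothetical sketch: asking an LLM whether a disclosure excerpt matches an
# EU Taxonomy criterion. Prompt, model, and texts are assumptions for
# illustration only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

criterion = (
    "Substantial contribution to climate change mitigation: the activity "
    "generates electricity from solar photovoltaic technology."
)
excerpt = (
    "In FY2023 we commissioned a 40 MW solar PV plant supplying our "
    "manufacturing sites, covering 35% of their electricity demand."
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model; the study refers to #chatgpt
    messages=[
        {
            "role": "system",
            "content": (
                "You are an ESG auditor. Answer ALIGNED, NOT ALIGNED, or "
                "INSUFFICIENT EVIDENCE, then give a one-sentence reason."
            ),
        },
        {
            "role": "user",
            "content": f"Taxonomy criterion:\n{criterion}\n\nDisclosure:\n{excerpt}",
        },
    ],
)
print(response.choices[0].message.content)
```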
This paper discusses the potential of #ai #largelanguagemodels (#llms) in the #legal and #regulatorycompliance domain.
"This concise philosophical essay explores the hypothetical scenario of an #artificialintelligence (#ai) gaining agency capabilities and the relative impossibility of its shutdown due to the constraints of #network#infrastructure."
The paper explores the challenges of building a #safetyculture for #ai, including a lack of consensus on #risk prioritization, an absence of standardized #safety practices, and the difficulty of #culturalchange. The authors suggest a comprehensive strategy that includes identifying and addressing #risks, using #redteams, and prioritizing safety over profitability.
The article discusses the limitations of current #ai technologies such as #chatgpt, #largelanguagemodels, and #generativeai, and argues that we need to advance #researchanddevelopment beyond these limitations.
"The #eu proposal for the #artificialintelligenceact (#aia) defines four #risk categories: unacceptable, high, limited, and minimal. However, as these categories statically depend on broad fields of application of #ai systems (#ais), the risk magnitude may be wrongly estimated, and the AIA may not be enforced effectively. Our suggestion is to apply the four categories to the risk #scenarios of each AIs, rather than solely to its field of application."
"In the rapidly evolving world of #ai technology, creating a robust #regulatoryframework that balances the benefits of AI #chatbots [like #chatgpt] with the prevention of their misuse is crucial."