The paper promotes better #riskmanagement and the fair allocation of #liability in #ai-related accidents.
This study explores the use of #ai #foundationmodels, specifically #chatgpt, in #auditing #esgreporting for #compliance with the #eu #taxonomy.
This paper discusses the potential of #ai #largelanguagemodels (#llms) in the #legal and #regulatorycompliance domain.
This concise philosophical essay explores the hypothetical scenario of an #artificialintelligence (#ai) gaining agency capabilities and the relative impossibility of its shutdown due to the constraints of #network #infrastructure.
The paper explores the challenges of building a #safetyculture for #ai, including the lack of consensus on #risk prioritization, a lack of standardized #safety practices, and the difficulty of #culturalchange. The authors suggest a comprehensive strategy that includes identifying and addressing #risks, using #redteams, and prioritizing safety over profitability.
The article discusses the limitations of current #ai technologies such as #chatgpt, #largelanguagemodels, and #generativeai, and argues that we need to advance #researchanddevelopment beyond these limitations.
The #eu proposal for the #artificialintelligenceact (#aia) defines four #risk categories: unacceptable, high, limited, and minimal. However, because these categories depend statically on broad fields of application of #ai systems (#ais), the risk magnitude may be wrongly estimated, and the AIA may not be enforced effectively. Our suggestion is to apply the four categories to the risk #scenarios of each AI system, rather than solely to its field of application.
In the rapidly evolving world of #ai technology, creating a robust #regulatoryframework that balances the benefits of AI #chatbots [like #chatgpt] with the prevention of their misuse is crucial.
This paper explores the use of #generativeai models in financial analysis within the Rumsfeldian framework of "known knowns, known unknowns, and unknown unknowns." It discusses the advantages of using #ai #models, such as their ability to identify complex patterns and automate processes, but also addresses the #uncertainties associated with generative AI, including #accuracy concerns and #ethical considerations.
This paper addresses the challenges associated with the adoption of #machinelearning (#ml) in #financialinstitutions. While ML models offer high predictive accuracy, their lack of explainability, robustness, and fairness raises concerns about their trustworthiness. Furthermore, proposed #regulations require high-risk #ai systems to meet specific #requirements. To address these gaps, the paper introduces the Key AI Risk Indicators (KAIRI) framework, tailored to the #financialservices industry. The framework maps #regulatoryrequirements from the #euaiact to four measurable principles (Sustainability, Accuracy, Fairness, Explainability). For each principle, a set of statistical metrics is proposed to #measure, #manage, and #mitigate #airisks in #finance.
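The KAIRI mapping of principles to measurable indicators can be sketched roughly in code. The metric choices below (simple accuracy for the Accuracy principle, a demographic-parity gap for Fairness) are illustrative assumptions for a toy credit-scoring example, not the metrics the paper actually proposes:

```python
"""Hypothetical sketch of a KAIRI-style risk-indicator table.

The framework maps #euaiact requirements to four principles (Sustainability,
Accuracy, Fairness, Explainability); the two metrics implemented here are
common statistical proxies chosen for illustration only.
"""

def accuracy(y_true, y_pred):
    # Share of correct predictions (higher is better).
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def demographic_parity_gap(y_pred, group):
    # Absolute difference in positive-prediction rates between two groups
    # (lower is better); a standard statistical fairness proxy.
    def rate(g):
        members = [p for p, grp in zip(y_pred, group) if grp == g]
        return sum(members) / len(members)
    return abs(rate(0) - rate(1))

# Toy binary credit-scoring predictions for two demographic groups.
y_true = [1, 0, 1, 1, 0, 0]
y_pred = [1, 0, 1, 0, 1, 0]
group  = [0, 0, 0, 1, 1, 1]

indicators = {
    "Accuracy": accuracy(y_true, y_pred),
    "Fairness": demographic_parity_gap(y_pred, group),
}
for principle, value in indicators.items():
    print(f"{principle}: {value:.2f}")
```

In a fuller implementation, each principle would carry several such metrics plus thresholds, so that an indicator breach can trigger the measure/manage/mitigate loop the paper describes.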