10 results for "chatgpt"
The introduction of #ai #chatgpt has stirred debate about AI regulation, with particular controversy over whether systems like ChatGPT should be classified as "high-risk" AI under the #euaiact. This paper explores how Large Language Models (#llms) such as ChatGPT are shaping AI policy debates and draws potential lessons from the #gdpr for effective regulation.
This study explores the use of #ai #foundationmodels, specifically #chatgpt, in #auditing #esgreporting for #compliance with the #eu #taxonomy.
"... its ability to provide relevant #riskmitigation strategies was identified as its strongest aspect. However, the research also revealed that #chatgpt's consistency in #riskassessment and prioritization was the least effective aspect. This research serves as a foundation for future studies and developments in the field of #ai-driven #riskmanagement, advancing our theoretical understanding of the application of #aimodels like ChatGPT in #realworld #risk scenarios."
The article discusses the limitations of current #ai technologies such as #chatgpt, #largelanguagemodels, and #generativeai, and argues that we need to advance #researchanddevelopment beyond these limitations.
"In the rapidly evolving world of #ai technology, creating a robust #regulatoryframework that balances the benefits of AI #chatbots [like #chatgpt] with the prevention of their misuse is crucial."
"This paper presents an intellectual exchange with #chatgpt, … , about correlation pitfalls in #riskmanagement. … Our findings indicate that ChatGPT possesses solid knowledge of basic and mostly non-technical aspects of the topic, but falls short in terms of the mathematical goring needed to avoid certain pitfalls or completely comprehend the underlying concepts."
"We provide guidance on the types of applications for which consulting #chatgpt can be useful to enhance your knowledge about #quantitative #riskmanagement in #actuarial practice, and point out those for which ChatGPT should better not be invoked."
#ethical dilemmas and #regulatory considerations associated with #ai and #chatgpt adoption in financial analysis are ... addressed, emphasizing the need for responsible AI usage and human oversight in critical #financial judgments.
"We assess the ability of #GPT … to serve as a financial robo-advisor for the masses, by combining a financial literacy test and an advice-utilization task (the Judge-Advisor System). #davinci and #chatgpt (variants of GPT) score 58% and 67% on the #financialliteracy literacy test, respectively, compared to a baseline of 31%. However, people overestimated GPT's performance (79.3%), and in a savings dilemma, they relied heavily on advice from GPT (WOA = 0.65). Lower subjective financial knowledge increased advice-taking. We discuss the risk of overreliance on current large #languagemodels models and how their utility to laypeople may change."
The paper discusses the risks posed by #artificialintelligence (#ai) systems, from biased lending algorithms to chatbots that spew violent #hatespeech. The author argues that policymakers have a responsibility to consider broader, longer-term #risks from #aitechnology, such as #systemicrisk and the potential for misuse. While #regulatory proposals like the #eu #aiact and the #whitehouse AI Bill of Rights focus on immediate risks, they do not fully address the need for #algorithmicpreparedness. The paper proposes a roadmap for algorithmic preparedness built on five forward-looking principles to guide regulations that confront the prospect of algorithmic black swans and mitigate the harms they pose to society. This approach is particularly important for general-purpose systems like #chatgpt, which can be used for a wide range of applications, including ones with unintended consequences. The article emphasizes the need for #governance and #regulation to ensure that #aisystems are developed and used in ways that minimize risk and maximize benefit, and it points to the #nist AI #riskmanagement Framework as one tool for achieving this goal.