10 results for "chatgpt"

Lessons from GDPR for AI Policymaking

The introduction of #ai #chatgpt has stirred discussions about AI regulation. The controversy over classifying systems like ChatGPT as "high-risk" AI under #euaiact has sparked concerns. This paper explores how Large Language Models (#llms) such as ChatGPT are shaping AI policy debates and delves into potential lessons from the #gdpr for effective regulation.

Expert Evaluation of ChatGPT Performance for Risk Management Process based on ISO 31000 Standard

"... its ability to provide relevant #riskmitigation strategies was identified as its strongest aspect. However, the research also revealed that #chatgpt's consistency in #riskassessment and prioritization was the least effective aspect. This research serves as a foundation for future studies and developments in the field of #ai-driven #riskmanagement, advancing our theoretical understanding of the application of #aimodels like ChatGPT in #realworld #risk scenarios."

Correlation Pitfalls With ChatGPT: Would You Fall for Them?

"This paper presents an intellectual exchange with #chatgpt, … , about correlation pitfalls in #riskmanagement. … Our findings indicate that ChatGPT possesses solid knowledge of basic and mostly non-technical aspects of the topic, but falls short in terms of the mathematical rigor needed to avoid certain pitfalls or completely comprehend the underlying concepts."

GPT as a Financial Advisor

"We assess the ability of #GPT … to serve as a financial robo-advisor for the masses, by combining a financial literacy test and an advice-utilization task (the Judge-Advisor System). #davinci and #chatgpt (variants of GPT) score 58% and 67% on the #financialliteracy test, respectively, compared to a baseline of 31%. However, people overestimated GPT's performance (79.3%), and in a savings dilemma, they relied heavily on advice from GPT (WOA = 0.65). Lower subjective financial knowledge increased advice-taking. We discuss the risk of overreliance on current large #languagemodels and how their utility to laypeople may change."

Algorithmic Black Swans

The paper discusses the risks posed by #artificialintelligence (#ai) systems, from biased lending algorithms to chatbots that spew violent #hatespeech. The author argues that policymakers have a responsibility to consider broader, longer-term #risks from #aitechnology, such as #systemicrisk and the potential for misuse. While #regulatory proposals like the #eu #aiact and the #whitehouse AI Bill of Rights focus on immediate risks, they do not fully address the need for #algorithmicpreparedness. The paper proposes a roadmap for algorithmic preparedness: five forward-looking principles to guide the development of regulations that confront the prospect of algorithmic black swans and mitigate the harms they pose to society. This approach is particularly important for general-purpose systems like #chatgpt, which can be used for a wide range of applications, including ones that may have unintended consequences. The author emphasizes the need for #governance and #regulation to ensure that #aisystems are developed and used in ways that minimize risk and maximize benefit, and references the #nist AI #riskmanagement Framework as a potential tool for achieving this goal.