The paper discusses the risks posed by #artificialintelligence (#ai) systems, from biased lending algorithms to chatbots that spew violent #hatespeech. The author argues that policymakers have a responsibility to consider broader, longer-term #risks from #aitechnology, such as #systemicrisk and the potential for misuse. While #regulatory proposals like the #eu #aiact and the #whitehouse AI Bill of Rights focus on immediate harms, they do not fully address the need for #algorithmicpreparedness. The paper therefore proposes a roadmap for algorithmic preparedness: five forward-looking principles to guide regulations that confront the prospect of algorithmic black swans and mitigate the harms they pose to society. This approach is especially important for general-purpose systems like #chatgpt, which can be applied across a wide range of uses, including ones with unintended consequences. The article emphasizes the need for #governance and #regulation to ensure that #aisystems are developed and deployed in ways that minimize risk and maximize benefit, and it points to the #nist AI #riskmanagement Framework as one tool for achieving this goal.