This paper examines the rise of algorithmic harms from AI, such as privacy erosion and inequality, which are exacerbated by accountability gaps and algorithmic opacity. It critiques existing legal frameworks in the US, EU, and Japan as insufficient, and proposes refined impact assessments, individual rights, and disclosure duties to strengthen AI governance and mitigate these harms.