4 results for “euaiact”

AI Regulations in the Context of Natural Language Processing Research

Recent #ai developments, particularly Natural Language Processing (#nlp) models such as #gpt3, are in widespread use. Ensuring safety and trust as NLP adoption grows requires robust guidelines. Global AI #regulations are evolving through initiatives such as the #euaiact, the #unesco recommendations, and the #us AI Bill of Rights. The EU AI Act's comprehensive regulation sets a potential global benchmark, and NLP models are already subject to existing rules such as the #gdpr. This paper explores AI regulations, the GDPR's application to AI, the EU AI Act's #riskbasedapproach, and NLP's role within these frameworks.

Lessons from GDPR for AI Policymaking

The introduction of #chatgpt has stirred discussions about #ai regulation. The controversy over classifying systems like ChatGPT as "high-risk" AI under the #euaiact has raised concerns. This paper explores how Large Language Models (#llms) such as ChatGPT are shaping AI policy debates and draws potential lessons from the #gdpr for effective regulation.

Acceptable Risks in Europe’s Proposed AI Act

This paper critically assesses the proposed #euaiact with respect to #riskmanagement and the acceptability of #highrisk #ai systems. The Act aims to promote trustworthy AI with proportionate #regulations, but its criteria, "as far as possible" (AFAP) and "state of the art," are argued to be unworkable and to deliver neither proportionality nor trustworthiness. The Parliament's proposed amendments, which introduce "reasonableness" and cost-benefit analysis, are argued to be more balanced and workable.

Measuring AI Safety

This paper addresses the challenges associated with the adoption of #machinelearning (#ml) in #financialinstitutions. While ML models offer high predictive accuracy, their lack of explainability, robustness, and fairness raises concerns about their trustworthiness. Furthermore, proposed #regulations require high-risk #ai systems to meet specific #requirements. To address these gaps, the paper introduces the Key AI Risk Indicators (KAIRI) framework, tailored to the #financialservices industry. The framework maps #regulatoryrequirements from the #euaiact to four measurable principles (Sustainability, Accuracy, Fairness, Explainability). For each principle, a set of statistical metrics is proposed to #measure, #manage, and #mitigate #airisks in #finance.
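As a rough illustration of what "statistical metrics" for such principles can look like, the sketch below computes two simple candidates: an accuracy rate (for the Accuracy principle) and a demographic parity difference (for the Fairness principle). These metric choices are assumptions for illustration only, not the KAIRI framework's actual definitions.

```python
# Hypothetical sketch: simple statistical metrics for two of the
# principles named above (Accuracy, Fairness). The specific metrics
# are illustrative assumptions, not taken from the paper.

def accuracy(y_true, y_pred):
    """Share of predictions that match the true labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between two groups."""
    def positive_rate(g):
        preds = [p for p, grp in zip(y_pred, group) if grp == g]
        return sum(preds) / len(preds)
    return abs(positive_rate(0) - positive_rate(1))

# Toy data: binary labels, predictions, and a binary group attribute.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]

print(accuracy(y_true, y_pred))                      # 0.75
print(demographic_parity_difference(y_pred, group))  # 0.0
```

In a risk-indicator setting, such metrics would be tracked against thresholds so that a model falling below an accuracy floor or above a fairness gap triggers a management or mitigation action.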