Generative AI (GAI) is transforming banking risk management, improving fraud detection by 37%, credit risk accuracy by 28%, and regulatory compliance efficiency by 42%. GAI enhances stress testing but faces challenges in privacy, explainability, and skills gaps. Its adoption, led by larger banks, demands holistic strategies for equitable industry impact.
AI adoption in finance introduces risks like model inaccuracies, data security issues, and cyber threats. FINMA notes many institutions are at early development stages for AI governance. It urges better risk management to protect business models and enhance the financial center's reputation.
This paper examines AI's transformative impact on banking and insurance, enhancing efficiency, risk management, and customer experience. It highlights generative AI's unique risks, such as hallucination, while noting that existing frameworks already cover most AI risks. Key regulatory gaps include governance, model risk management, data governance, and oversight of non-traditional players and third-party providers.
#ai plays an essential role in #banking and holds promise for efficiency, but faces challenges like the opaque "black box" problem, which hinders #fairness and #transparency in #decisionmaking #algorithms. Replacing opaque models with Explainable AI (#xai) can mitigate this problem, ensuring #accountability and #ethical standards. Research on XAI in finance is extensive but often limited to specific cases like #frauddetection and credit #riskassessment.
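To illustrate the contrast with a "black box", here is a minimal, purely hypothetical sketch of the XAI idea in credit #riskassessment: a linear scorecard whose per-feature contributions are fully inspectable, so every decision can be explained to the applicant. The feature names, weights, and threshold are invented for illustration and come from no cited paper.

```python
# Hypothetical weights, as if learned from historical data (illustrative only).
WEIGHTS = {
    "income_stability": 2.0,   # positive contribution per unit
    "debt_to_income": -3.5,    # negative contribution per unit
    "prior_defaults": -4.0,    # strong negative contribution per default
}
BIAS = 1.0
THRESHOLD = 0.0  # approve if score >= threshold


def score_applicant(features):
    """Return the credit score plus a per-feature breakdown (the 'explanation')."""
    contributions = {k: WEIGHTS[k] * features.get(k, 0.0) for k in WEIGHTS}
    score = BIAS + sum(contributions.values())
    return score, contributions


score, why = score_applicant(
    {"income_stability": 1.2, "debt_to_income": 0.4, "prior_defaults": 0.0}
)
decision = "approve" if score >= THRESHOLD else "decline"
# 'why' attributes the decision to individual features, which is exactly
# what an opaque deep model cannot provide out of the box.
```

Real XAI work typically applies post-hoc attribution methods to complex models rather than restricting itself to linear scorecards, but the transparency goal is the same: a per-feature account of each decision.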
Companies use #ai tools for #hr decisions, but must balance the benefits against the #risks. With limited federal #regulation and complex state laws, employers seek guidance. The model risk management (#mrm) framework, adapted from #finance, aids in managing #airisks for #employment choices. Proportionality lets employers adjust validation effort to the risks involved and to technology changes. Objective analysis and a competent MRM team ensure AI tools align with design and legal requirements, fostering trust and #compliance.
Recent #ai developments, particularly in Natural Language Processing (#nlp) like #gpt3, are widely used. Ensuring safety and trust with increasing NLP use requires robust guidelines. Global AI #regulations are evolving through initiatives like the #euaiact, #unesco recommendations, #us AI Bill of Rights, and others. The EU AI Act's comprehensive regulation sets a potential global benchmark. NLP models are subject to existing rules, such as #gdpr. This paper explores AI regulations, GDPR's application to AI, the EU AI Act's #riskbasedapproach, and NLP's role within these frameworks.
The introduction of ChatGPT (#chatgpt) has stirred discussions about #ai regulation. The controversy over classifying systems like ChatGPT as "high-risk" AI under the #euaiact has sparked concerns. This paper explores how Large Language Models (#llms) such as ChatGPT are shaping AI policy debates and delves into potential lessons from the #gdpr for effective regulation.
The submission suggests strategies for regulating #ai in #australia, including examining the rate of take-up of #automated #decisionmaking systems, and regulating specific applications of underlying AI technologies. It also suggests altering the definition of AI, creating a set of guiding principles, and adopting a #riskbased approach to #regulation.
This paper critically assesses the proposed #euaiact regarding #riskmanagement and acceptability of #highrisk #ai systems. The Act aims to promote trustworthy AI with proportionate #regulations but its criteria, "as far as possible" (AFAP) and "state of the art," are deemed unworkable and lacking in proportionality and trustworthiness. The Parliament's proposed amendments, introducing "reasonableness" and cost-benefit analysis, are argued to be more balanced and workable.
Private sector #ai applications can lead to unfair results and loss of informational #privacy, such as increasing #insurancepremiums. Addressing this involves exploring the philosophical theory of fairness as equality of opportunity.