Following a three-year legislative process, implementation of the EU AI Act now begins, requiring national authorities to clarify and enforce its provisions. This policy brief outlines Belgium's tasks under the Act, including determining its scope of application, handling exemptions, and designating the competent authorities that will manage AI-related responsibilities.
AI is transforming finance, enhancing efficiency while introducing risks such as cyber threats and bias. The EU's AI Act regulates high-risk AI applications in credit and insurance. Financial institutions must integrate AI responsibly, ensuring transparency and fairness. Supervisors such as France's ACPR will enforce compliance, fostering trust and innovation through collaboration and governance.
The EU's AI Act is a pioneering, risk-based law designed to regulate AI. It balances promoting AI adoption with protecting fundamental rights and democratic values. The Act uses pre-emptive risk assessments to categorize AI technologies and apply corresponding legal requirements, drawing from existing EU product safety laws.
This paper examines the interplay between the AI Act and the GDPR with regard to explainable AI, focusing on safeguards for individuals. It outlines the applicable rules, compares the explanations required under both instruments, and reviews related EU frameworks. The paper argues that current laws are insufficient, necessitating broader, sector-specific regulation of explainable AI.
This paper examines global and regional efforts to develop strategies and regulatory frameworks for AI governance, chief among them the OECD AI Principles, the EU AI Act, and the NIST AI RMF. The common thread among these frameworks and legislative instruments is that they identify and categorize AI developments and deployments according to their risk levels and provide guidelines for ethical, trustworthy AI that balance human safety with innovation.