21 results for "gdpr"
Recent #ai developments, particularly Natural Language Processing (#nlp) models such as #gpt3, are now widely used. Ensuring safety and trust as NLP use grows requires robust guidelines. Global AI #regulations are evolving through initiatives such as the #euaiact, the #unesco recommendations, the #us AI Bill of Rights, and others. The EU AI Act's comprehensive regulation sets a potential global benchmark. NLP models are also subject to existing rules, such as the #gdpr. This paper explores AI regulations, the GDPR's application to AI, the EU AI Act's #riskbasedapproach, and NLP's role within these frameworks.
“The origins of the discussion concerning the role of #risk in #datatransfers are difficult to trace. Despite this, #schrems II, a recent decision of the European Court of Justice (#cjeu), has given the topic new traction. This paper explores the risk-based approach (#rba) hypothesis for data transfers from a different perspective: the consequences of applying the 'two-step test' stated in Article 44 of #gdpr. The main goal is to present the challenges of applying this test and the various questions it raises.”
The introduction of #chatgpt has stirred discussions about #ai regulation. The controversy over classifying systems like ChatGPT as "high-risk" AI under the #euaiact has sparked concern. This paper explores how Large Language Models (#llms) such as ChatGPT are shaping AI policy debates and draws potential lessons from the #gdpr for effective regulation.
This is a note on the #gdpr and the use of #us-based #cloudservers. The note raises concerns about the #risk that US #intelligenceagencies gain access to #data transferred from the #eu to any US cloud, or access it directly while it is still in the EU / #eea or in transit. The note discusses cases in #france, the #netherlands, and #germany that have addressed these issues, concluding that the legality of using US cloud servers and solutions remains problematic.
This paper explores the #uncertainty around when #data is considered "#personaldata" under #dataprotection #laws. The authors propose that by focusing on the specific #risks to #fundamentalrights caused by #dataprocessing, the question of whether data falls within the scope of the #gdpr becomes clearer.
"By employing Big Data and Artificial Intelligence (AI), personal data that is categorized as sensitive data according to the GDPR Art. 9 can often be extracted. Art. 9(1) GDPR initially forbids this kind of processing. Almost no industrial control system functions without AI, even when considering the broad definition of the EU AI Regulation (EU AI Regulation-E)."
"... nothing meaningful for regulation can be determined solely by looking at the data itself. Data is what data does. Personal data is harmful when its use causes harm or creates a risk of harm. It is not harmful if it is not used in a way to cause harm or risk of harm."
"Dark Patterns are ubiquitous: deliberate choices in website- or app-design that exploit unobservant or irrational behavior of users, tricking them into reaching agreements or consenting with settings that are not in line with the users’ actual preferences."
"... we have performed a detailed study which includes: GDPR-compliance, provided functionality, security and privacy issues, and the cost ... of the different operations to be run on the blockchain."
"The EU’s GDPR and proposed AI Act tend toward a sustainable environment of AI systems. However, they are still too lenient and the sanction in case of non-conformity with the Regulation is a monetary sanction, not a prohibition. This paper proposes a pre-approval model in which some AI developers, before launching their systems into the market, must perform a preliminary risk assessment of their technology followed by a self-certification."