152 results for "Digital resilience"
Insurance Europe responded to EIOPA's draft Opinion on AI governance in insurance, supporting clarity on existing rules but raising concerns over potential new obligations. It cautioned that the draft's language might lead to supervisory expectations being misinterpreted as binding requirements, conflicting with the EU's simplification goals for smaller firms. Insurance Europe also highlighted risks of dual supervision in some regions and emphasized the need for clear distinctions between different AI types and user roles. It urged EIOPA to focus on aligning the Opinion with established frameworks like Solvency II and GDPR for effective oversight.
Researchers proposed a new risk metric for evaluating security threats in Large Language Model (LLM) chatbots, considering system, user, and third-party risks. An empirical study using three chatbot models found that while prompt protection helps, it is not sufficient to prevent high-impact threats such as misinformation and scams. Risk levels varied across industries and user age groups, highlighting the need for context-aware evaluation. The study contributes a structured risk assessment methodology to the field of AI security, offering a practical tool for improving the safety of LLM-powered chatbots and informing future research and regulatory frameworks.
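The aggregation idea (rolling system, user, and third-party risk dimensions into one score) can be sketched as a weighted likelihood-times-impact metric. The function names, weights, and scores below are illustrative assumptions, not the paper's actual formulation:

```python
# Hypothetical sketch of a composite risk score for LLM chatbot threats.
# All dimension names, weights, and scores here are invented for illustration.

def threat_risk(likelihood: float, impact: float) -> float:
    """Classic risk = likelihood x impact, both scored on [0, 1]."""
    return likelihood * impact

def composite_risk(threats: dict[str, tuple[float, float]],
                   weights: dict[str, float]) -> float:
    """Weighted average of per-dimension risks
    (e.g. 'system', 'user', 'third_party')."""
    total_w = sum(weights.values())
    return sum(weights[d] * threat_risk(*threats[d]) for d in threats) / total_w

score = composite_risk(
    threats={"system": (0.3, 0.8), "user": (0.6, 0.9), "third_party": (0.4, 0.7)},
    weights={"system": 1.0, "user": 1.5, "third_party": 1.0},
)
```

Context-aware evaluation, as the study calls for, would amount to varying the weights and scores by industry and user group rather than fixing them globally.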
The European Union’s AI Act significantly reshapes corporate governance, imposing new responsibilities on directors, compliance officers, in-house counsel, and corporate lawyers. It demands transparency, risk management, and regulatory oversight for AI systems, particularly high-risk ones. These professionals must integrate AI oversight into governance, manage liability, conduct impact assessments, and ensure cross-border compliance. With its extraterritorial reach, the Act influences non-EU entities and sets global standards for AI governance. This paper aims to offer strategic guidance on aligning corporate policies with these emerging legal requirements, emphasizing proactive risk management and ethical AI adoption.
As all transactions become digital, any involvement with EU users, even minor, triggers complex compliance risks, shifting the landscape from predictable “risk” to broader “uncertainty.” Compliance now dominates, reducing litigable individual rights and increasing disputes, but with a trend toward alternative and online dispute resolution (ADR/ODR). Traditional contract and litigation strategies are less effective, as mandatory compliance overrides forum or law choices. Future disputes will increasingly involve digital elements, requiring new approaches and cooperation between parties, especially regarding AI, data, and cybersecurity. Litigation will not decrease, but its nature will fundamentally change, demanding innovative risk management in international commercial litigation.
The Cyber Due Diligence Object Model (CDDOM) is a structured, extensible framework designed for SMEs to manage cybersecurity due diligence in digital supply chains. Aligned with regulations like NIS2, DORA, CRA, and GDPR, CDDOM enables continuous, automated, and traceable due diligence. It integrates descriptive schemas, role-specific messaging, and decision support to facilitate supplier onboarding, risk reassessment, and regulatory compliance. Validated in real-world scenarios, CDDOM supports automation, transparency, and interoperability, translating compliance and trust signals into machine-readable formats. It fosters resilient, decision-oriented cyber governance, addressing modern cybersecurity challenges outlined in recent research.
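One way to picture CDDOM's machine-readable trust signals is a structured supplier record serialized to JSON. The schema below is a hypothetical illustration in the spirit of the framework; the field names are invented and do not come from the CDDOM specification:

```python
# Hypothetical due-diligence record, sketching the idea of translating
# compliance and trust signals into a machine-readable format.
from dataclasses import dataclass, asdict
import json

@dataclass
class SupplierDueDiligenceRecord:
    supplier_id: str
    assessed_at: str              # ISO 8601 date of the last reassessment
    frameworks: list[str]         # regulations the assessment maps to
    risk_level: str               # "low" | "medium" | "high"
    evidence_uri: str             # pointer to supporting audit evidence

record = SupplierDueDiligenceRecord(
    supplier_id="SUP-0042",
    assessed_at="2025-01-15",
    frameworks=["NIS2", "DORA", "GDPR"],
    risk_level="medium",
    evidence_uri="https://example.org/evidence/SUP-0042",
)

# Serialize so downstream tooling can consume the trust signal automatically.
payload = json.dumps(asdict(record), indent=2)
```

A record like this could be re-emitted on every reassessment, which is what makes continuous, traceable due diligence automatable across a supply chain.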
This study extends the Gordon–Loeb model for cybersecurity investment by incorporating a Hawkes process to model temporally clustered cyberattacks, reflecting real-world attack bursts. Formulated as a stochastic optimal control problem, it maximizes net benefits through adaptive investment policies that respond to attack arrivals. Numerical results show these dynamic strategies outperform static and Poisson-based models, which overlook clustering, especially in high-risk scenarios. The framework aids risk managers in tailoring responsive cybersecurity strategies. Future work includes empirical calibration, risk-averse loss modeling, cyber-insurance integration, and multivariate Hawkes processes for diverse attack types.
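The two building blocks can be sketched concretely. The code below uses the standard Gordon–Loeb class-I breach-probability function and an exponential-kernel Hawkes intensity; the closed-form investment shown is the classical static optimum, not the paper's stochastic-control policy, and all parameter values are illustrative:

```python
import math

def breach_prob(z: float, v: float, alpha: float = 1.0, beta: float = 1.0) -> float:
    """Gordon-Loeb class-I security breach function: S(z, v) = v / (alpha*z + 1)**beta,
    where z is security investment and v the baseline vulnerability."""
    return v / (alpha * z + 1) ** beta

def optimal_static_investment(v: float, L: float,
                              alpha: float = 1.0, beta: float = 1.0) -> float:
    """Maximize (v - S(z, v)) * L - z; closed form for the class-I function."""
    z = ((alpha * beta * v * L) ** (1 / (beta + 1)) - 1) / alpha
    return max(z, 0.0)

def hawkes_intensity(t: float, events: list[float],
                     mu: float = 0.5, a: float = 0.8, b: float = 1.2) -> float:
    """Exponential-kernel Hawkes intensity: mu + sum of a*exp(-b*(t - t_i)),
    so each past attack t_i temporarily raises the arrival rate (clustering)."""
    return mu + sum(a * math.exp(-b * (t - ti)) for ti in events if ti < t)

# Static benchmark: one-shot spend for vulnerability 0.5 and potential loss 10.
z_star = optimal_static_investment(v=0.5, L=10.0)
```

A dynamic policy in the spirit of the study would scale investment up when `hawkes_intensity` is elevated after a burst of attacks, instead of holding `z_star` fixed.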
The World Economic Forum (WEF) and the University of Oxford’s GCSCC released the *Cyber Resilience Compass* to help organizations strengthen cyber resilience. Based on global expert input, it outlines seven key areas: leadership, governance, people and culture, business processes, technical systems, crisis management, and ecosystem engagement. It stresses that cyber resilience requires more than technical fixes; it demands aligning strategies with business goals, continuous learning, and collaboration. Tailored approaches are essential, given differing organizational risks and structures. The Compass aims to foster knowledge-sharing and build a scalable, adaptable framework for long-term, effective cyber resilience.
Integrating Cyber Security (CS) with Enterprise Architecture (EA) offers a holistic approach to managing complex cyber risks. This study, through literature review, focus groups, and interviews, identified four key integration strategies: embedding CS in EA frameworks, leveraging agile secure development, enhancing knowledge exchange, and aligning CS/EA functions. Implementing these strategies can improve the efficiency and reliability of Cyber Risk Management.
The EU prioritizes cybersecurity and data protection due to rising cyber threats and digital transformation. It employs regulations like GDPR for personal data and the NIS Directive for critical infrastructure resilience. This study analyzes their impact, challenges, and interplay, also comparing them globally to assess effectiveness in safeguarding digital security and fostering trust.
This study analyzes resource provisioning with strict reliability demands. It characterizes optimal cost scaling in chance-constrained problems as reliability increases. It reveals limitations of common distributionally robust optimization methods, proposes improvements using marginal distributions or f-divergences, and offers a line search for near-optimal solutions, overcoming data sample limitations.
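A minimal sketch of the chance-constrained idea, choosing the smallest capacity whose empirical violation probability stays below a target epsilon, can be written with a sample quantile. This heuristic merely illustrates the problem setting; it does not reproduce the paper's distributionally robust methods or its line search:

```python
# Sample-quantile heuristic for P(demand > capacity) <= epsilon.
# The demand distribution and parameters are invented for illustration.
import math
import random

def provision(samples: list[float], epsilon: float) -> float:
    """Smallest sample value leaving at most an epsilon fraction above it
    (the empirical (1 - epsilon)-quantile of demand)."""
    s = sorted(samples)
    k = min(len(s) - 1, math.ceil((1 - epsilon) * len(s)) - 1)
    return s[max(k, 0)]

random.seed(0)
demand = [random.expovariate(1.0) for _ in range(10_000)]
cap = provision(demand, epsilon=0.01)   # capacity covering ~99% of sampled demand
```

For exponential demand the true quantile grows like -ln(epsilon) as reliability tightens, which hints at the cost-scaling question the study characterizes; with finite samples the empirical quantile degrades for very small epsilon, the data limitation its methods address.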