As analysts are primary recipients of Solvency II reports, this study investigates whether and how analyst forecast properties changed following the provision of Solvency II information. Using a sample of EEA insurers and a difference-in-differences design, the authors find reductions in analysts' earnings forecast errors at both the consensus and individual levels, as well as a decrease in forecast dispersion.
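A minimal sketch of the kind of difference-in-differences specification described, run on a synthetic panel; all column names, dates, and effect sizes below are hypothetical and only illustrate the design, not the paper's data.

```python
# Sketch of the described difference-in-differences design on a synthetic
# panel. Column names, dates, and effect sizes are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
panel = pd.DataFrame(
    [(f, y) for f in range(40) for y in range(2012, 2021)],
    columns=["firm_id", "year"],
)
panel["treated"] = (panel["firm_id"] < 20).astype(int)  # EEA insurers
panel["post"] = (panel["year"] >= 2017).astype(int)     # first SFCRs published
panel["treat_post"] = panel["treated"] * panel["post"]
panel["forecast_error"] = (
    0.05 - 0.02 * panel["treat_post"] + rng.normal(0, 0.01, len(panel))
)

did = smf.ols(
    "forecast_error ~ treat_post + C(firm_id) + C(year)", data=panel
).fit(cov_type="cluster", cov_kwds={"groups": panel["firm_id"]})
print(did.params["treat_post"])  # negative: errors fall post-Solvency II
```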
This study proposes a new method for detecting insider trading that combines principal component analysis (PCA) with random forest (RF) algorithms. The method achieves 96.43% accuracy in classifying transactions as lawful or unlawful and identifies important contributing features, such as ownership and governance, offering regulators a more effective way to identify and prevent insider trading.
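A sketch of the kind of PCA-to-random-forest pipeline the study describes, assuming scikit-learn and synthetic data standing in for real transaction features such as ownership and governance variables.

```python
# PCA -> random-forest pipeline of the kind the study describes, on
# synthetic data; real inputs would be transaction-level features with
# lawful/unlawful labels.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

X, y = make_classification(n_samples=2000, n_features=30, n_informative=8,
                           random_state=0)  # stand-in for transaction data
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

clf = make_pipeline(
    PCA(n_components=0.95),  # keep components explaining 95% of variance
    RandomForestClassifier(n_estimators=500, random_state=0),
)
clf.fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```

The forest's feature importances (mapped back through the retained components) would play the role of the ownership and governance indicators highlighted in the study.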
Cyber risk classifications often fail in out-of-sample forecasting despite their in-sample fit. Dynamic, impact-based classifiers outperform rigid, business-driven ones in predicting losses. Cyber risk types are better suited for modeling event frequency than severity, offering crucial insights for cyber insurance and risk management strategies.
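A synthetic illustration of the frequency-versus-severity point, assuming statsmodels; the risk-type categories and the data-generating process here are invented, not the paper's.

```python
# Synthetic illustration: risk-type categories carry signal for event
# frequency but little for severity. Categories and data are invented.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({"risk_type": rng.choice(["breach", "fraud", "outage"], n)})
rate = df["risk_type"].map({"breach": 3.0, "fraud": 1.5, "outage": 0.5})
df["count"] = rng.poisson(rate)                           # frequency differs by type
df["loss"] = rng.lognormal(mean=10.0, sigma=2.0, size=n)  # severity does not

freq = smf.poisson("count ~ C(risk_type)", data=df).fit(disp=False)
sev = smf.ols("np.log(loss) ~ C(risk_type)", data=df).fit()
print("frequency pseudo-R2:", freq.prsquared)  # clearly positive
print("severity R2:        ", sev.rsquared)    # near zero
```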
Insurance typically benefits risk-averse individuals by pooling finite-mean risks. With infinite-mean distributions (e.g., Pareto or Fréchet with tail index at most one), however, risk sharing can backfire, creating a "nondiversification trap." The result extends to extremely heavy-tailed distributions such as the Cauchy and to catastrophic risks with infinite expected losses. Open questions remain about these complex scenarios.
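A small simulation of the nondiversification trap, assuming a Pareto tail index below one (so the mean is infinite); the parameter values are illustrative.

```python
# Pooling iid infinite-mean Pareto losses (tail index alpha < 1) does not
# tame the tail: high quantiles of the pooled position are no smaller
# (typically larger) than holding a single risk.
import numpy as np

rng = np.random.default_rng(0)
alpha = 0.8                      # tail index < 1 -> infinite mean
sims, n = 100_000, 50            # 50 risks pooled per scenario

losses = rng.pareto(alpha, size=(sims, n)) + 1.0  # standard Pareto draws
single = losses[:, 0]            # hold one risk
pooled = losses.mean(axis=1)     # equal-share pooling of 50 risks

for q in (0.95, 0.99):
    print(q, np.quantile(single, q), np.quantile(pooled, q))
```

With a finite-mean tail index (say alpha = 2), rerunning the same script shows the pooled quantiles shrinking, which is the ordinary diversification benefit the trap overturns.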
The main vulnerability in data protection is ineffective risk management, which is often subjective and superficial. The GDPR specifies what must be achieved but not how, leading to inconsistent compliance. This paper advocates a quantitative approach to data protection, emphasizing analytics, quantitative risk analysis, and calibration of expert opinion to strengthen impact assessments.
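A minimal sketch of the quantitative style the paper advocates, assuming a calibrated expert supplies a 90% interval for incident cost that is fitted to a lognormal distribution; every figure below is invented.

```python
# Calibrated expert 90% interval -> lognormal loss model -> Monte Carlo
# impact assessment. All numbers are illustrative assumptions.
import numpy as np

lo, hi = 10_000.0, 500_000.0          # expert's 90% CI for incident cost (EUR)
z90 = 1.645                           # z-score for a two-sided 90% interval
mu = (np.log(lo) + np.log(hi)) / 2
sigma = (np.log(hi) - np.log(lo)) / (2 * z90)

rng = np.random.default_rng(0)
p_incident = 0.15                     # assumed annual probability of the event
events = rng.random(100_000) < p_incident
losses = np.where(events, rng.lognormal(mu, sigma, 100_000), 0.0)

print("expected annual loss:", losses.mean())
print("95% VaR:             ", np.quantile(losses, 0.95))
```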
This paper introduces a dynamic, proactive cyber risk assessment methodology that combines internal and external data, converting qualitative inputs into quantitative measures within a Bayesian network. Using the Exploit Prediction Scoring System, it dynamically estimates attack success probabilities and asset impact, validated through a Supervisory Control and Data Acquisition (SCADA) environment case study.
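A toy version of the idea, assuming the pgmpy library: an EPSS-like exploitation probability feeds a three-node Bayesian network running from exploit to asset impact. The node names and probabilities are invented; the paper's network and SCADA case study are far richer.

```python
# EPSS-style score as the prior of a small Bayesian network; all CPDs and
# node names are illustrative assumptions, not the paper's model.
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

net = BayesianNetwork([("Exploited", "Compromised"), ("Compromised", "Impact")])

epss = 0.37  # illustrative EPSS score for the relevant CVE
net.add_cpds(
    TabularCPD("Exploited", 2, [[1 - epss], [epss]]),
    TabularCPD("Compromised", 2,
               [[0.99, 0.40],    # P(not compromised | Exploited = 0, 1)
                [0.01, 0.60]],   # P(compromised     | Exploited = 0, 1)
               evidence=["Exploited"], evidence_card=[2]),
    TabularCPD("Impact", 2,
               [[0.995, 0.30],
                [0.005, 0.70]],
               evidence=["Compromised"], evidence_card=[2]),
)
assert net.check_model()
print(VariableElimination(net).query(["Impact"]))
```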
Cybersecurity investment models often mislead practitioners due to unreliable data, unverified assumptions, and false premises. These models work under idealized conditions rarely seen in real-world settings, so practitioners should carefully adapt them, recognizing their limitations and avoiding strict reliance on their recommendations.
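For concreteness, the best-known model of this class is the Gordon-Loeb model; the sketch below maximizes its net benefit numerically under the model's own idealized assumptions (class-one breach probability function, illustrative parameter values).

```python
# Gordon-Loeb investment model, class-one breach function S(z) = v/(a*z+1)^b.
# Parameters are illustrative; the point is the idealized optimization.
import numpy as np

v, L = 0.65, 1_000_000.0       # vulnerability and potential loss
a, b = 0.001, 1.0              # breach-function parameters

z = np.linspace(0, 200_000, 200_001)          # candidate investments
s = v / (a * z + 1) ** b                      # breach prob. after investing z
net_benefit = (v - s) * L - z                 # expected savings minus cost

z_star = z[net_benefit.argmax()]
print(f"optimal investment: {z_star:,.0f}")
print(f"1/e bound on z*:    {v * L / np.e:,.0f}")  # Gordon-Loeb 1/e result
```

The critique in the abstract applies exactly here: v, L, a, and b are rarely knowable with the precision the closed-form optimum suggests.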
Explainable AI (XAI) is becoming increasingly important, particularly in fields such as fraud detection. Differentiable Inductive Logic Programming (DILP) is one XAI method suited to this purpose. Although DILP suffers from scalability issues, careful data curation can broaden its applicability, and while it may not match traditional methods in processing speed, it delivers comparable results. DILP's distinctive potential lies in its ability to learn recursive rules, which is beneficial in certain use cases.
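DILP itself has no standard off-the-shelf implementation, so the sketch below only illustrates the kind of recursive rule it can learn, written out by hand as a Datalog-style fixpoint over hypothetical transfer data.

```python
# The sort of recursive rule DILP can learn for layered transfers in fraud:
#   linked(X, Y) <- transfer(X, Y)
#   linked(X, Y) <- transfer(X, Z), linked(Z, Y)
# Evaluated here as a plain fixpoint; accounts and edges are hypothetical.
transfers = {("a", "b"), ("b", "c"), ("c", "d")}

linked = set(transfers)
while True:
    new = {(x, y2) for (x, y) in transfers for (y1, y2) in linked if y == y1}
    if new <= linked:
        break
    linked |= new

print(("a", "d") in linked)  # True: funds from "a" reach "d" via two hops
```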
This study analyzes tone consistency in bank risk disclosures across regulatory Pillar 3 (P3) reports and annual IFRS reports. Findings indicate that an optimistic P3 tone enhances annual-report informativeness, while a pessimistic tone can obscure it.
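A minimal net-tone measure of the sort used in disclosure-tone research, with toy word lists standing in for the dictionaries (e.g., Loughran-McDonald) such studies typically rely on.

```python
# Net tone = (positive - negative) / (positive + negative) word counts.
# Word lists here are toy stand-ins for a research dictionary.
POSITIVE = {"improve", "strong", "resilient", "gain"}
NEGATIVE = {"loss", "impair", "decline", "adverse"}

def net_tone(text: str) -> float:
    words = text.lower().split()
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return (pos - neg) / max(pos + neg, 1)

print(net_tone("strong capital position despite adverse market decline"))
```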
The paper examines climate litigation's growing impact on banks, noting limited current effects but a projected increase. Key risks include reputational damage and influences on risk management and investment decisions. Banks are urged to address climate litigation risks proactively to enhance resilience, with future research suggested on mitigation strategies.