This paper addresses actuarial challenges in insurance by developing a user-friendly algorithm for optimal reinsurance decisions, balancing capital efficiency and asset/liability management. It combines expert judgment with quantitative methods, overcoming computational barriers for non-specialists. The techniques can be applied to broader risk management problems in insurance.
This paper introduces a novel multivariate dependence model to better represent cyber breach risks by capturing temporal and cross-group dependencies. Using a semi-parametric, copula-based approach, it improves predictive performance and generates more profitable insurance contracts, outperforming existing models in managing cyber risk and pricing cyber insurance.
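The cross-group dependence idea can be illustrated with a small copula sketch. This is a generic Gaussian-copula construction, not the paper's semi-parametric model; the correlation value and negative binomial marginal parameters are invented for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical cross-group correlation of breach counts (illustrative only).
corr = np.array([[1.0, 0.6],
                 [0.6, 1.0]])

# Gaussian copula: correlated normals -> uniforms -> arbitrary margins.
L = np.linalg.cholesky(corr)
z = rng.standard_normal((10_000, 2)) @ L.T
u = stats.norm.cdf(z)

# Map the dependent uniforms through chosen marginals for each group,
# e.g. negative binomial breach counts (parameters are assumptions).
counts_a = stats.nbinom.ppf(u[:, 0], n=2, p=0.3)
counts_b = stats.nbinom.ppf(u[:, 1], n=5, p=0.5)
```

Separating the marginal fit from the dependence structure in this way is what lets a semi-parametric model capture cross-group co-movement without constraining each group's count distribution.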
The study explores an insurance company managing financial risk through reinsurance, aiming to optimize terminal wealth and minimize ruin probability. Using neural networks, it finds the optimal reinsurance strategy based on expected utility and a modified Gerber-Shiu function, illustrated by a numerical example involving a Cramér-Lundberg surplus model.
The paper explores Pareto optimality in decentralized peer-to-peer risk-sharing markets using robust distortion risk measures. It characterizes optimal risk allocations, influenced by agents' tail risk assessments. Using flood risk insurance as an example, the study compares decentralized and centralized market structures, highlighting benefits and drawbacks of decentralized insurance.
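A distortion risk measure, the building block named above, can be sketched empirically: sort the losses and weight each order statistic by the increment of a distortion function applied to the tail probabilities. This is a textbook construction, not the paper's robust version; the power distortion and Pareto loss sample are illustrative assumptions.

```python
import numpy as np

def distortion_risk(losses, g):
    """Empirical distortion risk measure: order statistics weighted by
    increments of the distortion g applied to empirical tail probabilities."""
    x = np.sort(losses)
    n = len(x)
    tail = np.arange(n, 0, -1) / n        # P(X > x_(i)) just below x_(i)
    tail_next = np.arange(n - 1, -1, -1) / n
    w = g(tail) - g(tail_next)            # distorted probability weights
    return np.sum(w * x)

# Concave power distortion g(s) = sqrt(s) inflates tail probabilities,
# encoding risk aversion toward large losses.
g = lambda s: np.sqrt(s)

rng = np.random.default_rng(2)
losses = rng.pareto(3.0, size=50_000)     # heavy-tailed illustrative losses
rho = distortion_risk(losses, g)
```

With the identity distortion the measure reduces to the sample mean; a concave distortion loads the tail and yields a value above the mean, which is the sense in which agents' tail risk assessments drive the optimal allocations.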
This study explores large-scale, technology-driven consumer fraud over the next decade, offering four forward-looking scenarios. Feedback on these trends, scenarios, and narratives was gathered to identify strategies for tackling fraud. A systems approach modeled the criminal and anti-fraud systems, yielding key "challenge" themes to guide future anti-fraud efforts.
This study examines how organizations conceptualize and manage cyber risk, finding a gap between the normative risk-based management approach and actual practices. Organizations often use qualitative assessments masked as quantitative, creating an illusion of precision. The study proposes "qualculation" as the highest standard for aligning cyber risk measurement and management.
Elicitable functionals and consistent scoring functions aid optimal forecasting, but they assume the underlying distribution is correctly specified, which is unrealistic in practice. To address this, robust elicitable functionals account for small misspecifications within a Kullback-Leibler divergence ball around a reference distribution. These robust functionals retain desirable statistical properties and are applied in reinsurance and robust regression settings.
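The robustification idea can be sketched numerically. A minimal example, assuming squared loss (which elicits the mean) and the standard Kullback-Leibler duality sup over the KL ball equals inf over lambda of lambda*eps + lambda*log E[exp(S/lambda)]; this is a generic construction, not necessarily the paper's exact formulation, and the lognormal sample and eps value are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
y = rng.lognormal(mean=0.0, sigma=0.75, size=5_000)  # skewed "loss" sample from P

def squared_score(x, y):
    # Consistent scoring function eliciting the mean.
    return (x - y) ** 2

def robust_risk(x, y, eps):
    # Worst-case expected score over the KL ball {Q : KL(Q||P) <= eps},
    # via the duality  sup_Q E_Q[S] = inf_{lam>0} lam*eps + lam*log E_P[exp(S/lam)].
    s = squared_score(x, y)
    m = s.max()  # subtract the max inside exp for numerical stability
    def dual(lam):
        return m + lam * np.log(np.mean(np.exp((s - m) / lam))) + lam * eps
    return minimize_scalar(dual, bounds=(1e-3, 1e3), method="bounded").fun

# Non-robust optimal forecast under squared loss: the sample mean.
x_plain = y.mean()

# Robust forecast: minimise the worst-case expected score instead.
x_robust = minimize_scalar(lambda x: robust_risk(x, y, eps=0.1),
                           bounds=(0.0, 10.0), method="bounded").x
```

For a right-skewed sample the worst-case reweighting emphasises large losses, so the robust forecast sits above the plain sample mean, which is the qualitative effect exploited in the reinsurance application.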