91 results for "Risk quantification"
The RNN-HAR model, which integrates recurrent neural networks with the heterogeneous autoregressive (HAR) model, is proposed for Value at Risk (VaR) forecasting; it captures both long memory and non-linear dynamics. An empirical analysis covering 2000 to 2022 shows that RNN-HAR outperforms traditional HAR models in one-step-ahead VaR forecasting across 31 market indices.
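For context, the HAR backbone that the RNN component augments is, in its standard form, a linear regression of next-period realized volatility on daily, weekly, and monthly averages; a minimal Python sketch on synthetic data (all names and values illustrative, not the paper's implementation):

```python
import numpy as np

def har_features(rv, w=5, m=22):
    """Build HAR regressors: constant, daily lag, weekly mean, monthly mean."""
    rows, targets = [], []
    for t in range(m - 1, len(rv) - 1):
        rows.append([1.0,
                     rv[t],                         # daily component
                     rv[t - w + 1:t + 1].mean(),    # weekly (5-day) component
                     rv[t - m + 1:t + 1].mean()])   # monthly (22-day) component
        targets.append(rv[t + 1])                   # one-step-ahead target
    return np.array(rows), np.array(targets)

rng = np.random.default_rng(0)
rv = np.abs(rng.standard_normal(500))               # toy realized-volatility series
X, y = har_features(rv)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)        # OLS fit of the HAR model
rv_hat = X[-1] @ beta                               # one-step-ahead RV forecast
var_99 = -2.326 * np.sqrt(max(rv_hat, 0.0))         # Gaussian 99% VaR from the forecast
print(var_99)
```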
This report uses UK fire statistics to model an insurance company's claims for the coming year. It estimates the total claim amount by modelling both the number and the size of fires as random variables drawn from fitted statistical distributions, and uses Monte Carlo simulation in R to approximate the probability distribution of the total claim cost.
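The underlying construction is the classical collective risk model, a random sum S = X_1 + ... + X_N. A minimal Python analogue of the simulation (the report itself works in R, and the Poisson/lognormal choices and parameter values here are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)
n_sims = 100_000

lam, mu, sigma = 12.0, 10.0, 1.2        # hypothetical fitted parameters

totals = np.empty(n_sims)
for i in range(n_sims):
    n = rng.poisson(lam)                                # number of fires next year
    totals[i] = rng.lognormal(mu, sigma, size=n).sum()  # total cost of those claims

print("expected total claims:", totals.mean())
print("99th percentile:", np.quantile(totals, 0.99))
```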
"We study the general properties of robust convex risk measures as worst-case values under uncertainty on random variables. We establish general concrete results regarding convex conjugates and sub-differentials. We refine some results for closed forms of worstcase law invariant convex risk measures under two concrete cases of uncertainty sets for random variables: based on the first two moments and Wasserstein balls."
The paper proposes a novel Monte Carlo simulation approach to quantitatively prioritize project risks by their impact on project duration and cost. This addresses the limitations of traditional risk matrices and lets project managers distinguish critical risks according to whether they threaten time or cost objectives.
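A minimal sketch of the simulation idea, with the two objectives ranked separately (the risk register, distribution choices, and parameters are all hypothetical, not the paper's case study):

```python
import numpy as np

rng = np.random.default_rng(7)
n_sims = 50_000

# Hypothetical register: (probability, (min, mode, max) delay in days, (min, mode, max) cost in k$)
risks = {
    "supplier delay":  (0.30, (5, 10, 30),  (10, 20, 60)),
    "design rework":   (0.15, (10, 20, 40), (30, 50, 120)),
    "permit slippage": (0.50, (2, 5, 15),   (0, 5, 15)),
}

impact = {}
for name, (p, dur, cost) in risks.items():
    occurs = rng.random(n_sims) < p
    delay = np.where(occurs, rng.triangular(*dur, n_sims), 0.0)
    money = np.where(occurs, rng.triangular(*cost, n_sims), 0.0)
    impact[name] = (delay.mean(), money.mean())

# Rank on time and cost separately, instead of a single matrix score.
for name, (d, c) in sorted(impact.items(), key=lambda kv: -kv[1][0]):
    print(f"{name}: E[delay]={d:.1f} d, E[cost]={c:.1f} k$")
```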
"The risk measures contain some premium principles and shortfalls based on entropy. The shortfalls include the Gini shortfall, extended Gini shortfall, shortfall of cumulative residual entropy and shortfall of cumulative residual Tsallis entropy with order α."
New estimators for generalized tail distortion (GTD) risk measures are proposed, based on first-order asymptotic expansions; they are simple to compute and perform comparably to or better than existing methods. A reinsurance premium principle based on the GTD risk measure is tested on car insurance claims data, suggesting its effectiveness in embedding a safety loading in pricing to counter statistical uncertainty.
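The first-order asymptotics involved are of the extreme-value type; a standard building block for estimators of this kind (background only, not the paper's exact estimator) is the Weissman extrapolation of an extreme quantile from an intermediate order statistic under a Pareto-type tail with extreme value index γ > 0:

\[
\widehat{\mathrm{VaR}}_{1-p} \;\approx\; X_{n-k,n}\left(\frac{k}{np}\right)^{\widehat{\gamma}},
\]

where X_{n-k,n} is the (k+1)-th largest of n observations and γ̂ is, for example, the Hill estimator.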
Cyber risk presents significant challenges to society, yet its statistical behavior remains insufficiently understood. This paper analyzes three databases to study cyber risk dynamics. It identifies increasing frequency and severity, particularly in malicious events since 2018. Persistent heavy-tailedness across risk categories implies lower insurance demand and potentially heightened risk levels for firms.
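Heavy-tailedness of the kind described is commonly diagnosed with a tail-index estimate; a minimal Hill-estimator sketch on synthetic Pareto losses (data, threshold choice k, and parameters all hypothetical):

```python
import numpy as np

def hill_alpha(losses, k):
    """Hill estimator of the tail index alpha from the k largest observations.
    Estimates near or below 1 indicate extremely heavy tails (infinite mean)."""
    x = np.sort(losses)[::-1]                      # descending order statistics
    gamma = np.mean(np.log(x[:k]) - np.log(x[k]))  # extreme value index
    return 1.0 / gamma

rng = np.random.default_rng(1)
losses = rng.pareto(0.8, 10_000) + 1.0             # synthetic losses, true alpha = 0.8
print(hill_alpha(losses, k=500))                   # should land near 0.8
```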
The study explores optimal decision-making for agents minimizing risks with extremely heavy-tailed, possibly dependent losses. Focusing on super-Pareto distributions, a class that includes very heavy-tailed Pareto distributions, it finds that non-diversification is preferred under well-defined risk measures. Equilibrium analysis of risk-exchange markets indicates that agents bearing such losses avoid risk sharing. Empirical data confirm that heavy-tailed distributions of this kind occur in practice.
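The non-diversification effect can be seen directly by simulation: for an infinite-mean Pareto loss, the VaR of an equally weighted pool exceeds the VaR of a single loss (a quick illustration, with parameters chosen for exposition):

```python
import numpy as np

rng = np.random.default_rng(3)
n, alpha = 1_000_000, 0.8                 # tail index < 1: infinite mean

x1 = rng.pareto(alpha, n) + 1.0
x2 = rng.pareto(alpha, n) + 1.0           # independent copy

q = 0.99
print("VaR(single loss):", np.quantile(x1, q))
print("VaR(50/50 pool): ", np.quantile((x1 + x2) / 2, q))  # typically larger
```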
The objective of this paper is to compare the most common risk quantification models: Fault Tree Analysis, Failure Mode and Effects Analysis (FMEA), and the FAIR (Factor Analysis of Information Risk) model.
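Of the three, FAIR is the one usually operationalized by Monte Carlo, decomposing risk into loss event frequency and loss magnitude; a minimal FAIR-style sketch (distribution choices and all parameter values are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(11)
n_sims = 100_000

lef = rng.poisson(lam=2.0, size=n_sims)   # loss event frequency per year (assumed Poisson)
annual_loss = np.array([
    rng.lognormal(mean=11.0, sigma=1.0, size=k).sum()  # per-event magnitude (assumed lognormal)
    for k in lef
])

print("expected annual loss:", annual_loss.mean())
print("95% loss exposure:", np.quantile(annual_loss, 0.95))
```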
Operational risk modeling requires flexible distributions for non-negative values, particularly ones exhibiting heavy-tail behavior. Composite or spliced models, such as composite Tukey-type distributions, are gaining attention for their ability to handle extreme and ordinary observations simultaneously. This paper explores the flexibility of such distributions and offers empirical validation with operational risk data.
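The splicing construction is easiest to see with the common lognormal-body/Pareto-tail composite; the Tukey-type bodies the paper studies are more flexible, but the mechanics are the same (threshold, weight, and parameters below are illustrative):

```python
import numpy as np
from scipy import stats

def composite_pdf(x, theta=50.0, r=0.9, mu=3.0, sigma=0.8, alpha=2.0):
    """Spliced density: truncated lognormal body on (0, theta] with weight r,
    Pareto tail on (theta, inf) with weight 1 - r. In practice r (and smoothness
    at theta) are pinned down by continuity conditions rather than fixed by hand."""
    x = np.asarray(x, dtype=float)
    body = stats.lognorm(s=sigma, scale=np.exp(mu))
    pdf_body = r * body.pdf(x) / body.cdf(theta)                # renormalized body
    pdf_tail = (1 - r) * alpha * theta**alpha / x**(alpha + 1)  # Pareto tail
    return np.where(x <= theta, pdf_body, pdf_tail)

print(composite_pdf(np.array([10.0, 50.0, 100.0, 200.0])))
```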