67 results
for "Risk quantification"
The study introduces partial law invariance, a novel concept extending law-invariant risk measures in decision-making under uncertainty. It characterizes partially law-invariant coherent risk measures with a unique formula, departing from classical approaches. The authors also introduce strong partial law invariance and propose new risk measures, such as partial versions of Expected Shortfall, for risk assessment under model uncertainty.
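As background, the classical (fully law-invariant) Expected Shortfall that the partial versions generalize has a simple empirical form; a minimal sketch, with function name and sample-handling conventions my own:

```python
import numpy as np

def expected_shortfall(losses, alpha=0.95):
    """Classical (law-invariant) Expected Shortfall: the average of the
    worst (1 - alpha) fraction of losses (losses as positive numbers)."""
    losses = np.sort(np.asarray(losses, dtype=float))
    k = int(np.ceil(alpha * len(losses)))  # index where the tail begins
    tail = losses[k:]
    return tail.mean() if tail.size else losses[-1]
```

The partially law-invariant versions studied in the paper relax the assumption that the measure depends on the loss only through its full distribution.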
The paper introduces a new robust estimation technique, the Method of Truncated Moments (MTuM), tailored for estimating the tail index of a Pareto distribution from grouped data. It addresses limitations in existing methods for grouped loss severity data, providing inferential justification through the central limit theorem and simulation studies.
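The paper's MTuM estimator is built for grouped data; as a hedged point of comparison only, the standard maximum-likelihood tail-index estimator for a Pareto Type I sample of individual observations can be sketched as follows (this is the classical baseline, not the paper's method):

```python
import numpy as np

def pareto_mle_alpha(x, x_min):
    """Classical MLE of the Pareto Type I tail index:
    alpha_hat = n / sum(log(x_i / x_min)).
    MTuM replaces this with truncated-moment equations suited to
    grouped loss severity data."""
    x = np.asarray(x, dtype=float)
    return len(x) / np.sum(np.log(x / x_min))
```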
This paper explores risk factor distribution forecasting in finance, focusing on the widely used Historical Simulation (HS) model. It applies various deep generative methods for conditional time series generation and proposes new techniques. Evaluation metrics cover distribution distance, autocorrelation, and backtesting. The study reveals HS, GARCH, and CWGAN as top-performing models, with potential future research directions discussed.
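A minimal sketch of the Historical Simulation baseline the paper benchmarks against, i.e. an empirical VaR from a rolling window of past returns (function name, window length, and sign convention are assumptions):

```python
import numpy as np

def hs_var(returns, alpha=0.99, window=250):
    """One-day Historical Simulation VaR: the empirical alpha-quantile
    of losses over the most recent `window` returns (losses = -returns)."""
    losses = -np.asarray(returns[-window:], dtype=float)
    return np.quantile(losses, alpha)
```

The generative models in the study (GARCH, CWGAN, etc.) replace this unconditional empirical distribution with a conditional simulated one.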
This research develops a mathematical model using Extreme Value Theory and Risk Measures to estimate and forecast major fire insurance claims, enhancing insurers' understanding of potential risks. Utilizing a three-parameter Generalized Pareto Distribution in the Extreme Value Theory framework, the study effectively models large losses, aiding in risk management and pricing strategies for insurance firms.
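A peaks-over-threshold sketch of the GPD fit described above, assuming SciPy is available; the threshold and function names are illustrative, not the paper's calibration:

```python
import numpy as np
from scipy.stats import genpareto

def fit_gpd_exceedances(claims, u):
    """Fit a Generalized Pareto Distribution to claim exceedances above a
    high threshold u (peaks-over-threshold).  Threshold selection is a
    modelling decision that EVT practice addresses separately."""
    excess = np.asarray(claims, dtype=float)
    excess = excess[excess > u] - u          # exceedances over the threshold
    shape, _, scale = genpareto.fit(excess, floc=0.0)  # pin location at 0
    return shape, scale
```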
In 1921, Keynes and Knight each stressed the distinction between uncertainty and risk: risk involves calculable probabilities, while uncertainty lacks a scientific basis for assigning them. Knightian uncertainty exists when outcomes cannot be assigned probabilities at all. This poses challenges for decision-making and regulation, especially in scenarios like AI, and urges caution about trying to eliminate worst-case scenarios, given the potentially high costs and forgone benefits.
The paper addresses challenges in risk assessment from limited, non-stationary historical data and heavy-tailed distributions. It introduces a novel method for scaling risk estimators, ensuring robustness and conservative risk assessment. This approach extends time scaling beyond conventional methods, facilitates risk transfers, and enables unbiased estimation in small sample settings. Demonstrated through value-at-risk and expected shortfall estimation examples, the method's effectiveness is supported by an empirical study showcasing its impact.
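For context, the conventional square-root-of-time scaling that the paper's estimator-scaling approach generalizes can be sketched as follows (names and sign conventions are mine):

```python
import numpy as np

def sqrt_time_scaled_var(daily_returns, alpha=0.99, horizon=10):
    """Conventional square-root-of-time rule: scale a one-day empirical
    VaR to a multi-day horizon by sqrt(horizon).  The paper's approach
    extends this kind of time scaling with robustness guarantees for
    heavy-tailed, small-sample settings."""
    one_day = np.quantile(-np.asarray(daily_returns, dtype=float), alpha)
    return np.sqrt(horizon) * one_day
```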
This study explores cyber risk in businesses, suggesting cybersecurity investment and insurance as key strategies. Using a network model, it examines firms' interconnected decisions, defining a Nash equilibrium where firms optimize cybersecurity and insurance. Findings highlight their interdependence and how network structures affect choices, reinforced by numerical analyses.
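A toy illustration of the best-response dynamics behind such a Nash equilibrium, using invented quadratic costs for two interdependent firms (the functional forms are mine, not the paper's network model, and insurance is omitted for brevity):

```python
def nash_security_levels(beta=0.8, tol=1e-10, max_iter=1000):
    """Two-firm sketch: firm i picks a security level x_i in [0, 1] to
    minimize cost x_i**2 plus expected cyber loss
    (1 - x_i) * (1 + beta * (1 - x_j)), where beta couples the firms
    through the network.  Best-response iteration converges to the
    Nash equilibrium for beta < 2 (contraction factor beta / 2)."""
    x1 = x2 = 0.0
    for _ in range(max_iter):
        new1 = min(1.0, max(0.0, (1 + beta * (1 - x2)) / 2))  # argmin in x1
        new2 = min(1.0, max(0.0, (1 + beta * (1 - x1)) / 2))  # argmin in x2
        if abs(new1 - x1) + abs(new2 - x2) < tol:
            return new1, new2
        x1, x2 = new1, new2
    return x1, x2
```

The symmetric fixed point here is x = (1 + beta) / (2 + beta): a higher coupling beta raises the equilibrium investment, echoing the paper's point that network structure shapes firms' choices.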
The paper explores optimal insurance contracts based on decision makers' preferences that combine expected loss with a deviation measure such as the Gini coefficient or standard deviation. It shows that, under the expected value premium principle, stop-loss indemnities are optimal, with precisely characterized deductibles. The optimal indemnity structure remains consistent even when the insurance premium is capped. Multiple examples based on the Gini coefficient and standard deviation illustrate these findings.
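The stop-loss indemnity structure identified as optimal is simple to state in code; a minimal sketch (the function name and optional cap argument are mine):

```python
def stop_loss_indemnity(loss, deductible, cap=None):
    """Stop-loss indemnity I(x) = min(max(x - d, 0), cap): the insurer
    pays the part of the loss above the deductible d, optionally up to
    a cap (e.g. when the premium, and hence coverage, is limited)."""
    payout = max(loss - deductible, 0.0)
    return payout if cap is None else min(payout, cap)
```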
We study risk processes with a level-dependent premium rate. Assuming that the premium rate converges, as the risk reserve increases, to the critical value in the net-profit condition, we obtain upper and lower bounds for the ruin probability. In contrast to existing results in the literature, our approach is purely probabilistic and based on the analysis of Markov chains with asymptotically zero drift.
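A Monte Carlo sketch of the ruin probability for a discrete-time toy version of such a reserve process with a level-dependent premium rate (the paper's bounds are analytic; this simulation, and all names in it, are illustrative):

```python
import random

def ruin_probability(u0, premium, claim, horizon=200, n_paths=2000, seed=0):
    """Estimate the finite-horizon ruin probability of the reserve
    R_{t+1} = R_t + premium(R_t) - claim_t, where premium may depend on
    the current reserve level and claim draws one period's claim total."""
    rng = random.Random(seed)
    ruined = 0
    for _ in range(n_paths):
        r = u0
        for _ in range(horizon):
            r += premium(r) - claim(rng)
            if r < 0:                 # reserve exhausted: ruin
                ruined += 1
                break
    return ruined / n_paths
```

With a constant premium above the mean claim (net-profit condition holding strictly), ruin becomes rarer as the initial reserve grows; a premium rate that decays toward the critical value as the reserve increases is exactly the regime the paper analyzes.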
#bayesian data imputation is important in a variety of fields, including #riskmanagement. Incomplete or missing data can hinder a thorough analysis of risks, making accurate decision-making difficult. By employing imputation techniques to fill in the gaps, risk managers can obtain a more comprehensive and reliable understanding of the underlying risk factors. This, in turn, enables them to make informed decisions and develop effective strategies for #riskmitigation.
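A minimal sketch of posterior-predictive (multiple) imputation under a simple normal model, illustrating the idea; all names and modelling choices (flat prior on the mean, plug-in standard deviation) are assumptions:

```python
import numpy as np

def bayesian_impute(x, n_draws=1000, seed=0):
    """Multiple imputation sketch: draw the mean from its posterior
    (flat prior, plug-in variance), then draw each missing value from
    the posterior predictive.  Returns n_draws completed arrays, so
    downstream risk analyses can propagate imputation uncertainty."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    obs = x[~np.isnan(x)]
    m, s, n = obs.mean(), obs.std(ddof=1), len(obs)
    completed = []
    for _ in range(n_draws):
        mu = rng.normal(m, s / np.sqrt(n))            # posterior draw of the mean
        filled = x.copy()
        miss = np.isnan(filled)
        filled[miss] = rng.normal(mu, s, miss.sum())  # posterior predictive draws
        completed.append(filled)
    return completed
```

Analyzing each completed dataset and pooling the results, rather than filling gaps once with a point estimate, is what keeps the downstream risk picture honest about missing-data uncertainty.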