69 results for "Risk quantification"
“We lay a theoretical foundation for the choice of an exponential–Pareto combined distribution to model the severity of the operational risk. We derive, on a theoretical basis, the functional form of the operational risk severity distribution. The resulting loss severity distribution, in theory, is consistent with the parametric distribution that previous empirical works suggest is the best fit for loss data.”
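The abstract does not reproduce the derived functional form. For orientation only, one common way to splice an exponential body with a Pareto tail at a threshold u is the following parameterization (an illustrative form, not necessarily the one derived in the paper):

\[
f(x) =
\begin{cases}
w \,\dfrac{\lambda e^{-\lambda x}}{1 - e^{-\lambda u}}, & 0 < x \le u, \\[6pt]
(1-w)\,\dfrac{\alpha u^{\alpha}}{x^{\alpha+1}}, & x > u,
\end{cases}
\]

where \(w \in (0,1)\) is the weight placed on the exponential body and continuity of \(f\) at \(u\) imposes one constraint linking \(w\), \(\lambda\), and \(\alpha\).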
The paper investigates two topics in game theory and decision-making. The first part explores the concept of delegation within a Bayesian persuasion framework. The second part focuses on equilibrium selection between the Pareto-dominant equilibrium and the risk-dominant equilibrium.
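To make the distinction concrete, a textbook illustration (not an example taken from the paper) is the symmetric stag hunt game with payoffs

\[
\begin{array}{c|cc}
 & \text{Stag} & \text{Hare} \\ \hline
\text{Stag} & (4,4) & (0,3) \\
\text{Hare} & (3,0) & (3,3)
\end{array}
\]

Both (Stag, Stag) and (Hare, Hare) are Nash equilibria. (Stag, Stag) is Pareto dominant, since it gives both players the higher payoff, while (Hare, Hare) is risk dominant, since Hare is the better reply unless a player believes the opponent plays Stag with probability greater than 3/4.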
The study introduces partial law invariance, a novel concept extending law-invariant risk measures for decision-making under uncertainty. It characterizes partially law-invariant coherent risk measures through a representation formula that departs from the classical one. A stronger notion, strong partial law invariance, is also introduced, along with new risk measures, such as partial versions of Expected Shortfall, for risk assessment under model uncertainty.
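For reference, the classical, fully law-invariant Expected Shortfall at level \(\alpha \in (0,1)\), of which the paper proposes partial versions, is

\[
\mathrm{ES}_\alpha(X) \;=\; \frac{1}{1-\alpha} \int_\alpha^1 \mathrm{VaR}_\beta(X)\, d\beta,
\]

where \(\mathrm{VaR}_\beta(X)\) denotes the \(\beta\)-quantile of the loss \(X\); the partial versions themselves are defined in the paper and not reproduced here.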
The paper introduces a new robust estimation technique, the Method of Truncated Moments (MTuM), tailored for estimating the tail index of a Pareto distribution from grouped data. It addresses limitations in existing methods for grouped loss severity data, providing inferential justification through the central limit theorem and simulation studies.
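The MTuM estimator itself is defined in the paper. As a point of reference only, the sketch below computes the standard grouped-data maximum likelihood estimate of a single-parameter Pareto tail index; the group boundaries, counts, and known scale x_m are hypothetical, and none of this is the paper's method or data.

```python
# Baseline illustration only: maximum likelihood for the tail index alpha of a
# single-parameter Pareto(alpha, x_m) observed as grouped counts.
# This is NOT the Method of Truncated Moments (MTuM) from the paper.
import numpy as np
from scipy.optimize import minimize_scalar

x_m = 1.0                                           # known scale (hypothetical)
bounds = np.array([1.0, 2.0, 5.0, 10.0, np.inf])    # group boundaries (hypothetical)
counts = np.array([120, 60, 25, 10])                # losses observed per group (hypothetical)

def pareto_cdf(x, alpha):
    """F(x) = 1 - (x_m / x)^alpha for x >= x_m."""
    return 1.0 - (x_m / x) ** alpha

def neg_loglik(alpha):
    # Multinomial log-likelihood of the group counts.
    probs = pareto_cdf(bounds[1:], alpha) - pareto_cdf(bounds[:-1], alpha)
    return -np.sum(counts * np.log(probs))

res = minimize_scalar(neg_loglik, bounds=(0.01, 10.0), method="bounded")
print("Grouped-data MLE of the Pareto tail index:", res.x)
```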
This paper explores risk factor distribution forecasting in finance, focusing on the widely used Historical Simulation (HS) model. It applies various deep generative methods for conditional time series generation and proposes new techniques. Evaluation metrics cover distribution distance, autocorrelation, and backtesting. The study finds HS, GARCH, and CWGAN to be the top-performing models and discusses directions for future research.
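Plain, unconditional Historical Simulation is simple enough to state in a few lines: the one-day VaR at level alpha is the corresponding empirical quantile of past returns. The window length, confidence level, and simulated return series below are illustrative choices, not taken from the paper.

```python
# Minimal sketch of unconditional Historical Simulation VaR.
import numpy as np

rng = np.random.default_rng(0)
returns = 0.01 * rng.standard_t(df=4, size=1000)    # stand-in for historical daily returns

def historical_var(returns, alpha=0.99, window=500):
    """One-day VaR: negative of the (1 - alpha) empirical quantile of the window."""
    window_returns = returns[-window:]
    return -np.quantile(window_returns, 1.0 - alpha)

print("1-day 99% HS VaR:", historical_var(returns))
```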
This research develops a mathematical model using Extreme Value Theory and risk measures to estimate and forecast major fire insurance claims, enhancing insurers' understanding of potential risks. Using a three-parameter Generalized Pareto Distribution within the Extreme Value Theory framework, the study effectively models large losses, aiding risk management and pricing strategies for insurance firms.
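The peaks-over-threshold step in such an analysis can be sketched as follows: fit a Generalized Pareto Distribution to claim amounts above a chosen threshold and read high quantiles off the fitted tail. The claim data, threshold choice, and quantile level below are synthetic placeholders, not the fire-claim data or modelling choices of the study.

```python
# Sketch of a peaks-over-threshold GPD fit with scipy.
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(1)
claims = 100.0 * rng.pareto(a=2.5, size=5000)       # synthetic heavy-tailed claim amounts
threshold = np.quantile(claims, 0.95)               # illustrative threshold choice
exceedances = claims[claims > threshold] - threshold

# Fit the GPD to the exceedances, fixing the location at zero.
shape, loc, scale = genpareto.fit(exceedances, floc=0.0)
print("GPD shape (xi):", shape, "scale (sigma):", scale)

# Estimate a high claim quantile from the fitted tail.
p = 0.999
tail_prob = np.mean(claims > threshold)             # empirical P(claim > threshold)
q = threshold + genpareto.ppf(1.0 - (1.0 - p) / tail_prob, shape, loc=0.0, scale=scale)
print("Estimated 99.9% claim quantile:", q)
```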
In 1921, Keynes and Knight stressed the distinction between uncertainty and risk. While risk involves calculable probabilities, uncertainty lacks a scientific basis for assigning them: Knightian uncertainty exists when outcomes cannot be assigned probabilities at all. This poses challenges for decision-making and regulation, especially in areas such as AI, and the paper urges caution about attempting to eliminate worst-case scenarios, given the potentially high costs and forgone benefits of doing so.
The paper addresses challenges in risk assessment arising from limited, non-stationary historical data and heavy-tailed distributions. It introduces a novel method for scaling risk estimators that ensures robustness and conservative risk assessment. The approach extends time scaling beyond conventional methods, facilitates risk transfers, and enables unbiased estimation in small-sample settings. The method is demonstrated on value-at-risk and expected shortfall estimation and supported by an empirical study of its impact.
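The abstract does not state the proposed scaling rule. For context, the conventional time-scaling benchmark is the square-root-of-time rule, which is exact for i.i.d. zero-mean Gaussian returns,

\[
\mathrm{VaR}_{\alpha}^{(h)} \;=\; \sqrt{h}\,\mathrm{VaR}_{\alpha}^{(1)},
\]

with the same factor \(\sqrt{h}\) applying to expected shortfall under the same assumptions; the paper's extension beyond such rules is not reproduced here.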
This study explores cyber risk in businesses, with cybersecurity investment and cyber insurance as the key mitigation strategies. Using a network model, it examines firms' interconnected decisions and characterizes a Nash equilibrium in which each firm optimizes its cybersecurity investment and insurance purchase. The findings highlight the interdependence of these two decisions and show how the network structure affects firms' choices, supported by numerical analyses.
The paper explores optimal insurance contracts for decision makers whose preferences combine expected loss with a deviation measure, such as the Gini coefficient or the standard deviation. It shows that, under the expected value premium principle, stop-loss indemnities are optimal, with explicitly characterized deductibles. The optimal indemnity structure remains unchanged even when the insurance premium is capped. Several examples based on the Gini coefficient and the standard deviation illustrate these findings.
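In this setting, the stop-loss indemnity with deductible \(d \ge 0\) and the expected value premium principle with loading \(\theta \ge 0\) take the standard forms

\[
I_d(x) = (x-d)_+ = \max(x-d,\,0), \qquad \pi(I_d) = (1+\theta)\,\mathbb{E}\!\left[I_d(X)\right],
\]

where \(X\) is the insurable loss; the precise value of the deductible in the optimal contract is derived in the paper and depends on the chosen deviation measure.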