69 results
for "Risk quantification"
This paper focuses on predicting #corporate #default #risk using frailty-correlated default #models with subjective judgments. The study uses a #bayesian approach with the Particle Markov Chain #montecarlo (PMCMC) algorithm to analyze data on #us public non-financial firms between 1980 and 2019. The findings suggest that the volatility and mean reversion of the hidden frailty factor have a significant impact on the firms' default intensities.
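The frailty mechanism described above can be illustrated with a minimal simulation: a hidden mean-reverting factor that shifts the default intensities of all exposed firms at once. The Ornstein-Uhlenbeck dynamics and all parameter values below are hypothetical stand-ins, not the paper's PMCMC estimates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical Ornstein-Uhlenbeck frailty factor Y with mean reversion
# speed kappa and volatility sigma (illustrative values only).
kappa, sigma, dt, T = 0.5, 0.8, 1.0 / 12, 40  # monthly steps over 40 years
n = int(T / dt)

y = np.zeros(n)
for t in range(1, n):
    y[t] = y[t - 1] + kappa * (0.0 - y[t - 1]) * dt \
           + sigma * np.sqrt(dt) * rng.standard_normal()

# Default intensity of a representative firm loading on the frailty factor:
# lambda_t = exp(alpha + beta * Y_t). A positive beta makes the hidden factor
# raise default risk for all exposed firms together (correlated defaults).
alpha, beta = -4.0, 1.0
lam = np.exp(alpha + beta * y)

# Approximate per-period default probability: 1 - exp(-lambda * dt)
p_default = 1.0 - np.exp(-lam * dt)
print(p_default.mean(), p_default.max())
```

Raising `sigma` or lowering `kappa` fattens the distribution of the frailty factor and hence of the default intensities, which is the sensitivity the paper's findings point to.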
This study proposes a new approach to the analysis of #systemicrisk in #financialsystems, based on the amount of exogenous shock the system can absorb before it deteriorates, rather than on the size of the impact that exogenous events can have. The authors use a linearized version of DebtRank to estimate the onset of financial distress, and compute localized and uniform exogenous shocks using spectral graph theory. They also extend their analysis to heterogeneous shocks using #montecarlo #simulations. The authors argue that their approach is more general and natural, and provides a standard way to express #failure #risk in financial systems.
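The spectral idea behind a linearized contagion dynamic can be sketched in a few lines: in a linear distress-propagation map, the largest eigenvalue modulus of the interbank leverage matrix determines whether a shock dies out or amplifies as it cycles through the network. The 3-bank matrix below is entirely hypothetical, and this is a generic linearization sketch rather than the paper's exact DebtRank formulation.

```python
import numpy as np

# Hypothetical interbank leverage matrix: entry (i, j) is bank i's exposure
# to bank j divided by bank i's equity. Illustrative numbers only.
L = np.array([
    [0.0, 0.4, 0.2],
    [0.3, 0.0, 0.3],
    [0.2, 0.5, 0.0],
])

# In a linearized distress dynamic h(t+1) = L h(t), the spectral radius of L
# governs whether an exogenous shock is damped (< 1) or amplified (> 1).
lam_max = max(abs(np.linalg.eigvals(L)))
print("amplifies shocks" if lam_max > 1 else "absorbs shocks", round(lam_max, 3))
```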
"In this paper we propose an efficient #bayesian Hamiltonian #montecarlo method for estimating #systemicrisk #measures (LRMES, SRISK, and ΔCoVaR) and apply it to thirty global systemically important banks and to the eighteen largest #us #financialinstitutions over the period 2000-2020. The systemic risk measures are computed from a Dynamic Conditional Correlations model with generalized asymmetric #volatility. A policymaker may choose to rank the firms using some quantile of their systemic risk distributions, such as the 90th, 95th, or 99th percentile, depending on #risk preferences, with higher quantiles being more conservative."
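The quantile-based ranking the abstract mentions is easy to picture: given posterior draws of each firm's systemic risk measure, rank firms by a chosen quantile of their distributions. The bank names and lognormal draws below are invented for illustration and are not the paper's data or estimates.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical posterior draws of a systemic risk measure (e.g. SRISK) for
# three illustrative banks -- made-up distributions, not estimated ones.
draws = {
    "Bank A": rng.lognormal(mean=3.0, sigma=0.5, size=10_000),
    "Bank B": rng.lognormal(mean=3.2, sigma=0.2, size=10_000),
    "Bank C": rng.lognormal(mean=2.8, sigma=0.9, size=10_000),
}

def rank_by_quantile(draws, q):
    """Rank firms by the q-th quantile of their systemic risk distribution."""
    scores = {name: np.quantile(x, q) for name, x in draws.items()}
    return sorted(scores, key=scores.get, reverse=True)

# A more conservative policymaker uses a higher quantile; note the ranking
# can change, because a fat-tailed firm looks worse far out in the tail.
print(rank_by_quantile(draws, 0.50))
print(rank_by_quantile(draws, 0.99))
```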
Effective #riskmanagement, and #operationalriskmanagement in particular, is crucial for minimizing the #financialrisks posed by #operationalrisk. Risk evaluation, which includes assessing potential risks and their #probabilities, is also vital. #bibliometric analysis using #metrics such as citations, citation networks, co-authorship, and region-based #publications can provide insight into the quality of #research on operational risk and identify gaps. Such analysis reveals growing interest in the study of operational risk, but also highlights research gaps that must be addressed for effective risk management.
This paper proposes a novel mixed-frequency quantile vector autoregression (MF-QVAR) model that uses a #bayesian framework and a multivariate asymmetric Laplace distribution to estimate missing low-frequency variables at higher frequencies. The proposed method allows for timely policy interventions by analyzing conditional quantiles for multiple variables of interest and deriving quantile-related #riskmeasures at high frequency. The model is applied to the US economy to #nowcast conditional quantiles of #gdp, providing insight into #var, Expected Shortfall, and the distance among percentiles of real GDP nowcasts.
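The quantile-related risk measures named at the end can be read off a grid of estimated conditional quantiles: the lower-tail quantile gives a growth-at-risk (#var-style) number, averaging the quantiles below it approximates Expected Shortfall, and percentile distances summarize dispersion. The GDP-growth quantiles below are simulated from a normal distribution purely for illustration, not MF-QVAR output.

```python
import numpy as np

# Hypothetical nowcast quantiles of GDP growth (percent) at a grid of
# probability levels -- simulated stand-in, not the paper's model output.
probs = np.linspace(0.01, 0.99, 99)
gdp_quantiles = np.quantile(
    np.random.default_rng(2).normal(2.0, 1.5, 50_000), probs
)

alpha = 0.05
var_alpha = np.interp(alpha, probs, gdp_quantiles)   # growth-at-risk: 5% quantile
es_alpha = gdp_quantiles[probs <= alpha].mean()      # ES: mean of tail quantiles
iqr_90_10 = (np.interp(0.90, probs, gdp_quantiles)
             - np.interp(0.10, probs, gdp_quantiles))  # percentile distance

print(round(var_alpha, 2), round(es_alpha, 2), round(iqr_90_10, 2))
```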
"#risksharing is one way to pool risks without the need for a #thirdparty. To ensure the attractiveness of such a system, the rule should be accepted and understood by all participants. A desirable risk-sharing rule should fulfill #actuarial fairness and #pareto optimality while being easy to compute. This paper establishes a one-to-one correspondence between an actuarially fair #paretooptimal (AFPO) risk-sharing rule and a fixed point of a specific function."
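Characterizing the rule as a fixed point matters computationally because fixed points can be found by simple iteration when the underlying map is a contraction. The sketch below shows that generic mechanism on a toy map; the paper's specific function is not reproduced here, so the example map `f(x) = 0.5x + c` is purely illustrative.

```python
import numpy as np

def fixed_point(f, x0, tol=1e-10, max_iter=1000):
    """Iterate x <- f(x) until the update is below tol.

    Convergence is guaranteed when f is a contraction mapping.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_new = f(x)
        if np.max(np.abs(x_new - x)) < tol:
            return x_new
        x = x_new
    raise RuntimeError("fixed-point iteration did not converge")

# Toy contraction standing in for the paper's (unspecified here) map:
# f(x) = 0.5 * x + c has the unique fixed point 2 * c.
c = np.array([1.0, 2.0, 3.0])
x_star = fixed_point(lambda x: 0.5 * x + c, np.zeros(3))
print(x_star)  # approximately [2. 4. 6.]
```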
This paper focuses on the development of #bayesian classification and regression tree (#cart) models for claims frequency modeling in non-life #insurance pricing. The authors propose the use of the zero-inflated #poisson distribution to address the issue of imbalanced claims data and introduce a general MCMC algorithm for posterior tree exploration. Additionally, the deviance information criterion (DIC) is used for model selection. The paper discusses the applicability of these models through simulations and real insurance data.
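The zero-inflated #poisson distribution mentioned above handles the excess of policies with no claims by mixing a structural-zero component with an ordinary Poisson count. A minimal simulation, with made-up parameters rather than values fitted to real insurance data:

```python
import numpy as np

rng = np.random.default_rng(3)

def zip_rvs(pi, lam, size, rng):
    """Simulate claim counts from a zero-inflated Poisson:
    with probability pi the count is a structural zero,
    otherwise it is drawn from Poisson(lam)."""
    structural_zero = rng.random(size) < pi
    counts = rng.poisson(lam, size)
    counts[structural_zero] = 0
    return counts

# Illustrative parameters (hypothetical, not fitted to data).
pi, lam = 0.7, 0.4
claims = zip_rvs(pi, lam, 100_000, rng)

# The zero proportion, pi + (1 - pi) * exp(-lam), exceeds the exp(-lam)
# that a plain Poisson(lam) would imply -- the "imbalanced claims" feature.
print(np.mean(claims == 0), np.exp(-lam))
```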
"We present a novel approach to quantify the uncertainty and sensitivity of risk estimates, using the CLIMADA open-source climate risk assessment platform. This work builds upon a recently developed extension of CLIMADA, which uses statistical modelling techniques to better quantify climate model ensemble uncertainty. Here, we further analyse the propagation of hazard, exposure and vulnerability uncertainties by varying a number of input factors based on a discrete, scientifically justified set of options."
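The uncertainty-propagation idea, varying hazard, exposure, and vulnerability inputs over a discrete set of options and observing the spread of the resulting risk estimates, can be sketched with a crude Monte Carlo loop. The factor values and the multiplicative damage model below are invented for illustration and are not CLIMADA's configuration or API.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100_000

# Discrete, hypothetical option sets for each input factor: scalings on
# hazard intensity, exposure totals, and the vulnerability function.
hazard = rng.choice([0.8, 1.0, 1.2], n)
exposure = rng.choice([0.9, 1.0, 1.1], n)
vulnerability = rng.choice([0.7, 1.0, 1.3], n)

base_damage = 100.0  # hypothetical expected annual damage, arbitrary units
damage = base_damage * hazard * exposure * vulnerability

# Spread of the risk estimate induced by input uncertainty:
print(round(damage.mean(), 1),
      round(np.quantile(damage, 0.05), 1),
      round(np.quantile(damage, 0.95), 1))
```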
"We respond to Tetlock et al. (2022), showing 1) that expert judgment fails to reflect tail risk, and 2) that forecasting tournaments are incompatible with tail risk assessment methods (such as extreme value theory). More importantly, we communicate a new result showing an even greater gap between the properties of tail expectation and those of the corresponding probability."
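The probability-versus-expectation gap is easiest to see with a heavy-tailed example: for a Pareto tail, the probability of exceeding a high threshold can look tame while the expected loss given exceedance remains enormous. The numbers below are illustrative, not the paper's result.

```python
# Pareto tail: P(X > x) = (x_m / x) ** alpha for x >= x_m.
def pareto_tail_prob(x, x_m, alpha):
    return (x_m / x) ** alpha

def pareto_tail_expectation(x, alpha):
    # E[X | X > x] = alpha / (alpha - 1) * x, finite only for alpha > 1.
    return alpha / (alpha - 1.0) * x

# Hypothetical heavy tail (alpha close to 1) and a high threshold K.
x_m, alpha, K = 1.0, 1.1, 100.0
p = pareto_tail_prob(K, x_m, alpha)    # small exceedance probability...
e = pareto_tail_expectation(K, alpha)  # ...but a huge conditional expectation
print(p, e)
```

Forecasting the small probability `p` well says little about the size of `e`, which is why tournament-style probability forecasts and tail-expectation assessments answer different questions.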
"Bayesian estimates from experimental data can be influenced by highly diffuse or "uninformative" priors. This paper discusses how practitioners can use their own expertise to critique and select a prior that (i) incorporates our knowledge as experts in the field, and (ii) achieves favorable sampling properties. I demonstrate these techniques using data from eleven experiments of decision-making under risk, and discuss some implications of the findings."
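How much a diffuse versus an informative prior moves the posterior can be shown in the simplest conjugate setting. The Beta-Binomial numbers below are invented for illustration; they are not the paper's experimental data or its recommended priors.

```python
# Conjugate Beta-Binomial sketch with hypothetical data:
# 7 "risky choices" out of 10 trials.
successes, trials = 7, 10

def beta_posterior(a, b, successes, trials):
    """Conjugate update: Beta(a, b) prior -> Beta(a + s, b + n - s) posterior."""
    return a + successes, b + (trials - successes)

# Diffuse prior Beta(1, 1) vs an informative prior Beta(20, 20)
# centered at 0.5 (both choices purely illustrative).
a1, b1 = beta_posterior(1, 1, successes, trials)
a2, b2 = beta_posterior(20, 20, successes, trials)

mean_diffuse = a1 / (a1 + b1)      # 8/12: dominated by the data
mean_informative = a2 / (a2 + b2)  # 27/50: shrunk toward the prior's 0.5
print(round(mean_diffuse, 3), round(mean_informative, 3))
```

With a small sample, the informative prior pulls the estimate substantially, which is exactly why the abstract argues priors should be chosen deliberately, for both their expert content and their sampling properties.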