"We develop an approach for solving time-consistent risk-sensitive stochastic optimization problems using model-free reinforcement learning (RL). Specifically, we assume agents assess the risk of a sequence of random variables using dynamic convex risk measures. We employ a time-consistent dynamic programming principle to determine the value of a particular policy, and develop policy gradient update rules. We further develop an actor-critic style algorithm using neural networks to optimize over policies. Finally, we demonstrate the performance and flexibility of our approach by applying it to optimization problems in statistical arbitrage trading and obstacle avoidance robot control."