Using Reinforcement Learning in the Algorithmic Trading Problem
- Authors: Ponomarev E.S. (1), Oseledets I.V. (1, 2), Cichocki A.S. (1)
- Affiliations:
  1. Skolkovo Institute of Science and Technology
  2. Marchuk Institute of Numerical Mathematics, Russian Academy of Sciences
- Issue: Vol 64, No 12 (2019)
- Pages: 1450-1457
- Section: Mathematical Models and Computational Methods
- URL: https://ogarev-online.ru/1064-2269/article/view/201694
- DOI: https://doi.org/10.1134/S1064226919120131
- ID: 201694
Abstract
The development of reinforcement learning methods has extended their application to many areas, including algorithmic trading. In this paper, trading on the stock exchange is interpreted as a game with the Markov property, consisting of states, actions, and rewards. A system for trading a fixed volume of a financial instrument is proposed and experimentally tested; it is based on the asynchronous advantage actor-critic (A3C) method and uses several neural network architectures. The use of recurrent layers in this approach is investigated. The experiments were performed on real anonymized data. The best architecture produced a trading strategy for the RTS Index futures (MOEX:RTSI) with a profitability of 66% per annum, taking the commission into account. The project source code is available via the following link: http://github.com/evgps/a3c_trading.
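To make the game framing concrete, the sketch below models fixed-volume trading of a single instrument as a Markov decision process with states, actions, and rewards, as the abstract describes. This is a hypothetical toy environment, not the authors' implementation (see the linked repository for that); the state representation, the three-action scheme, and the commission constant are illustrative assumptions.

```python
# Toy MDP for fixed-volume trading of one instrument.
# Assumptions (not from the paper): state = current price, three actions
# (short / flat / long one unit), reward = P&L minus a flat commission
# charged on position changes.

COMMISSION = 0.001  # assumed commission fraction per unit traded


class TradingEnv:
    """Fixed-volume trading: the position is always -1, 0, or +1 units."""

    ACTIONS = {0: -1, 1: 0, 2: +1}  # sell/short, stay flat, buy/long

    def __init__(self, prices):
        self.prices = prices
        self.t = 0
        self.position = 0

    def reset(self):
        self.t = 0
        self.position = 0
        return self.prices[self.t]  # initial state

    def step(self, action):
        new_position = self.ACTIONS[action]
        price = self.prices[self.t]
        self.t += 1
        next_price = self.prices[self.t]
        # Reward: profit of the held position over one step,
        # minus commission on the change of position.
        reward = new_position * (next_price - price)
        reward -= COMMISSION * price * abs(new_position - self.position)
        self.position = new_position
        done = self.t == len(self.prices) - 1
        return next_price, reward, done


# Usage: one step of going long on a toy price series.
env = TradingEnv([100.0, 101.0, 100.5, 102.0])
state = env.reset()
next_state, reward, done = env.step(2)  # buy one unit
```

An actor-critic agent such as A3C would interact with this interface through `reset` and `step`, with the policy (actor) choosing among the three actions and the value function (critic) estimating expected future reward.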
About the authors
E. S. Ponomarev
Skolkovo Institute of Science and Technology
Author for correspondence.
Email: Evgenii.Ponomarev@skoltech.ru
Russian Federation, Moscow
I. V. Oseledets
Skolkovo Institute of Science and Technology; Marchuk Institute of Numerical Mathematics, Russian Academy of Sciences
Russian Federation, Moscow; Moscow
A. S. Cichocki
Skolkovo Institute of Science and Technology
Russian Federation, Moscow