Using Reinforcement Learning in the Algorithmic Trading Problem



Abstract—The development of reinforcement learning methods has extended their application to many areas, including algorithmic trading. In this paper, trading on the stock exchange is interpreted as a game with a Markov property consisting of states, actions, and rewards. A system for trading a fixed volume of a financial instrument is proposed and experimentally tested; it is based on the asynchronous advantage actor-critic (A3C) method and uses several neural network architectures. The application of recurrent layers in this approach is investigated. The experiments were performed on real anonymized data. The best architecture produced a trading strategy for the RTS Index futures (MOEX:RTSI) with a profitability of 66% per annum, accounting for commission. The project source code is available at the following link: http://github.com/evgps/a3c_trading.
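As a purely illustrative aid to the abstract, the sketch below shows one plausible shape of a recurrent actor-critic network of the kind described (a policy head and a value head over a recurrent layer applied to market-state features). The layer sizes, the three-action space, and all names are assumptions for illustration and are not taken from the authors' repository.

# Illustrative sketch only: a recurrent actor-critic network of the kind the
# abstract describes (A3C with a recurrent layer). Layer sizes, the assumed
# three-action space {hold, buy, sell}, and all names are hypothetical and
# not taken from http://github.com/evgps/a3c_trading.
import torch
import torch.nn as nn


class RecurrentActorCritic(nn.Module):
    def __init__(self, n_features: int, n_actions: int = 3, hidden: int = 128):
        super().__init__()
        self.encoder = nn.Linear(n_features, hidden)          # embed market-state features
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)  # recurrent layer over time
        self.policy_head = nn.Linear(hidden, n_actions)        # actor: action logits
        self.value_head = nn.Linear(hidden, 1)                 # critic: state-value estimate

    def forward(self, states, hidden_state=None):
        # states: (batch, time, n_features) sequence of observed market states
        x = torch.relu(self.encoder(states))
        x, hidden_state = self.lstm(x, hidden_state)
        logits = self.policy_head(x)   # per-step action distribution parameters
        value = self.value_head(x)     # per-step value estimates
        return logits, value, hidden_state


# Minimal usage example with random tensors standing in for price features.
if __name__ == "__main__":
    net = RecurrentActorCritic(n_features=16)
    batch = torch.randn(4, 32, 16)     # 4 episodes, 32 time steps, 16 features
    logits, value, _ = net(batch)
    print(logits.shape, value.shape)   # torch.Size([4, 32, 3]) torch.Size([4, 32, 1])

In an A3C setup, several worker processes would each run such a network against their own copy of the environment and asynchronously update shared parameters; the sketch covers only the network itself.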

About the authors

E. Ponomarev

Skolkovo Institute of Science and Technology

Author for correspondence.
Email: Evgenii.Ponomarev@skoltech.ru
Russian Federation, Moscow

I. Oseledets

Skolkovo Institute of Science and Technology; Marchuk Institute of Numerical Mathematics, Russian Academy of Sciences

Email: Evgenii.Ponomarev@skoltech.ru
Russian Federation, Moscow; Moscow

A. Cichocki

Skolkovo Institute of Science and Technology

Email: Evgenii.Ponomarev@skoltech.ru
Russian Federation, Moscow


版权所有 © Pleiades Publishing, Inc., 2019